Opening the hood on a Toyota Corolla after the hood release cable breaks

This post is to document how I managed to replace a headlight after the hood release cable broke when I tried to open the hood. There are no great underlying lessons here other than the value of perseverance.

When the hood release cable breaks, the hood is locked down. While apparently the way to release it depends on the automobile in question, this method worked for my 2004 Corolla and should be applicable to similar models.

First lesson: when you feel the hood release go slack (probably after hearing a snap), don’t close the hood until you have fixed what you opened it to repair.

Second lesson: study the situation from various angles. Lights help!

After peering through the grille from the front, and looking up at the latch from underneath the front of the vehicle, I discovered that there was a plastic shroud covering the latch mechanism. To remove it, I had to poke a long screwdriver through the grille to pop it off at both ends. Then I had to go back underneath the car to pull it out from a metal bracket that was preventing it from dropping down any further.

While still under the car, I used my long screwdriver again to poke around in the latch mechanism, entering through the passenger side (since the release cable pulled from the driver’s side, it made sense that I’d need to push something inside from the passenger side). After a little poking, I heard the hood release.

From there it was just a matter of releasing the secondary latch as per normal, replacing the light, and topping up the windshield washer fluid (since I am not sure when I'm going to open the hood again).

While I was working, I decided to take off the latch mechanism to examine it. This is a matter of unscrewing 3 bolts (each with a 10mm head) – one on each side and one below. Unfortunately the hood release cable prevented me from taking a close look at it, so I just put it back until I get around to fixing the cable.

The entire operation took me about an hour with most of the time spent figuring out what to do. Hopefully this post may save others some time.

Posted in Home repair

Bad Design

Would an omnipotent God create a bad design? That's a question that creationists who know something about biology should ponder. After all, there is no end of design flaws in biological organisms.

However I’m going to write about bad design decisions made by humans. I’m going to talk about technology because that’s where we see examples every day.

I build my own computers, so one classic example of a bad design was the way USB motherboard headers were laid out. Even today motherboards have header pins on them so that builders can connect extra ports to the front or back of computer cases.

USB headers have always been arranged in pairs. One USB port has 4 connections: +5V, signal, return and ground. Two of them can be set up either in parallel, which makes board design easier (you only need to route the signal connections separately), or in opposition (i.e. the +5V on one is next to the ground on the other and vice versa), which makes it impossible to connect the plug incorrectly (no matter which way you insert it, it is correct). Fortunately no one ever set them up in series (the two headers in a straight line rather than side by side)!
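The difference between the two layouts can be sketched in a few lines of Python. This is a toy model: the pin labels are simplified (a real port carries VBUS/+5V, D+, D− and ground), and rotating the header stands in for plugging the connector in backwards.

```python
# Toy model of the two USB header layouts (assumption: simplified pin labels).
ROW = ["5V", "D+", "D-", "GND"]

# Parallel layout: both rows read the same left to right.
parallel = [ROW[:], ROW[:]]

# "In opposition" layout: the second row is reversed, so the 5V pin on
# one row sits beside the ground pin on the other.
mirrored = [ROW[:], list(reversed(ROW))]

def rotate_180(header):
    """Plugging the connector in backwards is a 180-degree rotation:
    the two rows swap and each row reverses."""
    return [list(reversed(header[1])), list(reversed(header[0]))]

def safe(header, plug):
    """A connection is safe only if every wire lands on a pin with the
    same function."""
    return header == plug

print(safe(parallel, rotate_180(parallel)))  # False - reversed plug fries the board
print(safe(mirrored, rotate_180(mirrored)))  # True - either orientation lines up
```

With the mirrored layout the reversed plug simply swaps which port is which, so nothing is damaged.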

In the first (parallel) case, which was distressingly common, putting the plug in the wrong way would damage the motherboard. At the very least you would burn out the USB circuit. The second design was clearly superior but it was rarely used.

The first case could have been corrected by putting a plastic shroud around the header pins, then putting a “key” on the plug that would fit into a notch on the shroud. This would have added perhaps a penny or so to the manufacturing cost, yet you only saw it on high end motherboards.

Because both designs existed, case designers had to allow for either. There were two methods of handling this. The better, and more common, way was to separate the individual USB plugs, so you could plug them in either way. Less common, but sometimes used, was to simply put a plug on each individual wire, so the user had to plug 8 individual wires onto the USB header. WTF!

It took a very long time but eventually the industry came up with a solution. Board designers added a second ground pin to their designs so that the two rows of header pins were asymmetrical – one row was one pin longer than the other. Case manufacturers switched to a 10-hole plug with one hole blocked, so the plug could only fit over the header one way.

I doubt this was significantly cheaper to design or manufacture than the 8-pin methods used earlier, but it allowed everyone to adopt a new standard without causing either of the older designs to be criticized. It “saved face” for everyone.

A slightly different problem arose when video cards started requiring more power than could be provided through the motherboard slots. The original solution was to use the increasingly underused floppy disk power connector. This connector was on every power supply (there were almost always 2 of them) and floppy drives were becoming obsolete so there was no real downside to using one.

This was a short-lived solution as the power demands of video cards climbed. Floppy power connectors were soon replaced by a shortened version of the motherboard power connector. Instead of 20 lines, it had only 6 – 3 power and 3 ground. Thanks to the design of the motherboard connector, it couldn’t be plugged in the wrong way. Like the motherboard connector, the +12V connections were away from the locking clip.

Over time this has increased in size to an 8-pin version.

Unfortunately, the demands for power on the motherboards were also climbing. This was met by extending the motherboard power connector by 4 pins and also by adding a separate 4-pin power connector (usually plugged in near the CPU). Like the video card power connector, this was also a shortened version of the motherboard power connector. Over time it too was extended to an 8-pin version.

Unlike the other two connectors, however, this new connector had the +12V connections on the same side as the locking clip.

You can see where this is going – two connectors with the same physical size and shape, capable of being plugged into two different sockets with different electrical expectations. Plug the wrong one in and you fry the circuitry.

This was, in effect, a monumentally stupid design decision. Moreover there was no reason for it. The original connectors were simply providing power and had no sensing capability. Had both connectors delivered power with the same pinout, no harm would have been done to the video card or motherboard.

You can see examples of the various power connectors at http://www.playtool.com/pages/psuconnectors/connectors.html. The yellow wires are +12V. You can do a web search for explanations of the various connections.

Let’s turn our attention now to something more familiar – television boxes. The first high definition (HD) televisions were rolled out back in 1996. They used 1080i (interlaced), which is still the broadcast standard for HD. Prior to that, the standard was 480i (SD), which had been set back in the 1940s. While there are some 4k transmissions today, any HD TV has to be able to handle 1080i.

DVDs have been available since the standard was set in 1995, and they specify a video format of 480i – the same as SD television. They also allow for a widescreen mode: the stored pixel grid is 1.5:1 (720×480), slightly more square than the HD ratio of 1.77:1, which players stretch to fill a 16:9 screen.

There is also 1080p (progressive scan), which delivers twice the data rate, painting a complete frame 60 times per second instead of the 30 complete frames per second of 1080i (delivered as 60 interlaced fields). Few people can detect the difference, but when Blu-ray became the de facto HD disc standard, it allowed 1080p.
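The "twice the data rate" claim is just arithmetic over complete frames. This rough comparison counts only active pixels, ignoring blanking intervals, colour depth and compression:

```python
# Rough pixel-rate comparison between 1080i and 1080p
# (assumption: active pixels only; no blanking, colour depth or compression).
pixels_per_frame = 1920 * 1080

rate_1080i = pixels_per_frame * 30  # 30 complete frames/s (60 interlaced fields)
rate_1080p = pixels_per_frame * 60  # 60 complete frames/s

print(rate_1080p // rate_1080i)  # -> 2
```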

Recapping, every TV ever made must be able to handle 480i because that is what is output from DVD players and what is still used in a minority of television broadcasts. Every HD TV must be able to handle 1080i because that is still the broadcast standard.

So let’s look at some “smart TV” adapters – small boxes with internet capability (wired or wifi). A lot of them give you two output options – 720p and 1080p. Some even add 4k (2160p). This is another WTF moment. Only the 4k option even matches a broadcast format, and that only for very new TVs. The vast bulk of TVs in use aren’t 4k.

So owners of standard and HD TVs frequently can’t use these boxes to turn their televisions into smart TVs. Conversely, almost all TVs sold today already have smart capability so they don’t need these boxes. In particular, finding a 4k TV without smart capability is hard to do.

It leaves me scratching my head about the market research the manufacturers conducted. They seem to have deliberately relegated themselves to a niche market – adding smart capability to those few TVs that handle 720p or 1080p but don’t already have smart capability. The larger number of 480i and 1080i TVs are ignored.

Modern TVs have adopted a standard developed for computer monitors (EDID), whereby the monitor notifies the device it is attached to of the video formats it can handle. This doesn’t always work, but when it does, it makes life easier for the end user.

Let’s look at what happens when it doesn’t work. I recently acquired a used PVR to connect to my home theatre system. Most of the system is far from state of the art. My main speakers are around 40 years old, while my AV receiver pre-dates HDMI and uses component video instead (component video is as good as HDMI for HD material, but uses a more complicated cable that can be connected wrong if you don’t follow the colour coding. It also doesn’t carry audio, so you need a stereo audio cable as well).

Fortunately the PVR box has component video output. In theory I should just be able to plug it in and it should work (after getting it authorized by my cable TV provider). Unfortunately the designers of the box decided to default to having the box negotiate with the TV to set the output resolution. The result was that I got nothing useful on my screen because, when it couldn’t negotiate, it defaulted to a non-standard mode that my 10 year old television didn’t support.

Neither did a newer, small TV that I had set up in my exercise room.

Both of these TVs were from quality manufacturers (Panasonic and Sony) so the problem isn’t that they were made on the cheap. The problem is that auto-negotiation isn’t reliable. The connecting device needs to be able to fall back to something that will display a reliable signal.

I’m not sure whether the smart TV adapter or the PVR fell back to 720p or 1080p, but neither is a format supported by all TVs.

The obvious solution would be to fall back to 480i, a format supported (as I mentioned earlier) by every TV made since the DVD standard was set in 1995 and by every North American TV ever sold. So why drop back to a mode that cannot be guaranteed to be available?

In fact, why go through auto-negotiation at all? Why not simply display a 480i picture and ask users to set the output mode? You could even have that default to an “auto” setting that falls back to 480i if you don’t get a confirmation within 15 seconds, like some computer operating systems do.
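The fallback logic proposed above might look something like this. This is a hypothetical sketch: `negotiate` and `confirmed` stand in for a real device's negotiation and user-confirmation hooks, which are not described in any actual product here.

```python
import time

SAFE_MODE = "480i"  # supported by every North American TV ever sold

def pick_output_mode(negotiate, confirmed, timeout=15):
    """Sketch of the proposed fallback: try auto-negotiation, but revert
    to 480i if it fails or the user never confirms the negotiated mode
    within `timeout` seconds."""
    mode = negotiate()            # returns a mode string, or None on failure
    if mode is None:
        return SAFE_MODE
    deadline = time.time() + timeout
    while True:
        if confirmed():           # e.g. the user pressed OK on the new mode
            return mode
        if time.time() >= deadline:
            return SAFE_MODE      # no confirmation - assume the screen is blank
        time.sleep(0.5)
```

For example, `pick_output_mode(lambda: None, lambda: False)` returns "480i" immediately, while `pick_output_mode(lambda: "1080i", lambda: True)` keeps the negotiated mode.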

In the end, I had to connect my PVR to a TV that handled 1080p so I could change the display setting from auto to 1080i. Fortunately the PVR kept that setting between power outages while I moved it between TVs.

Modern automobiles are another case in point for bad design. In recent years they have replaced the myriad of knobs, buttons and switches with a small touch screen control console – usually in the centre of the dash.

It doesn’t take a genius to realize that the last thing you want is for the driver to take their eyes off the road to adjust something. Yet that is exactly what any function controlled by a touch screen requires. While I can switch the A/C on or off, or change its temperature, in my car with the push of a button or the turn of a knob – requiring a fraction of a second to ensure my finger is on the right control – modern cars require you to navigate menus before even finding the right control.

Moreover there is no tactile feedback to let you know you succeeded. You need to keep your eyes on the screen to verify that the setting was properly changed.

Star Trek fans will recognize this. The captains and navigators had physical buttons and levers; only the people not responsible for the speed and direction of the ship used touch displays. Apparently SciFi writers did a better job of designing controls than automotive designers, despite the writers’ decisions having no potentially fatal consequences.

Intelligent design of technology requires giving a little thought to your design decisions then testing them before embedding those decisions in products. Apparently human intelligent designers aren’t too different from the cosmic intelligent designer creationists believe made all those design errors in biological organisms.

Posted in Economics, Home repair, Science and Mathematics

Consciousness

I keep reading a lot of what I consider to be nonsense about consciousness, artificial intelligence and similar topics. Typically this is phrased as “can an AI be conscious?”, “will AI ever achieve consciousness?” or “what is consciousness?”. The purveyors of such questions then proceed to engage in a series of arguments to back a position they have taken.

My own take is that the answer is 42. Like in Douglas Adams’ badly labeled trilogy, the question is wrong. Consciousness is not a single thing or even a single quantity of things. It has many aspects.

One of the most basic aspects of what is frequently called consciousness is an awareness of self as distinct from other non-self things. In humans, we generally believe that we are our body (except for the rare cases where people believe one or more appendages are not part of their body). As our body changes over time, we still view it as our self. These are my hands that are typing this. This is my brain that is putting these thoughts into words.

I believe we see our bodies as ourselves mainly because of our senses. We have nerves throughout our bodies that signal our conscious mind when they feel something. Moreover, we can react to sensory input to pull our hands away from something hot or to catch a ball coming toward us. Even though these actions may not involve any conscious decisions on our part, they help reassure us that we are our bodies.

Clearly this is not limited to humans. Even amoebae seem to be able to respond to their environment. However at some point as organisms increase in complexity, they start becoming aware that they are their body. A cat, for example, will clean itself and sharpen its claws.

Conversely, even advanced robots usually get very little in the way of feedback from their bodies. A robotic arm in a manufacturing plant doesn’t usually have internal sensors to determine its position. It simply moves to specific places based on what its controller tells it.

Even a device with sensors rarely gets the exquisite level of feedback that a vertebrate’s body provides. Your phone, for example, might have a GPS locator and position/orientation sensors but wouldn’t have a clue as to the type of surface it is resting on.

The next level toward what we call consciousness is the development of a complex brain. The brain synthesizes the sensory inputs into an internal model of the body it is within and the world around it. Sufficiently complex brains can even recognize representations of their own bodies, such as identifying their reflection in a mirror.

The key capability here, I believe, is that the brain can monitor its own activity. I know that I am thinking about consciousness while I am typing this sentence. This is, I believe, what gives us the feeling that we are conscious beings. Even though neuroscientists will tell us that we make decisions before our conscious mind is aware that we have done so, the fact that our mental monitor notices that we have made a decision is the key part of consciousness.

Once our self-monitoring mind notices, it can reflect on why we made that decision. Like most mental processes, this mainly happens at a level below our mental monitor. It’s only when the gears have finished churning that we become aware of why we believe we decided something.

Whether that belief is true or not is actually irrelevant. Because our mental monitor thinks it is true, we believe that we made the decision consciously. This also gives us the capability to re-evaluate that decision based on further information – we can change our minds.

So the problems I see with current AI are twofold. Firstly, without a sense of self created by a network of internal and external sensors, we won’t get anything like human consciousness.

Secondly, while we are building vast neural networks crammed with facts, we are so far not building self-monitoring mechanisms into our artificial brains. We’re not asking them to think about why they are coming to the decisions they do.

Without these two factors, I doubt that machine consciousness, no matter how advanced AI gets, will ever be similar to human consciousness. We may be able to build AIs that can out-play Go masters, but we won’t be able to recognize them as conscious and they will be unable to dispute that point.

Posted in Education, History, Religion, Science and Mathematics

The more things change…

“The more things change, the more they stay the same” is a very old adage that self-referentially proves itself over and over again. The Athenians didn’t trust elections because they favoured the oligarchy and preferred to draw lots to determine representatives – an early example of proportional representation.

In ancient Rome, to be elected to office, you needed to get the “mob” on your side. This was done by putting on shows, giving out food or favours and other expensive ways of convincing the people that you were on their side. Unless you were wealthy, this required borrowing a lot of money that the lenders expected back with interest. To repay it, you had to milk the office for every penny you could squeeze out of it.

And even if you were wealthy, you still expected that your election would bring back more than it cost. Roman politics were the classic example of institutionalized corruption.

Fast forward to the Trump administration and you see the same things happening. Trump’s election was expensive and he’s been paying back his major backers with contracts, tax cuts and deregulation of industries. He’s also blocked multiple attempts to impose sanctions against Russia, one of his major backers.

In Ontario, we’re seeing Doug Ford following this same strategy. Though he was born wealthy and displays an absurd sense of entitlement, his family’s annual picnics seem to convince his supporters that he’s a “man of the people”. Not holding any real employment, he also gets to “meet and greet” people while other party leaders have to be in the legislature.

He knows that many people have precarious employment and are struggling to make ends meet. Rather than addressing the underlying problems with employment rules and tax laws, he’s scapegoating people whose unionized public service positions give them some security. They, he suggests – not he and his rich buddies who are causing the problems – are the ones we should deplore.

One reason his charges stick is because the Liberals have a long history in Ontario and across Canada of doing too little to make any real change. Indeed, they’ve played tag-team with the (Progressive) Conservatives to implement the employment and tax laws that have created the current problems.

In the decades since Clarence Gillis and Tommy Douglas popularized the Mouseland tale, we’ve seen many examples of it in action. However this is just another example of how “elections favour the oligarchy” that the Athenians noted.

The problem isn’t elections per se, but rather their winner-take-all nature. Electing a single representative per district cannot be proportional no matter how you count the votes in that district. Most people’s votes aren’t going to count toward getting their sincere first choice of representative.

The Athenian system of drawing lots worked because it took money and influence out of the equation. Yet this same system has been attacked in Canada as undemocratic when used to pick representatives for citizens assemblies, for example. People today conflate democracy with voting.

Proportional elections keep the Athenian ideal that the assembly should be fully representative while still minimizing the impact of money and influence on the elections. As Abraham Lincoln observed, not everyone is going to be fooled by politicians who put on shows or give out food and favours. When their votes count, the influence of the oligarchy is muted. It won’t disappear, but at least it isn’t amplified by a winner-take-all election.

Posted in Economics, Electoral Reform, History, Politics

Ontario PC leadership contest

The Toronto Star recently tried to compare the Ontario PC leadership race to the American Electoral College (EC) system. While there are some similarities, the comparison is mostly a bad one.

The American EC has members from each state based on their Congressional seats. The number of seats each state has in the House of Representatives depends on its population (i.e. they are assigned proportionally) while the number of Senate seats is always 2. This gives the smaller states slightly more EC votes than their populations alone would warrant.

With the exception of Maine and Nebraska, each state’s EC seats are winner-take-all. Maine and Nebraska award 2 of their EC votes to the statewide winner and 1 to the winner in each congressional district.

The Ontario PCs give each riding up to 100 votes. If more than 100 members vote, their votes are divided proportionally. If fewer than 100 members from a riding vote, their votes are counted directly. For example, in a riding with a small membership, 25 people could vote for A and another 12 vote for B. These votes would be counted directly.

However if a larger riding had 250 people voting for A and 120 voting for B, the votes would be scaled to fractions of 100. A would get 100×250/370, or about 68 votes, while B would get 100×120/370, or about 32 votes.
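The weighting rule described above is easy to express in code. This is a sketch of the scheme as described in this post, not the party's actual tallying software:

```python
def riding_votes(raw):
    """Ontario PC weighting (as described above): a riding's votes are
    capped at 100 points. If 100 or fewer members voted, the raw counts
    are used directly; otherwise each candidate gets a proportional
    share of 100 points."""
    total = sum(raw.values())
    if total <= 100:
        return dict(raw)
    return {name: 100 * count / total for name, count in raw.items()}

print(riding_votes({"A": 25, "B": 12}))    # small riding: counted directly
print(riding_votes({"A": 250, "B": 120}))  # large riding: A ~67.6, B ~32.4
```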

While the votes for each candidate are assigned proportionally within a riding, they are not equal between ridings. A vote from a member in a small riding is worth more than a vote from a member in a large riding.

In effect the American EC system is the inverse of the Ontario PC’s system. The EC votes are mostly dependent on the population of each state but are almost always assigned on a winner-take-all basis. The Ontario PCs, on the other hand, treat each riding as equal but allocate the votes proportional to each riding’s actual votes.

The other difference is that the Ontario PCs use a preferential ballot, so the eventual winner requires a majority of the (weighted) votes. American voters only get to vote for one candidate. If the EC fails to elect a President with an absolute majority of their votes, the decision goes to the House of Representatives.

Neither process is very good. While there is no good way to elect a person to a position of extraordinary power, these are two of the worst. I say that because they violate an inherent principle of representative democracy: that all voters should have equal votes (the power of your vote should not depend on where you live or whom you vote for).

In the American EC system, your vote is counted equally regardless of where you live, but because the EC is elected using a winner-take-all system, excess votes for the winning candidate and votes for the losing candidates have no power.

Conversely, the Ontario PC system diminishes your vote if you live in a riding with more than 100 members. The larger the local riding association, the less your vote is worth.

I’m not going to recommend ways to fix these processes because, as I said above, there is no good way to elect someone to a position with extraordinary power. Electing a King doesn’t make you a democracy.

Posted in Electoral Reform, History, Politics

Wages and Prices

Some people are predicting gloom & doom over Ontario’s upcoming minimum wage increase. On January 1, 2018 the minimum rate for most workers rises from $11.60 to $14.00. Some people claim this will lead to massive job losses and increased costs for everyone.

Simple math should dispel these notions but most people don’t seem to want to do it. So here’s my basic take on the situation:

The minimum wage is currently $11.60 and will increase in 2018 to $14. That is an extra $2.40 per hour. If a store keeps one minimum-wage employee on duty for 16 hours per day, that’s an increased cost of $38.40 per day, or $1,152 per 30-day month.

While that sounds like a lot, please note that it only applies to businesses that hire employees. Mom & Pop shops won’t see any increase since they have no employees. While bigger operations will have to pay more, the payroll costs will be lost in the other business expenses.

For example, the local coffee shop is pulling in hundreds of dollars per employee-hour while paying out thousands monthly in rent, heating, lighting and supplies. Since those costs aren’t changing, neither should the cost of a coffee. Even if an employee only sells 20 cups of coffee per hour, the increase would be fully covered by a 12 cent rise in the price of a cup (and no increase in the price of other items).

In the local No Frills, each cashier processes thousands of dollars of groceries per hour. Their wages are lost in the other costs, so you shouldn’t see any price increase. The same thing holds true for the people responsible for receiving and stocking the shelves. The cost of staff handling the merchandise is far less than a penny an item.
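The arithmetic in the last few paragraphs can be checked in a few lines, assuming (as above) one minimum-wage employee on shift, a 16-hour day and a 30-day month:

```python
# Checking the minimum-wage arithmetic (assumptions: one employee on
# shift, 16-hour day, 30-day month, as in the examples above).
hourly_increase = 14.00 - 11.60              # $2.40 more per hour

daily = hourly_increase * 16                 # ~$38.40 per day
monthly = daily * 30                         # ~$1,152 per month

per_cup = hourly_increase / 20               # 20 cups/hour -> ~$0.12 per cup
per_grocery_dollar = hourly_increase / 2000  # cashier ringing ~$2,000/hour
```

Spreading the increase over what a cashier handles works out to roughly a tenth of a cent per dollar of groceries – negligible, as the post argues.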

Other staff, such as supervisors and accountants, aren’t earning minimum wage so their costs remain the same.

Those two jobs represent the extremes of retail – a low volume coffee shop versus a high volume grocery. At the one end, labour costs are a significant factor while at the other, they are negligible so far as the consumer is concerned.

People predicting increased costs are simply wrong. Some businesses may need to raise their prices marginally but most won’t need to raise any prices.

So why are business owners so opposed to raising the minimum wage? Generally it’s because they want to keep all costs down, whether they are significant or not. Every dollar they pay out is one less dollar they make, so far as their vision extends.

What they don’t see is that a lack of customers also limits their income. People earning the minimum wage don’t make discretionary purchases, like a cup of coffee, very often. They only buy food that is on sale and get some of their sustenance from the food banks. They aren’t fixing up their homes or apartments. They aren’t buying new clothes.

In Ontario almost 3 out of 10 workers earn close to the minimum wage. Increasing their buying power brings a lot of new consumers to the marketplace. They’re not going to be traveling abroad to spend their money. They’re going to be spending it locally.

They’re not going to be hiding their income in offshore accounts either. It will be taxed, as will their spending, with the revenues going back to our governments. This will help keep taxes down – something that the people arguing against increasing the minimum wage seem to value.

Of course you don’t have to just argue mathematics over this. There is ample evidence that raising the minimum wage doesn’t raise prices or increase unemployment. Indeed, one need only look at European nations where the lowest wages are far higher than what Ontario pays, to see that high minimum wages are beneficial.

What this really is about is that business owners see things in terms of competition. If you win, I lose. Their minds are generally not thinking in terms of societies or long term benefits. Indeed capitalism is envisioned as a purely reactive system with any ideas about planning considered anathema.

Like any ideology, however, purity is at odds with reality. America in the 19th century, when capitalism was at its purest, was no utopia. Attempts to emulate it have never led to good results.

Raising the minimum wage also won’t bring about utopia. However it is a step in the right direction that is welcome after decades of regression.

Posted in Economics, History, Labour, Politics, social programs

Witch Hunting

I’m getting fed up with all the people coming forward to denounce people over alleged sexual wrongdoings. They are demeaning the very real complaints being made against sexual predators. And they are destroying the reputations and careers of people who have done little to deserve it.

The case of George Takei is what pushed me over the edge. The complainant makes allegations about something that happened 36 years ago, when he was drunk to the point of passing out. Presumably Takei was in a similar state. The complainant alleges that Takei sexually assaulted him. It might have happened, or he might have misinterpreted or misremembered what happened, or he could just be making it up. I don’t know.

However it’s clear that Takei isn’t a Bill Cosby, drugging his victims. Nor is he a Weinstein or Ailes threatening careers if they don’t go along. This also isn’t a male versus female situation where the assaulter has a presumed strength advantage.

At worst Takei once attempted to take advantage of someone who had consumed the same amount of alcohol as he had. Or perhaps he didn’t. I don’t care. If it did happen as reported, Takei learned from the experience and didn’t repeat it. Many people have done much worse when they’ve had a few, yet not learned anything. This includes Takei’s accuser, who drove shortly after the incident despite having apparently consumed too much alcohol.

Woody Allen was lambasted for saying that he hoped the Weinstein affair didn’t lead to a witch hunt. Sadly it appears that his fears were justified.

George Takei has done a lot of good with his life. Even if the allegations are true, he shouldn’t be judged by one decades-old mistake. Let’s save our condemnations for people like Cosby, Ailes and Weinstein and others who abuse their positions to do significant harm to people. Let he who is without sin cast the first stone.

Posted in Arts, History, Religion, social programs