human contact

I’m getting fed up with all the nonsense about Joe Biden, including the latest cartoon in today’s Toronto Star. Some people believe that all contact between people needs to be preceded by explicit consent. I suggest this is rubbish. It is an overreaction to the way sexual assault and rape have been ignored and downplayed.

I counter with this short article from Skeptic magazine that argues that touch is something we all need. There is a vast gulf between Trump’s “grab them by the pussy” and Biden’s reassuring touches that should not be minimized by people trying to show that “we criticize our side just as much as we do the other side”.

I will also point out that activities that don’t involve physical contact can still be horrible. When Harvey Weinstein masturbated in front of a woman, there was no physical contact but the woman was still used for his sexual desires without her consent.

Intent matters. A friendly pat on the back or arm around the shoulders is not the same as a pat on the ass or a hand on the thigh.

The situation also matters. When young people are engaging in their first romantic relationships, verbal consent while engaging in passionate kissing would be bizarre. Instead, both parties use physical cues to determine how far to go.

Some women have complained that Biden made them feel uncomfortable. To which I reply “so?” There is no right to never feel uncomfortable. I once reported to a manager who, while a nice person, somehow always made me feel uncomfortable. That was my problem, not theirs.

And that person was hardly unique. Lots of people have made me uncomfortable. And I’ve probably made a lot of people uncomfortable too. Welcome to society.

I often feel uncomfortable when my views are challenged or when I’m undertaking new things. Being uncomfortable means that I’m in a position to become better.

I was running for office in Toronto back in 2003 during the SARS outbreak. Each morning my team would go to transit stops to shake hands, talk to people and hand out literature. In theory, shaking someone’s hand, or even being on the same bus with them, could be a death sentence.

However, people still went to work and we still campaigned because at that time there was no evidence that SARS had spread beyond a few isolated cases. It turns out no one we talked to gave us SARS or vice versa.

That didn’t mean that the possibility didn’t cross my mind. However, I had to dismiss it because treating it like a West African Ebola outbreak wasn’t called for. Different degrees of threat call for different responses.

Joe Biden is not Paul Bernardo nor Donald Trump. He doesn’t have a Weinstein-type group protecting him from criminal charges. He’s just someone who doesn’t shy away from physical contact but instead uses it to express himself.

The media should stop piling on Joe Biden and instead start looking at the science behind the human need for physical contact. Go after the predators but leave the huggers and shoulder rubbers alone.

Posted in Education, Health & Healthcare, History, Politics, Religion

AV is less proportional than FPTP

In a representative democracy, the most important aspect of any voting system is “how proportional is it?” That is, “how well do people’s votes translate into representation?” After all, we are talking about representative democracies. If the elected assembly doesn’t reflect the way people voted, how democratic can it claim to be?

While there are various methods to measure proportionality, they all come down to how well the election outcomes match the voters’ choices. If 20% of the voters voted one way, that should translate into 20% of the representatives.
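While the post doesn’t name a specific measure, one widely used one is the Gallagher (least-squares) index, which is zero for a perfectly proportional result and grows as votes and seats diverge. A minimal sketch in Python:

```python
import math

def gallagher_index(vote_shares, seat_shares):
    """Least-squares disproportionality: 0 means perfectly proportional.
    Shares are percentages; each list should sum to 100."""
    return math.sqrt(0.5 * sum((v - s) ** 2
                               for v, s in zip(vote_shares, seat_shares)))

# A party with 20% of the votes and 20% of the seats contributes nothing;
# mismatches between votes and seats drive the index up.
print(gallagher_index([50, 30, 20], [50, 30, 20]))   # 0.0
print(gallagher_index([40, 30, 20, 10], [60, 35, 5, 0]))  # ≈ 19.4
```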

In the case of a ranked ballot, that could be their first choice. In this respect, then, the Alternative Vote (AV) is clearly less proportional than First Past the Post (FPTP): either you get the FPTP result, or someone who was the first choice of fewer voters wins.

AV advocates might argue that their system allows people to vote sincerely and that by taking secondary preferences into account, it does a better job of reflecting voter preference. There are numerous problems with that argument however.

Firstly, AV doesn’t always allow people to vote sincerely because you only get your secondary choices counted if they stay on the ballot longer than your first choice. Voting sincerely could elect someone you don’t want while voting tactically for a secondary choice under FPTP would prevent that.
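To make the elimination mechanics concrete, here is a minimal sketch of an AV (instant-runoff) count in Python; the ballots are invented for illustration. Note how the first-round leader A loses once C is eliminated, even though B was the first choice of fewer voters than A:

```python
from collections import Counter

def av_winner(ballots):
    """Alternative Vote (instant runoff): repeatedly drop the candidate
    with the fewest first-preference votes until someone has a majority.
    Each ballot is a list of candidates in preference order."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked remaining candidate;
        # exhausted ballots (all choices eliminated) drop out.
        tally = Counter(next(c for c in b if c in candidates)
                        for b in ballots if any(c in candidates for c in b))
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        candidates.discard(min(tally, key=tally.get))

# First-round count: A=8, B=7, C=6. A leads, but once C is dropped
# the C ballots fold into B, who wins 13 to 8.
ballots = [["A"]] * 8 + [["B", "A"]] * 7 + [["C", "B"]] * 6
print(av_winner(ballots))  # B
```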

Of course, given that Canada’s parliamentary traditions allow the party with the most seats first crack at forming the government, regardless of which voting system is in place, winner-take-all elections (e.g. AV, FPTP) require voters to vote strategically to prevent the party they most dislike from winning the most seats.

The deeper problem, and my second argument against the AV position, looks at the overall assembly that is elected. In any given election, the assembly elected using AV will be virtually identical to the one elected if FPTP had been used. This is confirmed by the data from AV elections in Australia, Canada and the U.S.: at least 19 times out of 20, the first-round leader wins.

So what happens in the fewer than 1 time in 20 that the first round leader doesn’t win? While AV advocates cheer this as an example of their system working, in actuality it demonstrates how their system is even less representative than FPTP.

The issue comes down to the representation of smaller groups. Proportional systems use the concept of a quota to determine how many votes it takes to elect a candidate. The two quotas in use are the Hare and the Droop, named after their inventors. The Hare quota uses the formula T/S, where T is the total number of votes and S is the number of seats to be elected. The Droop quota uses 1 + T/(S+1), rounded down.

If you are electing 1 candidate, Hare would require that candidate to get all the votes, while Droop only requires that they get half the votes plus one. This brings us to the issue of electing candidates who don’t make the quota. Despite various techniques, it boils down to the winner being the candidate with the most votes.
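Using the standard formulas, the two quotas can be computed directly:

```python
def hare_quota(total_votes, seats):
    # Hare: total votes divided by seats.
    return total_votes / seats

def droop_quota(total_votes, seats):
    # Droop: the smallest whole number of votes such that only
    # `seats` candidates can possibly reach it.
    return total_votes // (seats + 1) + 1

# Single-seat district with 10,000 votes:
print(hare_quota(10_000, 1))   # 10000.0 -> effectively unreachable
print(droop_quota(10_000, 1))  # 5001    -> a bare majority
# Four-seat district:
print(hare_quota(10_000, 4))   # 2500.0
print(droop_quota(10_000, 4))  # 2001
```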

The two systems are discussed more fully elsewhere, but the basic point is that, because Hare requires more votes than Droop, it favours smaller groups while Droop favours larger ones. With Hare, more of the votes for the larger groups go to electing the candidates who make quota, leaving fewer to pass on to candidates who don’t make the quota.

You can think of AV as a Droop-quota system and FPTP as a Hare-quota system. Since a single candidate can rarely make the Hare quota (all of the votes), FPTP selects the best sub-quota candidate. AV keeps eliminating candidates until someone makes the Droop quota (a majority of the remaining votes).

While the Droop quota works to ensure that the winners best represent the district, when the district magnitude is 1, this means that only the larger groups get represented.

However, winner-take-all elections already favour the larger groups. Even if, in the fewer than 1 election in 20 where the systems differ, the come-from-behind AV winner is more representative of their district than the first-round leader, FPTP elects more candidates from smaller groups. This makes it more proportional overall.

The real problem, and one resolved by the Athenians over 2,000 years ago, is that you cannot guarantee to simultaneously elect the “best” representative in multiple single member districts and elect an assembly that properly represents the way voters across the jurisdiction voted.

The Athenians resolved the dilemma by drawing winners at random, so their Council of 500 statistically represented the population while abandoning the idea of having a “best” local representative. Every winner-take-all election tries to elect a “best” local representative, which prevents the elected assembly from statistically representing the population that elected it. The harder it tries to elect the best local representative, the less proportional it becomes.

Proportional representation bypasses the issue by abandoning the idea of a single local representative. Local voters are represented according to their votes in multimember districts. To the degree that the various districts elect representatives proportional to their votes, the overall assembly has both local and overall representation.

Short of proportional representation, FPTP is the most proportional winner-take-all system. This still doesn’t make it good. It is simply the least awful.

A third objection to AV is that it strengthens the influence of money in our elections. Aristotle favoured drawing representatives by lot because, even in his time, money and power were known to have undue influence in elections.

FPTP limits the power of money because moneyed interests can only back one candidate lest they split the vote. This allows candidate spending limits to be effective.

AV, on the other hand, allows moneyed interests to back multiple candidates on the basis that their votes eventually fold in to those of their preferred candidate. As we see in American elections, there is a lot of money that could be spent on elections if our voting system and spending controls allowed it. Change the voting system to remove vote splitting and you open the floodgates.

This becomes even more critical in Ontario municipal elections where political parties are not allowed. While political parties give voters an idea of what candidates stand for and allow election coverage to focus on policy issues, in elections without parties voters are left to try to figure out what candidates actually stand for by reading campaign literature and interviews, or by attending debates. When moneyed interests back multiple candidates it becomes harder to figure out who really cares about the issues you care about.

To make matters worse, with more candidates on the ballot, name recognition becomes critical. Voters are likely to rank someone they’ve heard of over someone they’ve not. Or they may, as happens in Australia, resort to how-to-vote cards, where the parties tell them how to rank the candidates on the ballot.

Even if AV could potentially elect a “better” local candidate, it seems that it is unlikely to do so. Instead it favours candidates backed by interests with deep pockets.

Finally, I will point out that the secondary choices of the supporters of the most popular candidates are never considered under AV. Your secondary preferences only come into play if your preferred candidate is dropped. The vast bulk of votes have secondary choices that do little more than increase the likelihood of spoiling the ballot.

The most important job of an election in a representative democracy is to create an assembly that truly represents the voters. Winner-take-all elections only do this accidentally if at all. People who try to improve upon FPTP by replacing it with another winner-take-all system are wasting their time. Every supposed improvement actually makes the election even less representative.

To make matters worse, this truth is usually quickly discovered, leading to the abandonment not just of the “improvement” but of the very notion of electoral reform as a worthwhile endeavour. This makes it much harder for real reform to take place.

Posted in Electoral Reform, Science and Mathematics

M-Disc “marketing”

WTF is with optical drive manufacturers these days? I know that optical drives for computers are considered passé by some people, but they are still very useful and are being developed to do some very impressive things that other technologies cannot match.

In particular, there is a disc technology called M-Disc that has a 1,000-year life expectancy – similar to what a pressed CD, DVD or Blu-ray has, but far longer than what a normal writable CD, DVD or Blu-ray disc will last (a lifespan typically measured in years before they go bad). It even outlasts the USB sticks that might, if properly stored, keep your data for decades.

And while USB sticks can hold large amounts of data, so can M-Discs. The current (quad-layer) discs can hold 128 GB of data. This is far more than most people would be willing to purchase as “cloud storage” and, being offline most of the time, it’s also hard to “hack”.

Because M-Discs have the same reflectivity as pressed CDs, DVDs, and Blu-rays, they can be read by any drive that can handle those formats. You can make a backup copy of your favourite movie that will last as long as the original.

If you want to back up your home movies, photos, or even your music, M-Disc is the way to go.

Being able to burn M-Discs sounds like a killer feature – essential for anyone interested in optical storage – so you’d expect that optical drive manufacturers would be letting people know that their drives can do it. However you’d be wrong.

While there is an M-Disc logo, few drives sport it. Even drives that supposedly can burn M-Discs might not. I discovered this because I have been looking for a Blu-ray burner that can handle M-Discs. Try searching for it on a manufacturer’s site – let me know if you can find it.

What I have found are some vendors bundling M-Discs with products that otherwise have no indication that they can burn them. The vendors don’t say the discs can be used by the burner, but the implication is clear. Am I supposed to take a chance?

I have no idea what the various manufacturers are thinking, but by not telling people whether or not their products can support M-Disc, they are hastening the demise of a useful technology.

Posted in Arts, History, Internet, Science and Mathematics

Opening the hood on a Toyota Corolla after the hood release cable breaks

This post is to document how I managed to replace a headlight after the hood release cable broke when I tried to open the hood. There are no great underlying lessons here other than the value of perseverance.

When the hood release cable breaks, the hood is locked down. While apparently the way to release it depends on the automobile in question, this method worked for my 2004 Corolla and should be applicable to similar models.

First lesson: when you feel the hood release go slack (probably after hearing a snap), don’t close the hood until you have fixed what you opened it to repair.

Second lesson: study the situation from various angles. Lights help!

After peering through the grill from the front, and looking up at the latch from underneath the front of the vehicle, I discovered that there was a plastic shroud covering the latch mechanism. To remove it, I had to poke a long screwdriver through the grill to pop it off at both ends. Then I had to go back underneath the car to pull it out from a metal bracket that was preventing it from dropping down any further.

While still under the car, I used my long screwdriver again to poke around in the latch mechanism, entering through the passenger side (since the release cable pulled from the driver’s side, it made sense that I’d need to push something inside from the passenger side). After a little poking, I heard the hood release.

From there it was just a matter of releasing the secondary latch as normal, replacing the light, and topping up the windshield washer fluid (since I am not sure when I’m going to open the hood again).

While I was working, I decided to take off the latch mechanism to examine it. This is a matter of unscrewing 3 bolts (each with a 10mm head) – one on each side and one below. Unfortunately the hood release cable prevented me from taking a close look at it, so I just put it back until I get around to fixing the cable.

The entire operation took me about an hour with most of the time spent figuring out what to do. Hopefully this post may save others some time.

Posted in Home repair

Bad Design

Would an omnipotent God create a bad design? That’s a question that creationists who know something about biology should ponder. After all, there are no end of design flaws in biological organisms.

However I’m going to write about bad design decisions made by humans. I’m going to talk about technology because that’s where we see examples every day.

I build my own computers so one classic example of a bad design was the way USB motherboard headers were laid out. Even today motherboards have header pins on them so that builders can connect extra ports to the front or back of computer cases.

USB headers have always been arranged in pairs. One USB port has 4 connections: 5V, signal, return & ground. Two of them can either be set up in parallel, which makes board design easier (you only need to route the signal connections separately), or in opposition (i.e. the 5V on one is next to the ground on the other and vice versa), which makes it impossible to connect the port incorrectly (no matter which way you insert the plug, it is correct). Fortunately no one ever set them up in series (the two headers in a straight line rather than side by side)!

In the first case, which was distressingly common, putting the plug in the wrong way would damage the motherboard. At the very least you would burn out the USB circuit. The second design was clearly superior but it was rarely used.

The first case could have been corrected by putting a plastic shroud around the header pins, then putting a “key” on the plug that would fit into a notch on the shroud. This would have added perhaps a penny or so to the manufacturing cost, so you only saw it on high end motherboards.

Because both designs existed, case designers had to allow for either. There were two methods of handling this. The better, and more common, way was to separate the individual USB plugs, so you could plug them in either way. Less common, but sometimes used, was to simply put a plug on each individual wire, so the user had to plug 8 individual wires onto the USB header. WTF!

It took a very long time but eventually industry came up with a solution. Board designers added a second ground pin to their designs so that the two rows of headers were asymmetrical. One side was one pin longer than the other. Case manufacturers switched to using a 10 hole plug with one hole plugged, so the plug could only fit over the header one way.
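A toy model of why the asymmetric header can’t be connected backwards (the pin positions here are schematic, not an exact USB pinout): with one pin removed and the matching plug hole blocked, only one orientation lines every pin up with an open hole.

```python
# Model the keyed internal USB header: 2x5 positions flattened to a
# list, True where a pin (or open plug hole) exists.

HEADER = [True] * 10
HEADER[9] = False          # pin 10 removed -> rows are asymmetric

PLUG_OPEN = [True] * 10
PLUG_OPEN[9] = False       # matching plug hole is blocked

def fits(header_pins, plug_holes):
    # The plug fits only if every pin lines up with an open hole.
    return all(not pin or hole for pin, hole in zip(header_pins, plug_holes))

print(fits(HEADER, PLUG_OPEN))        # True: correct orientation
print(fits(HEADER, PLUG_OPEN[::-1]))  # False: reversed plug is blocked
```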

I doubt this was significantly cheaper to design or manufacture than the 8-pin methods used earlier, but it allowed everyone to adopt a new standard without causing either of the older designs to be criticized. It “saved face” for everyone.

A slightly different problem arose when video cards started requiring more power than could be provided through the motherboard slots. The original solution was to use the increasingly underused floppy disk power connector. This connector was on every power supply (there were almost always 2 of them) and floppy drives were becoming obsolete so there was no real downside to using one.

This was a short-lived solution as the power demands of video cards climbed. Floppy power connectors were soon replaced by a shortened version of the motherboard power connector. Instead of 20 lines, it had only 6 – 3 power and 3 ground. Thanks to the design of the motherboard connector, it couldn’t be plugged in the wrong way. Like the motherboard connector, the +12V connections were away from the locking clip.

Over time this has increased in size to an 8-pin version.

Unfortunately, the demands for power on the motherboards were also climbing. This was met by extending the motherboard power connector by 4 pins and also by adding a separate 4-pin power connector (usually plugged in near the CPU). Like the video card power connector, this was also a shortened version of the motherboard power connector. Over time it too was extended to an 8-pin version.

Unlike the other two connectors, however, this new connector had the +12V connections on the same side as the locking clip.

You can see where this is going – two connectors with the same physical size and shape, capable of being plugged into two different sockets with different electrical expectations. Plug the wrong one in and you fry the circuitry.
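A toy sketch of the hazard, using the rail layouts as the post describes them (+12V away from the clip on the video card plug, on the clip side for the CPU plug – schematic, not exact pinouts): two plugs that fit the same socket deliver opposite rails at every position.

```python
# Schematic rail layouts based on the post's description; real PCIe
# and CPU (EPS12V) pinouts differ in detail.

PCIE_8PIN = {"clip_side": ["GND"] * 4, "far_side": ["12V"] * 4}
CPU_8PIN  = {"clip_side": ["12V"] * 4, "far_side": ["GND"] * 4}

def mismatched_rails(plug, socket_expects):
    """Count positions where the plug delivers a different rail
    than the socket expects."""
    return sum(p != s
               for side in ("clip_side", "far_side")
               for p, s in zip(plug[side], socket_expects[side]))

print(mismatched_rails(PCIE_8PIN, PCIE_8PIN))  # 0 -> correct plug
print(mismatched_rails(CPU_8PIN, PCIE_8PIN))   # 8 -> every rail wrong
```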

This was, in effect, a monumentally stupid design decision. Moreover, there was no reason for it. The original connectors were simply providing power and had no sensing capability. Had they been delivering power the same way, no harm would have been done to the video card or motherboard.

You can see examples of the various power connectors online; the yellow wires are +12V. You can do a web search for explanations of the various connections.

Let’s turn our attention now to something more familiar – television boxes. The first high-definition (HD) televisions were rolled out back in 1996. They used 1080i (interlaced), which is still the broadcast standard for HD. Prior to that, the standard was 480i (SD), which had been set back in the 1940s. While there are some 4K transmissions today, any HD TV has to be able to handle 1080i.

DVDs have been available since 1995 and they specify a data rate of 480i – the same as SD television – although they also allow for a wide-screen format. (The raw 720×480 frame is 1.5:1 – slightly more square than the HD ratio of 1.78:1 – but it is stretched to 4:3 or 16:9 for display.)

There is also 1080p (progressive scan), which delivers twice the data rate, painting a complete screen 60 times per second instead of the 30 times per second of 1080i. Few people can detect the difference, but when Blu-ray became the de facto HD disc standard, it allowed 1080p.

Recapping, every TV ever made must be able to handle 480i because that is what is output from DVD players and what is still used in a minority of television broadcasts. Every HD TV must be able to handle 1080i because that is still the broadcast standard.

So let’s look at some “smart TV” adapters – small boxes with an internet capability (wired or wifi). A lot of them give you two output options – 720p and 1080p. Some even add 4K (2160p). This is another WTF moment. Only the 4K option even matches a required data rate, and that only for very new TVs. The vast bulk of TVs in use aren’t 4K.

So owners of standard and HD TVs frequently can’t use these boxes to turn their televisions into smart TVs. Conversely, almost all TVs sold today already have smart capability so they don’t need these boxes. In particular, finding a 4k TV without smart capability is hard to do.

It leaves me scratching my head about the market research the manufacturers conducted. They seem to have deliberately relegated themselves to a niche market – adding smart capability to those few TVs that handle 720p or 1080p but don’t already have smart capability. The larger number of 480i and 1080i TVs are ignored.

Modern TVs have adopted a standard developed for computer monitors (EDID) where the monitor notifies the device it is attached to about the video formats it can handle. This doesn’t always work, however; when it does, it makes life easier for the end user.

Let’s look at what happens when it doesn’t work. I recently acquired a used PVR to connect to my home theatre system. Most of the system is far from state of the art. My main speakers are around 40 years old, while my AV receiver pre-dates HDMI and uses component video instead. (Component video is as good as HDMI for HD material, but uses a more complicated cable that can be connected wrong if you can’t follow colour coding. It also doesn’t include audio, so you need a stereo audio cable as well.)

Fortunately the PVR box has component video output. In theory I should just be able to plug it in and it should work (after getting it authorized by my cable TV provider). Unfortunately, the designers of the box decided to default to having the box negotiate with the TV to set the output resolution. The result was that I got nothing useful on my screen because, when it couldn’t negotiate, it defaulted to a non-standard mode that my 10-year-old television didn’t support.

Neither did a newer, small TV that I had set up in my exercise room.

Both of these TVs were from quality manufacturers (Panasonic and Sony) so the problem isn’t that they were made on the cheap. The problem is that auto-negotiation isn’t reliable. The connecting device needs to be able to fall back to something that will display a reliable signal.

I’m not sure whether the smart TV adapter or the PVR fell back to 720p or to 1080p, but neither is a mode supported by all TVs.

The obvious solution would be to fall back to 480i, a data rate supported (as I mentioned earlier) by every TV made since the DVD standard was set in 1995 and by every North American TV ever sold. So why drop back to a mode that cannot be guaranteed to be available?

In fact, why go through auto-negotiation at all? Why not simply display a 480i picture and ask users to set the output mode? You could even have that default to an “auto” setting that falls back to 480i if you don’t get a confirmation within 15 seconds, like some computer operating systems do.
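The fallback behaviour proposed above could be sketched as follows; the function name and polling scheme are illustrative, not any real device’s firmware:

```python
import time

SAFE_MODE = "480i"  # supported by every TV, per the argument above

def pick_output_mode(negotiated_mode, confirmed, timeout_s=15):
    """Try `negotiated_mode`; if the user doesn't confirm the picture
    is visible within `timeout_s` seconds, fall back to 480i.
    `confirmed` is a callable polled for user confirmation."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if confirmed():
            return negotiated_mode
        time.sleep(0.05)
    return SAFE_MODE

# A TV that never confirms ends up in the safe mode:
print(pick_output_mode("1080p", confirmed=lambda: False, timeout_s=0.3))  # 480i
```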

In the end, I had to connect my PVR to a TV that handled 1080p so I could change the display setting from auto to 1080i. Fortunately the PVR kept that setting between power outages while I moved it between TVs.

Modern automobiles are another case in point for bad design. In recent years they have replaced the myriad of knobs, buttons and switches with a small touch screen control console – usually in the centre of the dash.

It doesn’t take a genius to realize that the last thing you want is for the driver to take their eyes off the road to adjust something. Yet that is what is required for any function controlled by a touch screen. While I can switch the A/C on or off or change its temperature in my car with the push of a button or turn of a knob, requiring a fraction of a second to ensure my finger is touching the right control, modern cars require you to navigate menus before finding the right control.

Moreover there is no tactile feedback to let you know you succeeded. You need to keep your eyes on the screen to verify that the setting was properly changed.

Star Trek fans will recognize this. The captains and navigators had control buttons and levers. Only the people not responsible for the speed and direction of the ship used touch displays. Apparently SciFi writers did a better job of designing controls than automotive designers despite their decisions not having potentially fatal consequences.

Intelligent design of technology requires giving a little thought to your design decisions then testing them before embedding those decisions in products. Apparently human intelligent designers aren’t too different from the cosmic intelligent designer creationists believe made all those design errors in biological organisms.

Posted in Economics, Home repair, Science and Mathematics


I keep reading a lot of what I consider to be nonsense about consciousness, artificial intelligence and similar topics. Typically this is phrased as “can an AI be conscious?”, “will AI ever achieve consciousness?” or “what is consciousness?”. The purveyors of such questions then proceed to engage in a series of arguments to back a position they have taken.

My own take is that the answer is 42. As in Douglas Adams’ badly labelled trilogy, the question is wrong. Consciousness is not a single thing or even a single quantity of things. It has many aspects.

One of the most basic aspects of what is frequently called consciousness is an awareness of self as distinct from other non-self things. In humans, we generally believe that we are our body (except for the rare cases where people believe one or more appendages are not part of their body). As our body changes over time, we still view it as our self. These are my hands that are typing this. This is my brain that is putting these thoughts into words.

I believe we see our bodies as ourselves mainly because of our senses. We have nerves throughout our bodies that signal our conscious mind when they feel something. Moreover, we can react to sensory input to pull our hands away from something hot or to catch a ball coming toward us. Even though these actions may not involve any conscious decisions on our part, they help reassure us that we are our bodies.

Clearly this is not limited to humans. Even amoebae seem to be able to respond to their environment. However at some point as organisms increase in complexity, they start becoming aware that they are their body. A cat, for example, will clean itself and sharpen its claws.

Conversely, even advanced robots usually get very little in the way of feedback from their bodies. A robotic arm in a manufacturing plant doesn’t usually have internal sensors to determine its position. It simply moves to specific places based on what its controller tells it.

Even a device with sensors rarely gets the exquisite level of feedback that a vertebrate’s body provides. Your phone, for example, might have a GPS locator and position/orientation sensors but wouldn’t have a clue as to the type of surface it is resting on.

The next level toward what we call consciousness is the development of a complex brain. The brain synthesizes the sensory inputs into an internal model of the body it is within and the world around it. Sufficiently complex brains can even recognize representations of their own bodies, such as identifying their reflections.

The key capability here, I believe, is that the brain can monitor its own activity. I know that I am thinking about consciousness while I am typing this sentence. This is, I believe, what gives us the feeling that we are conscious beings. Even though neuroscientists will tell us that we make decisions before our conscious mind is aware that we have done so, the fact that our mental monitor notices that we have made a decision is the key part of consciousness.

Once our self-monitoring mind notices, it can reflect on why we made that decision. Like most mental processes, this mainly happens at a level below our mental monitor. It’s only when the gears have finished churning that we become aware of why we believe we decided something.

Whether that belief is true or not is actually irrelevant. Because our mental monitor thinks it is true, we believe that we made the decision consciously. This also gives us the capability to re-evaluate that decision based on further information – we can change our minds.

So the problems I see with current AI are twofold. Firstly, without a sense of self created by a network of internal and external sensors, we won’t get anything like human consciousness.

Secondly, while we are building vast neural networks crammed with facts, we are so far not building self-monitoring mechanisms into our artificial brains. We’re not asking them to think about why they are coming to the decisions they do.

Without these two factors, I doubt that machine consciousness, no matter how advanced AI gets, will ever be similar to human consciousness. We may be able to build AIs that can out-play Go masters, but we won’t be able to recognize them as conscious and they will be unable to dispute that point.

Posted in Education, History, Religion, Science and Mathematics

The more things change…

“The more things change, the more they stay the same” is a very old adage that self-referentially proves itself over and over again. The Athenians didn’t trust elections because they favoured the oligarchy and preferred to draw lots to determine representatives – an early example of proportional representation.

In ancient Rome, to be elected to office you needed to get the “mob” on your side. This was done by putting on shows, giving out food or favours, and other expensive ways to convince the people that you were on their side. Unless you were wealthy, this required borrowing a lot of money that the lenders expected back with interest. To do that, you had to milk the office for every penny you could squeeze out of it.

And even if you were wealthy, you still expected that your election would bring back more than it cost. Roman politics were the classic example of institutionalized corruption.

Fast forward to the Trump administration and you see the same things happening. Trump’s election was expensive and he’s been paying back his major backers with contracts, tax cuts and deregulation of industries. He’s also blocked multiple attempts to impose sanctions against Russia, one of his major backers.

In Ontario, we’re seeing Doug Ford following this same strategy. Despite being born wealthy and displaying an absurd sense of entitlement, his family’s annual picnics seem to convince his supporters that he’s a “man of the people”. Not having any real employment, he also gets to “meet and greet” people while other party leaders have to be in the legislature.

He knows that many people have precarious employment and are struggling to make ends meet. Rather than addressing the underlying problems of employment rules and tax laws, he’s scapegoating people whose unionized public service positions give them some security. They, he implies, are the ones we should deplore – not he and his rich buddies who are causing the problems.

One reason his charges stick is because the Liberals have a long history in Ontario and across Canada of doing too little to make any real change. Indeed, they’ve played tag-team with the (Progressive) Conservatives to implement the employment and tax laws that have created the current problems.

In the decades since Clarence Gillis and Tommy Douglas popularized the Mouseland tale, we’ve seen many examples of it in action. However this is just another example of how “elections favour the oligarchy” that the Athenians noted.

The problem isn’t elections per se, but rather their winner-take-all nature. Electing a single representative per district cannot be proportional no matter how you count the votes in that district. Most people’s votes aren’t going to count toward getting their sincere first choice of representative.

The Athenian system of drawing lots worked because it took money and influence out of the equation. Yet this same system has been attacked in Canada as undemocratic when used to pick representatives for citizens assemblies, for example. People today conflate democracy with voting.

Proportional elections keep the Athenian ideal that the assembly should be fully representative while still minimizing the impact of money and influence on the elections. As Abraham Lincoln observed, not everyone is going to be fooled by politicians who put on shows or give out food and favours. When their votes count, the influence of the oligarchy is muted. It won’t disappear, but at least it isn’t amplified by a winner-take-all election.

Posted in Economics, Electoral Reform, History, Politics