Should We Demand Humanity from Our Autonomous Vehicles?

Like it or not, we're on the cusp of self-driving cars. There's still plenty to sort out on the liability and insurance front, but those problems are surmountable. The steeper challenge may be wresting the steering wheel from the paws of us fallible humans.

The public looks on aghast when an autonomous car causes a death in a situation that any human driver would have handled easily. When a Tesla on Autopilot crashed and killed its driver because it couldn't distinguish a white tractor-trailer against a bright sky, the public became convinced that the move toward autonomous cars is fraught with peril.

Statistically speaking, the jury is still out on whether autonomous cars are safer than human-driven ones. We do know, however, that in the United States, car crashes are among the leading causes of death, claiming roughly 37,000 lives per year. We also know that autonomous cars don't get drunk, don't get distracted, don't speed, don't get drowsy, don't get angry… Still, even the denizens of the Silicon Valley neighborhoods who are leading the Artificial Intelligence (AI) charge are reluctant to have autonomous cars on their streets.

We humans are an irrational bunch. Many who fear flying don't give a second thought to hopping in the car for a quart of milk, even though the chances of dying on the way to or from the 7-Eleven are far higher than the chances of dying in a plane crash. Still, the reality of driving is that unexpected things happen all the time. Human drivers have a wealth of experience to draw upon, so when the unexpected surprises us, we're mostly capable of handling it safely. AI researchers claim that if autonomous cars absorb enough training data and log enough supervised miles, they will amass sufficient experience to transport humans safely. Yet no amount of training covers everything; events will occur that an autonomous car has never encountered.

We also tend to cut human drivers far more slack than we do autonomous cars. Although we don't like thinking about human-caused crashes that result in death, we know they occur in large numbers, and we accept this as a small risk for the convenience of quickly getting where we want to go. A single autonomous car death, however, we consider unacceptable.

I'm reminded of when I first became a bicycle commuter in Washington, DC, during the years when protected bike lanes were rare. One night early in my bike-commuting days, I was freaking out on a crowded street in Adams Morgan (a bustling DC neighborhood) during rush hour in the dark. There was so much going on that it exceeded my capacity to absorb it all and feel safe. Remarkably, without doing anything other than keeping at it, I developed a degree of confidence after a few weeks. Although this is speculation, because I don't really understand how my brain works, I think I learned to filter out the non-threatening activity on the street so that I could focus on the true risks. At first everything caught my eye, and I had to learn what I could safely ignore. I also suspect that my early confidence was probably false confidence, because I hadn't yet learned how to anticipate and react to human error.

[Image caption: Autonomous vehicles may "see" children crossing the street as adorable ducks.]

I've been riding city streets for more than ten years now and have never been hit, nor have I had an altercation with a driver. For the most part, it has been harmonious. The most challenging part of bike commuting is reading erratic drivers and knowing how best to respond: when to move aggressively and when to hold back. Although my success on the road can be chalked up to learning and a good amount of luck, I think there's also an element of understanding human behavior … because I'm a human. That's the essential problem with self-driving cars: they must coexist with humanity while being inhuman. Will they ever understand enough about the foibles of humans to keep us safe?