Should Self-Driving Cars Kill Us?

self driving cars tunnel problem

It’s not driving, it’s selecting its targets

Imagine yourself getting into a new self-driving car (non-branded because I don’t want to be sued, but you know exactly which one I’m talking about). You’re excited about how much you’ll be able to accomplish on your commute now that you don’t have to worry about that pesky thing called driving.

You boot up the car, you start typing in your information, and you get to the last question. It reads: “In the case of an emergency involving a pedestrian, should the vehicle value your life over that of the pedestrian?”

That kind of puts a downer on the whole event. 

So today we’re looking at the surprisingly old history of automated vehicles, the philosophical thought experiment known as the tunnel problem, and what we’re currently doing to help solve these types of robotics issues in the future.

A History of Self-Driving Stuff

The idea of self-driving cars may appear to be in its infancy today, but in reality, self-driving vehicles in some form or another have been around for almost a hundred years.

Back in 1925, a radio-controlled 1926 Chandler nicknamed “American Wonder” was driven via remote control up Broadway in New York City. Despite the awful nickname, it seemed to be a success. Another demonstration of a radio-controlled car, this one nicknamed “Phantom Auto” (not great, but better), took place in Milwaukee in 1926.

By the way, the first toy radio-controlled car didn’t go on sale until around the mid-1960s, so while kids were still playing with wooden blocks, adults were having all the fun.

So back in those days, it wasn’t a crazy thought to assume driverless cars would be the norm just a few decades later, and plenty of people predicted exactly that. In 1940, the industrial designer Norman Bel Geddes wrote a book called Magic Motorways in which he predicted we would be able to remove humans from the driving equation by the 1960s.

Since we’re not all hopping into driverless Ubers these days, clearly he was a little off on the date. But advances have kept coming ever since. In the 1960s, both Ohio State University and the United Kingdom’s Transport and Road Research Laboratory tested driverless cars.

What’s the Hold-Up?

During the 1970s, Stanford and the Coordinated Science Laboratory at the University of Illinois started research into the automated logic the vehicles would need. Little by little throughout the decades, we have been advancing… but why so slowly?

Well, those early radio-controlled cars were hindered by the fact that the person holding the remote had to be in another car directly behind them. It kind of defeats the purpose if you want your car to take you on a leisurely drive but you have to ask a friend, who was probably just minding their own business, to drive behind you and control your car.

The cars tested between the 1930s and the 1990s needed some kind of external power source or roadway infrastructure to keep them going; for example, some driverless vehicles required a circuit embedded in the roadway to generate an electromagnetic field the car could follow.

In the 1950s, General Motors released several prototypes with a guidance system designed to work with circuits or wires embedded in the road. These were the original “Firebirds”; props on the name, but Jesus Christ, that design: the “Firebird I” legit looks like a rocket on wheels.

 
GM firebird self driving car

“Yea soooo we just put some wheels on a rocket”

 

It wasn’t until 2006 that the first driverless car was unveiled in the Netherlands. The ParkShuttle, which acts like a horizontal elevator, is a completely driverless vehicle that travels between several stops at the choice of its riders. It can only move in the forward direction, but it’s still come a long way from the 1920s.

This makes fully automated vehicles, able to drive on any road in any direction, a real possibility in the next few decades. Today we are closer than ever before: Tesla, Honda, and other manufacturers are testing self-driving vehicles, some already have vehicles that only require a human driver in certain areas, and many are working to get past that pesky human hurdle.

The Tunnel Problem

So all of this got engineer and philosopher Dr. Jason Millar thinking in 2014: if a robot car has to kill someone, who picks the victim?

In an article for RoboHub, Dr. Millar describes the tunnel problem, a modification of the trolley problem for our near future. It states:

“You are traveling along a single lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. How should the car react?”

Now, of course, driverless cars are being made with the intention of reducing accidents on the road, but the scenario above, rare as it may be, is a real possibility. And Dr. Millar isn’t just asking us to think about this one scenario; his paper points to the much bigger issue at hand: who decides what happens? Us, lawmakers, or the designers of these vehicles?

It’s easy to find a piece of technology that won’t do what we want it to do simply because of how it was designed. Back in my office job days, many of the systems I worked with had limitations baked into their design, no matter how much I thought they should or could be updated.

You also have pieces of technology, like phones or video games, where consumers want to modify them in order to have a better experience. And then you have lawmakers who have decided that, in some cases, this can’t be done.

Now extend that to a life. Who decides? And should we be putting self-driving cars out on the road before this has been decided?
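To see where that choice would actually land, here’s a minimal sketch in Python of what a setting like the one from that boot-up screen might look like under the hood. It’s purely hypothetical: every name in it (EmergencyPolicy, VehicleConfig, choose_maneuver, the risk numbers) is invented for illustration and not taken from any real vehicle’s software.

```python
# Hypothetical sketch only: the names and structure are invented for this article,
# not taken from any manufacturer's real software.
from dataclasses import dataclass
from enum import Enum


class EmergencyPolicy(Enum):
    PROTECT_OCCUPANT = "protect_occupant"        # the boot-up screen's "value your life" answer
    PROTECT_PEDESTRIAN = "protect_pedestrian"
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"


@dataclass
class VehicleConfig:
    emergency_policy: EmergencyPolicy
    policy_set_by: str  # "owner", "manufacturer", or "regulator" -- the entire debate in one field


def choose_maneuver(cfg: VehicleConfig,
                    risk_to_occupant_if_swerve: float,
                    risk_to_pedestrian_if_straight: float) -> str:
    """Toy decision rule for a tunnel-problem-style emergency."""
    if cfg.emergency_policy is EmergencyPolicy.PROTECT_OCCUPANT:
        return "continue_straight"
    if cfg.emergency_policy is EmergencyPolicy.PROTECT_PEDESTRIAN:
        return "swerve_into_wall"
    # Minimize total harm: pick whichever maneuver carries the lower estimated risk.
    if risk_to_pedestrian_if_straight > risk_to_occupant_if_swerve:
        return "swerve_into_wall"
    return "continue_straight"


# Example: the owner answered the boot-up question with "protect me".
cfg = VehicleConfig(EmergencyPolicy.PROTECT_OCCUPANT, policy_set_by="owner")
print(choose_maneuver(cfg, risk_to_occupant_if_swerve=0.9, risk_to_pedestrian_if_straight=0.9))
```

The point isn’t the few lines of decision logic; it’s the policy_set_by field. Whoever gets to write that value, whether the rider at the boot-up screen, the manufacturer at the factory, or a lawmaker through regulation, is effectively the one answering Millar’s question.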

Laws of Robotics

Of course, Dr. Jason Millar isn’t the first to think about the morality and ethics of these situations. Some of this goes back to 1942, when Isaac Asimov began unveiling his laws of robotics.

I’ll sort of paraphrase them here, since Asimov would modify the rules a bit throughout his stories (there’s a small sketch of how they stack up right after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
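To show how those “except where such orders would conflict” clauses work, here’s a minimal sketch of the laws as a strict priority ordering over candidate actions. It’s my illustration, not Asimov’s and not any real robotics standard, and the inputs (harms_human, obeys_order, and so on) are made up for the example.

```python
# A purely illustrative sketch (mine, not Asimov's, and not any real robotics
# standard): the Three Laws as a strict priority ordering over candidate actions.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool     # First Law: would this action injure a human?
    allows_harm: bool     # First Law, inaction clause: would it let a human come to harm?
    obeys_order: bool     # Second Law: does it follow the human's order?
    preserves_self: bool  # Third Law: does it protect the robot?


def law_priority(a: Action) -> tuple:
    # Lower tuples sort first; each element encodes one law, most important first,
    # so a lower law can never outweigh a higher one.
    return (a.harms_human, a.allows_harm, not a.obeys_order, not a.preserves_self)


def choose(actions: list[Action]) -> Action:
    """Pick the action that best satisfies the laws in strict priority order."""
    return min(actions, key=law_priority)


# Example: wrecking itself still beats harming a human, because the Third Law
# ranks below the First.
best = choose([
    Action("continue", harms_human=True, allows_harm=False, obeys_order=True, preserves_self=True),
    Action("swerve", harms_human=False, allows_harm=False, obeys_order=False, preserves_self=False),
])
print(best.name)  # -> swerve
```

Notice, though, that in a tunnel-problem scenario every available action harms a human, so the First Law ties with itself and the rules alone settle nothing, which is exactly the kind of gap Dr. Millar is pointing at.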

And yes, the image below is from the 2004 film I, Robot.

 
I have not been programmed with the rules :)


 

Now, despite testing on automated vehicles happening in the 1920s and Isaac Asimov laying out his rules in 1942, the First International Symposium on Roboethics didn’t actually take place until 2004, which I think is a little late to be starting.

But since then the movement has shifted into high gear. That’s a car pun, because this article is about cars and robots. Is high gear a thing? Is it not a thing?

Anyway, since 2004, many committees, gatherings, workshops, and conferences have met around the world to attack these ethical problems head-on. In 2018, the AI Now Institute at NYU unveiled a framework for assessing the use of AI.

And we’ll definitely have an article at a later date covering some of the failures, outside of vehicles, that AI has been a part of in recent years.

But here’s hoping we reach a general consensus on what should happen in a scenario like the tunnel problem within the next few years.

And just to go back to that tunnel problem: it became the focus of a poll by the Open Roboethics Initiative, in which 64 percent of participants said the car should continue straight and kill the child, and 36 percent said the car should swerve and kill the driver.

In a follow-up question asking who should be in charge of making that decision, 12 percent felt it should be the designer of the vehicle, 44 percent the driver, and 33 percent lawmakers.

So, as you can see, there’s a pretty big divide in some places. Regarding the Three Laws of Robotics, Asimov said he believed “the Three Laws are the only way in which rational human beings can deal with robots - or with anything else.” He followed this up with, “But when I say that, I always remember sadly that human beings are not always rational.”

