21 Aug, 2017 By Wayne Wang

When it comes to road transport, we're heading into some exciting times. Many of us are already aware of the recent push for driverless cars, but did you know Tesla CEO Elon Musk has plans to test driverless trucks as soon as September this year?

On the surface, most of us will see this as great progress, especially when we think about the commercial benefits of electric driverless vehicles. They'll be more environmentally friendly, they promise to remove the element of human error and, best of all, they'll be far cheaper to run without human drivers. Driverless trucks alone promise to revolutionise transport-based industries such as courier services.

But while we’re dreaming about all the positive impacts driverless vehicles will have, there’s a whole other (and very important) debate going on in the background which, perhaps, isn’t getting quite so much airtime. We’re talking about the ethics of driverless vehicles.

What does ethics have to do with driverless vehicles?

Removing the human element from road transport might be considered a great step forward (after all, we're prone to mistakes that can lead to devastating consequences), but it means a computer will have to make the many judgment calls human drivers usually make. And of course, computers need to be programmed: someone has to decide, in advance, what the right call is.

The runaway “tram scenario”

Manufacturers need to work out how self-driving vehicles should react in several tricky scenarios, and these decisions are the subject of much debate. First, let's look at the "tram scenario", better known as the trolley problem. It imagines a runaway tram hurtling down a track with several people in its path. Do you switch it to another track where only one person stands, or do you let it carry on and kill several? Driverless vehicles will need to make similar calculations, and it will be down to their programmers to decide which approach is best.
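To make that point concrete, here's a minimal sketch of what a purely utilitarian version of the calculation could look like in code. Everything in it (the Action class, the casualty figures, the choose_action rule) is invented for illustration; no manufacturer has published logic like this.

```python
# Purely illustrative sketch of a utilitarian decision rule for a
# tram-style dilemma. The Action class, the casualty figures and the
# choose_action rule are all invented; no real vehicle works this way.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_casualties: int  # people put at risk if this action is taken

def choose_action(options: list[Action]) -> Action:
    # A strictly utilitarian rule: pick whatever minimises expected casualties.
    return min(options, key=lambda a: a.expected_casualties)

stay = Action("stay on course", expected_casualties=5)
switch = Action("switch tracks", expected_casualties=1)
print(choose_action([stay, switch]).name)  # -> switch tracks
```

Notice that even this trivial rule takes a philosophical side: it's strictly utilitarian. A programmer following a different ethical school might forbid the vehicle from actively redirecting harm at all, and the code would look completely different.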

Whose safety comes first?

There are other questions too, such as: should these vehicles optimise for overall human welfare, or should they prioritise the safety of their own passengers over others on the road? Imagine a driverless car on a collision course with a school bus on a mountain road; should the car swerve, sending its passenger over the cliff, or carry on, risking the lives of many children?
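In code, that whole debate can collapse into a single tunable number. The sketch below is hypothetical (the function, the risk figures and the weights are all invented), but it shows how one parameter can flip a vehicle between the two policies.

```python
# Hypothetical sketch: one tunable weight decides whose safety comes first.
# The function, the risk numbers and the weights are all invented for
# illustration; they don't reflect any real manufacturer's policy.

def expected_harm(passenger_risk: float, bystander_risk: float,
                  passenger_weight: float) -> float:
    """Weighted harm score: passenger_weight = 1.0 treats everyone equally;
    larger values privilege the people inside the car."""
    return passenger_weight * passenger_risk + bystander_risk

# The mountain-road dilemma: swerve (passenger over the cliff) or carry on
# (endanger roughly 30 children on the bus). Risk figures are made up.
options = {
    "swerve":   {"passenger_risk": 0.9, "bystander_risk": 0.0},
    "carry on": {"passenger_risk": 0.1, "bystander_risk": 0.8 * 30},
}

for w in (1.0, 50.0):
    scores = {name: expected_harm(passenger_weight=w, **risks)
              for name, risks in options.items()}
    print(f"passenger_weight={w}: choose {min(scores, key=scores.get)}")
# passenger_weight=1.0 chooses "swerve"; passenger_weight=50.0 chooses
# "carry on" -- same car, same crash, different ethics.
```

The uncomfortable question is who gets to set that weight: the manufacturer, the regulator, or the person buying the car.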

There are also concerns over how driverless vehicles will behave around pedestrians, cyclists or animals, all of whom can act unpredictably. As humans, we make case-by-case judgements easily: how much space to give when passing a pedestrian or cyclist, or how to adjust speed appropriately. Driverless vehicles won't find it so easy to read these individual scenarios.

Optimising for the best crash outcome

And when a crash becomes inevitable, how should driverless vehicles be optimised for the best and most ethical outcome? For example, should a car move away from a truck and elect to hit a smaller vehicle in another lane, simply because it's safer for its own occupants to crash into smaller objects?
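Here's that uncomfortable logic made explicit. The toy sketch below (an invented target list and a crude mass-based severity model, nothing like a production system) shows why "optimise for the best crash" is an ethical decision dressed up as an engineering one.

```python
# Invented example of "crash-outcome optimisation". The target list and the
# crude mass-based severity model are deliberate oversimplifications meant
# to expose the ethical problem, not to describe a real system.

OWN_MASS_KG = 1500

targets = [
    {"name": "truck ahead",       "mass_kg": 20000},
    {"name": "small car, lane 2", "mass_kg": 1100},
]

def severity_for_own_occupants(target: dict) -> float:
    # Heavier obstacles mean a harsher impact for this car's occupants.
    return target["mass_kg"] / (target["mass_kg"] + OWN_MASS_KG)

best = min(targets, key=severity_for_own_occupants)
print(f"least harmful to own occupants: {best['name']}")
# -> "small car, lane 2": the rule protects its own occupants by
#    systematically choosing to hit whoever is smaller.
```

The code never mentions the people in the smaller car, yet it systematically transfers risk onto them. That's exactly the kind of choice critics argue should be made in the open rather than buried in proprietary software.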

There are even ethical questions about the ethical questions, with some arguing the public has a right to know how driverless vehicles will behave in these scenarios and what factors they take into consideration. Otherwise, manufacturers will be free to program these cars to limit their own liability without regard for widely accepted ethical norms.

Clearly, while we already have the technology to get driverless vehicles on the road, there's still a lot to be worked out before their wider use can become a reality. Even if they are said to be far safer than human drivers!