The Trolley Problem is a common challenge posed to self-driving vehicles – “Who will the vehicle be programmed to kill in the event of an unavoidable collision?”
It is one of the most common questions posed at self-driving technology events. Imagine I come to you with a revolutionary new transport technology that would get you from A to B more cheaply, quickly, and enjoyably than you ever dreamt possible. But, I add, there is a downside – once this new technology is fully deployed around the world it will kill around 1.35 million people each year, with perhaps ten times that number suffering serious, life-changing injuries.
Would you buy it? Would you trust it with the lives of your family and friends?
And yet that is the world we live in today. The World Health Organisation estimates that each year, 1.35 million people are killed on the world’s roads. Even here in the UK, an average of five people are killed and 75 are seriously injured every day – and our roads are some of the safest in the world.
In the UK, around 85% of road collisions that result in personal injury involve human error.
We wouldn’t accept these fatality rates in rail or air travel. Why do we accept them for road transport? If we knew then (at the advent of the automobile) what we know now, would we today be pining for a faster horse?
From this perspective, automating the driving task would seem to be a no-brainer. The most advanced self-driving technologies today scan their surroundings in 360 degrees many times per second. They don’t speed, tailgate, get distracted, tired, or drunk; they don’t check their phone, argue with their children, or suffer road rage.
They know indicators are more than just Christmas decorations for occasional use.
But self-driving vehicles will still be involved in occasional collisions; they will never be infallible. And this fact has generated a controversial debate stemming from a classic ethics challenge familiar to philosophy students around the world – “The Trolley Problem”.
Introducing The Trolley Problem: An academic dilemma applied to real life
The Trolley Problem is a thought experiment in ethics, usually attributed to Philippa Foot in 1967. In its most basic form, a runaway trolley is hurtling down the tracks towards a group of five people, but you can pull a lever, switch tracks, and save them. The cost of your decision, however, is the life of a person on the other track: by intervening, you save five lives but kill someone who would otherwise have been unharmed. What do you do?
The underlying assumption is that the vehicle developers will have to programme a response to such scenarios into the software’s decision-making algorithms. In other words, it is assumed that programmers will need to decide in advance whom to kill in a trolley scenario.
The debate reached new heights late last year with the publication of MIT’s “Moral Machine” experiment, which explored a wide range of formulations of The Trolley Problem to understand how people of different ages and from different countries differ in their judgements about how an automated vehicle could or should behave.
This dilemma risks distracting people from more relevant and important questions.
Although the study produced some interesting insights, there is an inherent risk in stretching The Trolley Problem to fit the important, complex, real-life problem of road collisions. Obsessing over trolley problems could do real damage to the enormous safety improvements these technologies have to offer, because it risks distracting industry, regulators, and the public from more relevant and important questions.
For instance, The Trolley Problem assumes the vehicle’s system will achieve and maintain a kind of omniscience about everything and everyone around it, and that the traffic scenario offers only two discrete outcomes. In practice, a vehicle swerving to kill one person instead of five might make a bad situation far worse, given the complexity of traffic systems (and, as an American philosophy lecturer discovered, people’s capacity to respond in unpredictable ways exposes the limitations of this artificial dilemma).
Self-driving vehicles are unlikely ever to have – or need – this level of understanding. It would require vast amounts of (wasted) computing power that would be better spent on planning the safest, most efficient route and behaving in such a way as to minimise the likelihood of encountering ‘no win’ trolley scenarios in the first place.
To deliver automated driving, such vehicles will be equipped with a range of sensors and recording equipment to monitor and analyse driving situations. This means that even when collisions or near-misses do occur, the data from the vehicle sensors can be analysed to understand whether the vehicle behaved as it was programmed to, and whether we, as a society, are comfortable with that behaviour.
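As a rough illustration of that kind of post-incident analysis, the sketch below replays a logged record of what a vehicle perceived and what it commanded, and checks it against one simple expected behaviour. The log fields, threshold, and rule are hypothetical assumptions made for illustration, not any real logging format.

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    timestamp_s: float
    nearest_obstacle_m: float   # closest detected obstacle at this instant, from the sensor log
    brake_command: bool         # did the software command braking at this instant?

def braked_when_required(log: list[LogRecord], braking_threshold_m: float = 10.0) -> bool:
    """Check one simple expected behaviour: brake whenever an obstacle is within the threshold."""
    return all(rec.brake_command for rec in log if rec.nearest_obstacle_m < braking_threshold_m)

# Replaying a (hypothetical) three-record excerpt from a logged journey.
log = [
    LogRecord(timestamp_s=0.0, nearest_obstacle_m=42.0, brake_command=False),
    LogRecord(timestamp_s=0.5, nearest_obstacle_m=9.2, brake_command=True),
    LogRecord(timestamp_s=1.0, nearest_obstacle_m=6.8, brake_command=True),
]
print(braked_when_required(log))  # True: the logged actions match the programmed rule
```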
“How do we as humans respond to these situations today?”
The answer is typically along the lines of “the best we can”. Several law firms exploring the implications of a self-driving future have suggested that programming vehicles to adhere to the rules of the road is a more useful starting point than injecting greater complexity into a highly dynamic environment by chasing spurious solutions to trolley dilemmas.
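A minimal sketch of that “rules of the road first” idea follows, in which candidate manoeuvres are simply filtered against basic rules before anything else is considered. The rule set, field names, and thresholds are assumptions chosen for illustration, not how any real planner is built.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    speed_kph: float          # intended speed for this candidate manoeuvre
    gap_to_lead_s: float      # time gap to the vehicle ahead, in seconds
    crosses_solid_line: bool  # would the manoeuvre cross a solid lane marking?

def obeys_basic_rules(m: Manoeuvre, speed_limit_kph: float, min_gap_s: float = 2.0) -> bool:
    """Accept a manoeuvre only if it respects the speed limit, following gap and lane markings."""
    return (
        m.speed_kph <= speed_limit_kph
        and m.gap_to_lead_s >= min_gap_s
        and not m.crosses_solid_line
    )

# Candidate manoeuvres are filtered against the rules before any further optimisation.
candidates = [
    Manoeuvre(speed_kph=48, gap_to_lead_s=2.4, crosses_solid_line=False),
    Manoeuvre(speed_kph=55, gap_to_lead_s=1.1, crosses_solid_line=False),  # speeding and too close: rejected
]
legal = [m for m in candidates if obeys_basic_rules(m, speed_limit_kph=50)]
print(len(legal))  # 1: only the rule-abiding manoeuvre survives the filter
```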
And here is where the constantly vigilant, never distracted self-driving system comes into its own – developers expect these systems to identify potential collisions sooner, react more quickly, and intervene more effectively (without losing control of the vehicle by slamming on the brakes or steering incorrectly, for instance).
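Some back-of-the-envelope arithmetic shows why reaction time alone matters so much. The figures below are illustrative assumptions (roughly 30 mph, a typical human reaction time versus a much shorter automated one, and a constant rate of braking), not measured performance data for any real system.

```python
def stopping_distance_m(speed_mps: float, reaction_s: float, decel_mps2: float = 7.0) -> float:
    """Distance covered while reacting, plus braking distance at a constant deceleration."""
    thinking = speed_mps * reaction_s
    braking = speed_mps ** 2 / (2 * decel_mps2)
    return thinking + braking

speed = 13.4  # roughly 30 mph, in metres per second
for label, reaction_s in [("attentive human, ~1.5 s reaction", 1.5),
                          ("automated system, ~0.2 s reaction", 0.2)]:
    print(f"{label}: {stopping_distance_m(speed, reaction_s):.1f} m to stop")
# The braking physics is identical in both cases; only the reaction time differs,
# yet the shorter reaction alone saves roughly 17 metres of stopping distance here.
```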
These vehicles will also be connected into a collaborative system, so if, for example, a child were running towards a row of parked cars, one vehicle could alert those around it to the increased risk in that area, prompting them to adjust their speed accordingly.
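A hypothetical sketch of that collaborative alert is shown below: one vehicle broadcasts a hazard message and nearby vehicles cap their target speed. The message format, field names, and thresholds are assumptions for illustration and do not represent any real V2X standard.

```python
from dataclasses import dataclass

@dataclass
class HazardAlert:
    location: tuple          # (latitude, longitude) of the reported hazard
    radius_m: float          # area over which extra caution is advised
    description: str

class ConnectedVehicle:
    def __init__(self, target_speed_kph: float):
        self.target_speed_kph = target_speed_kph

    def on_alert(self, alert: HazardAlert, distance_to_hazard_m: float) -> None:
        # Only vehicles close to the reported hazard lower their target speed.
        if distance_to_hazard_m <= alert.radius_m:
            self.target_speed_kph = min(self.target_speed_kph, 20.0)

alert = HazardAlert(location=(51.5007, -0.1246), radius_m=150.0,
                    description="pedestrian moving between parked cars")
nearby, far_away = ConnectedVehicle(48.0), ConnectedVehicle(48.0)
nearby.on_alert(alert, distance_to_hazard_m=80.0)     # within the alert radius: slows to 20 km/h
far_away.on_alert(alert, distance_to_hazard_m=600.0)  # outside the radius: unaffected
print(nearby.target_speed_kph, far_away.target_speed_kph)  # 20.0 48.0
```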
The safety of self-driving vehicles and the public’s trust in them are, rightly, central to unlocking the significant social and economic benefits they offer. How safe is safe enough to allow these vehicles on the road? This is the sort of question that each community and society should be discussing openly, weighing the trade-offs of whether, when, and how to deploy these technologies against the imperfect road safety situation we face today.
We know that “as safe as a human” isn’t going to be good enough to secure public acceptance of these vehicles. But we also know there is already good early evidence of improved safety from new technologies such as automated emergency braking. My concern – shared by many in the sector I speak to – is that the apparent obsession with The Trolley Problem distracts people from the real value of these technologies and what they can deliver, delaying the safety benefits they might otherwise achieve. We depend on road transport for a productive and prosperous society – and in doing so we accept a level of risk. By engaging with the genuine issues around the safe development and deployment of automated vehicles, we can manage that risk rather than simply avoid it.
Michael Talbot, Head of Strategy
Some other useful perspectives on The Trolley Problem can be found in these articles:
The Trolley Problem: Explained here in 90 seconds
Robocar Engineers Prefer To Solve The ‘Runaway Trolley Problem’ By Fixing The Trolley’s Brakes