AI Cars: Where Computer Science, Ethics and Philosophy Meet

May 16, 2024

Beren Arslan


Crash. Screams filling every ounce of space. Bang. Tears streaming. Boom. Small splatters of blood decorating the steering wheel. As ambulance sirens race down the motorway, getting closer and closer, so does the feeling of impending death. Lives are on the line. Families are about to be torn apart. But no one chose for this to happen, right?


In the 1980s, Ernst Dickmanns introduced the first autonomous, AI-based car. These vehicles make every driving decision themselves, including which lives to prioritise in a crash. So how do they decide which lives are worth saving? The discussion matters: conventional cars are involved in 4.2 crashes per million vehicle miles driven, while autonomous cars are involved in more than double that, at 9.1. Would you feel comfortable with a machine making this decision? In short, would you entrust moral decisions to a machine?


A commonplace assumption is that morals are deep-rooted within us as humans: a reassuring thought. But even if we do believe this (and many don't), how do we then program these same human values into computers, into brutal metal machines running on binary numbers in place of love and emotion? Perhaps the more pertinent question is what the programming of these autonomous cars reveals about the mores, beliefs and vested interests of the cars' producers. As things stand, the programmers' own morals are projected onto these soulless machines to mimic a conscience. But who is to say these are the right morals upon which these machines should run?


A good place to start is with a classic philosophical puzzle: the trolley problem, devised by Philippa Foot. It confronts people with the dilemma of whether to intentionally kill one person to save several, or to stand back and let the larger number die. Either choice leaves you with a burden. So is there really any correct choice?


There are three main arguments for who to prioritise in the event of a crash; a sketch of how such rules might look in code follows the list:


  • Save the pedestrians: this keeps innocent bystanders safe. However, it may mean more people in the car die. But should the program instead end one pedestrian's life to save several occupants, ultimately penalising someone for choosing the more environmentally friendly way to travel?

  • Save the most people: this may be the most logical solution. Fewer lives are lost; fewer families are affected. However, would people want to drive a car, with their family in it, knowing that they could be sacrificed to save a larger number of people?

  • Save the occupants: few people would want to buy a car that puts them at risk; reliability and safety are prioritised above all else. On the other hand, saving the occupants while allowing several other people to die is selfish and hard to defend morally.
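To make the stakes concrete, here is a minimal, hypothetical sketch of how such priority rules might look in software. The Outcome model, the policy names and the choose_outcome function are illustrative assumptions, not any real manufacturer's code; the point is simply that each moral stance reduces to a different ranking rule that some programmer has to choose.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and the lives it puts at risk (hypothetical model)."""
    description: str
    occupant_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.occupant_deaths + self.pedestrian_deaths

def choose_outcome(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick a manoeuvre according to one of the three policies above."""
    if policy == "save_pedestrians":
        # Minimise pedestrian deaths first, total deaths as a tie-breaker.
        return min(outcomes, key=lambda o: (o.pedestrian_deaths, o.total_deaths))
    if policy == "save_most_people":
        # Pure utilitarian count: minimise total deaths.
        return min(outcomes, key=lambda o: o.total_deaths)
    if policy == "save_occupants":
        # Minimise occupant deaths first, total deaths as a tie-breaker.
        return min(outcomes, key=lambda o: (o.occupant_deaths, o.total_deaths))
    raise ValueError(f"unknown policy: {policy}")

# Example dilemma: swerving kills one pedestrian; braking kills two occupants.
options = [
    Outcome("swerve", occupant_deaths=2, pedestrian_deaths=0),
    Outcome("brake", occupant_deaths=0, pedestrian_deaths=1),
]

for policy in ("save_pedestrians", "save_most_people", "save_occupants"):
    print(policy, "->", choose_outcome(options, policy).description)
```

Run on the same two options, the three policies pick different manoeuvres: the moral disagreement is, quite literally, a one-line difference in the program.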


So the problem boils down not only to whom the programmers plan to save, but also to what consumers are likely to purchase. In different parts of the world, different people are prioritised. Circling back to the trolley problem: what if the person being sacrificed were very old and had lived their life to the fullest? How much weight should their life carry? There is a marked divide between East and West over which lives are deemed more important. In the West, young people tend to be prioritised, as active members of society who represent its future; in the East, in countries such as China, there is greater compassion for the elderly, rooted in cultural respect. It has therefore been suggested that cars should be programmed differently by region, reflecting the ethics and values of the country in which they are sold.


This leaves us with the questions: are morals really the issue here, or are they overshadowed by the profit-driven 'values' of capitalism? Do manufacturers genuinely want to make the right decision, or simply to appeal to the consumer and maximise profit? The decision that gets made is ultimately based on what the majority wants, not on what is actually right. And although morals may have been debated for centuries, do we actually know what is right and wrong? With such variety in beliefs globally, this may be impossible ever to know. It may be impossible for us ever to make the right choice.


Everything will be questioned eternally.
