Self-driving vehicles operated by intelligent algorithms, carrying people to their destinations safely and with ease, have long been a desirable innovation. Today, from a technological perspective, this dream is perhaps only a few years away from coming true. Nevertheless, it has to be stressed that driverless cars might also cause serious problems that cannot be adequately assessed at this point in time. Technical data for long-term operations in complicated environments such as cities are still lacking. A sad precedent was set in mid-March in the state of Arizona, where a pedestrian was killed by a self-driving car.
In Tempe, Arizona on the night of 18 March, 49-year-old Elaine Herzberg was crossing a busy road outside of the crosswalk area with a bicycle when she was struck by a self-driving Uber vehicle (a Volvo XC 90 equipped with sensors and cameras). Later that day, Herzberg passed away in hospital as a result of her injuries.
As prescribed by law, a so-called safety driver was behind the steering wheel of the car, but did not prevent the fatal accident. Moreover, the police investigation suggests that the Volvo XC 90 did not show any signs of slowing down before it hit Herzberg. The self-driving car was also exceeding the speed limit: it was travelling at around 38 to 40 mph in a 35 mph zone when it struck Herzberg.
Some days later, video footage released from the Uber car's cameras showed that the safety driver did not have his hands on the steering wheel and was not even looking at the road. Also revealing is that Herzberg is clearly visible in the footage 1.5 seconds before the deadly collision took place. This raises the question of why the Uber car did not slow down or at least attempt an evasive manoeuvre.
This article, however, does not deal with questions of liability in Herzberg's case. Rather, the aim is to address the interaction between AI and humans and its implications, and to start a debate about how human well-being can be safeguarded and even thrive in an era of ever more sophisticated intelligent machines.
The police statement
Being the first to gain access to the video footage in the course of preliminary investigations, Tempe Chief of Police Sylvia Moir was quick to announce that "it's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway…" Moir added: "I suspect preliminarily it appears that Uber would likely not be at fault in this accident, either…" In another press conference, Tempe Police media relations officer Ronald Elcock emphasised that in order to avoid such fatal incidents, pedestrians must use crosswalks. However, this initial police assessment would soon be rendered premature.
In a broader context
Not only international competition between countries, but also competition at the local level between states and municipalities, is a key feature of globalised capitalism. Value chains and the interests of large corporations cut across national borders and pit jurisdictions against each other in the search for profit maximisation. Countries, states, municipalities, etc. engage in often devastating competition to attract companies – especially global corporations – to relocate their operations, in order to fill empty state coffers and create employment opportunities for their constituencies. With regard to regulation and tax collection, this competition is often a 'race to the bottom'.
Arizona has the reputation of being the most permissive US state for self-driving cars and is in fierce competition with neighbouring California, where many car makers and software start-ups have their headquarters (e.g. Uber and Waymo). Thanks to its "more or less regulation-free" approach, Arizona has been successful in attracting global corporations such as Toyota, Uber, GM, Intel, and Waymo to test their self-driving cars on its roads. At the beginning of March – only weeks before the fatal accident in Tempe – the Arizona state government announced it would change legislation to allow driverless cars to operate on public roads without a safety driver behind the wheel. This announcement came weeks after California declared that driverless vehicles would soon be allowed to operate. Announcing the introduction of driverless cars, Doug Ducey, Governor of Arizona, declared: "As technology advances, our policies and priorities must adapt to remain competitive in today's economy…"
Self-driving cars and humanity
Proponents of driverless cars are convinced that roads will be safer and that the number of people suffering fatal injuries will decrease, as human error is one of the most common causes of vehicular fatalities. In addition, they argue that cars will be used more efficiently, lessening congestion and bringing about a substantial decrease in CO2 emissions.
Opponents argue that the technology is not fully mature and needs several more years of testing to be entirely safe. Moreover, many researchers state that in a transition period, when human and robot drivers operate on the roads together, additional safety risks arise.
Shortly after the video footage of the fatal accident was made public, experts stated that the sensors should have detected the pedestrian, the software should have initiated evasive manoeuvres, and the safety driver should have reacted. Bryant Walker Smith, a University of South Carolina law professor who studies autonomous vehicles, stated that the video footage was "strongly suggestive of multiple failures of Uber and its system, its automated system, and its safety driver". Sam Abuelsamid, an analyst for Navigant Research, explained that "laser and radar systems can see in the dark much better than humans or cameras and that Ms Herzberg was well within the range".
Multiple experts have also emphasised that the video footage released by Uber does not show the actual illumination conditions at the scene. All of these arguments raise the question of why police authorities in Tempe jumped to conclusions and deemed themselves capable of assessing the particularities of self-driving cars, for example with regard to illumination conditions and reaction times. The initial police statement seems hasty, given that the Uber car was clearly violating the speed limit. It must be noted, though, that in practice US police rarely stop drivers who exceed speed limits by small margins. But this assessment could be fundamentally different when, instead of a human driver, AI is behind the steering wheel. An algorithm is only capable of following explicit rules. In this case, the rule for Uber's AI could have read something like "exceed speed limits by 14.28% on empty streets". Local drivers would have slowed down, relying on their tacit knowledge that in this particular area pedestrians often cross the road outside the crosswalks. The explicit overriding of traffic rules, the non-reaction of the sensors and software, as well as the occurrence of a fatality, could ascribe partial liability to Uber, the safety driver, or even Volvo. To put it simply, a few miles per hour can mean life or death.
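Purely for illustration, such a hard-coded rule could be sketched as follows. Every name and value here is a hypothetical assumption (the 14.28% tolerance simply reflects the reported 40 mph in a 35 mph zone); this is in no way Uber's actual code:

```python
# Hypothetical sketch of an explicit speed-tolerance rule, as speculated above.
# All names and the tolerance figure are illustrative assumptions, not Uber's code.

SPEED_TOLERANCE = 0.1428  # ~14.28%: 40 mph in a 35 mph zone, as reported

def target_speed(posted_limit_mph: float, street_is_empty: bool) -> float:
    """Return the speed such a planner would aim for.

    A human driver would also weigh tacit, local knowledge (e.g. that
    pedestrians often cross here); an explicit rule like this cannot.
    """
    if street_is_empty:
        return posted_limit_mph * (1 + SPEED_TOLERANCE)
    return posted_limit_mph

print(round(target_speed(35, street_is_empty=True), 1))  # ~40.0 mph
```

The point of the sketch is precisely its rigidity: the rule fires whenever its explicit condition holds, with no room for the tacit judgement a local driver would apply.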
This leads to an intriguing conclusion. The software was most likely programmed to allow the car to exceed speed limits, if no other error (for example a mapping error) was involved. The AI driving the vehicle did not take any action to prevent the collision, although the video shows there was time to react. Strangely enough, in their initial press statement Tempe police were quick to dismiss this fact and placed liability solely with the victim, Elaine Herzberg. Given the importance of this case, the police statement can only be regarded as an extremely premature judgement. At the same time, it reveals the difficulty local police face in adequately assessing the consequences and functions of new technologies in a complex environment.
Other tech companies, too, were critical of Uber's deployment of its technology. A competitor – Waymo (a subsidiary of Google) – which also tests self-driving vehicles in Arizona, announced that its technology "would have handled that". Even more astonishing is that, after running the dashboard camera video through its advanced driver assistance system, Volvo's sensor software supplier Mobileye confirmed that its sensors would have detected Ms Herzberg. Mobileye's CEO also criticised "new entrants" in the self-driving field that have not gone through the years of development necessary to ensure safety in their vehicles. What is more, it was recently confirmed that the Volvo emergency brake assistant, based on Mobileye's software, had been deactivated by Uber, which raises serious questions about Uber's observance of basic safety rules.
Apart from the more obvious technical issues, this precedent raises two fundamental questions, both ethical in nature. The first concerns the legal liability of juridical persons; the second relates to the rules guiding AI decision-making.
In the case of an accident with injuries or even fatalities in which the driver bears liability, he or she will be subject to legal proceedings that might result in a prison sentence. Clearly, in the Arizona case, partial liability is borne by one of the three other parties involved (Uber, Volvo, or the safety driver). If Uber is held liable and the liability is so severe that a prison sentence is the only option, the judicial system faces a problem. Today companies ('juridical persons') are held liable for work-related accidents and obliged to pay damages. Managers ('natural persons') might also face criminal charges (which happens rarely). But self-driving cars (and AI in general) present a very different case, because of their visibility on the streets, their great number, the immediacy of their involvement, and the human-programmed decision-making they follow.
The question of liability is indeed a crucial one. Who is to blame: the safety driver, Uber, or Volvo – or all three together? If it is Uber, the question arises whether or not criminal charges could be brought against Uber. And if so, who is held liable: the CEO, the head of Uber's self-driving division, or somebody else? If criminal charges are not brought, this might lead to a public outcry against the injustice of such a ruling: if a human driver is liable for a fatal accident, he or she potentially faces a prison sentence, whereas the CEO of a self-driving car company gets away with paying damages to surviving relatives, using venture capital money that is not even his own.
The second major question involves decision-making. Humans sometimes have to take decisions in the moment, decisions that external factors can hardly change. When it comes to algorithms, by contrast, human programmers take these decisions in advance. A good example of life-or-death decisions taken by a robot (and the algorithms that operate it) comes from the Hollywood movie "I, Robot", starring Will Smith. A rescue robot has to choose whom to save from a sinking car. It ultimately chooses the adult, because of his higher survival probability, while a young child is left for dead in the water.
A real-world example comes from France, where a bus driver saved the lives of his passengers. Facing a failure of the main brakes, he decided to drive directly into a mountainside (which meant certain death for him) instead of taking on a risky hairpin bend that would have increased his own survival probability but decreased that of the passengers. Reviewing these events, it becomes clear that transparent and strict ethical values have to guide the decision-making of algorithms, so as to serve society in the best possible way.
Therefore, an informed public discussion about decision-making algorithms has to take place, because consent is needed for a technology that can have such serious repercussions. Clearly, there is a lot of potential for intelligent algorithms to improve human life, whether through fewer overall traffic fatalities or state-of-the-art diagnoses in health care. However, there has to be a public discussion and a consensus about which values robot interventions shall be based on. These kinds of decisions should in no case be left to private companies or intelligent machines alone, but be subject to public scrutiny and open debate.
Suggestions for a society supported by AI
Throughout history, discussions about proper moral and ethical value systems, especially with regard to the interplay of communitarian and individual identities, have always been critical. Apart from these debates, many moral rules and ethical values have in previous times been rather implicit or even left to chance. In the face of 'quick-learning' machines that base their 'decision-making' on algorithms, implicit or tacit knowledge is not yet an option; such rules and values therefore need to be codified. Making them explicit can at times seem ugly and uncomfortable, but it is a vital and necessary exercise in determining the kind of society we envision for our future.
Human liability in the face of criminal charges is still necessary for the sake of justice, but also to guarantee that only the safest and soundest technology is allowed to operate in a human environment. Simply letting CEOs and companies get away with paying damages will not solve the complex moral and ethical questions around the issue of liability. In a laissez-faire jurisdiction, this could lead to more casualties, as companies might opt to pay damages while continuing to test technologies that are not truly safe in order to win over their competitors.
Algorithms are today the centrepiece of many tech companies and are regarded as business secrets that are not subject to public scrutiny. As algorithms become more prominent in people's everyday lives, monitoring authorities must get access to them in order to prevent wrongdoing. Needless to say, there have to be rigid rules in place to prevent governments from abusing this information. Also, there should always be the possibility to override a decision made by an AI, as it might be incorrect or lack important contextual information.
Some initial suggestions for AI decision-making in emergencies, when there is no time for a human decision, are as follows:
- Persons who have a longer life ahead of them (e.g. children) should be prioritised in emergencies (unless their survival probability is extremely low). A second exception applies when a mentally stable adult prefers death for whatever reason, if this is known or can be established within the short time available.
- In a life-or-death situation where AI is responsible for determining whether to save the driver of a vehicle or a higher number of 'strangers' (and if this is technically feasible), the strangers should be saved. A rule of thumb should be that the number of casualties is kept as low as possible.
- Personal wealth, influence, or celebrity status should never play a role in emergency situations. Nor shall attributes such as race, gender, ability, or age be of any relevance in this case. The general rule should be that a life is a life and every human has the same value. Exceptions to this rule are derived from the points made above.
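To see what codifying such rules would actually demand, the three suggestions above can be sketched as an explicit priority function. This is a deliberately crude illustration under stated assumptions (the threshold for an "extremely low" survival probability and all names are hypothetical), not a proposal for real deployment:

```python
# Illustrative sketch: encoding the three suggested emergency rules explicitly.
# The threshold and all names are hypothetical assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Person:
    remaining_life_years: float   # rule 1: a longer life ahead raises priority
    survival_probability: float   # rule 1 exception: extremely low chance
    prefers_death: bool = False   # rule 1 second exception, if it can be known

MIN_SURVIVAL_PROBABILITY = 0.05   # hypothetical cutoff for "extremely low"

def rescue_priority(p: Person) -> float:
    """Higher value = rescued first. Note that wealth, status, race, gender,
    etc. deliberately appear nowhere in this function (rule 3)."""
    if p.prefers_death or p.survival_probability < MIN_SURVIVAL_PROBABILITY:
        return 0.0
    return p.remaining_life_years

def choose_group(groups):
    """Rule 2: minimise casualties by preferring the larger savable group."""
    return max(groups, key=lambda g: sum(1 for p in g if rescue_priority(p) > 0))
```

Even this toy version exposes the uncomfortable choices involved: someone must pick the cutoff value, decide how "remaining life" is estimated, and accept that the function will be applied mechanically, which is exactly why such parameters belong in public debate rather than in a company's trade secrets.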
We have already seen that fierce competition (also between governments) for profits, market shares, tax revenues, and a reputation as an innovation leader can lead to the hasty and uncontrolled introduction of technologies, or to a lax handling of safety concerns, that puts human lives at risk. In every discussion about AI, the safety and well-being of human lives should be the first priority. Business interests should never trump human life. It remains to be seen whether driverless cars are a premature technology or whether an upstart such as Uber neglected vital safety measures in order to gain a competitive edge and maximise future profits.
There is already evidence that, for example, facial recognition technologies are better at recognising white male faces than black or brown female faces, which could have serious implications for the future. This means that algorithms written by human programmers may well reproduce biases and inequalities present in society.