We frequently hear about self-driving cars that make mistakes and cause accidents. Some of those accidents are deadly. However, the conclusion that self-driving cars are a bad idea and should be banned outright, both in principle and by law, misses the larger picture. Let's look at it from a historical perspective.
Between 1913 and 2021, 4,090,285 people died in vehicle accidents in the USA (source: NSC). That fatality figure is comparable to the entire population of Los Angeles being wiped out. While exact figures for Europe and other regions are elusive, a rough estimate based on population density and vehicle adoption rates suggests a similar death toll in Europe and perhaps half that number in the rest of the world, cumulatively. That amounts to a staggering estimated total of 10 million deaths in vehicle accidents since automobiles became a popular means of personal transportation. And this figure only accounts for fatalities; the number of injuries, likely in the hundreds of millions, paints an even grimmer picture of automobile safety.
What are the reasons for these horrific accidents? Most stem from human error, such as drunk driving and speeding (source: Forbes). Other causes include distracted driving, reckless driving, and poor road conditions. In other words, in most cases, the risk lies with the driver.
As a thought experiment, imagine a world where the automobile has not yet been invented. We all get around by walking, riding horses, or cycling. Now imagine a progressive engineer who builds a car and seeks regulatory approval. Government agencies conduct a long-term risk assessment and arrive at a sobering prediction: 10 million people will likely die worldwide over the next century in accidents involving these horrific machines.
Do you seriously believe that any politician would approve the introduction of such deadly, unpredictable, emission-spewing two-ton vehicles onto our streets today?
I think I can speak for most of us: we are concerned about the safety of self-driving cars. However, based on the above insights, we should be equally concerned about the safety of cars operated by humans. Appealing to the moral obligation to drive safely is a poor substitute for a real solution, as we have learned that many of us will not comply with such appeals. I recognize that this discussion is a statistical deliberation that may appear cold or even unethical. Yet dismissing self-driving cars for not being safe enough misses the broader point. I agree that self-driving cars are in an experimental phase, but let's not forget that the automobile itself is a 130-year-old experiment, too. The lethal potential of human-operated cars is an issue we have yet to address, a proverbial can that we keep kicking down the road, hoping for the best.
I firmly believe that self-driving cars can save millions of lives, even if we can never be sure what the side effects will be. Admittedly, once the new technology becomes a pawn in political and industrial power games, the outcome may be one nobody intended. Nevertheless, it is our moral obligation to start that game, because waiting another hundred years while counting the deaths is not morally acceptable.
I am a project manager (Project Management Professional, PMP), a project coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.