Written by
Thomas Clapper
Category
Book Club
May 30

Moral AI and Responsibility – Book Review

About once a week, I see an article showcasing the latest and greatest AI model or feature that is coming out. Similarly, about once a week there is a doom-and-gloom article warning society that we are just one bad AI model away from the end of humanity. Moral AI, however, aims to bridge this gap, offering a balanced perspective that lies somewhere in the middle.

Moral AI: And How We Get There, a book by Jana Schaich Borg, Vincent Conitzer, and Walter Sinnott-Armstrong, goes beyond the headlines by taking an academic approach to how morality fits into the artificial intelligence conversation.

Starting with a clear definition of AI, the authors of Moral AI dispel the mystery surrounding the technology. The opening chapter not only provides an overview of AI's current state but also demonstrates its practical applications.

The remainder of the book systematically discusses concerns about AI, including safety, privacy, and fairness. Each section includes relevant stories of AI being misused and delves into more philosophical questions of how it *ought* to be used.

Responsibility and AI

For me, the most crucial chapter was the one on responsibility. In a world where AI products are often built by large teams inside a company led by a visionary with many interests (financial, humanitarian, technological, and so on), the question of responsibility becomes paramount.

The crux of moral AI is asking *who will be responsible?* If everyone feels that someone else is responsible, it is hard to see how anyone will purposefully ensure their AI product improves the world.

It is important to note that the authors operate on the fundamental assumption that there is a moral code to follow. I agree with the authors that some actions are clearly right and others clearly wrong.

Whether you agree or disagree with the authors, most would agree that products should be generally safe, people deserve some privacy, and products should produce fair results – even if they only deal with legal issues.

Example of Responsibility

The authors highlight the tragic story of Elaine Herzberg, who was killed by an Uber self-driving test car in 2018. They explore the different angles from which someone might be deemed responsible for the incident – including legal and moral responsibility.

Some of the potentially responsible parties include –

  • The driver who was meant to be monitoring the car
  • Uber, which asked the driver to perform distracting tasks
  • Herzberg, who was jaywalking and had methamphetamine in her system
  • Those who developed the self-driving system, including those who
    • Knew the cars were involved in frequent accidents
    • Failed to account for jaywalking and for people walking bikes across the street
    • Coded the car's brakes to delay if the system didn't recognize an object
    • Turned off the car's automatic braking system
  • The executives who must have been aware of the risks but continued with testing
  • Arizona's government, which allowed Uber to test despite knowing Uber had left California because that state's testing requirements were too rigorous
  • A governor who seemed to purposefully hide the facts about the car testing program to circumvent concerned citizens
  • Finally, regarding AI itself, can an AI model be held responsible?

The question of responsibility is vastly complex and often spans multiple parties.

No silver bullet

I appreciate that the authors don't land on some silver bullet that fixes such a complex issue. Instead, they take a *next steps* approach, exploring how we might mitigate risk.

The best approach comes down to intentionality. The authors fully recognize that we will only arrive at a final solution after AI has already caused harm. Yet there is a highly effective way to increase the fairness, safety, and privacy of AI in the meantime: being intentional.

Looking at the potentially responsible parties above, it is clear that corners were cut at every step. What if the driver had been paying attention? If Uber hadn't asked her to check her Slack channels? If Herzberg had used the crosswalk 350 feet away? If Uber had delayed its real-world testing and simulated more events? The list goes on.

Larger companies could introduce teams that consider a product's moral implications. KPIs could be set to encourage better AI behavior. Solo developers could stop and ask how their new AI feature might cause harm in the world.

Slowing down, intentionally asking for advice, and developing objectives that enhance the morality of an AI product are excellent first steps in creating safe, private, and fair products.

This doesn't solve everything, and the authors explore much more in the book about how morality might be built into AI and how we can respond as a society.

An Intentional Approach to AI  

Reading this book and considering how AI affects, and will continue to affect, our lives are significant first steps in intentionally interacting with AI.

A fearful attitude toward AI, where we either ignore it or avoid it altogether, is unhealthy. Similarly, an overly optimistic view that AI will solve all the world's problems is no healthier.

Instead, we should take a balanced approach to the benefits and costs of AI whenever it is implemented and do our part to develop and demand equitable, safe, and privacy-centered systems that improve our world.