Scientists teach autonomous vehicles to tackle real-world moral decisions


In the world of self-driving cars, making ethical decisions is a complex challenge.

Researchers are now moving beyond the classic ‘trolley problem’ to understand how autonomous vehicles should make moral choices in everyday driving scenarios.

This new approach aims to train self-driving cars to make decisions that reflect human values in more realistic traffic situations.

A recent study, published in the journal AI & Society, sheds light on this innovative research.

The paper, titled “Moral judgment in realistic traffic scenarios: Moving beyond the trolley paradigm for ethics of autonomous vehicles,” was led by Dario Cecchini, a postdoctoral researcher at North Carolina State University, and Veljko Dubljević, an associate professor at NC State.

The traditional ‘trolley problem’ presents a dilemma in which one must choose whether to intentionally harm one person in order to save several others.

This problem has been a standard model for studying moral judgment in traffic, especially for autonomous vehicles.

The typical scenario involves a self-driving car deciding between swerving and hitting an obstacle or moving forward and hitting a pedestrian. However, real-life driving involves more nuanced moral decisions, such as speeding, running red lights, or yielding to ambulances.

These everyday choices can escalate into critical situations.

For example, a driver speeding and ignoring a red light might suddenly face a choice between swerving into traffic and crashing. Currently, there’s limited data on how we morally judge these more common driving decisions.

To fill this gap, the researchers developed a series of experiments focusing on everyday traffic situations. They crafted seven different driving scenarios, like a parent deciding whether to run a red light to get their child to school on time. These scenarios were brought to life in a virtual reality environment, giving participants a realistic experience of the decisions drivers make.

The research is grounded in the Agent Deed Consequence (ADC) model. This model suggests that moral judgments consider three elements: the agent (the character or intent of the person acting), the deed (the action taken), and the consequence (the outcome of the action).
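To make that three-part structure concrete, here is a minimal Python sketch of how a single scenario variant could be represented. It is not from the study itself; the field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrafficScenario:
    """One variant of a driving scenario, decomposed along the ADC model."""
    agent: str        # character or intent of the driver, e.g. "caring parent"
    deed: str         # the action taken, e.g. "stops at a yellow light"
    consequence: str  # the outcome, e.g. "child arrives at school safely"

# A hypothetical variant of the school-run scenario described in the article
example = TrafficScenario(
    agent="caring parent",
    deed="stops at the yellow light",
    consequence="child gets to school on time, no harm done",
)
```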

The team created eight variations of each traffic scenario, altering the agent, deed, and consequence. For instance, one scenario might feature a caring parent who stops at a yellow light and safely gets their child to school on time.

Another scenario could involve an abusive parent who runs a red light, causing an accident. Each of these versions changes the nature of the parent, their decision at the traffic signal, and the outcome.
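Assuming each of the three elements is framed either positively or negatively, as in the contrasting parents above, the eight versions fall out of a simple 2 × 2 × 2 design. The enumeration below is a hypothetical sketch of that idea, with made-up framings rather than the study's actual wording.

```python
from itertools import product

# Hypothetical positive/negative framings for each ADC element;
# two options per element gives 2 x 2 x 2 = 8 scenario variants.
agents = ["caring parent", "abusive parent"]
deeds = ["stops at the yellow light", "runs the red light"]
consequences = ["child arrives safely and on time", "causes an accident"]

variants = list(product(agents, deeds, consequences))
print(len(variants))  # 8

for agent, deed, consequence in variants:
    print(f"{agent} | {deed} | {consequence}")
```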

Participants are asked to rate the morality of the driver’s behavior in each scenario on a scale from 1 to 10. These ratings will show which driving behaviors people consider moral, and that data is crucial for developing AI algorithms for moral decision-making in autonomous vehicles.
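One way such ratings could feed into an algorithm is as an average moral-acceptability score per scenario variant. The snippet below is purely illustrative, with made-up ratings and a hypothetical aggregation step; the article does not describe the researchers' actual modeling approach.

```python
from statistics import mean

# Hypothetical participant ratings (1 = least moral, 10 = most moral)
# for two variants of the school-run scenario.
ratings = {
    ("caring parent", "stops at yellow light", "arrives safely"): [9, 8, 10, 9],
    ("abusive parent", "runs red light", "causes accident"): [1, 2, 1, 3],
}

# An averaged score per variant could serve as a target signal for a
# moral-judgment model; this aggregation is only a sketch.
for variant, scores in ratings.items():
    print(variant, round(mean(scores), 1))
```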

The researchers have already conducted pilot tests to refine the scenarios, ensuring they are realistic and understandable.

The next phase involves large-scale data collection, with thousands of participants. This data will help develop more interactive experiments to deepen the understanding of moral decision-making.

Ultimately, this research will contribute to creating algorithms for use in autonomous vehicles. Further testing will be required to evaluate how these algorithms perform in real-world situations.

This pioneering work marks a significant step towards equipping self-driving cars with the ability to make morally sound decisions, reflecting human values in everyday driving scenarios.