The Moral Dilemma of Self-Driving Cars

A large truck speeding in the opposite direction suddenly veers into your lane. Jerk the wheel left and smash into a bicyclist? Swerve right toward a family on foot? Slam the brakes and brace for head-on impact?

Drivers make split-second decisions based on instinct and a limited view of the dangers around them. The cars of the future -- those that can drive themselves thanks to an array of sensors and computing power -- will have near-perfect perception and react based on preprogrammed logic.

Cars that do most or even all of the driving may be much safer, but accidents will still happen.

It's relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.
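To make that point concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from any real vehicle software; every class name and harm score below is an illustrative assumption. Once a policy exists, expressing it in code is trivial; the contested question is how the harm scores would be assigned, i.e. whose safety the policy weighs and by how much.

```python
# Hypothetical sketch only: encoding a collision response is easy once a
# policy has been chosen. Deciding the policy is the hard part.

from dataclasses import dataclass
from enum import Enum, auto


class Maneuver(Enum):
    SWERVE_LEFT = auto()
    SWERVE_RIGHT = auto()
    BRAKE_STRAIGHT = auto()


@dataclass
class Outcome:
    maneuver: Maneuver
    estimated_harm: float  # placeholder score; defining "harm" is the open question


def choose_maneuver(outcomes: list[Outcome]) -> Maneuver:
    """Pick the maneuver with the lowest estimated harm.

    The selection logic is one line; the ethical debate is entirely
    about how estimated_harm gets computed in the first place.
    """
    return min(outcomes, key=lambda o: o.estimated_harm).maneuver


if __name__ == "__main__":
    options = [
        Outcome(Maneuver.SWERVE_LEFT, estimated_harm=0.7),
        Outcome(Maneuver.SWERVE_RIGHT, estimated_harm=0.9),
        Outcome(Maneuver.BRAKE_STRAIGHT, estimated_harm=0.8),
    ]
    print(choose_maneuver(options))  # -> Maneuver.SWERVE_LEFT
```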

"The problem is, who's determining what we want?" asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. "You're not going to have 100 percent buy-in that says, 'Hit the guy on the right.'"

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn't a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

"People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven't studied that issue," said Ron Medford, the director of safety for Google's self-driving car project.

One of those philosophers is Patrick Lin, a...