Plans for Self-Driving Cars Have Pitfall: The Human Brain

Experts say the development of self-driving cars over the coming decade depends on an unreliable assumption by many automakers: that the humans in them will be ready to step in and take control if the car's systems fail.

Instead, experience with automation in other modes of transportation like aviation and rail suggests that the strategy will lead to more deaths like that of a Florida Tesla driver in May.

Decades of research show that people have a difficult time keeping their minds on boring tasks like monitoring systems that rarely fail and hardly ever require them to take action. The human brain continually seeks stimulation. If the mind isn't engaged, it will wander until it finds something more interesting to think about. The more reliable the system, the more likely it is that attention will wane.

Automakers are in the process of adding increasingly automated systems that effectively drive cars in some or most circumstances, but still require the driver as a backup in case the vehicle encounters a situation unanticipated by its engineers.

Tesla's Autopilot, for example, can steer itself within a lane and speed up or slow down based on surrounding traffic or on the driver's set speed. It can change lanes with a flip of the turn signal, automatically apply the brakes, or scan for parking spaces and parallel park on command.

Joshua Brown, a 40-year-old tech company owner from Canton, Ohio, who was an enthusiastic fan of the technology, was killed when neither he nor his Tesla Model S sedan's Autopilot braked for a truck making a left turn on a highway near Gainesville, Fla., according to federal investigators and the automaker.

Tesla warns drivers to keep their hands on the wheel even when Autopilot is engaged; if they do not, the vehicle will automatically slow to a stop. A self-driving system Audi plans to introduce in its...