Tesla Motors says the Autopilot system for its Model S sedan “relieves drivers of the most tedious and potentially dangerous aspects of road travel.” The second part of that promise was put in doubt by the fatal crash of a Model S earlier this year, when its Autopilot system failed to recognize a tractor-trailer turning in front of the vehicle. Tesla says the driver, Joshua Brown, also failed to notice the trailer in time to prevent a collision. The result? In Tesla’s own words, “the brake was not applied”—and the car plowed under the trailer at full speed, killing Brown.
Stephen Casner, a research psychologist in NASA’s Human Systems Integration Division, puts it more bluntly: “News flash: Cars in 2017 equal airplanes in 1983.”
Casner is not just referring to basic mechanisms that keep the nose of the plane level, similar to cruise control in a car. He means, in his words, “the full package”: true autonomous flight, from just after takeoff up to (and even including) landing. “The first Madonna album had not come out yet when we had this technology,” Casner says. “And we are, 33 years later, having this very same conversation about cars.”
According to some researchers, this potentially dangerous contradiction is baked into the demand for self-driving cars themselves. “No one is going to buy a partially automated car [like Tesla’s Model S] just so they can monitor the automation,” says Edwin Hutchins, a MacArthur Fellow and cognitive scientist who recently co-authored a paper on self-driving cars with Casner and design expert Donald Norman. “People are already eating, applying makeup, talking on the phone and fiddling with the entertainment system when they should be paying attention to the road,” Hutchins explains. “They’re going to buy [self-driving cars] so that they can do more of that stuff, not less.”
Read the whole story: Scientific American