
Elon Musk's Big Lie About Tesla Is Finally Exposed

FREMONT, CA – SEPTEMBER 29: Tesla CEO Elon Musk speaks during an event to launch the new Tesla Model X crossover SUV, after several production delays, on September 29, 2015 in Fremont, California. (Photo by Justin Sullivan/Getty Images)

Justin Sullivan/Getty Images

Back in 2016, Elon Musk claimed that Tesla cars "could drive autonomously and more safely than a human. Right now." It was a lie that drove up Tesla's stock price and made Musk one of the richest people in the world. That lie is now falling apart in the face of a new recall of two million Teslas. It also reveals to the general public what close observers of Tesla have always known (and what the company itself admits in the fine print of its legal agreements): Tesla's so-called "self-driving" technology works perfectly, as long as there is a human behind the steering wheel, vigilant at all times.

Of all the scandals surrounding venture capital-fueled excess over the last decade, Tesla's dangerous and over-the-top approach to promoting its automation technology has been one of the most important, but also one of the most overlooked. Just as with the Mechanical Turk of 1770, everyone is so focused on the technology itself that they miss the human who makes the entire spectacle work. Equally worrying, regulators have overlooked the fact that forcing people to babysit incomplete systems creates entirely new risks on public roads.

If you read the official announcement of Tesla's recall of more than two million Autopilot-equipped vehicles, you'll notice that it doesn't actually describe a defect in the Autopilot technology itself. At least not in the sense that the system's cameras break, or that its software mistakes red lights for green lights, or that its AI makes troubling choices in "trolley problem" scenarios, or anything like that. The problem, oddly enough, has everything to do with people.

Strip away the regulatory technobabble and the finding is simple: people sometimes do the strangest things. It turns out that a person will sometimes stop paying attention to the road when using a "driver assistance system" that steers, brakes, and accelerates for them. This wouldn't be a problem if Teslas could actually drive themselves safely, and if the company accepted legal liability for the actions its software takes while navigating a 5,000-pound vehicle on public roads. Since neither is true, however, users must be ready at any moment to save Autopilot from itself, or expect it to drive them into an object, perhaps a semi-truck turning across their lane, at high speed. It has already happened several times.


In short, when a human stops paying attention, it is as much of a defect as a camera or radar sensor becoming disconnected from the computer running the code. That makes sense once you dig deeper into Tesla's fine print and realize that the owner bears all legal responsibility for everything the system ever does. By telling customers that its cars are nearly self-driving and designing them without guardrails, Tesla invites inattention and then shifts the blame onto the victim. (The company did not respond to a request for comment for this article.)

To be clear: if humans were a manufactured component of the Autopilot system, its designers would have had to account for one of our best-known flaws: when we get bored, we stop paying attention. A 1983 paper on the "ironies of automation" highlighted a problem that dates back to early 20th-century behavioral science: when automation takes over too much of a task, humans become inattentive and can miss the crucial moment they are needed for, especially if it is time-sensitive, such as taking over to prevent a crash. It's not about being a bad driver or a bad person. No human can monitor a boring task indefinitely without eventually becoming inattentive and unable to perform a complex rescue maneuver within a second.

Of course, all of this has been well understood for years in the specific context of Autopilot. After the first publicly reported Autopilot deaths, back in 2016 when Musk was claiming the cars were already autonomous and safer than humans, the National Transportation Safety Board began investigating Autopilot crashes. In three fatal crashes, two of them under nearly identical circumstances, drivers died because they weren't paying attention when their Tesla drove them at high speed into an unexpected obstacle. In the two nearly identical Florida crashes, the system had been activated on a type of road it was not designed for.


What the NTSB found in these three crashes was not a single defect in Autopilot's self-driving system per se, because from a legal perspective Autopilot was never technically an autopilot at all. By calling Autopilot a "Level 2" driver-assistance system (under the Society of Automotive Engineers' arcane taxonomy of automation levels), Tesla created a technology that automates the car's key controls while legally leaving the human driver in charge. A key shortcoming was the absence of driver monitoring, or of any system to ensure that the human who bears the legal and ultimate safety responsibility is actually paying attention. Combine that with the ability to activate the system anywhere, even on roads Tesla says it isn't designed for, and you have the bizarre new horror of people looking away while the automation they trust too much drives them into easily avoidable (albeit unexpected) objects.

Due to a peculiarity of regulatory design, the NTSB has the gold standard of accident investigation capabilities, but has no authority to do more than make recommendations based on its findings. After investigating three fatal crashes, the board implored the agency with primary regulatory power, the National Highway Traffic Safety Administration, to take action, but there was no response. Both NHTSA and Tesla ignored evidence from three in-depth investigations that pointed to this fatal combination of flaws in Autopilot's design.

At least, that is, until 2021, when, according to the new recall notice, NHTSA opened an investigation into as many as 11 Autopilot crashes involving emergency vehicles. By that point, Musk had staged numerous hype events around the technology that drove up the stock price, and had been collecting customer deposits since late 2016 for a "full self-driving" version of it. Despite the reported deaths, and clear evidence that the only video of a driverless Tesla was heavily staged, even Musk admits that hype about self-driving technology has been the central factor in the recent growth of his wealth to gargantuan proportions.

But of course it all rests on the backs of the people behind the steering wheels, whom Madeleine Clare Elish calls "moral crumple zones." Tesla keeps these paying liability sponges behind the wheel largely on the strength of a statistical lie: that Autopilot is safer than human drivers. Tesla has made this claim officially in its quarterly safety reports since 2018 (though Musk had been making it far longer), even though its crude statistical comparison accounts for none of the best-known factors influencing road safety. When road-safety researcher Noah Goodall adjusted the best publicly available data for factors such as road type and driver age in a peer-reviewed article, Tesla's claimed 43 percent reduction in crashes turned into an 11 percent increase.
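To see how such an adjustment can reverse a headline number, here is a minimal sketch with entirely invented figures (they are not Goodall's data, Tesla's data, or anyone else's): if Autopilot miles skew toward highways, where every vehicle crashes less often per mile, the fleet-wide average can look safer even when Autopilot performs worse on each road type.

```python
# Hypothetical illustration (invented numbers, not Tesla's or Goodall's data) of how
# a pooled crash rate can flip once you control for road type (Simpson's paradox).

# (millions of miles driven, number of crashes), broken out by road type
autopilot_fleet = {"highway": (900, 450), "city": (100, 200)}
human_fleet     = {"highway": (300, 120), "city": (700, 1260)}

def pooled_rate(fleet):
    """Crashes per million miles, ignoring road type entirely."""
    miles = sum(m for m, _ in fleet.values())
    crashes = sum(c for _, c in fleet.values())
    return crashes / miles

def road_rate(fleet, road):
    """Crashes per million miles on a single road type."""
    miles, crashes = fleet[road]
    return crashes / miles

print(f"pooled : autopilot {pooled_rate(autopilot_fleet):.2f} vs human {pooled_rate(human_fleet):.2f}")
for road in ("highway", "city"):
    print(f"{road:7}: autopilot {road_rate(autopilot_fleet, road):.2f} "
          f"vs human {road_rate(human_fleet, road):.2f}")

# pooled : autopilot 0.65 vs human 1.38  -> autopilot "looks" about twice as safe
# highway: autopilot 0.50 vs human 0.40  -> worse on highways
# city   : autopilot 2.00 vs human 1.80  -> worse in cities
# The pooled number flatters autopilot only because its miles skew toward highways,
# where every vehicle crashes less often per mile.
```

Goodall's adjustment addressed exactly this kind of confounding, along with factors like driver age; the only point of the sketch is that a pooled crash rate, on its own, proves nothing.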

If Tesla had developed an Autopilot-like system with the goal of increasing safety, it would have combined the strengths of sensor technologies with the remarkable cognitive power of a human, creating an augmented "cyborg" system with the human at its center. Instead, it built a simulacrum of a self-driving system, a spectacle for consumers and Wall Street alike, boosting profits and the stock price at the expense of anyone who happened to be looking at their phone when the system made a mistake. Rather than augmenting our safety as drivers, Autopilot leaves humans waiting attentively to react the instant something goes wrong, a kind of "vigilance task" that humans are notoriously bad at.

Now that Tesla has been caught selling a simulacrum of autonomous driving and overstating its safety benefits, its response is the usual one: it can fix all of this with a software update. Since a mere software update cannot add the infrared eye-tracking cameras or the laser-mapped, pre-approved roads that competing systems rely on, NHTSA has to play along. The only thing Tesla can do through software is constantly bombard drivers with warnings reminding them of the truth it has hidden for so long: you are actually in control here, pay attention, the system will not protect you.

But even with only the modest victory of forcing a recall based on human factors, NHTSA has contributed to the growing understanding that Tesla's claims about its technology are untrue and unsafe. Musk has argued since 2019 that Tesla's self-driving technology was advancing so quickly that adding driver monitoring would make no sense, and that any human input would only introduce errors into the system. After four years of giving him the benefit of the doubt, NHTSA is finally calling his bluff.

While this recall hardly represents a heroic effort to protect public roads, it does open the door to more comprehensive action. The Justice Department has been investigating Tesla's "full self-driving" claims for some time, and the tacit admission that humans are still the safety-critical factor in Tesla's automated driving system could be the prelude to more vigorous enforcement. More importantly, it provides ammunition for an army of hungry personal-injury lawyers to drain Tesla's coffers in a frenzy of civil litigation.


If Tesla's dangerous and deceptive foray into self-driving technology is coming to an end, it can't come soon enough. So long as the world's richest man got there at least in part by creating new risks on public roads, his success sets a troubling example for future aspirants to massive wealth. If only for fear of that example, we should hope this recall is just the beginning of regulatory action against Autopilot.

Ed Niedermeyer is the author of Ludicrous: The Unvarnished Story of Tesla Motors and co-host of The Autonocast. He has reported on and written commentary about cars and mobility technology for a variety of outlets since 2008.