Tesla Autopilot Was — and Still Might Be — Uniquely Risky

A federal report released today found that Tesla’s Autopilot system was involved in at least 13 fatal crashes in which drivers misused the system in ways that the automaker should have foreseen and done more to prevent. Additionally, the report called Tesla an “industry outlier” because its driver assistance features lacked some of the basic precautions that its competitors had taken. Now regulators are wondering whether a Tesla Autopilot update that aims to fix these fundamental design problems and prevent fatal incidents has gone far enough.

These fatal crashes left 14 people dead and 49 injured, according to data collected and published by the National Highway Traffic Safety Administration, the federal highway safety agency in the United States.

At least half of the 109 “frontal plane” crashes closely examined by government engineers – those in which a Tesla struck a vehicle or obstacle directly in its path – involved hazards that were visible five seconds or more before impact. That was enough time for an attentive driver to avoid the crash or at least lessen its severity, government engineers concluded.

In one such case, a March 2023 crash in North Carolina, a Model Y traveling at highway speed struck a teenager as he exited a school bus. The teenager was flown to a hospital to be treated for serious injuries. NHTSA concluded that “both the bus and the pedestrian would have been visible to an attentive driver and would have allowed the driver to avoid or minimize the severity of this accident.”

Government engineers wrote that over the course of their investigation, they “observed a trend of preventable accidents involving hazards that would have been visible to an attentive driver.”

Tesla, which dissolved its public affairs department in 2021, did not respond to a request for comment.

Damningly, the report called Tesla “an industry outlier” in its approach to automated driving systems. Unlike other auto companies, the report said, Tesla let Autopilot work in situations it wasn’t designed for and failed to pair it with a driver engagement system that forced its users to pay attention to the road.

Regulators concluded that the Autopilot product name itself was a problem, encouraging drivers to rely on the system rather than cooperate with it. Competing automakers tend to use terms like “assistance,” “sense,” or “team,” in part because these systems are not designed to drive the car on their own, according to the report.

Last year, California state regulators accused Tesla of falsely advertising its Autopilot and Full Self-Driving systems, claiming Tesla misled consumers into believing the cars could drive themselves. In a filing, Tesla said the state’s years-long failure to object to the Autopilot branding represented tacit approval of the automaker’s advertising strategy.

NHTSA’s investigation also concluded that Autopilot was more resistant than competing products when drivers attempted to steer their vehicles themselves – a design that, the agency wrote in its summary of the nearly two-year investigation into Autopilot, discourages drivers from staying engaged in the work of driving.

A new Autopilot probe

These crashes occurred before Tesla recalled its Autopilot software earlier this year and pushed a fix via an over-the-air update. In addition to closing that investigation, regulators have launched a new probe into whether the updates Tesla announced in February did enough to prevent drivers from misusing Autopilot, misunderstanding when the feature should be used, or using it in locations where it is not designed to operate.

The review comes after a Washington state driver said last week that his Tesla Model S was on Autopilot – while he was using his phone – when the vehicle struck and killed a motorcyclist.
