WaPo has this story about the “shocking” death toll of Tesla’s Autopilot feature. It misuses the numbers to draw the wrong conclusions. There’s nothing to see in the public numbers: they show neither that Tesla is unsafe nor that it is safer.
Misuse of the data
The WaPo story egregiously misuses the data.
The biggest factor is simply that Tesla reports all crashes that meet the criteria, whereas most car makers don’t. Teslas immediately phone home to report a collision (and whether Autopilot was enabled), while other car makers only know their car was in a crash if their customers tell them. The NHTSA makes this clear:
Due to variation in data recording and telemetry capabilities, the summary incident report data should not be assumed to be statistically representative of all crashes.
The way Tesla invades customer privacy is actually a scandal by itself. Tesla knows far more about its customers than competing car makers do. Tesla PR didn’t respond to the journalist, and no wonder: defending itself on the safety issue means embroiling itself in the privacy issue.
In any case, the fact that Tesla engineers have unparalleled access to accident data means they can better improve their system. They can analyze each crash and make sure it won’t happen again. Other car makers don’t get the crash data from their cars, and thus have no idea why their feature failed (or whether it was responsible at all).
Likewise, the NHTSA makes it clear that the data isn’t normalized, either in terms of market share or number of miles driven with L2 assistance enabled. The WaPo stories report only absolute numbers.
Tesla claims 9 billion miles driven with Autopilot enabled, and that its accident rate is 5 to 10 times lower than normal for American drivers. This suffers from the same data normalization problem: Autopilot is used primarily for freeway driving, which is dramatically safer than city roads. Even if Autopilot had no impact on safety, we’d see numbers like this.
The point is that data normalization matters, and that without context, such numbers mean nothing.
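To make that concrete, here’s a minimal sketch with entirely made-up numbers (the freeway and city crash rates and the usage mix below are assumptions for illustration, not measurements): a feature used almost exclusively on freeways looks several times safer per mile even if it changes nothing about safety.

# Hypothetical per-mile crash rates (crashes per million miles) --
# assumed for illustration only, not real data.
FREEWAY_RATE = 0.5   # freeway driving is much safer per mile
CITY_RATE = 3.0      # city driving is much riskier per mile

# Assumed mix for the average driver: 40% freeway miles, 60% city miles.
avg_rate = 0.4 * FREEWAY_RATE + 0.6 * CITY_RATE

# Assumed mix when a driver-assist feature is on: 95% freeway, 5% city.
feature_rate = 0.95 * FREEWAY_RATE + 0.05 * CITY_RATE

print(f"average driver:  {avg_rate:.2f} crashes per million miles")
print(f"feature enabled: {feature_rate:.2f} crashes per million miles")
print(f"apparent improvement: {avg_rate / feature_rate:.1f}x")

With these made-up numbers the feature appears roughly 3 times safer purely because of where it gets used; tilt the mix further toward freeways and you can manufacture a 5x or 10x “improvement” just as easily.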
The WaPo story claims an uptick in accidents after some release of the FSD software. This is nonsense: there’s a constant uptick in everything with Tesla, because its car shipments have been growing 50% every year.
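A quick back-of-the-envelope sketch makes the point (the starting fleet size and per-vehicle crash rate below are invented; only the 50% growth figure comes from the paragraph above): hold the per-vehicle rate constant and the absolute count of crashes still climbs every year, so an uptick after any particular software release means nothing by itself.

fleet = 100_000       # assumed starting fleet size (hypothetical)
crash_rate = 0.002    # assumed constant crashes per vehicle per year

for year in range(1, 6):
    crashes = fleet * crash_rate
    print(f"year {year}: fleet={fleet:>9,.0f}  crashes={crashes:5.0f}")
    fleet *= 1.5      # ~50% annual growth in vehicles on the road

In this toy model the crash count rises 50% a year even though each individual car is exactly as safe as before.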
This misuse of data is like the antivaxxers misusing the VAERS vaccine side-effect reporting system to claim the covid vaccine was dangerous. It’s not a simple mistake; it’s deliberate misinformation. Those citing the WaPo article are even worse.
Autopilot vs. FSD
Tesla has two different systems, Autopilot and FSD. The WaPo article deliberately confuses the two — while pretending to explain the difference. Most people quoting the article likewise deliberately confuse them.
Autopilot is an advanced driver assistance system (ADAS, SAE L2), something that most other car vendors also provide. While the driver can take their hands off the wheel (and feet off the pedals), the feature only keeps the car in the same lane. It doesn’t recognize stop lights or stopped school buses.
FSD is a fully automated driving system (ADS, SAE L3), where the car can do everything needed to take you from point A to point B. The driver is still responsible but usually not in control, and still needs to take over occasionally when the car gets confused. People love posting YouTube videos showing how far their car can go before the human has to take control.
This link explains the differences, with Tesla Autopilot being SAE L2, and Tesla FSD being (essentially) SAE L3. A fully autonomous car, without a driver behind the wheel, would be SAE L4. Google’s Waymo and GM’s Cruise companies are currently offering such services in Phoenix.
According to some analysts, roughly half of new cars sold in America have SAE L2 enabled.
The world is divided into two camps, Tesla fanboys who love FSD, and Tesla/Musk haters who really, really hate FSD. That’s why there’s so much confusion between L2 Autopilot and L3 FSD — the lovers and haters just don’t care and mix them all together, whatever it takes to fit their arguments.
Tesla’s L2 Autopilot is mostly the same as its competitors’, except that it relies only upon cameras and AI, instead of using sonar, radar, or lidar (bouncing sound, radio waves, or light off objects). Tesla claims its system is superior; critics (as in the WaPo story) claim Tesla is morally weak for removing its sonar and radar. There’s a good technical debate here, but it’s wrapped in this nonsense moral debate.
Tesla’s L3 FSD is vastly different from its competitors’ L3 plans. Nobody offers L3 yet in the United States, but several companies, like Mercedes-Benz, are promising it. Their versions really only work in traffic jams, scarcely better than L2 systems. In contrast, Tesla’s uncertified L3 FSD drives the rest of the time as well.
The fact that Tesla’s L3 FSD isn’t certified at L3 makes a lot of people angry. They think it’s unsafe because it’s uncertified. There is also the constant debate about sonar, radar, lidar, and other sensors — and Tesla’s lack of them.
Conversely, Tesla fanboys make all sorts of claims about the superiority of Tesla. It’s damn impressive, but for 6 years now it has failed to achieve the full self-driving that’s been promised. Tesla updates the FSD software every couple of months, and fanboys hail each release as a major new advance, but it doesn’t seem to be getting any closer to either certification for use in traffic jams or L3 driving on city roads.
Conclusion
The statistics are misused data from the NHTSA: there’s no evidence here of either safety or danger. Tesla is missing many of the sensors its competitors use, but at the same time it seems to be doing a better job with just cameras.