
Driving a car is a complex process. Sometimes the people who design and build roads do stupid things, like putting a light pole in the middle of a travel lane. Really? Yes, really. Case in point is the experience of Tesla Cybertruck owner John Challinger, a software developer in Florida, who posted on social media at 6:11 am on February 9, 2025:
Soooooo my @Tesla @cybertruck crashed into a curb and then a light post on v13.2.4.
Thank you @Tesla for engineering the best passive safety in the world. I walked away without a scratch.
It failed to merge out of a lane that was ending (there was no one on my left) and made no attempt to slow down or turn until it had already hit the curb.
Big fail on my part, obviously. Don’t make the same mistake I did. Pay attention. It can happen. I follow Tesla and FSD pretty closely and haven’t heard of any accident on V13 at all before this happened. It is easy to get complacent now – don’t.
@Tesla_AI how do I make sure you have the data you need from this incident? Service center etc has been less than responsive on this. I do have the dashcam footage. I want to get it out there as a PSA that it can happen, even on v13, but I’m hesitant because I don’t want the attention and I don’t want to give the bears/haters any material.
Spread my message and help save others from the same fate or far worse.

For context, here is an image of the light pole from Google Maps as posted by PC Magazine. It has to be one of the stupidest places to put a light pole in the history of street lighting, but there it was, and the Cybertruck obviously failed to see it. People always talk about “edge cases” when discussing autonomous driving situations. Cases don’t get much edgier than this act of outrageous ignorance, but there it is, stuck out in the middle of what is supposed to be a travel lane for all the world to see, except for a Cybertruck operating on a recent version of Full Self Driving.
For his part, Challinger is pretty laid back about the whole thing and blames himself for not paying attention and getting “complacent.” In a previous post from January, Challinger wrote about his habit of losing focus with FSD enabled: “Sometimes I decide to go somewhere and turn on Tesla FSD and then I forget where I decided to go and then it starts turning into Taco Bell or whatever and I’m like wtf is it doing and then I’m like oh right Taco Bell.”
Tesla’s system is supposed to warn drivers who are not paying attention. Per the Tesla owner’s manual, the vehicle issues a series of escalating warnings when the driver’s attention wanders and asks the driver to put their hands on the steering wheel. If the driver repeatedly ignores these prompts, FSD is disabled for the rest of the drive. “I don’t expect [FSD] to be infallible but I definitely didn’t have utility pole in my face while driving slowly on an empty road on my bingo card,” Challinger said after the collision.
The Cybertruck only got the ability to run FSD in September, nine months after the vehicle’s launch. Given its unique size, shape, and software, it required tweaks to the FSD used by other Tesla vehicles. The Cybertruck that crashed was running a relatively recent version of FSD — version 13.2.4 — which Tesla released in January. It mostly focused on “bug fixes,” according to Not a Tesla App, but the release notes also mention an improved system for “collision avoidance.” It looks as though more improvements may be needed.
Tesla & Automation Bias
The most dangerous part of any automobile is the nut behind the wheel, my old Irish grandfather liked to say. Despite Elon Musk’s protestations to the contrary, putting too much trust and faith in computer systems is a real problem. Musk seems to think that warnings buried in an owner’s manual few people take the time to read are sufficient, but scientists have a name for this tendency. They call it “automation bias.”
According to Wikipedia, over-reliance on automated aids is known as “automation misuse,” which occurs when a user fails to properly monitor an automated system or uses it when it should not be used. Automation bias is directly related to such misuse through excessive trust in the abilities of the system. It can lead to a lack of monitoring of the automated system or blind agreement with an automated suggestion, which in turn leads to errors of omission and errors of commission. Errors of commission occur when users follow an automated directive without taking other sources of information into account. Errors of omission occur when automated devices fail to detect or indicate problems and users do not notice because they are not properly monitoring the system.
Errors of commission occur for three reasons — overt redirection of attention away from the automated aid, diminished attention to the aid, or active discounting of information that counters the aid’s recommendations. Errors of omission occur when the human decision maker fails to notice an automation failure either due to low vigilance or over-trust in the system. Training focused on the reduction of automation bias and related problems has been shown to lower the rate of commission errors, but not of omission errors.
The presence of automatic aids “diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways.” It also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automatic aid to take a specific course of action. The three main factors that lead to automation bias are the human tendency to choose the least cognitive approach to decision making, the tendency of humans to view automated aids as having an analytical ability superior to their own, and the tendency of humans to reduce their own effort when sharing tasks either with another person or with an automated aid.
“Technology over-trust is an error of staggering proportion,” writes Patricia Hardré of the University of Oklahoma in a book on why we sometimes put too much faith in machines. According to the BBC, she argues that people generally lack the ability to judge how reliable a specific technology is. This can actually go both ways. We might dismiss the help of a computer in situations where it would benefit us or blindly trust such a device, only for it to end up harming us or our livelihoods.
What’s the point? Simply this: Tesla Full Self Driving is not working as it should. It isn’t now and never has. And until Musk gets over his infantile refusal to incorporate radar and lidar into Tesla’s automated driving hardware package, it never will. End of story.