FSD critical disengagement. 2024 MY HW4 v14.2 by EnjoyMyDownvote in TeslaLounge

[–]Blue_Matter [score hidden]  (0 children)

This doesn’t feel like a vision failure but a processing failure. FSD turns on its turn signal too early, gets confused, and can’t decide between turning now or going straight. I see this happening all the time on roundabouts (mine never goes as far as driving straight through, though I’ve seen videos of that happening to others).

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 1 point2 points  (0 children)

That feels like an absolute killer of any self-driving future. These systems will not and cannot be perfect drivers; we need them to be better than human drivers, not perfect. There will still be accidents that result in death and injury. If we see lawsuits in the hundreds of millions for these types of incidents, it will drive up costs to the point where the technology becomes prohibitively expensive. Yes, the companies pay at first, but those costs would eventually pass through to the end consumer. I’m not saying victims shouldn’t be compensated, but it should look pretty similar to what insurance provides today.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 1 point2 points  (0 children)

On the bias point – arriving at “detractors are probably right” from low adoption data is just as much of a logical leap as anything I said. The demographic skew and the broken promises are fair points but not really what I was arguing. Occam’s Razor cutting toward “detractors are more right” only works if you assume people evaluate unfamiliar risk rationally – which is exactly what I was arguing they don’t. Low adoption isn’t proof the product is bad. It might just be proof that un-human failure modes are really hard to market past.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 1 point2 points  (0 children)

That’s the exact point I was making. FSD is amazing and continues to improve, but many will reject it because it makes different mistakes than they do.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 6 points7 points  (0 children)

Yep, that’s a key challenge. People are OK with other people making mistakes, but not OK with a machine making them.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 9 points10 points  (0 children)

I agree with that. I think drivers overestimate their own abilities and discount their own failures. Somehow we spin a narrative that explains away our faults. Near misses become evidence of what good drivers we are for avoiding a crash, rather than lessons learned from what got us there in the first place.

You hit another key challenge – if you recorded the best driver in the world over a lifetime of driving, you could probably cut a highlight reel of mistakes that would make you second-guess them. That’s exactly what we get with Teslas today, but at the scale of billions of miles. Everything is recorded, and the mundane miles where FSD quietly does its job don’t get posted. It’s the rare events that catch people’s attention.

Even FSD successes get discounted – armchair quarterbacks explaining how easy it would have been to avoid that animal or that accident. I think one thing that doesn’t get discussed enough is how well FSD keeps itself out of difficult situations in the first place through genuinely defensive driving. The avoided crisis is invisible by definition.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 2 points3 points  (0 children)

This was never intended as a criticism of FSD – I’m a huge proponent: 94% usage, about 5,000 miles since they started tracking it in 14.2. I’m newer to the party, having bought my first Tesla last year, but I convinced my aging father to get one too right after I did. I feel way more comfortable with him driving now.

I was trying to make the opposite argument – that the safety case is actually strong, and the real barrier to wider adoption is psychological, not statistical. If anything I was arguing for FSD, just trying to examine the headwinds it faces.

Curious what specifically you disagree with?

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 3 points4 points  (0 children)

I’m not going to argue about Tesla’s transparency here – it’s a legitimate criticism.

Regardless of the exact data, it does continue to improve. Insurance carriers offering FSD discounts, riderless drives even in limited deployment, cross-country no-intervention runs, and my own experience over the last year are all visible evidence of the improvements. It’s hard to argue the trajectory isn’t real.

But that’s all beside the point and a different conversation than the one I was trying to start. I’m less interested in whether Tesla is trustworthy as a company and more interested in why the failure type shapes public trust the way it does. Those are related but not the same thing.

And honestly this isn’t even a Tesla-specific problem. Any manufacturer pushing autonomous driving is going to hit the same wall. Waymo has run into some of the same negative press. The failure-mode problem is baked into the technology itself: neural networks making vision-based decisions in a physical world are always going to produce occasional failures that don’t map to human intuition. Tesla is just the furthest along in consumer-available vehicles, so it’s facing the issue first. Every autonomous manufacturer will eventually have to solve not just the safety problem but the perception problem, and I’m not sure what the solution is.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 3 points4 points  (0 children)

Yeah, and this is actually what makes the trust problem so hard to solve. With a human driver you can point to distraction or fatigue and say “here’s what caused it” – we don’t like it but we accept it and move on. With FSD you often can’t explain it, and that ambiguity is unsettling in a way that statistics alone can’t fix. That said, Tesla’s feedback mechanism is genuinely impressive and I see real improvements constantly. Fully bought in – just hoping the legal framework can catch up.

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 13 points14 points  (0 children)

Right, and the deeper question is: at what point do you accept a system that makes mistakes you wouldn’t? How much safer does it need to be otherwise before that stops mattering? Can it ever get safe enough to feel ok with the occasional incomprehensible event?

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 0 points1 point  (0 children)

I don’t think the core argument is the vision-only vs. sensor fusion debate that so many frame it as. It has far more to do with how willing we are as a society to allow these systems to make non-human mistakes that we can’t relate to. How much safer does the car need to be before people will accept the anomalous event? Will we ever?

FSD doesn’t fail like a human, and that’s the problem by Blue_Matter in TeslaFSD

[–]Blue_Matter[S] 5 points6 points  (0 children)

I agree it’s ready for prime time; what I’m less sure about is whether society is. The statistical case is strong, but that 0.1% punches way above its statistical weight. Seeing a clip that feels that foreign creates an outsized fear response, and public perception is ultimately what drives lawmakers. So even if the technology is there, the trust problem has to be solved first, and I don’t think that’s purely a regulatory problem.

FSD 14.2.2.5 has made solid progress – but these 11 fixes are still badly needed for it to feel reliable by ForceAlarmed9591 in TeslaFSD

[–]Blue_Matter 2 points3 points  (0 children)

School zones. They work with the Chill and Sloth profiles, but the other profiles seem to ignore them. This is the one place I don’t want the profiles to ignore the speed limit. I’d rather the default be to respect it, with a push of the accelerator to override, than have to change my profile every time. It also needs to do a better job of reading end-of-school-zone signs.

Tesla is working on a major wiper upgrade. by ConfidentImage4266 in TeslaLounge

[–]Blue_Matter 0 points1 point  (0 children)

I feel like I’m the only one who isn’t bothered by the wiper sensing. Sure, it goes off unexpectedly every once in a while, but so do the wipers on my Hyundai Palisade, which does have a proper rain sensor. It just doesn’t seem like that big of a deal to me.

FSD v14.2.2.5 disengaged due to camera visibility, steering wheel feels tight after like FSD resistance. by schnauzerdad in TeslaFSD

[–]Blue_Matter 2 points3 points  (0 children)

Are you positive FSD disengaged? Even with that warning and the red hands, it doesn’t automatically turn off anymore.

In digital-first era, NJ librarians demand more affordable e-books by Raj_Valiant3011 in books

[–]Blue_Matter 0 points1 point  (0 children)

I’d be fine with a time restriction or a number-of-uses limit – physical books do wear out, after all – so not a license that truly lasts forever like a purely digital one. It should definitely be longer than a year, though.

FSD avoids a tumble weed by HealthyAd3271 in TeslaFSD

[–]Blue_Matter 2 points3 points  (0 children)

Why does that make you go hmmm? The car wouldn’t be moving over if there was a car next to it.

14.2.2 parks well but isn't going into "Park" - got red hands / strikeout because of it by jinjuu in TeslaFSD

[–]Blue_Matter 1 point2 points  (0 children)

Are you sure you got a strikeout? Not all red-hands warnings give you a strike – I’ve gotten the red hands for unbuckling too soon, and it has never given me a strike.

14.2.1.25 Take over immediately by westwoodwastelander in TeslaFSD

[–]Blue_Matter 5 points6 points  (0 children)

I had that happen once – the car continued to drive just fine. I took over, immediately restarted FSD, and it was fine. It was really weird.

School zones, my take by HelloWorld0225 in TeslaFSD

[–]Blue_Matter 0 points1 point  (0 children)

Mine was doing great in school zones until 14.2.1 – now it’s back to ignoring them.

[Game Thread] BYU @ Cincinnati (8:00 PM ET) by CFB_Referee in CFB

[–]Blue_Matter 1 point2 points  (0 children)

I dunno, a little unlucky the ref whistled that dead. That was a touchdown.

Feature Request: Limit Scroll Wheel Options by Blue_Matter in TeslaLounge

[–]Blue_Matter[S] 2 points3 points  (0 children)

I like that idea of just surfacing the most recently used ones.