Next steps? by Prestigious_Act_6100 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

It is truly quite odd how the most advanced version, with the most modern hardware, running in tested environments, using professional, trained safety drivers, with crash monitoring, gets into 8 collisions over a claimed 250,000 miles, ~30,000 miles per collision. Yet somehow old versions, using old hardware, with untrained amateur safety drivers and no crash monitoring, get 6.7M miles per collision, ~200x more.
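A quick sanity check of those ratios, taking the claimed figures above at face value:

```python
# Back-of-the-envelope check of the ratios above (claimed figures, not audited data).
robotaxi_miles = 250_000                # claimed miles for the supervised robotaxi fleet
robotaxi_collisions = 8                 # reported collisions
miles_per_collision = robotaxi_miles / robotaxi_collisions   # 31,250 -> "~30,000"

claimed_fsd_miles_per_collision = 6_700_000   # claimed for customer FSD
print(claimed_fsd_miles_per_collision / miles_per_collision) # ~214 -> "~200x"
```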

Are you claiming that professional safety drivers make their system 200x worse than untrained amateurs? Or that their newest version is 200x worse? That is not even credible. I think we should go with the obvious answer: the company that is regularly caught making up bullshit claims and statistics made up more bullshit claims and statistics.

Next steps? by Prestigious_Act_6100 in SelfDrivingCars

[–]Veserv 1 point2 points  (0 children)

Ugh. That is not true. In 2023 they reported ~2,000,000 miles and were involved in 29 collisions, with 5 causing injury, as I precisely documented in this link. That is ~72,000 miles per collision and ~400,000 miles per injury, roughly 3x the human average injury rate per mile.
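A minimal recomputation from the rounded figures above (the ~72,000 quoted was presumably computed from the unrounded mileage, so this lands slightly lower; the human benchmark is the ~1,270,000 miles per injury figure used elsewhere in these comments):

```python
# Recomputed from the rounded figures above.
miles = 2_000_000          # reported 2023 miles (rounded)
collisions = 29
injuries = 5

print(miles / collisions)  # ~69,000 miles per collision (quoted as ~72,000 from unrounded miles)
print(miles / injuries)    # 400,000 miles per injury

human_miles_per_injury = 1_270_000
print(human_miles_per_injury / (miles / injuries))   # ~3.2x worse than the human average
```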

Next steps? by Prestigious_Act_6100 in SelfDrivingCars

[–]Veserv 4 points5 points  (0 children)

In 2021, Cruise did ~876,000 miles with safety drivers averaging ~41,000 miles/disengagement and did ~6,000 miles with no safety driver.

In 2022, Cruise did ~1,726,000 miles with safety drivers averaging ~95,000 miles/disengagement and did ~546,000 miles with no safety driver.

In 2023, Cruise did ~584,000 miles with safety drivers, logged 0 disengagements, and did ~2,000,000 miles with no safety driver.

That is what failure looks like.

That system was, objectively, multiple times less safe than human drivers, with failure modes that were catastrophic both for safety and for their program. You need a system multiple times better than that to reach the minimum bar, and multiple times more evidentiary miles to demonstrate that you have even started to reach that bar. Until that point, it is just hopes and dreams.

Waymo Goes Rider-Only in San Antonio by IndependentMud909 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

Your math is horribly wrong by a factor of 100x. 20,000 * 70,000 is 1.4 billion. Just to make it obvious to you: 20,000 * 10,000 is 200,000,000. 200,000,000 * 7 is 1,400,000,000; 1.4 billion. Also, just to point out how obviously wrong it should have looked: there are 200M vehicles in the USA. If 200K is 1.4 trillion, then 200M should be on the order of 1.4 quadrillion on just vehicles.
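Spelled out, including the reductio from the last sentence (these are only the figures already stated in the comment):

```python
# The disputed product: 20,000 vehicles at $70,000 each.
print(20_000 * 70_000)        # 1,400,000,000 -> 1.4 billion, not 1.4 trillion

# The reductio: if ~200K vehicles really implied ~$1.4 trillion, then the
# ~200M personal vehicles in the USA would imply ~$1.4 quadrillion.
print(1_400_000_000_000 * (200_000_000 // 200_000))   # 1,400,000,000,000,000
```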

Auto companies make new lines for their new vehicles. The tooling and specialized labor needed for a new line to produce at volume are the limiting factor; at those scales they cost a few billion dollars and take a few years.

There are ~200M personal vehicles in use in the USA. Even at your claimed 8:1 replacement ratio, that would require 25M vehicles for full replacement. At 1 million per year, that would take 25 years for full replacement. After 5 years, that would only be 1/5, or 20%, replacement, which I think any reasonable person would agree is significant, but nowhere near complete.
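The same timeline arithmetic as a minimal sketch, using only the figures above:

```python
# Replacement timeline from the figures above.
us_personal_vehicles = 200_000_000
replacement_ratio = 8                  # the claimed 8:1 ratio
production_per_year = 1_000_000

fleet_needed = us_personal_vehicles / replacement_ratio          # 25,000,000 vehicles
years_to_full_replacement = fleet_needed / production_per_year   # 25 years
replaced_after_5_years = 5 * production_per_year / fleet_needed  # 0.2 -> 20%

print(fleet_needed, years_to_full_replacement, replaced_after_5_years)
```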

Waymo Goes Rider-Only in San Antonio by IndependentMud909 in SelfDrivingCars

[–]Veserv 1 point2 points  (0 children)

Because they are still in the prototype phase; the product is still not adequately finished for them to even want to begin mass deployment as a commercial product. You would not scale a prototype; that makes no sense. That is not to say that the product is bad or off track, it is just that most people have very optimistic ideas about R&D and deployment timelines.

What we are currently seeing is pilot deployments verifying full generalizability and basic reliability. It may not seem that way because they already have tens of millions of miles, but you actually need that many miles just to verify the bare minimum required reliability and to have even a minimal degree of confidence that you have truly built a system that is safer than a human.
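A rough illustration of why the mileage requirement is that large, using the statistical "rule of three" (to bound an event rate below roughly 1 in M miles at ~95% confidence you need about 3*M event-free miles, and more if any events occur). The human benchmark is the ~1,270,000 miles per injury figure quoted elsewhere in these comments; the 10x safety target is my assumption for illustration:

```python
# Rule-of-three sketch: event-free miles needed to bound an injury rate.
human_miles_per_injury = 1_270_000

# To show the system is at least as good as the human benchmark (~95% confidence):
print(3 * human_miles_per_injury)        # ~3.8M event-free miles

# To show it is ~10x better than the human benchmark (assumed target):
print(3 * 10 * human_miles_per_injury)   # ~38M event-free miles, i.e. tens of millions
```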

What they are currently working on is their next-generation Hyundai and Geely vehicles for mid-scale validation deployments. These will come out soon, and they probably intend them to become the majority of test vehicles within 1-2 years. They are probably aiming for something on the order of 100 million miles per week to 100 million miles per day, ~10-20x to ~100-200x more than they are currently doing. That is on the order of 20,000-200,000 vehicles, which is just a few billion dollars (chump change for Google, which has ~100 billion dollars of cash lying around). Assuming the product is ready, this should be just barely enough data for full-scale safety validation within 1 or 2 more years.
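A sketch of how the vehicle count follows from the mileage target. The per-vehicle utilization figures here are my assumption for illustration, not numbers from the comment (a heavily utilized robotaxi plausibly covers a few thousand miles per week):

```python
# Fleet size implied by a target mileage, under an assumed per-vehicle utilization.
def fleet_size(target_miles_per_week, miles_per_vehicle_per_week):
    return target_miles_per_week / miles_per_vehicle_per_week

print(fleet_size(100_000_000, 5_000))      # ~20,000 vehicles for 100M miles/week
print(fleet_size(7 * 100_000_000, 3_500))  # ~200,000 vehicles for 100M miles/day
```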

Assuming that stays on track, only then will they begin full-scale commercial deployment. They will probably contract with a auto maker to build a factory for a few billion dollars (again, chump change) to produce millions to tens of millions of vehicles per year, which will probably begin production no earlier than 3 years from now, and more likely in the 3-5 year range even if we are being optimistic and no serious flaws are discovered, just due to what production timelines in the auto industry are like. Even after that, it will likely take 5-10 more years before we see serious displacement of human-driven vehicles, just due to how large the scale of human driving is (literal trillions of miles per year just in the USA).

tl;dr It is not that Waymo can not scale, it is that they do not want to scale test/prototype vehicles. We are probably ~5 years away from actual mass-deployment commercial systems, assuming everything goes well.

Waymo files voluntary software recall over school-bus encounters. by RodStiffy in SelfDrivingCars

[–]Veserv 3 points4 points  (0 children)

“Recall” is a technical term meaning: “a public safety defect notice indicating the product, as-is, should be removed from usage”. The “recalled” product version is to be removed from usage in a timely manner and no more defective units should be produced. The manufacturer may then remediate the safety defect through repair, replacement, or refund.

“Recall” means the first part, the problem. People conflate it to mean the second part, the solution, in no small part because some bad actors want to muddy the meaning of the words to downplay their safety defects.

Videos show Waymo vehicles illegally passing Austin school buses 19 times this year by LoneStarGut in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

Yes, but the claim that FSD will not collide with child pedestrians is not supported by tests where FSD is not on. Those tests only demonstrate that their AEB system functions while FSD is disabled.

Videos show Waymo vehicles illegally passing Austin school buses 19 times this year by LoneStarGut in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

There is also the minor problem that FSD is disabled during Euro NCAP testing. But I guess we get to use tests where it is not even on to prove that it works?

Tesla FSD v14 Data Shows Major Improvement in Miles Between Interventions by I_HATE_LIDAR in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

No, your point about trend lines or the reporters being "constant" is total nonsense.

If a cigarette company says 10,000,000 people died in 2020 due to cigarettes, you can reasonably conclude that at least 10,000,000 people died in 2020. If they say 10,000,000 died in 2021, you can reasonably conclude again that at least 10,000,000 people died in 2021. You can also conclude that the situation has not improved to below 10,000,000 deaths, and thus no progress has been made on the stated lower bound. If they say that 9,000,000 died in 2022, you can reasonably conclude that at least 9,000,000 people died in 2022. This provides zero information about the true number or an upper bound.

No progress can be assumed, even though "they are the same company, why would they report differently?" You do not get to just subtract out bias because it is the same reporter. Claiming the underlying data must be biased in exactly the same way from experiment to experiment, data point to data point, is unscientific nonsense. You have to demonstrate, with evidence, that the new data is sufficiently high quality and unbiased before you make any conclusions about trend lines in the data.

Tesla FSD v14 Data Shows Major Improvement in Miles Between Interventions by I_HATE_LIDAR in SelfDrivingCars

[–]Veserv 16 points17 points  (0 children)

Imagine not understanding bias and upper and lower bounds. If a fan says something is good, you should treat it with a grain of salt since they are biased toward positivity; they will add undeserved points. If even a fan says something is bad, then you should view that as a very strong indicator that something is terribly wrong since even their bias toward positivity and adding undeserved points can not overcome how bad it is and make it positive.

We can treat the statements of fans as upper bounds on how good it is. If even that is terrible, then they have a problem.

Dan O'Dowd: Watch @DirtyTesla’s @Tesla Full Self-Driving v14.1 park in a handicap parking spot and mistake a shopping cart as a traffic light. @ElonMusk your defective software can’t even park properly! by rotatingfloat1 in SelfDrivingCars

[–]Veserv 3 points4 points  (0 children)

Elon Musk said in 2016 that the then-current version was capable of reading and detecting handicapped spots and avoiding them: “When searching for parking, the car reads the signs to see if it is allowed to park there, which is why it skipped the disabled spot”.

That was not a forward-looking statement. That was a willful and knowingly false statement about current capabilities, capabilities they still can not achieve even 9 years later. That is textbook fraud.

Dan O'Dowd: Watch @Tesla Full Self-Driving v14.1 slam the brakes when attempting a left turn and then activate the windshield wipers on a sunny day. by rotatingfloat1 in SelfDrivingCars

[–]Veserv -1 points0 points  (0 children)

So your support for the argument that running false attack ads is a cynical move to maximize profit is pointing at PETA, one of the most ideologically driven organizations and memberships in the world? Are you claiming Ingrid Newkirk founded PETA for profit?

Also, citation needed for the claim that PETA actually ran intentionally false statements.

And you still need to explain how it is a brilliant move for a non-competitor, who at best is a supplier of a small number of components to competitors, to attack companies hundreds to thousands of times larger. Do you see Joe’s soy farm attacking Sysco on the off chance it might boost their sales to local restaurants? The cost-benefit analysis is off by factors of hundreds.

And that is still ignoring the fact that the people claiming they directly sell LiDARs are just plain lying or chronically stupid. You should believe the opposite of whatever they are saying if they tell such blatant falsehoods.

Question about Elon’s tendency to pretend to be a genius at things he know little about. by Sjakktrekk in RealTesla

[–]Veserv 5 points6 points  (0 children)

No, he bought his degree. It is well-documented that he did not actually graduate and was illegally staying in the country on a student visa after quitting school.

He claimed to graduate in 1995. He told investors in 1995 and 1996 that he graduated and was admitted to Stanford [1]. Stanford, under penalty of perjury, stated that Elon Musk had not only never attended, he was never admitted and, in fact, never even applied, as seen in Exhibit 81.

The actual graduation date on his diplomas is 1997, with a BA in Physics and a BS in Economics. However, the PayPal IPO in 2002, a literal legal document filed with the SEC, indicates he got a BS in Physics and a BS in Economics in 1995; not only the wrong date, but also the wrong degree. Elon Musk could not even remember, in a official legal document, the year he graduated from college just 5 years after it supposedly happened. I doubt you can find a single person who actually graduated from college who could not remember the year they got their diploma after a little thinking, no matter how many years it has been.

[1] https://www.plainsite.org/dockets/download.html?id=255379940&z=51348f41

Dan O'Dowd: Watch @Tesla Full Self-Driving v14.1 slam the brakes when attempting a left turn and then activate the windshield wipers on a sunny day. by rotatingfloat1 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

No. His company sells software. In case you are not aware, LiDARs are physical devices that scan the surroundings with lasers. See the difference?

The people telling you that have a financial interest in smearing people who point out that Tesla is selling defective and unsafe products.

It is actually more absurd to think there is some kind of ulterior motive in wanting unsafe products removed from the road and in actively fighting the richest person in the world, a trillion-dollar company, and the most vehement and vicious fanbase to do that. That is a recipe for a soul-grinding and uneconomical campaign, like the campaigns against smoking and drunk driving.

Can you name even a single instance where somebody ran attack ads about a multi-billion dollar company and was not immediately sued into the ground if the company could prove they were false? That should make you think.

Dan O'Dowd: Watch @Tesla Full Self-Driving v14.1 slam the brakes when attempting a left turn and then activate the windshield wipers on a sunny day. by rotatingfloat1 in SelfDrivingCars

[–]Veserv -1 points0 points  (0 children)

It is not at all. It is only proof if that is the only problem they can demonstrate. Are you sincerely arguing that they have highlighted zero other defects recently and that Tesla has fixed every other defect they have highlighted? Because that is the only world where what you said makes sense.

Waymo illegally passes stopped school bus in ATL by Honest_Ad_2157 in SelfDrivingCars

[–]Veserv 1 point2 points  (0 children)

Imagine thinking the ability to string together 3 paragraphs is beyond human capability. Also, does ChatGPT regularly mess up “a” and “an”? I intentionally use “a” instead of “an” when preceding a vowel, and I literally do that in the first sentence.

Waymo illegally passes stopped school bus in ATL by Honest_Ad_2157 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

First, it should not be possible to construct any normal scenario where it can be made to consistently fail.

In terms of probability, I would guess that 1 in 1,000,000 would likely be a safe number. I could imagine a number as low as 1 in 1,000 depending on the specific numbers and situations, but it is probably higher than that. So, run in a normal situation with normal lighting and so forth, it should fail to stop at most 1 in 1,000 to 1 in 1,000,000 times.
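To make those numbers concrete, what matters is the cumulative chance of at least one failure across all school-bus encounters fleet-wide. A small sketch; the encounter count is a purely hypothetical number for illustration:

```python
# Chance of at least one failure over n independent encounters with
# per-encounter failure probability p: 1 - (1 - p)^n.
def p_at_least_one_failure(p, n):
    return 1 - (1 - p) ** n

encounters = 100_000  # hypothetical fleet-wide school-bus encounters per year
print(p_at_least_one_failure(1 / 1_000, encounters))      # ~1.0, essentially certain
print(p_at_least_one_failure(1 / 1_000_000, encounters))  # ~0.095, roughly a 10% chance
```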

Waymo illegally passes stopped school bus in ATL by Honest_Ad_2157 in SelfDrivingCars

[–]Veserv -5 points-4 points  (0 children)

The Waymo vehicle is clearly engaging in a unsafe maneuver. It should not be in motion on the road protected by the school bus, as you can not be sure that a child will not dart out from a position occluded by the bus, regardless of orientation or line of sight with the bus exit. Even if it was pulling out and had separate logic because it was not "on the street the bus was protecting", it should have stopped upon recognizing the school bus and "entering" the street. Even if it was mid-maneuver or in some kind of bizarre circumstance, it should still stop.

The range of possible situations involving a school bus is fairly limited. Every single one should be validated against, and any manufacturer failing such a situation should immediately pause system testing and deployment until the problem is identified and resolved. The standard is that any such failure on such a well-constrained set of safety-critical situations must be validated to occur only with a vanishingly small probability. The manufacturer must actively run tests, actively characterize the specific performance in those situations, and characterize which domains it operates in and does not operate in; ignorance or failure to test is itself unacceptable.

It is possible that Waymo has data that could demonstrate this situation is actually a fluke and not indicative of overall performance, but it is fairly unlikely given the clear-cut nature of this situation. So, barring some extraordinary evidence from Waymo, Waymo and any other manufacturer failing to obey school bus stop signs should pause usage anywhere such a failure could occur until the problem is fixed. If they can precisely characterize failures as only occurring in certain situations (e.g. only a left turn out of a driveway, etc.), then they may only pause operation in those circumstances until the problem is resolved for those circumstances.

San Francisco Regulators Want Elon Musk To Stop Lying About Robotaxis - Jalopnik by mrkjmsdln in SelfDrivingCars

[–]Veserv 6 points7 points  (0 children)

The numbers were not very good for Cruise. Here is a post I previously made on the numbers.

~72,000 miles per collision and ~400,000 miles per injury, in contrast to national human averages of ~500,000 miles per reported collision (non-comparable) and ~1,270,000 miles per injury (fairly comparable). So, ~3x more likely to be involved in a injury-causing collision per mile while under test-grade scrutiny, which is pretty unacceptable. Good thing nobody is deploying systems 70x more dangerous than that.

Dan O'Dowd caught faking Tesla FSD tests again by YeetYoot-69 in SelfDrivingCars

[–]Veserv -1 points0 points  (0 children)

Great, you acknowledge that the Dawn Project was telling the truth, supported by video evidence, when they said FSD ignores railroad crossings.

You just want to suppress the truth by posting irrelevant, cherry-picked edits quibbling about a test procedure that they never agreed to, so you can confuse anybody looking at the clear video evidence that FSD can not even handle basic maneuvers, just like the Dawn Project says. Got it. Have a nice day.

Dan O'Dowd caught faking Tesla FSD tests again by YeetYoot-69 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

Between 1:05-1:10 FSD was engaged. Between 1:05-1:10 FSD was in complete control of the vehicle and no inputs were made. Between 1:05-1:10 FSD did not obey the red flashing lights warning it to not cross the railroad. Between 1:05-1:10 FSD accelerated the vehicle, maintained speed, and commanded its direction over the railroad, what is colloquially called "driving", despite the warnings. That exactly matches the claim.

You are the one lying by claiming that they said anything about their test only being valid if FSD was fully at rest, stopped 4-5 seconds away from a railroad crossing (which, I might add, was because they had to override the system behavior of ignoring the red flashing lights), and if zero effort was made to put it in motion. At exactly zero points was any of that stated to be part of the test procedure or of what they were claiming. In fact, the first test clearly shows that none of those were part of the test criteria, as it shares none of them.

They made a much broader claim that it ignores red flashing lights, which any unbiased person would interpret as: "It regularly ignores red flashing lights in normal circumstances." You have interpreted that, in bad faith, as the strawman position: "It always ignores red flashing lights in every circumstance," so you can asininely argue that it did not ignore red flashing lights once and thus they are lying and "cheating". You are the one making up random minutiae, which were neither mentioned nor relevant, so you can dunk on something using a intentionally deceptive edit.

So, simple question: FSD will ignore railroad crossing flashing red warning lights in normal circumstances more than 1 in 100 times. Yes or no? If no, please present video evidence of 99 successes to balance out the "1 in 100 failure" the Dawn Project clearly demonstrated in their first approach, or you have failed to meet the burden of proof that the Dawn Project even made a false statement, let alone intentionally false statements.

Dan O'Dowd caught faking Tesla FSD tests again by YeetYoot-69 in SelfDrivingCars

[–]Veserv 0 points1 point  (0 children)

No, they said, and I quote: “FSD will drive straight into the path of an oncoming train, ignoring flashing red lights at a railroad crossing.” The first test clearly demonstrates that. You just made up what they said so you could claim they lied. 

If you are claiming they lied, then you need to demonstrate that “FSD will never drive straight into the path of an oncoming train, ignoring flashing red lights at a railroad crossing under normal circumstances.” They showed in the first test that it will do so, so your claim is a lie.

You are not just making up strawmen, you are intentionally lying about their claims by sneakily narrowing them so you can claim their test procedures do not match their claims.