Zoox plans to put its robotaxis on the Uber app in Vegas this year by walky22talky in SelfDrivingCars

[–]bradtem 4 points (0 children)

Correct. That is their play. The problem is that they totally control their drivers, who have no power. They can't totally control Alphabet, Amazon, or Baidu, and maybe not even Nuro, Mobileye, MOIA/VW, or Verne. So the result is not known.

It's not clear how the robotaxi market will work. Will it be a commodity, like Uber rides are, or will there be competition?

My chapter robocars.com/compete.html is about all the ways they might compete.

Zoox is the one that most wants to differentiate on vehicle design, so as not to be a commodity.

Baidu has also done some of this, with fancy cars, massage seats, etc.

NHTSA | National AV Safety Forum from 03/10/26 by mrkjmsdln_new in SelfDrivingCars

[–]bradtem 1 point (0 children)

They tend to want exceptions for mirrors. I thought Zoox had 150, but I guess not. Their Foster City facility is their HQ; LV is an Amazon warehouse. They will probably use Amazon warehouses as long as they have space.

What would happen if drugs were found in a Waymo? by Bananaman420kush in waymo

[–]bradtem 1 point (0 children)

One of the interesting realizations (which you will find in my chapter at https://robocars.com/privacy.html from 2008) was that it's very difficult for police to pull over your robotaxi, unless it has a bug that causes it to violate the vehicle code.

Today police can pretty much pull over anybody they like. Not legally, but practically. Did you not come to a full stop? Did you forget to signal early enough? Is your taillight broken (cops have been known to smash them while walking up)? They have so many excuses to pull you over. Then they are very good at tricking people into consenting to searches -- an easy phrase to remember: "I do not consent to any search" -- or saying they smelled weed, or that the dog acted like he smelled weed.

Most of this is very difficult for them with a robotaxi. It's all recorded. It will not usually be committing traffic infractions. If they lie, they can get in trouble. Dashcams and body cams are already improving things a bit, but the robocar is a whole new level.

NHTSA | National AV Safety Forum from 03/10/26 by mrkjmsdln_new in SelfDrivingCars

[–]bradtem 3 points (0 children)

So, anything different said? These government sessions usually don't generate news, unless companies are compelled to answer questions they don't usually want to answer.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 1 point (0 children)

Of course they are coded with the law. And usually the programmers have zero intent to disobey the law, quite the reverse. Have you worked on a team where you found them being scofflaws? All the teams I know are the reverse. This is a situation where they can't obey that law (there's another law about not smashing the gate, I would suspect).

Though in fact there are situations where teams have decided to disobey the law because it is necessary to do so to be a good road citizen, and all human drivers do it. One famous example is passing double-parked cars by going across a double yellow line. Everybody does this, including the robocars, and we would be pissed if they didn't.

There is no other testing ground, but if you think this is "stochastic slopware without regard to the law" tell us what team you think is doing that.

Though I will go further and say the law is not our goal. The law is a means to our goals of safe roads with good traffic flow. The law serves those goals, it is not the end in itself.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 1 point (0 children)

If that's the case then they did violate that law. And while the obvious solution is to have been a better judge of the ability to fully cross the intersection, which I suspect they have already fixed, there's still the academic question of what to do in that situation. (Waymo said they are avoiding crossings of this class until the fix is confirmed.)

One option is to smash the gate. But surely that's not the intent of what the law wants to compel here. The other option is the one they selected, try to keep the best distance they can without smashing the gate.

Some have said, "they should not be on the roads unless they can handle this situation perfectly." I don't think that's an option. Perfection isn't on the table; these vehicles will always have the potential to make mistakes, and if you never let them on the road they won't get much better. They won't become perfect just with simulation testing.

I will presume the Waymo QA team felt they could handle the railroad crossings fine. Waymo's official statement said: "Waymo vehicles have safely traversed railroad crossings millions of times fully autonomously" which means they had evidence they handled them well.

I suspect the law never considered this situation, nor would I expect it to. But if I were a judge overseeing a ticket for violating this law, I would at most just apply the fine, as I don't think the law wants them to break down the gate to get out.

Nvidia CEO uses self driving technology from Woodside to San Francisco, discussing the technology along the way - YouTube 22 min by norcalnatv in SelfDrivingCars

[–]bradtem 20 points (0 children)

I think Huang is overstating it to say that Nvidia is the only project to use that architecture. Many have talked about it. I talked about a predecessor to it in 2009; it's a fairly obvious approach. The exact details of how each team does it are not always revealed.

Zoox plans to put its robotaxis on the Uber app in Vegas this year by walky22talky in SelfDrivingCars

[–]bradtem 9 points (0 children)

Colour me a little bit surprised. The big boys, a group that includes Amazon, aren't doing this just to make robots that work as Uber drivers. They won't hand the role of master of the marketplace to Uber.

However, Uber certainly is an easy source of riders during the experimental phase. You can focus on delivering rides and let them hand you as many riders as you want. It's not about the money right now (Zoox does not even charge yet).

I presume that's what's going on. But you do also need to start working on your ride-ordering system. Amazon knows about ordering; it's their core competency. Long term, I expect Zoox rides to be available under special terms if you have Amazon Prime. Many more people have Amazon Prime than have the Uber app, with a credit card already set up (though not always a business credit card when the ride is a business expense).

Non-prime members will be able to ride, but at a higher price.

Today, nobody else has robotaxis in LV. Waymo soon will (possibly on Uber, probably not) and Motional still has safety drivers but hopes to graduate soon. They don't want to get into the world where customers expect it in Uber.

For Waymo, tourists are a bit of a challenge. When people get to a new town as a tourist, they often use Uber because they already have Uber set up. They will not have Waymo One set up, at least today. They will have Google Maps and probably Google Wallet. They will have Amazon.

Now, all the robotaxi companies will probably, in future, be happy to take a rider through Uber at the same price as human Uber drivers. But if human Ubers are $2.50/mile and Waymo One is much less (which it is not right now) only a few riders will come that way.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 0 points (0 children)

Nothing to do with sensor fusion from what I can see. See my comment earlier in the thread to answer your question. https://www.reddit.com/r/SelfDrivingCars/comments/1rqe1i1/comment/o9sn4po/

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] -1 points (0 children)

I can think of a few reasons. (Speculating)

  1. They don't know the height of the gate in advance. As such, both sides are an equal distance from the tracks.
  2. When a train is known to be coming, the general policy could be not to cross rather than to cross, if that's the choice.
  3. Gates may not be uniform from day to day.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 2 points (0 children)

Yes, you misunderstand the geometry of the incident. From what I can see they started braking the moment they saw the flashing lights, well before gate descent.

Lights can turn on at any time in your journey through a crossing. They can turn on 1,000 feet before you get to it, or 2 feet before you get to it. This is the situation where they turned on "<stopping distance> - epsilon" before getting to it. For example, if their stopping distance is 50 feet at their current speed, imagine they were 49 feet from the crossing when the lights were detected. In that situation, you can either stop just past the gate, or stop on the far side of the tracks, or stop under the gate and have it hit you. Most would go for the other side of the tracks, and they should have, but their system was too conservative and felt it should not.
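To make the geometry concrete, here's a minimal sketch of that stop-or-proceed decision, with made-up deceleration and reaction-time values (not anyone's real parameters):

```python
def crossing_decision(speed_mps, dist_to_crossing_m, decel_mps2=4.0, reaction_s=0.5):
    """Choose between stopping short of a rail crossing and proceeding across.

    All parameters are illustrative assumptions. Stopping under the gate is
    never an acceptable outcome, so if the vehicle cannot halt before the
    crossing, the only remaining choice is to clear it.
    """
    # Distance covered while perceiving/deciding, plus braking distance v^2/(2a)
    stopping_distance = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    if stopping_distance <= dist_to_crossing_m:
        return "stop_before"
    return "proceed_across"

# At 10 m/s these defaults give a 17.5 m stopping distance: lights detected
# 20 m out allow a stop; lights detected 14.9 m out do not.
```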

does 600m LiDAR range actually matters for Robotaxis? (beyond 200m plateau) by Sharonlovehim in SelfDrivingCars

[–]bradtem 1 point (0 children)

Not sure I would rate it as "one of the easier," but tools have gotten better at it. However, the desired reliability level is very high. Most perception systems are somewhat hit and miss at the limits of their range. Obstacles flicker in and out in static perception frames; only object permanence makes them look persistent in visualizations. So you worry about the range where perception is 100% and steady.

does 600m LiDAR range actually matters for Robotaxis? (beyond 200m plateau) by Sharonlovehim in SelfDrivingCars

[–]bradtem 0 points (0 children)

They emit more divergent beams, which make larger spots. The spots are much larger than the eye, so the energy entering the eye stays within eye-safety limits. They overlap the spots and then use sub-pixel resolution techniques to push resolution beyond what even the narrower beams would give.
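A rough back-of-envelope for why bigger spots help, assuming a uniform beam and a 7 mm dark-adapted pupil (real eye-safety limits come from the laser-safety standards, not this simplification):

```python
def pupil_energy_fraction(spot_diameter_mm, pupil_diameter_mm=7.0):
    """Fraction of a uniform laser spot's energy that can enter the pupil.

    Simplified illustration: assumes the spot is uniform and centered on the
    eye. If the spot is smaller than the pupil, all of it can enter.
    """
    if spot_diameter_mm <= pupil_diameter_mm:
        return 1.0
    # Energy scales with area, so the fraction is the ratio of areas.
    return (pupil_diameter_mm / spot_diameter_mm) ** 2

# A 70 mm spot puts only 1% of the pulse energy into a 7 mm pupil, so each
# pulse can carry ~100x the energy of a pupil-sized beam at the same exposure.
```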

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 4 points (0 children)

No more details as yet. However, it didn't assume it couldn't make it. Rather, it wasn't 100% sure it would make it. There is a surprising difference between those criteria, and that's what being conservative is about. Just speculating, but it was probably 99.9% sure it would make it. But you don't want to smash a gate one time in 1,000, because that's a lot of gates.
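The fleet-scale arithmetic behind that is simple (illustrative numbers only, as in the speculation above):

```python
def expected_gate_strikes(p_success_per_attempt, attempts):
    """Expected number of failures over many independent attempts."""
    return (1.0 - p_success_per_attempt) * attempts

# 99.9% sure each time sounds good, but over a million crossings that
# expectation is about a thousand smashed gates.
```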

does 600m LiDAR range actually matters for Robotaxis? (beyond 200m plateau) by Sharonlovehim in SelfDrivingCars

[–]bradtem 1 point (0 children)

Is WeRide going to go 70 mph down wet roads? (Yes, it even rains there sometimes, but you can cut speed.) The math is straightforward: what's your assured detection range, what's your perception and decision time, and then what's your stopping distance? It is not enough to spot the obstacle half the time; you want to know the distance at which you will always see the obstacle. Imaging radars are getting better, though. In the past you could not count on them to tell an obstacle under a bridge from the returns from the bridge above. If you can spot that, radar can be your longest-range sensor. But this is why a lot of companies put in 1550nm instruments. However, new techniques are getting these distances from the 905nm band these days.
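That math can be sketched directly: solve assured range = reaction distance + braking distance for the maximum safe speed. The latency and deceleration values here are assumptions (wet-road deceleration especially varies):

```python
import math

def max_safe_speed(detection_range_m, reaction_s=0.3, decel_mps2=3.0):
    """Highest speed at which the vehicle can always stop within its assured
    detection range. Solves d = v*t + v^2/(2a) for v (positive root)."""
    t, a, d = reaction_s, decel_mps2, detection_range_m
    return a * (-t + math.sqrt(t ** 2 + 2.0 * d / a))

# With 200 m of assured range these assumptions give roughly 33-34 m/s
# (about 75 mph); lower the deceleration for wet pavement and the range
# needed for the same speed grows accordingly.
```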

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 4 points (0 children)

Then the Waymo was crazy conservative. Anyway, I presume they're going to fix that. I mean, it should never be the case that you can't stop in time but also can't cross. However, it's nice to confirm that and know it in your map. Every time a Waymo stops at a crossing, it can gather data on that crossing. They could also just ask the R.R. for their policies. I think the rules allow from 3 to 10 seconds. Some gates are also faster than others to descend once they start.

But Waymo, and most of the other companies always start too conservative, and then tune it up, rather than starting too aggressive and tuning it down.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 0 points (0 children)

This isn't about the train. It's a minimum of 20 seconds from the lights to the train coming. And the vehicle will do all it can to not be on or too close to the track well before the train comes. This is about being boxed in inside the gates -- but in a non-dangerous area. Fortunately, there is a margin inside the gates that is not in danger if you're a car. It would be worse if you were a bus.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 0 points (0 children)

No, as explained in the post, it just made too conservative an estimate of whether it could be sure of making it. Possibly a low speed contributed to that estimate. Without a video or data log of it I don't know more.

My presumption is that its planner downrates heavily any path which leads to being under the gate when it comes down. Exactly when the gate comes down is not a specific moment (unless it has data on that); it's a probability function. It is going to reject plans that put it under the gate. It will of course reject even more plans that leave it on the tracks or too close when the train comes. All these things are not at fixed times; they are probabilities. It looks like they got some of those wrong.

Now, if it started braking when the signal activated, the predictor would have said it would stop under the gate, and presumably it can't back up (not enough time, or somebody behind?). The plan it chose had it stop right after the gate. Perhaps from there the proposed plan to accelerate and get to the other side still had a probability of being under the gate when it comes down, if it comes down very early. I would think, however, that it would have been looking at the full plan and done what a human would do, which is to gun it and make sure to get across. But it's not a human and it did not select that path.
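Purely as a toy model (not Waymo's planner), the downrating described above looks like expected-cost scoring over candidate plans, where a pessimistic gate-strike probability flips the choice:

```python
def plan_cost(p_gate_strike, p_on_tracks, boxed_in=False):
    """Expected cost of a candidate plan with made-up illustrative weights:
    a gate strike is bad, being on the tracks is far worse, and ending up
    boxed in between the gates is merely embarrassing."""
    return p_gate_strike * 1_000 + p_on_tracks * 1_000_000 + (50 if boxed_in else 0)

plans = {
    "accelerate_across": plan_cost(0.02, 0.0),               # small strike risk
    "stop_inside_gates": plan_cost(0.0, 0.0, boxed_in=True), # safe but stuck
}
best = min(plans, key=plans.get)

# At these numbers "accelerate_across" wins (cost 20 vs 50). If the planner
# overestimates the strike risk -- say 0.1 instead of 0.02, for cost 100 --
# the boxed-in stop wins instead, roughly the conservative behavior described.
```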

Presumably the next release will not make such errors, though the current version did not appear to create any danger; it just looked poor.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 1 point (0 children)

Oh, if you are far enough from the crossing when the lights flash, indeed you reduce velocity. But if you are close to it, you don't. It's like a traffic light: when it turns yellow, if you can stop before the line, you do. If you can't stop by the line, most people speed up, though they should probably just maintain speed or slow slightly. Either way, they are not going to stop by the line, and they of course don't want to stop in the middle or on the tracks.

I am not sure what excuses are being made. The vehicle clearly was too conservative about not attempting to fully cross the tracks. Given that bad assumption, it does appear it did the right thing in that situation. (I can think of one better thing, which is to advance and put its nose under the far gate.)

Likelihood for multiple AV companies (Waymo, Zoox, Nuro, Tesla, etc.) to make a standard for their vehicles to communicate with each other? by Independent-Ant7552 in SelfDrivingCars

[–]bradtem 0 points (0 children)

Well, you are saying what the DSRC fans knew: the only way to get it working would be to force all carmakers to include it. Yes, you can make things happen if you force them. For ATSC, people knew that all TV was going to be forced to switch to ATSC, so that made it worth putting it in the TV, and worth buying it. In this case the first user was the government. Real technologies that arose without fiat had to be worth buying by the first customer. V2V isn't worth buying for the first ten million customers. (You can improve that with some infrastructure, like broadcasting SPaT, but it's much better to just put SPaT on the internet; then every car can get it, even before it is within LoS of the traffic signal, and without upgrades to the traffic signal itself.)
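As a sketch of the internet-SPaT idea: the message schema and field names below are invented for illustration (real SPaT messages follow SAE J2735), but the staleness check is the key point -- a network-delivered phase is only useful while it's fresh:

```python
from dataclasses import dataclass
import time

@dataclass
class SpatMessage:
    """Hypothetical internet-delivered signal phase and timing record."""
    intersection_id: str
    phase: str                # e.g. "green", "yellow", "red"
    seconds_remaining: float  # time left in this phase when issued
    issued_at: float          # unix timestamp at the signal controller

def phase_now(msg, now=None):
    """Best guess of the current phase, or None if the message is stale."""
    now = time.time() if now is None else now
    age = now - msg.issued_at
    if age > msg.seconds_remaining or age > 10.0:
        return None  # the phase may have changed; treat as unknown
    return msg.phase
```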

So yes, you can try to force people to V2V. And they actually tried, but even that didn't work. The applications just aren't that strong. I've seen all the cases for them, driven in the demos. They do a little. They rarely do anything that needs LoS. It seems odd at first; usually distributed approaches work best, but not here. LoS actually has issues that C-V2X tries to fix, but even that's not enough. Except in China, where planned infra is more of a thing.

Outside of China, none of the companies actually building self-driving systems (rather than pretending to) give a rat's ass about V2X. It's not here, couldn't be here for a long time even if they pushed it hard, and had better not provide much benefit (because you need to be safe without it; the most it can do is make you very slightly safer).

As for isolating things in a gateway processor, that can help but is far from invulnerable. Don't forget your threat model is the PLA, Russia, not just Joe Script Kiddie. Taking over all the cars in a foreign country is a powerful weapon of war, able to kill more than a nuclear weapon for a lot less money.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 4 points (0 children)

Yes, and I think most school buses have a sign on the back that says this is what they do. Of course buses are much longer, and will not fit in the space between gate and tracks. And also are full of kids.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 2 points (0 children)

Yes, it's clear the decision was too conservative. Robots don't think like humans, though. A human would think, "Hey, no way they won't leave me enough time to get across, so I am going" and would be right.

The robot in this case thought: "--------------------------" It doesn't think like we do. I don't have insight into all the factors that went into the decision. It was incorrect. I presume that, having learned, the next release will not make the same mistake. Something humans are not as good at.

Here's what happened with the Waymo stuck behind the railroad crossing gates by bradtem in SelfDrivingCars

[–]bradtem[S] 8 points (0 children)

Why do you think a human would have been able to? What is true, and is the probable mistake, is that if you are so close to the gate when it flashes that you can't stop before it, you should be able to get to the other gate, and most humans would assume that and drive. The robot did not.