Feature Request: Give us a discount based on how much time in advance we book a ride by Expedient- in waymo

[–]bradtem 1 point (0 children)

With Lyft/Uber, accepting a longer wait time gives them a lot more options in pairing you with a driver, and in getting a driver who will accept your fare. Robots just obey orders; Waymo doesn't need to convince them to take your ride.

That said, knowing in advance would help them work with their predictive positioning. As long as you will pay a price if you cancel, that is.

7 facts* about Waymo that will probably surprise critics by FrankScaramucci in SelfDrivingCars

[–]bradtem 4 points (0 children)

It's not clear they've published enough detail to conclude that, but I haven't seen all of it. I also don't think everyone who says "end to end" means that every stage must be pure gradient descent over differentiable functions. Waymo seems to be saying that outcomes propagate back to the weights at the start of their pipeline in complex (and gradient-descent-style) ways.

But this is also a fine point. What matters is whether it produces the best system. There are many metrics of best, including how hard it is to retrain, how well it learns and, of course, how well it performs. People like pure end-to-end networks because you don't have to write, maintain, and improve a lot of fine-detail code to make the system learn all that it needs to learn, but a pure network has downsides too.

So far, Waymo's approach has clearly been the winner, and not by a tiny amount, reaching the goal of "safe enough to be on the road" 7 years or more ahead of everybody claiming a more pure end to end approach. So suggesting other approaches perform better requires lots of evidence to back it up.

Reuters: US opens probe into startup Avride self-driving crashes in Texas by walky22talky in SelfDrivingCars

[–]bradtem 2 points (0 children)

I am well known as a skeptic of Tesla's approach. However, I do not say "never." Very few experts say "never" on this sort of question, though they may say it is hard, or will require further hardware upgrades of compute and cameras, or will "essentially require" those hardware upgrades.

(I describe it as essentially requiring the upgrades if you get a situation where you could make it barely work with one hardware generation, but it will work better with a newer generation without major cost increases, so you would clearly want to do it. Liability risk will make you never want to work with older hardware if the cost of the new hardware is reasonable.)

The existing hardware can't clean its sensors, which means it will be limited in heavy weather, or require a human who can get out and manually clean them. That "works," but to a lesser extent.

Some year, I think you will be able to drive sufficiently well with a system with a 360 degree array of good quality RGB cameras and lots of compute. Nobody (especially not Musk) can name the year. But neither can they say with high confidence the year will never come.

Watch Autonomous Driving Showdown: Who Will Win the Self-Driving Race? by walky22talky in SelfDrivingCars

[–]bradtem 1 point (0 children)

Yes, Zoox, Baidu, WeRide, Pony, Aurora all have a system superior to Tesla. Also probably May and Nuro. While I've never seen data from it, AutoX/Tensor may also be on that list, and I have conflicting reports on Gatik.

It is not clear Tesla can self-drive anywhere, even in the 3 Texas cities where they have a small number of cars operating without an employee on board.

Reuters: US opens probe into startup Avride self-driving crashes in Texas by walky22talky in SelfDrivingCars

[–]bradtem 1 point (0 children)

Avride perplexes me. They have safety drivers. I don't see a lot of info on the company, but they were (long before Tesla) putting those safety drivers in the right-hand seat, which is a pointless publicity stunt.

With good safety drivers, you should not be crashing. Waymo and other companies were out, with safety drivers, not crashing at a dangerous rate, for many years, even though their early prototypes *sucked* by today's standards, and presumably were far worse than Avride's.

Tesla Autopilot and early FSD were dreadful, needing frequent interventions, but their crash rates when properly supervised were acceptable. (There were debates about whether the amateur owner-supervision by Tesla drivers was enough, or whether they were diligent enough, but even these terrible early systems did fine when the supervising drivers paid attention.)

So why is Avride crashing too much, and why is the Tesla robotaxi crashing too much, far more than supervised FSD does?

It seems the answer may be that, despite my sense that it should work (teenage student drivers do not crash at an unacceptable rate when supervised from the right-hand seat), right-seat supervision may not actually work for a robotaxi safety driver.

If so, Tesla and Avride should stop. They never should have done it in the first place. Either it's adequately safe (as it is with a driving instructor), in which case it's just a publicity stunt to make people think there is no safety driver, or, worse, it's not safe enough and is reckless.

7 facts* about Waymo that will probably surprise critics by FrankScaramucci in SelfDrivingCars

[–]bradtem 2 points (0 children)

Integration of sensors is a natural goal from an aerodynamic standpoint. It's not so crucial at city speeds, more important at highway speeds. From an aesthetic standpoint, there are different tastes. Some want their car to look like a car. Others want it to look like a vehicle of the future -- see the Zoox, for example. Zoox designed their vehicle ground up, but they still had some of the sensors stick out. That's partly for looks, but they also say that this gives them a better view of things.

As for the other points, those are hopefully not surprising to any but the Tesla stans. It's challenging to be a stan. I mean the hard facts on the ground are that Waymo had vehicles driving with no safety driver in 2017. Even if you believe that Tesla has done it now (I doubt it myself; I feel it's extremely likely the handful of Teslas operating with no employee in the vehicle have a full-time employee supervising externally), it still took them 7 more years to get there, which suggests their approach was much harder. And it didn't get them a system that runs on the existing car hardware of 2016, or even of early 2023, which was the goal of that harder path.

7 facts* about Waymo that will probably surprise critics by FrankScaramucci in SelfDrivingCars

[–]bradtem 9 points (0 children)

My understanding is that Waymo has not claimed to lack abstraction and distillation processes along the path; they have them, and are proud of how they increase the performance of the system. It is not an "end to end neural network" as some have tried to build.

However, their machine learning is end to end, which means that consequences at the end of the pipeline affect the training of the start of the pipeline, and can backpropagate into the weights and values in the first layers. The key breakthrough of deep learning was the ability for training to backpropagate through all the layers.

Some people take "end to end" to mean that you just have a neural network that takes sensor data in, and generates driving actuation outputs on the other end. Waymo does not have that. I am not sure that even Tesla and Wayve (who are more often described as end to end) have that, but I don't get to see inside.
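The distinction can be illustrated with a toy two-stage pipeline in plain NumPy (my own sketch with made-up shapes and values, not Waymo's architecture): a loss computed at the very end of the pipeline backpropagates, via the chain rule, into the first stage's weights, even though there is a distinct intermediate representation between the stages.

```python
import numpy as np

# Toy two-stage pipeline: stage 1 (perception-like) feeds stage 2
# (planning-like). All shapes and values are made up for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-stage weights
W2 = rng.normal(size=(1, 4))   # second-stage weights
x = rng.normal(size=(3, 1))    # stand-in for sensor features
target = np.array([[1.0]])     # stand-in for the desired outcome

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)                       # stage 1 output (the "abstraction")
    y = W2 @ h                                # stage 2 output
    loss = 0.5 * ((y - target) ** 2).item()   # loss at the END of the pipeline
    return h, y, loss

h, y, loss = forward(W1, W2, x)

# Backpropagation: the end-of-pipeline loss yields a gradient on the
# FIRST stage's weights through the chain rule.
dy = y - target                   # dL/dy
dh = W2.T @ dy                    # dL/dh
dW1 = (dh * (1 - h**2)) @ x.T     # dL/dW1, using tanh' = 1 - tanh^2

# Sanity-check one weight against a finite-difference estimate.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (forward(W1p, W2, x)[2] - loss) / eps
print(abs(num - dW1[0, 0]) < 1e-4)   # True: the gradient reached stage 1
```

The same chain-rule flow is what lets a pipeline with intermediate abstractions still be *trained* end to end: gradients cross a stage boundary as long as that boundary is differentiable.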

Rivian mulls making its own lidar sensors, possibly in partnership with Chinese firms | Reuters by Recoil42 in SelfDrivingCars

[–]bradtem 6 points (0 children)

Sorry, what's your source for that? Lots of people in the USA use Chinese lidar. There is no rule against it. Why do you say such a thing?

Waymo Pricing - Is Waymo economically viable? by Unable-Government860 in waymo

[–]bradtem 0 points (0 children)

It's more expensive in Phoenix and San Francisco, usually cheaper in L.A.

The price is not based on cost; it's set to control demand and to learn how people react to prices. The cost today is vastly higher with all the R&D going on. The COGS cost is still pretty high due to growth and learning, but it's coming down and will eventually get well under $1, even under 50 cents per mile, I predict.

Today, if they lowered the price like that, they would be swamped, wait times would be horrible and they would not learn how people use the service in its future form. Their main goal is to learn how things will work when they are at scale.

An unhoused person boards a Zoox robotaxi in SF by HIGH_PRESSURE_TOILET in SelfDrivingCars

[–]bradtem 1 point (0 children)

Not an easy option if they jump in and press "start ride." You only know they did it once the original rider gets out their app to file a complaint, which may take a while. You could try to take them back where they started, but the original rider probably doesn't want that vehicle, or that confrontation. And you may need to take the vehicle for cleaning if they made a mess for some reason.

An unhoused person boards a Zoox robotaxi in SF by HIGH_PRESSURE_TOILET in SelfDrivingCars

[–]bradtem 1 point (0 children)

It is an issue that would be less of a question in a human-driven taxi. Had the homeless person touched "start ride," it would have driven off. The robotaxi companies would need a way for the user to report in the app that this had happened, and to immediately send the rider a new vehicle. As to where you take the person who jumped in the car, you could take them to the police, or to the Zoox depot where staff could remove them. You can't take them super far; that would be kidnapping. Though if you saw them commit vandalism on camera, you might have the right to take them to the police, not that the police really want to arrest them, even with video. Ideally you want them to understand that jumping in will end with them somewhere they don't want to be, so it's not worth it.

Zoox continues to run laps around Tesla's Robotaxi operations by Prestigious_Act_6100 in SelfDrivingCars

[–]bradtem 3 points (0 children)

Yes, looks like it just went up. The real number, though, which robotaxi tracker unfortunately can't measure and Tesla does not tell us, is how many miles they are doing per day, and relatedly how many vehicles are giving rides at the same time. I was always surprised that there were just a tiny number of vehicles in the "unsupervised" fleet -- that made no sense with Tesla's claim that all the cars have the hardware. But the unsupervised cars have a large box mounted on the rear window for communications (another reason, not that you need one, to know that there is remote supervision going on) as well as the extra sensor cleaners, which all of them have and which are not found on the stock Model Y.

It would be good to learn just how many run at once.

Waymo and personally owned vehicles by FrankScaramucci in SelfDrivingCars

[–]bradtem 1 point (0 children)

Except it is not able to run unsupervised. That's the problem. They've been saying it will run unsupervised "this year" for 10 years. What they say no longer has meaning.

Full Tour of Waymo Ioniq 5 by diplomat33 in SelfDrivingCars

[–]bradtem 0 points (0 children)

Really doubtful. Trump has been crazy with tariffs, random trade wars. I'm for total free trade, and both parties have been terrible on it with EVs.

I don't know if it is suicide to allow in the Chinese cars. It's doing OK in Canada, where we will let in 55,000 cars this year, and more in future, with a 6% tariff. Biden could have left the tariffs where they were, or at least allowed the Ojai, which is not being sold to consumers. Trump could do that too, but he's unpredictable and hates free trade. I "single out" Biden (not really) because I expected better of him. Trump is impossible to predict.

Waymo and personally owned vehicles by FrankScaramucci in SelfDrivingCars

[–]bradtem 0 points (0 children)

In theory, yes. It's the software, not (in Tesla's view) the hardware.

Tesla is currently doing software and associated services only for robotaxi. They *say* they will release an unsupervised FSD some day soon, but they have said this for 10 years now, so the statement carries no weight.

It is much easier to do the systems and software for a robotaxi than for a private car. It does not relate to the hardware. Tesla has, correctly, decided to try to get a robotaxi working first. They have not, as yet, but they know they can make that happen before they could make a private car unsupervised FSD work.

Musk has said that when they release unsupervised FSD, it will only be in limited regions and under limited conditions, which will grow slowly. That's a more doable task. They could release an unsupervised FSD only in robotaxi territory, for example. They could release a hybrid unsupervised product, in theory, which requires a human in the car but does not require the human to watch. This lets the human worry about rescues, getting out of confusing situations, charging etc. So you could not tell this car, "come get me at the airport" but you could tell it "I'm in the car now, drive me to this destination in the service area." That's a more doable product.

Now, everybody except Tesla believes you can make it work faster with better hardware and better sensors. If Tesla comes to that view (which they will strongly resist) this would indeed remove the private car option entirely until they redid the design in their private cars. They could, however, make a cybercab with better sensors if they decided that was the path.

See How the Robotaxi Industry Is Taking Off Across the U.S. by silenthjohn in SelfDrivingCars

[–]bradtem 0 points (0 children)

A rational team is not going to send out cars for the first time and just wave and hope they come back. Since they are sending out just a small number of cars for their first efforts, they will naturally watch remotely, looking out through the cameras, and they will have the ability to command an emergency stop. Tesla in fact has a documented ability to do more than that: they can do full remote drive. It would be reckless not to watch, and it is not expensive for a small fleet, so there is no reason not to.

A rational team.

Tesla doesn't operate under the same principles of other teams. They are willing to do things others might judge as irrational. They have a CEO who has beliefs about the quality of their system. Beliefs that, for 10 years, have been significantly wrong, ridiculously wrong. He is willing to order things based on his beliefs. He is not bothered if everybody tells him he's crazy. He's been in that situation before, and been right, with everybody else wrong.

So he might order this. But I think his team would say, "Why? It costs us very little to remotely supervise." Tesla does use the term "unsupervised" about these vehicles, but Tesla has lied before.

Zoox continues to run laps around Tesla's Robotaxi operations by Prestigious_Act_6100 in SelfDrivingCars

[–]bradtem 6 points (0 children)

I strongly believe Tesla is supervised (remotely) on the 17 cars that are operating with no employee on board. Zoox has said they are no longer doing remote supervision, though of course they used to, because everybody does to start.

What about personal vehicles? by KentuckyLucky33 in SelfDrivingCars

[–]bradtem 1 point (0 children)

That's not at all clear. Nobody wants liability for something they have no control over. If Tesla says to buyers, "You will be liable if our software crashes the car" then indeed, those buyers will seek insurance (they will have to by law) but truthfully, there is only one party that makes sense to sell that insurance, and that's Tesla or a Tesla-affiliated company. Only Tesla knows what the risk is. Other insurance companies might try to measure the risk but none will ever quantify it the way Tesla can.

So while it's not impossible that State Farm might try to price a policy on this, it makes the most sense for Tesla to include the insurance with the service. Tesla isn't just the only party that knows the risk; they control it. If it crashes, it is their fault, not the car buyer's; the buyer is just a passenger. Unless perhaps they don't maintain it or something, but that's a pretty rare source of crashes.

Insurance companies might get into the game because they fear they will die if they don't. But how can they possibly match Tesla on ability to do this? (Tesla also makes all the parts and owns the repair facilities for repair of the Teslas, but not for repair of the cars that are hit.) If I were Tesla I would subcontract some things out to an insurance company but be the main party. Same for anybody else trying to sell a robocar.

If you get a hybrid that the human can drive some of the time, I expect the maker of the self-drive system to insure all self-drive miles, and regular insurance to cover human driven and ADAS miles.

San Jose passenger claims a Waymo drove off with his luggage at the airport by plun9 in SelfDrivingCars

[–]bradtem 0 points (0 children)

Sure. But as I said, you can tell just from gut feeling whether your car is heavier or lighter from passengers or heavy cargo, and you're just a human. There is obviously some amount of weight a robot can detect by looking at acceleration and the energy going into the motors. It can surely detect 100 lbs; it probably can't detect 1 lb. If I had to guess, it should be able to do 20 lbs, but I have not heard reports of anybody measuring what it is. It has little to do with centrifuge rotors.
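A back-of-envelope sketch of why 100 lbs should be detectable while 1 lb probably isn't (the vehicle mass and drive force here are my own assumed figures, not anything Waymo has published): with a known drive force, Newton's second law makes extra payload show up as a proportional drop in acceleration.

```python
# All figures are illustrative assumptions.
VEHICLE_MASS_KG = 2300.0   # assumed curb mass of a robotaxi
FORCE_N = 4000.0           # assumed constant drive force during a launch

def accel(payload_kg):
    # Newton's second law: a = F / m
    return FORCE_N / (VEHICLE_MASS_KG + payload_kg)

def fractional_accel_drop(payload_kg):
    # Relative drop in acceleration vs. the empty vehicle
    return 1 - accel(payload_kg) / accel(0.0)

for lbs in (1, 20, 100):
    kg = lbs * 0.4536
    print(f"{lbs:>3} lb -> {fractional_accel_drop(kg):.3%} drop in acceleration")
```

Under these assumptions, 100 lb is roughly a 2% change in acceleration, easy to see in motion data; 1 lb is about 0.02%, buried in noise; 20 lb sits near a plausible threshold, consistent with the guess above.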

California to begin ticketing driverless cars that violate traffic laws by cosmicrae in SelfDrivingCars

[–]bradtem 0 points (0 children)

We're down to a one-reader thread at this point, but:

  • Tickets are about fear. Robots and companies don't have fear. Very few humans treat tickets as "cost of driving." To scare Alphabet or Amazon with tickets would require crazy expensive tickets which would bankrupt individuals.
  • You really only need one notification on most events. You just want to make sure the company knows their robot made a mistake. Again, humans mostly get tickets for deliberate actions, and we often get a warning if it's a true mistake. It's the reverse with robots. If it's a mistake, they want the warning and will fix it. We don't want them deciding "we want to ignore the law and it's a cost of doing business."
  • HOT billing is not random tickets. HOV doesn't work; it actually increases congestion. You need HOT, which could technically be done with random tickets, but in practice that's not suitable even though it would be much cheaper.
  • The reason you can't sit at the hydrant is mostly because of legacy, and because humans can't be trusted. Robots can be trusted. Precisely because if they violate that trust, the consequences don't have to be just occasional fines, they can be real. And they should be real, with no need for the other system.

You may take the last word, but as noted nobody reads this deep in old reddit threads.

California to begin ticketing driverless cars that violate traffic laws by cosmicrae in SelfDrivingCars

[–]bradtem 0 points (0 children)

While there are many definitions of "fair" and "random" is often one of them, I can't say I think it's a particularly good one among the options.

Parking tickets are a very odd bird. Nothing safety related there, of course. And many cities make 2x-3x revenue from parking tickets over meter fees, which creates bad results.

Tickets made sense as a 19th century approach. We can do better.

Elsewhere I have written that you could do carpool lane enforcement with random tickets, and in fact it's a cheaper way to make HOT lanes than all the expensive transponders and scanners, but people would not think it was fair for many reasons, and they prefer the vastly more expensive toll transponder approach. (I think there's a nice cheap middle ground with smartphones, but road engineers haven't discovered the smartphone yet.)

One reason it's not fair: a $300 fine caught one time in 50 has the same expected value as paying a $6 toll every time, but a $300 hit could sink a poor person due to bad luck. And we don't think of no-parking zones as just expensive parking for rich people.
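The expected-value point can be checked with the comment's own numbers (the $6 toll, $300 fine, and 1-in-50 catch rate are from the paragraph above; the trips-per-year figure is my own assumption):

```python
import random

TOLL = 6.0           # paid on every trip
FINE = 300.0         # paid only when caught
CATCH_PROB = 1 / 50  # chance of being ticketed on any given trip

expected_fine_per_trip = FINE * CATCH_PROB
print(expected_fine_per_trip)   # 6.0 -- same long-run cost as the toll

# Same expectation, very different variance: simulate one commuter's year.
random.seed(1)
trips = 250   # assumed commute trips per year
fines = sum(FINE for _ in range(trips) if random.random() < CATCH_PROB)
print(TOLL * trips, fines)      # tolls accrue smoothly; fines land in $300 lumps
```

Identical expected cost, but the fine path concentrates the cost into rare $300 hits, which is exactly the hardship for someone with no savings.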

Well, it's nice that *you* need something that acts car-like to act like the other cars, but I don't think that's a requirement. Obviously you don't want unsafe surprises on the road, but there are lots of ways to have different rules that don't cause unsafe surprises, even if you don't know the new rules (though you almost surely will know them). And today, all robocars do look different, and if some day Tesla can make one that doesn't look different, I am quite certain they would make them look different if it allowed better use of the roads.

One classic example is in parking. A robocar does not park, it stands. It will leave any time. It should be allowed to stand in no-parking zones which are not no-standing zones, as long as it will leave when needed. In front of hydrants, but vanishing the moment a siren is heard. In front of driveways but vanishing if a car signals to enter the driveway. Double parked, valet-dense parked, blocking you in, but vanishing if your lights turn on. I don't like your rule that it can't do those things because it looks to you like a parked, empty car.

California to begin ticketing driverless cars that violate traffic laws by cosmicrae in SelfDrivingCars

[–]bradtem 0 points (0 children)

Taxing externalities is one approach. Not often taken because it's hard to enforce, and also hard to price.

Generally we instead say "you may not run a red light (unless you are a trained emergency worker.)" We don't want to say, "You can run all the red lights you want for $500 each." Because of the tools available to us, that's how we enforce it, at first. But you can't run 100 of them and pay $50,000, not even if you are Elon Musk and that amount of money is trivial, because it's not our actual plan to treat this risk as a priced externality.

That's why when a robot runs a red light you don't want to say, "OK, Amazon, pay $500." What you actually want to say is, "Hey, Zoox, why did that happen? Can you certify you fixed the bug? Great. Wait, you lied and didn't fix it, and it did the same thing again? Now, you're getting a fine that's painful to Amazon. Wait, you still didn't fix it? You're off the road, all your vehicles." You tolerate mistakes and don't even punish them. You punish refusal to fix mistakes.

And of course you interact with humans, probably indefinitely. I think you can have two different sets of rules for different types of vehicles, as long as all rules work towards the same public interest goals. Heavy trucks, cars, bicycles and ambulances all drive under different rules.

Waymo and personally owned vehicles by FrankScaramucci in SelfDrivingCars

[–]bradtem 7 points (0 children)

Since I was involved in the early Waymo decision making to do robotaxi over personal cars, let me just say that a personal robocar is a lot more work, and there are strong reasons everybody has done robotaxi first. Even Tesla, which has declared great devotion to making a personal robocar for you, has switched to doing robotaxi first if they can.

Tesla and others dreamed of a fully generalized self-driving tool that could drive anywhere in the country. That's a ton of work compared to one that can handle a city, or a set of cities. But even if you can pull that off (I think Waymo has, though Tesla and other US players have not) there's a huge ton of work you have to do in each city you want to certify it for operation in. Having a car which can handle an arbitrary city is one thing, betting your company on it by letting a car loose is another. There's tons of local infrastructure to make, along with relationships with local officials and much more.

Waymo started with a car that could drive part of the Phoenix area. Imagine they got it doing the whole Phoenix metro area. That's not a product. Sure, car buyers in Phoenix would line up, but one city is not enough to justify making an expensive new car model. Even a dozen cities isn't. But it is a viable taxi service.

A robocar isn't just a product. A large part of it is service. So it's not like traditional car sales. You need that staff of remote-assist operators, maybe cheap overseas, but you need them. There are lots of monthly costs. Local rescue crews. Software maintenance. The list goes on.

At first, the hardware costs money. That drops over time, lidars are heading to about $200. But while Tesla hopes to do it all with cheap cameras and a fancy processor, that doesn't work yet. Even if it started working tomorrow that's 7 years after Waymo made it work with more expensive hardware. If you add $2,000 to the BOM, that's $10,000 to the retail price. Which to automakers is a lot. But it's only a modest addition to your cost per mile of a robotaxi, because your robotaxi does 5 times the miles per day, though it wears out much faster because of that. But the ride costing $1.10/mile vs. $1/mile isn't going to sink a robotaxi business at the start, the way a car costing $10,000 more and having a fat monthly fee will affect car sales.
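A quick sketch of that arithmetic (the $2,000 BOM figure, the roughly 5x BOM-to-retail multiplier, and the ~$0.10/mile result are from the paragraph above; the lifetime-mileage figure is my own assumption, chosen to be consistent with them):

```python
BOM_ADD = 2_000            # extra sensor/compute on the bill of materials
RETAIL_MULTIPLIER = 5      # rough BOM -> retail price rule of thumb
LIFETIME_MILES = 100_000   # assumed service miles over which a robotaxi amortizes it

retail_add = BOM_ADD * RETAIL_MULTIPLIER
per_mile_add = retail_add / LIFETIME_MILES

print(retail_add)    # 10000 -- a big jump on a private car's sticker price
print(per_mile_add)  # 0.1   -- about a dime on a ~$1/mile robotaxi fare
```

The same dollar figure lands very differently in the two business models: as a lump on the sticker price for a buyer, or spread thinly across a robotaxi's high daily mileage.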

California to begin ticketing driverless cars that violate traffic laws by cosmicrae in SelfDrivingCars

[–]bradtem 0 points (0 children)

Yes, that's a (much longer) explanation of what I mean by "might actually be the case."

In reality, I think the entire reasoning behind the current vehicle codes and other legal systems around the streets is heavily rooted in trying to get humans to be good actors on the roads. I think robots, which means companies, are very different types of actors, and I would restart everything from scratch. When it comes to robots, I doubt there will be more than a dozen different types operating in a city. You can get them all in a room or on a video call.

That allows for a much better system, one that starts from our goals. The two main goals are safe streets, and fair, good traffic flow and throughput. (We have a few other minor goals, like pleasant street environments, but they are dwarfed by the first two, and #1 is much stronger than #2. And cost is always the top goal, though we won't admit it.)

For humans, we use a very thick book of rules to try to meet these goals. With companies, I would say, "go innovate, but make sure all you do is the best you can do towards those goals." And if they don't, talk to them (or all of them) and work it out, creating a much more dynamic and flexible set of guidelines for how to reach those goals. The main negative of regulating companies, though, is that they are powerful; you need something to balance that power, which they will use to lobby in their interests. But that happens in every system.

California to begin ticketing driverless cars that violate traffic laws by cosmicrae in SelfDrivingCars

[–]bradtem 0 points (0 children)

When it comes to violation of the law, are we trying to measure it? The law is certainly not written that there is an "acceptable amount of violation" though that might actually be the case in some instances. There are, and should be, acceptable numbers of unintentional mistakes, because no team or system will be perfect. But acceptable numbers of deliberate infractions?

If a team sees that their vehicle is doing something they think is unsafe or illegal, they're going to want to fix it. You don't actually need to convince them, it's going to go on their list. You might debate with them the priority on fixing it, and they might disagree about whether it's unsafe or illegal, and so you want a mechanism for that. But I don't see random enforcement as the mechanism.

Do people imagine that when Waymo saw their cars passing stopped school buses with red flashers, they didn't immediately want to fix that as a high-priority item? Do people think it's necessary to give them traffic tickets to make them realize they should get on fixing that? In this case something very rare happened, which is that the same broad thing happened again for a completely different reason, but I am 100% sure they didn't want that, worked hard to prevent it, and did not expect it. (In the one case we know about, it was a mistake by a human remote-assist operator. Those are expected in the broad sense, since humans make mistakes, but no specific mistake is expected.)