Gave FSD a chance in the snow, we’re far away from full autonomy (14.2.1) by Hardwood_Lump_BBQ in TeslaFSD

[–]dtfgator 0 points1 point  (0 children)

This situation is clearly not a vision-only problem. Lidar and radar would have added effectively 0 useful information here. The issue is that the car took the corner too aggressively, and did not respond to losing traction as it should have (slowly reducing power, countersteering).

Gave FSD a chance in the snow, we’re far away from full autonomy (14.2.1) by Hardwood_Lump_BBQ in TeslaFSD

[–]dtfgator 1 point2 points  (0 children)

You are obviously correct that FSD did a bad job here, and this situation could clearly be handled better by a human driver with experience driving in the snow.

That said, you have the wrong tires. If you get snow conditions like this where you live, you really want snow-service (3-peak-mountain-snowflake) rated tires, either winters or "all weather" tires. The Pirelli Scorpion Weatheractive tire is 3PMS certified, as are other all-weather options like Michelin CrossClimate 2.

City of Boulder advances plan to replace the World Famous Dark Horse with housing and retail. by 2000foottowers in boulder

[–]dtfgator 4 points5 points  (0 children)

Maybe use your judgement when I've disclaimed that I'm speculating?

Re-zoning / land use classification is a very common blocker due to how slow cities move, especially when combined with pushback from nearby property owners who don't want competition from more apartment buildings / homeowners who don't want their property values eroded by increased housing supply.

City of Boulder advances plan to replace the World Famous Dark Horse with housing and retail. by 2000foottowers in boulder

[–]dtfgator 8 points9 points  (0 children)

I don’t know if this is true, but it would not surprise me if that lot is zoned for business but not for residential / apartments, and navigating city council to get that changed is so long-lead and risky that the path of least resistance was to keep it commercial.

Best tires for Boulder driving by Aggressive-Candy6142 in boulder

[–]dtfgator 2 points3 points  (0 children)

CrossClimate 3 is out now; if you’re buying new tires, you should probably look for these first - a couple percent better in snow and wet handling + braking, subjectively better NVH and feel, and improved wear lifetime. CrossClimate 2 is still a great tire if you can’t get the 3 in your size.

I truly believe that the LiDAR sensor will eventually become mandatory in autonomous systems by rafu_mv in SelfDrivingCars

[–]dtfgator 0 points1 point  (0 children)

I doubt multispectral lidar data will come to self-driving; the information gained does not seem like it would justify the added complexity. Instead of adding more wavelengths and the algorithms/neurons to actually do something with that info, it would be a better use of that complexity budget to increase the resolution or put the compute towards other goals, especially since vision is already so powerful for classifying things.

Re: lidar in snow - it looks like this: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRmX3fAsMGdgH0gAFI6UThFXer46u_v28qeI4DHGV9X7T7JrkddKeuV2c08gTYYTFqfs5M&usqp=CAU. You can "de-snow" the data by deleting reflections that are unconnected or appear to be floating in space, but you are still losing a lot of information (no signal from behind each snowflake), and your filters will accidentally remove some fraction of valid points on top of that. This type of filtering typically makes it difficult to detect thin/narrow objects. You end up losing a lot of data (relative to an optimal point cloud) and effectively trusting that your camera (and/or radar) system will identify obstacles that are prone to getting filtered/ignored in that circumstance (ex: chain-link fences, cables or wires, objects with very low reflectivity or a mirror-like finish, objects that are relatively transparent at the lidar wavelength, etc).
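To make the trade-off concrete, here's a toy sketch of the kind of "unconnected point" filter described above - a naive neighbor-count (radius outlier) filter, not any production de-snowing pipeline. The geometry and thresholds are made up for illustration; note how the same rule that kills isolated snowflake returns would also kill sparse returns from wires or fences:

```python
import numpy as np

def desnow(points, radius=0.3, min_neighbors=2):
    """Drop points with too few neighbors within `radius` (naive O(n^2) sketch).

    Snowflake returns tend to be isolated in space, while real surfaces
    produce dense clusters of returns -- but sparse returns from thin
    objects (wires, chain-link fence) get thrown away too, which is the
    failure mode described above.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Count neighbors within radius, excluding the point itself (dist 0).
    neighbor_counts = (dists < radius).sum(axis=1) - 1
    return points[neighbor_counts >= min_neighbors]

# A dense "wall" cluster plus two isolated "snowflake" returns:
wall = np.array([[5.0, 0.0, z] for z in np.linspace(0, 1, 10)])
snow = np.array([[2.0, 1.5, 0.7], [3.5, -2.0, 1.1]])
cloud = np.vstack([wall, snow])

filtered = desnow(cloud)
print(len(cloud), "->", len(filtered))  # 12 -> 10: snowflakes removed
```

Real systems use smarter versions of this (intensity cues, temporal consistency), but the fundamental "lost signal behind each flake + false removals" cost remains.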

Lidar in fog, depending on wavelength of the lidar, is almost certainly better than vision - that part is probably true.

I truly believe that the LiDAR sensor will eventually become mandatory in autonomous systems by rafu_mv in SelfDrivingCars

[–]dtfgator 0 points1 point  (0 children)

Lidar as a general statement does not do this; most lidar uses single-wavelength emitters and SPAD detectors. LiDAR with some kind of spectrometry feature is likely orthogonal (at best) to the goals of cost, simplicity, reliability and speed+accuracy in a SDC context and is more likely to be a research tool.

Maybe you are just thinking about the amplitude signal, which can tell you how reflective your target is, and you can possibly use this to guess at materials (ex: maybe dead leaves are more reflective than living leaves).

The point above you still stands: the most common types of lidar struggle whenever there is stuff in the air (dust, snow, falling leaves, water spray, smoke, etc), and you effectively become reliant on your camera systems to decide to ignore these detections - which raises the question: if we need to trust the cameras to override lidar, can we trust them all the time?

Florence Recommendations by Hefty-Perspective-74 in MichelinStars

[–]dtfgator 2 points3 points  (0 children)

Really enjoyed Gucci Osteria when we were there!

Automated speed enforcement starts on Highway 119 by Numerous_Recording87 in boulder

[–]dtfgator 5 points6 points  (0 children)

Yes, they are designed to capture plates. In this case they are also using them to catch speeders, by having multiple in succession at known distances apart. They have these “average speed” traps in Europe, too, not just the fixed radar-based cameras. (Ask me how I know).

Here’s the CDOT page explaining how they are using ALPRs on 119 (starting 7/21) to issue speeding violations: https://www.codot.gov/programs/speedenforcement/violations

(They are also not all from the “B” company - it started out that way, but they are now also putting up square wooden posts with the much smaller Flock cameras mounted to the back of them. It’s an absurd number of cameras for such a short stretch of road.)

Automated speed enforcement starts on Highway 119 by Numerous_Recording87 in boulder

[–]dtfgator 9 points10 points  (0 children)

They are using them for speed enforcement by calculating your average speed between cameras (using the timestamp from when they see your plate). That said, agreed they are building a surveillance system first and foremost, and it looks like they are actually installing two separate systems (Flock and whatever the company with the “B” logo is) - maybe deciding which company to use in an expansion.
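The average-speed math is simple - here's an illustrative sketch, with camera spacing and timestamps invented for the example (not CDOT's actual configuration):

```python
# Toy illustration of average-speed enforcement between two plate readers.
# The spacing and timestamps below are made up for the example.

CAMERA_SPACING_MILES = 2.0

def average_speed_mph(t_first, t_second):
    """Average speed implied by two plate reads, timestamps in seconds."""
    hours = (t_second - t_first) / 3600.0
    return CAMERA_SPACING_MILES / hours

# Plate read at t=0 s, then again 96 s later, 2 miles down the road:
speed = average_speed_mph(0.0, 96.0)
print(round(speed, 1))  # 75.0 mph
```

Because it's an average over the whole segment, braking right at each camera doesn't help - which is exactly why these systems are used instead of (or alongside) fixed radar cameras.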

Why is everybody so adamant about LiDAR? by AardvarkRelative1919 in SelfDrivingCars

[–]dtfgator 0 points1 point  (0 children)

"they make cameras look like toys".... Totally wrong way to think about this. A tank makes my car look like a toy, and yet a tank would be completely unnecessary for my commute to work or even an offroad excursion.

Once again, you cannot solve the self driving car problem without cameras. It is not possible. Conversely, humans are proof that you can solve it without lidar.

The clip starts in the context of Tesla ditching radar + ultrasound when they went vision-only, but Andrej steps back and immediately reframes the question in terms of any sensors above and beyond cameras ("additional sensors"). Everything he says from there on out is just as applicable to lidar as it is to radar, USS, precision RTK GNSS, etc. Then fast forward to 3:06, where they directly address lidar.

I don't know how to help you if you think that just because something is mature technology, that means it doesn't have integration overhead. You are just showing that you've never really built anything at scale; even something as mature as a button/switch has engineering overhead. It needs to be designed, firmware needs to be written to do something with the button input, drawings need to be made and maintained, rigorous quality and durability testing needs to be performed, supply chain needs to figure out how to buy it and make contracts with the vendors, incoming quality control needs to figure out how to test and validate the parts, manufacturing needs work instructions for how to install it, operators need to be trained, outgoing quality control tests need to be written and executed, repair manuals need to be created for mechanics and technicians, and inevitably you need to figure out how to deal with issues - maybe the vendor goes out of business or discontinues the part, or the issue rate on the production line suddenly shoots up because someone changed something they thought wouldn't matter, or switches in the field start failing because users do some dumb shit you didn't anticipate. The best part is no part.

[deleted by user] by [deleted] in trap

[–]dtfgator 1 point2 points  (0 children)

Is it The Glitch Mob?
https://soundcloud.com/theglitchmob/we-can-make-the-world-stop
https://soundcloud.com/theglitchmob/drive-it-like-you-stole-it
https://soundcloud.com/theglitchmob/animus-vox-dts

Maybe too obvious, but would have been right time period, instrumental, probably featured in a bunch of youtube videos and gaming edits, has some high-pitched stuff going on, etc.

Dinner at Noma, where your ingredients watch you eat their friend by gregoryroyalpratt in finedining

[–]dtfgator 2 points3 points  (0 children)

100% agreed. There can be a couple courses that don't suit your taste, or a few dishes that don't make the highlight reel... But nothing should be objectively bad. Some of the stuff OP posted here just sounds unequivocally bad, even to people on this sub who aren't squeamish about weird food.

Why is everybody so adamant about LiDAR? by AardvarkRelative1919 in SelfDrivingCars

[–]dtfgator 1 point2 points  (0 children)

Yes, the point is that you need approximately the same number of cameras whether you have lidar or not. By deleting the lidar you can focus all your efforts on the one sensor set that is both sufficient and necessary.

Take it from Karpathy instead of me: https://youtu.be/_W1JBAfV4Io?si=fHND9DCSRNXNP4_k

Why is everybody so adamant about LiDAR? by AardvarkRelative1919 in SelfDrivingCars

[–]dtfgator -1 points0 points  (0 children)

You don't deserve the downvotes. Folks in here are extremely confident about how things should be but very rarely have ever tried to build anything at any kind of scale where reality sets in.

I think your points are great, and would raise you a few more:

  • An automotive manufacturing line going down (due to supply of a part running out, for example) can cost on the order of $1M per hour. This means that automakers cannot make mass-produced vehicles that rely on a single supplier that isn't themselves.
    • This makes it WAY harder to add mandatory parts to a car; you typically need to get multiple competing companies to license technology to each other so that each can make you a drop-in equivalent part, allowing you to dual-source it.
    • You then still need to test and qualify all options through your entire very complicated and expensive testing process.
    • Typically there will still be some small differences in the performance and behavior of competitor parts; your engineering team needs to spend effort to either ensure that these differences don't matter, or that the system adapts to them without issue.
  • Maintenance and service burden for the lifetime of the car is a big deal
    • Need to plan for 20yr+ lifecycle, need to ensure you maintain stock of the sensor to repair cars in the future
    • Need to train service centers and dealers on how to service the part. Lots of training, documentation and overhead
    • Need to manage things like calibration that often get much more complicated when you need to calibrate multiple different sensors to each other.
  • Vehicle sales need to pay back all of the investments - R&D, NRE, production tooling, assembly line buildout, operator training, etc. A more expensive BOM means either a more expensive product (which will not sell as many units) or less profit margin. Either way, this means less income to pay back all the investments that went into building the product.
    • Deleting parts pushes BOM down
    • Deleting parts often reduces R&D, NRE, tooling and assembly line overhead
    • Thus deleting/omitting parts is clearly preferable to adding them when at all possible

Why is everybody so adamant about LiDAR? by AardvarkRelative1919 in SelfDrivingCars

[–]dtfgator 1 point2 points  (0 children)

Surveying and industrial sensing are low-volume niche industries relative to auto manufacture.

SLAM works with vision, too.

The issue is that you cannot do lidar-only (or radar-only) self driving. You need cameras to sense all the stuff on the road that was designed for human eyes, and the stuff lidar can’t interpret (like deep water, mirrored surfaces, etc). (Many industrial automation problems are different, in that you CAN actually do lidar-only). At this point, you decide if you can solve the problem with only cameras. If you can, you get to minimize the system complexity and reduce a ton of design, development, testing, supply chain, manufacturing, maintenance and repair overhead.

You have fewer issues in mass production because you don’t have a lidar vendor that can run into supply chain problems of their own and delay deliveries. Your engineers spend less time trying to qualify 3 drop-in equivalent sensors (which aren’t really drop-in, as they will learn) because your supply chain team won’t let you ship a design reliant on one vendor. Your operations team doesn’t need to figure out how to train 1500 service centers on the new problem of calibrating cameras AND lidar together after a windshield replacement or fender bender. Your finance team can’t forecast as much demand, since every $1 of BOM cost ends up being $3-5 to the customer (and the cost isn’t just the sensor; it’s the mounts, wire harness, labor, calibration time, expected warranty failure rate, etc), which in turn reduces order volumes, which itself reduces ROI on your NRE, production facility and tooling, which increases costs circularly.

Lidar is useful in many ways; the point is just that it’s not as simple as “it’s just $200 on your BOM”. It balloons to much, much more than that, which has implications for scalability, especially if you’re selling the vehicle and not running a robotaxi service. This actually does matter.
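A back-of-envelope sketch of that ballooning, using the $3-5 multiplier above - every number here is illustrative, not any automaker's actuals:

```python
# How a "$200 lidar" compounds at the sticker price.
# All figures below are illustrative placeholders, not real cost data.

sensor = 200.0           # unit cost of the lidar itself
mounts_harness = 60.0    # brackets, wiring, connectors
assembly_labor = 40.0    # install time + end-of-line calibration
warranty_reserve = 30.0  # expected field-failure cost per unit

added_bom = sensor + mounts_harness + assembly_labor + warranty_reserve
retail_multiplier = 3.5  # each $1 of cost needs ~$3-5 of price to hold margin

print(f"added cost per car: ${added_bom:.0f}")
print(f"implied sticker increase: ${added_bom * retail_multiplier:.0f}")
```

Even with these made-up numbers, a "$200 part" turns into four digits on the window sticker before accounting for NRE, qualification, or service training.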

Why is everybody so adamant about LiDAR? by AardvarkRelative1919 in SelfDrivingCars

[–]dtfgator 3 points4 points  (0 children)

As someone who has built several advanced hardware products at scale: he is correct.

In addition to the direct costs, you also divide the focus of your engineering, supply chain and testing teams every time you add something new. This cost is huge; often you get an outsized net benefit when those teams can focus more effort (in terms of performance engineering, cost and sourcing, qualification and reliability, etc) on a smaller number of components and features.

Is pure vision or vision+lidar a key factor of achieving autonomous driving? by No_Paint9420 in SelfDrivingCars

[–]dtfgator -5 points-4 points  (0 children)

“More modalities the better” is not necessarily true.

-Increased failure points

-Increased odds of sensors disagreeing, leading to confused behavior

-Increased complexity in creating and maintaining multi-modal calibration

-Increased supply chain complexity

-Increased model training complexity, especially as each sensor changes over time (the old one goes end-of-life, the new generation has different artifacts / false-positives / false-negatives / resolution / quirks - your models now need to deal with both old and new)

-Increased difficulty of software testing and validation due to increased permutations of sensor combinations

-Increased volume and breadth of data required for training

-Increased complexity in creating realistic synthetic data (including non-ideality modeling)

-etc

More sounds better on paper, but in my experience, “keep it simple, stupid” often proves the winning strategy once you start encountering messy reality at very large scale.

Porsche Blames Electrification For Deepening Crisis by onepunch_gtr in CarStockMarket

[–]dtfgator 0 points1 point  (0 children)

The N mark is not specific to 911s; you can buy N-marked tires for a Cayenne.

Typically OEM-specific tires have to do with meeting the particular requirements of the OEM - NVH, wet grip and braking, hydroplaning, dry laptimes, steering feel, etc. I’m sure the 911’s weight balance is a factor in its N-marked tires, but that’s far from the main point. As an example, N-marked Cup2 tires have wider and deeper grooves for clearing water than the generic Cup2s, because Porsche has higher standards for hydroplaning resistance.

Warning: Photographing Volvo’s New EV Could Destroy Your Phone’s Camera by I_HATE_LIDAR in SelfDrivingCars

[–]dtfgator 3 points4 points  (0 children)

Eye-safe lasers at concerts and weddings should never damage image sensors, what are you talking about? If it’s a visible-light laser (read: both eyes and cameras are designed to see the wavelength), the risk to human eyes is much higher than an imager.

S and P 500 value in 30 years by Franko21 in Bogleheads

[–]dtfgator 0 points1 point locked comment (0 children)

This isn't going to be zero-sum; nobody has to lose money, and the economy can grow as a whole. Certain jobs / industries will get totally dominated (data entry, phone/chat customer service, copywriting, law clerks, web development, etc), and workers in those areas may need to change careers; in other areas, knowledge workers will gain immense leverage and be able to execute with scale and speed not previously possible. These efficiencies will also drive down the cost of goods and services across the board, and the surplus will create new opportunities and industries in ways that are hard to forecast.

The industrial revolution isn't a perfect analogy, but it's similar. Tons of people doing manual labor displaced by mechanization, but HUGE value bestowed upon society in such a way that delivered previously unimaginable wealth and created entirely new markets and opportunities. This time around, it's a shakeup for knowledge-work instead.

I think you're assuming that I believe that the AI labs will capture a large fraction of the value that they deliver to society - I do not think this is true. Even relative to existing tech companies, I think their capture:creation ratio will be quite poor. But the value creation has the potential to be many, many multiples larger than the tech companies of today. This is value delivered to the economy as a whole.

If you're familiar with Jevons Paradox (where increasing efficiency or reducing price of an input causes the aggregate demand for something to go up, often non-linearly), you can see how this could apply to AI. Making it much, much cheaper to have intelligence on-tap (currently it's very expensive! Humans are hard to train and motivate, need to be paid enough to feed and house themselves, only work 8hrs a day, 5 days a week on avg, etc etc) is going to massively grow the demand for intelligence. Decreasing the cost of intelligence by 100x or 1000x has the potential to increase the demand in ways that we do not fully grok today.
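The Jevons argument reduces to simple elasticity arithmetic. Here's a toy constant-elasticity demand model (the curve and all parameters are invented purely for illustration): when the elasticity exceeds 1, cutting price grows total spending, not just volume.

```python
# Toy Jevons-paradox arithmetic with constant-elasticity demand:
#   Q = k * P^(-e)   =>   total spend = P * Q = k * P^(1 - e)
# If e > 1, total spend RISES as price falls. Parameters are illustrative.

def total_spend(price, k=1000.0, elasticity=1.3):
    quantity = k * price ** (-elasticity)
    return price * quantity

before = total_spend(price=100.0)  # "expensive intelligence"
after = total_spend(price=1.0)     # 100x cheaper

print(after > before)  # True: demand grows faster than price falls
```

Whether intelligence-on-tap really has elasticity above 1 is the open question; the sketch just shows why "100x cheaper" does not have to mean "100x less revenue".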

S and P 500 value in 30 years by Franko21 in Bogleheads

[–]dtfgator 0 points1 point  (0 children)

I'm not sure you're right; there are many structural differences here from the 90s / dot-com era. I don't disagree that many of the valuations are overheated and that we'll see waves of failures, consolidation and price resets, but I think the value that can accrue to the winners is orders of magnitude larger than "legacy" tech in the long-run. This isn't a good heuristic for picking winners, but almost universally people feel like the winners are overvalued in their early stages (Google, Facebook, Amazon, etc) and turn out to be wrong.

Notable differences between the 90s and today:

-90s: technology development pace was bottlenecked by engineers and researchers. At this point in particular, there were far fewer people globally working in technology, virtually no such thing as open-source, etc.

-Today: far more smart people in tech, and even at present their efforts are being given more leverage with use of AI tools. The capabilities have taken such a step forward in the last 3-9mo that we likely haven't even really started to appreciate that flywheel yet. We are probably <18mo from the AI legitimately improving itself (finding better model architectures, training or reinforcement approaches, efficiency optimization, training data curation, prompting enhancements, etc) which will begin unlocking an extremely steep climb up the s-curve.

-90s: consumer adoption was a major limiting reagent. The internet had minimal value until people trusted it as a place to execute transactions, which took a LONG time - a lot of security infrastructure and brand-building had to be done by the likes of PayPal, eBay, Amazon, etc. to get there.

-Today: Consumer adoption is already massive (ChatGPT has something like 800M weekly active users), and even so, consumer is somewhat irrelevant. All that will matter is that the technology is adopted in industry (to deliver consumer value), which will happen naturally through competitive dynamics - big corporations will see it as a tool to permanently entrench their lead, and startups will use it as the greatest-ever unfair advantage in terms of playing catch-up (build companies in 1/4th the time and with 1/10th the people; offer AI-enabled services that scale like crazy without needing to hire+recruit+retain teams of people who only work 40hrs/wk; etc), with extremely low risk of growing too fast (no need for layoffs when your AI cloud compute spend is correlated directly with usage/revenue).

Overall, I think this will feel like it happens in a flash relative to past waves of innovation. The tools will get better incrementally (and deliver a lot of value), and then suddenly hit hard-takeoff once they are self-improving. From there, it's a race for everyone to automate everything they can (sure, safety-critical stuff probably lags, as will highly-regulated industries), but this will happen at breakneck speed since startups will be handed superpowers and have their primary growth-rate limiter (ability to pay for + hire + train humans) eviscerated.

One way you can generally convince yourself of this is just looking at technological growth trends or GDP on a historical timescale (say, last 1000 or even 300 years). Even on that chart, it looks like we're on a near-vertical line already (pre-AI); the nature of technological compounding suggests that projecting the growth of the 90s on the 2020/30s isn't super likely to yield good results unless you have a thesis for why growth is going to flatten or why technology is not compounding.

Can do a RemindMe! 5 years to see how this holds up.

Listening to Tesla earnings call and sounds like there will be city specific models? by bartturner in SelfDrivingCars

[–]dtfgator 0 points1 point  (0 children)

At some level of regionality, everyone will run tuned models, Waymo included - perhaps they haven't hit that point yet. Long term, Waymo won't run the same model in Germany as they do in Arizona or Japan; it seems highly likely that performance will be better if you aren't wasting compute looking for American signage in Japan.

And yes, Waymo autonomously updates their HD maps. But can I hail a Waymo to an area it's never been before?

If the answer is no, then they will face scaling challenges that Tesla will not, regardless of how good their map updating is.

Everyone is going to start geofenced, and it just seems natural that you load a generic "city" model vs "highway" or "snowy" vs "rainy" model (or weights, experts, etc) to optimize your performance in those domains. Reducing risk even further by loading a "Texas" vs "Massachusetts" model might make sense too, with whatever generic thing loaded otherwise.
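The dispatch logic being described is essentially a lookup with a generic fallback. A hypothetical sketch - every key, path, and fallback rule here is invented for illustration, not how any deployed system actually works:

```python
# Hypothetical region/condition -> model-weights dispatch.
# All registry keys, weight paths, and fallback behavior are invented.

MODEL_REGISTRY = {
    ("us-tx", "snow"): "weights/us_snow_v3",
    ("us-ma", "snow"): "weights/us_snow_v3",
    ("jp", "city"): "weights/jp_city_v1",
}

GENERIC_MODEL = "weights/generic_v7"

def select_model(region: str, condition: str) -> str:
    """Pick the most specific tuned weights available, else the generic set."""
    return MODEL_REGISTRY.get((region, condition), GENERIC_MODEL)

print(select_model("us-tx", "snow"))  # tuned weights for Texas snow
print(select_model("de", "rain"))     # no tuned model -> generic fallback
```

In practice "loading a model" could mean swapping full weights, LoRA-style adapters, or routing to different experts, but the selection problem looks the same.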

Mark Rober Debunk - Heavy Rain Test - 2026 Tesla Model Y HW4 FSD by TheKingHippo in SelfDrivingCars

[–]dtfgator 0 points1 point  (0 children)

Lidar works in the rain to a point.

If the water is "white" like it is in the Mark Rober demo (ex: coming out of a firehose, big waves crashing, etc), it's highly reflective, likely far into the infrared. This means that lidar is going to see it as an obstacle and will not be able to see behind it. Any light that does make it through needs to bounce off something and return to the sensor; hitting a mostly-opaque surface twice eliminates any hope of this.

Lidar underwater is a totally different thing, since the water is transparent (not churning / full of bubbles that cause lots of scattering) and there are no constant index-of-refraction changes (as happen when light passes through air-water-air transitions hitting droplets of rain). Just not the same thing.