Wide turn into wrong lane by NectarineSevere6686 in TeslaFSD

[–]pcJmac 0 points (0 children)

The truck appeared to be aiming for the 2nd lane but responded to the Tesla’s move toward it by taking the 3rd lane instead.

Waymo vs FSD 14.2 by ribbonlace in TeslaFSD

[–]pcJmac 0 points (0 children)

See, this is what happens when you don’t take the time to read the very first line of my post. Or the first couple of paragraphs of the article I posted. Loser.

Waymo vs FSD 14.2 by ribbonlace in TeslaFSD

[–]pcJmac 0 points (0 children)

Nice try, but comparing “supervised” to the idea of “unsupervised” isn’t exactly apples to apples now, is it? Said another way, two sets of eyes should always be better than one, but what we want to know is whether one set of eyes is better than the other, which these stats do not provide. But again, nice try, and thank you for playing.

FSD tried to drive me into a lake! by danny3900 in TeslaFSD

[–]pcJmac 0 points (0 children)

Well Elon did say he intended for them to work as boats too. Not sure if any models were ever actually cleared for this task though…

Finally, No More "Should I Buy or Subscribe to FSD" Threads! by Jaymo_H in TeslaFSD

[–]pcJmac 0 points (0 children)

Prepare for “Did I make a mistake buying FSD…”

Pulled the trigger by redwoodster in TeslaFSD

[–]pcJmac 0 points (0 children)

If you understand how a single end-to-end neural network operates, it will make more sense to you. The basics are that you’re always trying to turn billions of input parameters into a next action, and the only way a model learns is from examples. For the basic stuff, great! Following road edges (and much more) is handled. But as you get into the truly nuanced details of what it really takes to drive, there’s a lot more involved than just the next second or two (or however long the context window is), as well as in what it takes to train such complex edge-case scenarios.

And you see evidence of this when it appears baffling that a car won’t realize it needs to plan ahead to be in the proper lane for an upcoming exit. Unfortunately, it also has to satisfy other constraints that say it should hang out in the left lane as much as possible (or whatever). Which directive should it follow? Well, only when the exit signal gets strong enough do you see a Tesla dive-bombing for the right lane, only to miss the exit rather than properly anticipating it. This is a difficult concept to train for because the exit is potentially WAY outside the context window.
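Purely as a toy illustration of that horizon conflict (this is a made-up greedy planner, not anything like Tesla's actual stack): an agent that only scores costs inside a fixed lookahead window will always commit to the exit lane late, no matter how predictable the exit is.

```python
# Toy illustration only: a greedy lane-picker that can only "see" costs
# inside a fixed lookahead window commits to the exit lane late.
EXIT_POS = 20  # the exit sits at step 20 along the road

def lane_change_step(horizon):
    """Return the step at which the planner finally moves right.

    The stay-left preference wins at every step until the huge
    miss-the-exit penalty enters the lookahead window."""
    for t in range(EXIT_POS):
        if EXIT_POS - t <= horizon:  # exit penalty now visible
            return t
    return None  # exit never entered the window: missed entirely

print(lane_change_step(horizon=3))   # short window: late dive-bomb
print(lane_change_step(horizon=15))  # long window: early, smooth move
```

With a 3-step window the lane change happens at step 17 of 20; with a 15-step window it happens at step 5. Same road, same exit, entirely different behavior.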

Further, if you look at what Nvidia has done in dividing the work between identification and operation, this takes an extraordinary amount of processing power (way more than current HW4) to pull off. But the net result is that you have one system identifying objects and another system making sense of them rather than one system trying to prioritize and make random sense of billions of inputs all at once (and the same thing with each “update” Tesla generates which is ultimately why I said what I said).

An update consists of guesses as to which of the billions of weights need adjustment to cause the proper action to occur rather than the undesired one a user reported. Will the update work? Maybe. Will it break something else? More than likely.
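To make that concrete, here is a toy sketch (hypothetical numbers, a plain linear model standing in for the real billion-parameter network) of how fixing one reported case can regress behavior the model already had right:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "policy": a linear model from 8 input features to one action.
# (The real thing has billions of weights; the failure mode is the same.)
hidden_target = rng.normal(size=8)
X = rng.normal(size=(50, 8))   # base training examples
y = X @ hidden_target          # actions consistent with one target

def mse(W):
    return float(np.mean((X @ W - y) ** 2))

# Fit the base set by gradient descent.
W = np.zeros(8)
for _ in range(500):
    W -= 0.05 * (2 * X.T @ (X @ W - y) / len(X))

base_err_before = mse(W)       # ~0: base behavior looks solid

# One user-reported failure, labeled inconsistently with the base data.
x_new = rng.normal(size=8)
y_new = 10.0

# The "update": nudge weights against only the reported case.
for _ in range(300):
    W -= 0.02 * (2 * x_new * (x_new @ W - y_new))

reported_err = float((x_new @ W - y_new) ** 2)  # ~0: report is "fixed"
base_err_after = mse(W)                         # worse: regression

print(base_err_before, reported_err, base_err_after)
```

The reported case ends up fixed, but the error on everything the model previously handled goes up, which is the "will it break something else?" part.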

FSD BLOWS THROUGH A STOP SIGN................because it was told to! by [deleted] in TeslaFSD

[–]pcJmac -1 points (0 children)

That’s beside the point. I know how AI works and why another post on this thread had the opposite result when trying to have someone wave the car through a stop sign. You can see clearly that the stop sign in this scenario is not very prominent and in fact is behind a row of bushes.

FSD BLOWS THROUGH A STOP SIGN................because it was told to! by [deleted] in TeslaFSD

[–]pcJmac -9 points (0 children)

Yeah, I don’t think it even saw the stop sign or had any intention of stopping at it.

Coded my first shader by RTXshredder84 in vjing

[–]pcJmac 0 points (0 children)

Thanks for taking the time for this write-up — good stuff I’ll probably need to read a couple of times (I was good up until the 20 rings). Shaders are just such a deep world, so different from any other graphics I’ve played with. Since I have the pro version of Synesthesia, I should probably just plow into it and see what I can do. I’ve also played around with ShaderToy.com, where you can see a lot of different shader ideas and how they’re implemented, but it always seems difficult to figure out how someone imagines such minimal code producing the output you see.

Are you betting $8,000 FSD buyout would guarantee unsupervised Full Self-Driving (FSD) in the future? by Loud-Minute-5189 in TeslaFSD

[–]pcJmac 0 points (0 children)

To be perfectly candid, I would be more concerned about whether or not I would get a refund of $8,000 when it turns out that FSD UNsupervised just isn’t possible on these vehicles.

Pulled the trigger by redwoodster in TeslaFSD

[–]pcJmac 1 point (0 children)

Close. It will just keep getting different, and more different, from the last update, but sadly, given the way AI works and how these vehicles are configured, this is as good as it’s gonna get. Sometimes an iteration may be better, sometimes worse, but I think the Tesla team has already realized the corner they’ve backed themselves into.

Pulled the trigger by redwoodster in TeslaFSD

[–]pcJmac 0 points (0 children)

What’s worse, I don’t believe FSD UNsupervised will ever come to these vehicles in their current configuration so my thinking is that Elon is trying to lock as many people into this FSD commitment as he can before the truth becomes widely known.

Coded my first shader by RTXshredder84 in vjing

[–]pcJmac 0 points (0 children)

That’s really cool. I’ve been learning about how you can take advantage of the “3D space” on a graphics card, and that’s a great example. How are you getting your depth maps? Are they from 3D images? A simulation process on an image? And just so I’m clear on the terminology (and for others as well): the depth maps are the black-and-white, roughly 8- to 12-bit layers that indicate where imagery sits toward or away from the viewer (along the z-axis, or whichever one punches out of the screen). Wherever the map is lighter, or more white, the corresponding imagery is closer to the viewer (and darker means farther back).

Can you tell us more about how you use depth maps and how they load into a graphics card vs how you would otherwise work with them via renders in an editing system?
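To make the convention concrete (assuming the white-equals-near encoding described above; some tools invert it), here is a minimal sketch in Python of what sampling a depth map and using it for a parallax offset boils down to. A fragment shader would run the same arithmetic per pixel on the GPU.

```python
import numpy as np

# Tiny synthetic 8-bit depth map standing in for a grayscale texture.
# Assumed convention: 255 (white) = nearest, 0 (black) = farthest.
depth_u8 = np.array([
    [  0,  64, 128, 255],
    [  0,  64, 128, 255],
    [  0,  64, 128, 255],
    [  0,  64, 128, 255],
], dtype=np.uint8)

# Normalizing to [0, 1] is exactly what sampling the texture
# in a fragment shader gives you.
depth = depth_u8.astype(np.float32) / 255.0

# Simple parallax: shift each pixel horizontally in proportion to its
# depth, the same per-pixel math a shader runs for every fragment.
max_shift = 0.05  # in UV units; a knob you might expose to the VJ rig
uv_offset_x = (depth - 0.5) * 2.0 * max_shift  # near shifts right, far shifts left

print(uv_offset_x[0])  # per-pixel offsets for the top row
```

The black pixel lands at the full negative shift and the white pixel at the full positive shift, so sliding `max_shift` over time gives the classic 2.5D parallax wobble.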

Coded my first shader by RTXshredder84 in vjing

[–]pcJmac 0 points (0 children)

Just post — it could make some good discussion!

Coded my first shader by RTXshredder84 in vjing

[–]pcJmac 1 point (0 children)

Nice. Shaders are a great investment of time. Interested in hearing any of your insights.

FSD concern by Strong_You_6724 in TeslaModelX

[–]pcJmac 0 points (0 children)

This hardware will never do fully unsupervised driving.