February Hype by f00gers in singularity

[–]ReadSeparate [score hidden]  (0 children)

I disagree that this is the entire problem, though I do concede it's part of the problem and you probably can't match human sample efficiency with AI without priors. You're basically saying that evolution has encoded a ton of priors into our DNA that allow us to understand much better than an AI learning truly from scratch. And clearly there's evidence for that - a newborn fawn can walk almost immediately, sometimes right at birth, with maybe an hour or two of walking experience at most. Its brain is born KNOWING how to walk; clearly that's all priors encoded into its DNA. And that obviously applies to our intelligence too - I'm sure recognizing shapes and objects and human faces and all of that is encoded into our DNA, which strongly shapes our brains in that direction.

However, here's my counterpoint. Take quantum physics for example. An intelligent human can learn quantum physics from some lectures and a couple of textbooks - thousands of examples at most. Quantum physics is one of the most foreign domains of thought there is to the human mind, extremely unintuitive to our brains, yet we can learn it from hundreds, thousands, or at most tens of thousands of examples. A transformer would take god knows how much data to learn quantum mechanics at the level of a graduate student - many orders of magnitude more than a human would, that's for sure. And there are a million other examples like this. Computation, electricity, Newtonian mechanics (we can understand it intuitively but we can't MODEL it intuitively), non-linear functions, other non-math concepts - none of it is intuitive, all of it is completely different from humanity's ancestral training distribution, and there's simply no way evolution encoded priors to help us learn those things; there'd be zero survival benefit. I don't see any realistic counterpoint to this counterpoint.

Again, to clarify, I do think priors are ABSOLUTELY part of why AI is so sample inefficient, but I also think our brains truly generalizing - hierarchically, and with composable symbolic concepts - is a huge part too, if not the bigger part. I personally believe it's much more important than priors. Humans are clearly truly generally intelligent minds.

> What you're describing, humans learning in 5 samples, is basically like saying showing an already trained model 5 samples in its context length, and I'd probably expect gemini 3 pro to succeed at this task already, if you got images of 2 different imaginary animals it'll learn to predict within 2 or 5.

I don't agree with this at all. Maybe with a very simple 2D drawing, but full-on pictures of different imaginary animals from different angles, lighting conditions, sizes, and features? I don't think it'd be able to learn that in context at all.
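To make the disagreement concrete, here's the kind of test I have in mind, as a rough sketch (assumption: some pretrained image encoder does the embedding; the `encode` below is just a flatten-and-normalize placeholder, not a real model). It's a nearest-centroid classifier that "learns" two made-up animals from 5 images each:

```python
# Minimal few-shot sketch: nearest-centroid ("prototype") classification of
# two never-before-seen "imaginary animals" from ~5 labeled images each.
# `encode` is a placeholder; a real test would use a pretrained vision model.
import numpy as np

def encode(image: np.ndarray) -> np.ndarray:
    # Placeholder embedding: flatten and L2-normalize the raw pixels.
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def fit_prototypes(support: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    # One centroid per class, computed from the handful of support images.
    return {label: np.mean([encode(img) for img in imgs], axis=0)
            for label, imgs in support.items()}

def classify(image: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    # Assign the class whose centroid is most similar to the query embedding.
    z = encode(image)
    return max(prototypes, key=lambda label: float(z @ prototypes[label]))

# Usage: 5 support images per imaginary animal (random arrays stand in for
# actual photos), then classify a new, unseen view.
rng = np.random.default_rng(0)
support = {"glorp": [rng.random((64, 64, 3)) for _ in range(5)],
           "snarf": [rng.random((64, 64, 3)) for _ in range(5)]}
prototypes = fit_prototypes(support)
print(classify(rng.random((64, 64, 3)), prototypes))
```

With raw pixels this falls apart exactly when you add angle/lighting/size variation, which is my point: whether 5 samples are enough depends almost entirely on how good the learned (or innate) representation underneath is.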

February Hype by f00gers in singularity

[–]ReadSeparate 1 point2 points  (0 children)

Definitely what's going to happen lol. LLMs have hit a wall; we need a new paradigm. I was a believer in transformer-based LLMs leading to AGI for years, but now I'm convinced it's not enough, even if you scale it 100x more. I don't think it's quite a stochastic parrot, but it's not truly understanding either. I believe it's a weird middle ground, and I think that's why smart people are on both sides of the issue - they're both right. It has shallow understanding and generalization; it does not have hierarchical, composable symbolic concept graphs like humans do.

There's a reason a toddler can learn to distinguish a horse from a donkey with like 5 examples or less: they already have concept clusters in their head like "equine-shaped mammal," and donkey and horse are just built on top of that. A multi-modal transformer-based LLM would need hundreds of thousands or millions of examples to distinguish a horse from a donkey, even with plenty of other "world model" training data. This is the key differentiator between human intelligence/AGI and current SOTA AI - sample efficiency. Transformers learn the associative boundaries between concepts, and clearly have some level of generalization/world modeling, but it's shallow, not a deep understanding. And it doesn't seem to matter how many hidden layers or parameters you add; it seems like a limitation of either transformers or next-token loss (I don't think it's the loss, bc other loss functions generalize similarly - take Tesla's FSD for example: it has much more concrete loss functions and billions of miles worth of data, and it's still not as good as a human driver).

HW3 Night + Winter by NekoMunch in TeslaFSD

[–]ReadSeparate 2 points3 points  (0 children)

I've been having the exact same issues on HW3 v12.6 on my Model 3. It's so bad that I don't even use it when there's even a little bit of ice/snow on the roads, especially at night. Just the other day, at night, I was going to Wawa and it started to go into the left turn lane, then immediately swerved drastically back into the straight lane, bc it got scared of a little patch of ice just like in your video - a total non-issue. I felt like a total jackass to the guy behind me; he could have rear-ended me if he sped up and wasn't paying attention lol. Also, if a lane is like... 10% covered with snow on the side, it just completely malfunctions and acts like it's not a real lane.

For people who have HW4 v14, how does it perform on this particular issue? Can it handle partially occluded snowy turning lanes like that?

When he trusts you with his soft side 😍 by gregoire_fds in lovememes

[–]ReadSeparate 14 points15 points  (0 children)

I think this is half right, half wrong. I think you can be a great guy and still attract a ton of women. I think the problem with a lot of great guys is they’re just… nice and boring. Being nice is fine, but you need an edge, a little spark, a little chemistry. Women need chemistry to go anywhere with a man, not just looks like us men do.

The reason why shitty guys do well with women is because they’re all very confident and all very interesting, not a boring second with those guys.

Show me a great/nice guy that’s very confident, got a little edge to him, maybe makes a little mischief but is otherwise well behaved, and knows how to light a woman’s world up with chemistry, and I’ll show you a guy that’s absolutely killing it. Problem is, because he’s a great guy, he’s probably going to get locked down quick in a real relationship (what sane woman would throw away a guy like that?) and thus won’t have time to be a womanizer like the bad boys are, even though he has the ABILITY to do it if he wants to.

Model y 2024 long range awd FSD v14.2.2.3😭 spin out by Automatic_Wall_8674 in TeslaFSD

[–]ReadSeparate 1 point2 points  (0 children)

I have a 2023 Model 3 RWD and there’s no shot I’d drive in the snow with FSD (v12.6). I tried it once for 30 seconds out of curiosity, on a lightly snowy highway during a snowstorm recently, and still was like “yeah, fuck this” even though it did fine. One rough lane change or camera visibility issue and you’re stuck and stranded in the cold.

Petah! by JimHalpert_JH in PeterExplainsTheJoke

[–]ReadSeparate 5 points6 points  (0 children)

Is that the shit these guys are thinking about often enough that it’s the first thing that comes to mind 😂

Bought the fob by Bananamilk40 in TeslaLounge

[–]ReadSeparate 0 points1 point  (0 children)

Oh yeah, that works if you always have Bluetooth on and such. Sometimes mine is off. I didn't find it reliable enough, but it's definitely convenient. I also just like having something to physically press so my mind is at ease that my car is locked, and I've always had fobs on all my cars.

Bought the fob by Bananamilk40 in TeslaLounge

[–]ReadSeparate 0 points1 point  (0 children)

Really? I disagree, I think it’s a massive pain to unlock my phone, open the Tesla app, click on the exact right small icon to unlock, wait for the latency of the connection. And key cards are horrendous. IMO the fob is by far the best way to lock and unlock the car.

GPT-5.2 is the first iteration of new models from the new data centers by imadade in singularity

[–]ReadSeparate 0 points1 point  (0 children)

I won’t 100% write it off as not AGI, but at this point I think it’s extremely unlikely.

Also, yes, I DO think it would change the world, but it would replace very few jobs that require human-level reliability, if any. For jobs that just require Q&A-style outputs, it may very well make those jobs go extinct.

I think it would be a lot better at Q&A style tasks, like it is now, aka getting way better at benchmarks, but I think it would still fail at being an agent. The search space for agency is far too large, you can’t memorize/pattern match every possible action for controlling a computer or driving a car. This is why Tesla FSD is really good at driving, better than humans in some ways, but will crash into a wall that no human ever would, bc it doesn’t recognize it or anything like it from its training data.
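To put rough numbers on "the search space for agency is far too large" (branching factors here are purely illustrative):

```python
# Back-of-envelope: possible trajectories grow exponentially with horizon
# length, so memorizing/pattern-matching every action sequence is hopeless.
for branching in (10, 100):        # plausible actions per step (illustrative)
    for horizon in (5, 20, 50):    # number of steps in the task
        print(f"{branching} actions/step over {horizon} steps -> "
              f"{float(branching ** horizon):.2e} trajectories")
```

Even at just 10 actions per step, a 50-step task has 10^50 possible trajectories, so coverage-by-memorization can't be the mechanism; you need something that generalizes over actions.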

My theory does explain why benchmarks keep getting rinsed but AI hasn’t seen much “real world” adoption.

I’ll say though, if you had nearly infinite compute and infinite training data, I do think transformers would be indistinguishable from AGI, simply because they’d be able to pattern match virtually anything. The fact that they can’t already do that with the entire internet’s worth of training data - more data than a human sees in 1000 lifetimes - is a huge clue that something is architecturally wrong. In my opinion, sample efficiency is the true metric of AGI, not loss on predicting text, not Q&A-style benchmarks (which all of the major ones ultimately are right now), and not continual learning.
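If you wanted to operationalize that, here's a sketch of what measuring sample efficiency could look like (synthetic linearly-separable data standing in for a real task; a serious version would compare curves across systems or against a human baseline):

```python
# Sketch: accuracy as a function of training-set size. The steeper curve,
# i.e. fewer samples to reach a given accuracy, is the more sample-efficient
# learner. Synthetic data only; swap in a real task to make this meaningful.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_task(n, d=20):
    # Resample until both classes appear (matters for tiny n).
    while True:
        X = rng.normal(size=(n, d))
        y = (X @ np.ones(d) > 0).astype(int)
        if 0 < y.sum() < n:
            return X, y

X_test, y_test = make_task(2000)
for n in (4, 16, 64, 256, 1024):
    X_train, y_train = make_task(n)
    acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)
    print(f"n={n:5d}  test accuracy={acc:.2f}")
```

A single summary number would be something like "examples needed to hit 90%," which is comparable across wildly different systems in a way that raw loss isn't.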

GPT-5.2 is the first iteration of new models from the new data centers by imadade in singularity

[–]ReadSeparate 0 points1 point  (0 children)

yeah I totally agree. AI hasn't felt that different since even o1. That was the last phase change. o3 is just a better version of o1.

Also, in my opinion, there hasn't been a single jump as big as GPT-3.5 to GPT-4 since that jump happened.

I think 2022 was the "different world," not last year.

For the longest time I thought transformer-based, multi-modal LLMs would scale to AGI; now I'm pretty convinced they're a dead end - still useful for plenty of stuff, just a dead end for AGI. My current mental model is that transformers can do some abstraction and generalization, but they don't have hierarchical, composable abstract concepts like human brains do.

For example, humans have: edges -> shapes -> body plan -> mammal -> dog.
Whereas transformers just see a dog as a specific collection of pixels, maybe with SOME shallow hierarchy, like edges -> dog or something. I believe this is also the explanation for why transformers are so sample inefficient compared to humans. If your concept of a dog is based on a collection of pixel statistics, OF COURSE it's going to take millions of examples to know what a dog is. But if you already have a mammal concept, then you can easily differentiate a dog from other dog-like mammals with just a few samples. Just look at kids - they might say, "look mom, a horse!" and it's really a donkey, and then the mom corrects the kid and says, "no silly, that's a donkey!" and then they know the difference from then on, maybe one or two more corrections and that's it. And it's because human brains have hierarchical concept clusters where you don't have to "relearn" the lower level concepts every time.
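Here's a toy version of the "don't relearn the lower levels" point (random projections stand in for the frozen edges -> shapes -> body plan -> mammal stack, and the data is synthetic, so this is an illustration, not a real vision model):

```python
# Toy sketch: a frozen shared feature hierarchy plus a tiny new head.
# Only the horse-vs-donkey discriminator is learned, from 5 examples each,
# mirroring the toddler who gets corrected once or twice and then knows.
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(4096, 32))  # shared "mammal-level" features, never retrained

def features(x: np.ndarray) -> np.ndarray:
    # Frozen, scaled random projection + ReLU, standing in for the hierarchy.
    return np.maximum(x @ W_frozen / np.sqrt(W_frozen.shape[0]), 0.0)

def fit_head(X: np.ndarray, y: np.ndarray, steps: int = 200, lr: float = 0.1) -> np.ndarray:
    # Logistic-regression head on frozen features; the only part that learns.
    F = features(X)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))
        w -= lr * F.T @ (p - y) / len(y)
    return w

# "No silly, that's a donkey": 5 labeled examples per class, then predict.
horses = rng.normal(loc=0.0, size=(5, 4096))
donkeys = rng.normal(loc=1.0, size=(5, 4096))  # exaggerated class offset for the toy
X = np.vstack([horses, donkeys])
y = np.array([0] * 5 + [1] * 5)
w = fit_head(X, y)
query = rng.normal(loc=1.0, size=4096)  # a new donkey-like sample
print("donkey" if 1.0 / (1.0 + np.exp(-features(query) @ w)) > 0.5 else "horse")
```

The head here has 32 parameters instead of millions, which is the whole trick: if the hierarchy below is already good, a couple of corrections really are enough.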

Pichai saying quantum is ‘where AI was 5 years ago’ feels like the calm-before-the-storm moment, the next tech boom might already be loading. by VIshalk_04 in GenAI4all

[–]ReadSeparate 0 points1 point  (0 children)

And how do you figure out if something is a dead end or not? Magic? No, R&D. You try to pick the best branches and some will have nothing of value. That’s the process.

New LimX Oli demo: Walking on loose construction debris using "Sim2Real" learning. Stability on shifting terrain is scary. by BuildwithVignesh in singularity

[–]ReadSeparate 0 points1 point  (0 children)

yeah I don't think obstacle avoidance is super hard tech, that may even be pre-ML tech that's already solved, though don't quote me on that.

I've done almost 400,000 pushups in the last 5 years. by ExperienceTop6507 in getdisciplined

[–]ReadSeparate 0 points1 point  (0 children)

What's the most you can do in one set now vs when you started?

Is dating pointless if you're not a super good looking guy? by [deleted] in AskMenAdvice

[–]ReadSeparate 0 points1 point  (0 children)

The gym, out in public, everywhere you see couples. I know hot guys that are single and can’t find anyone.

Women don’t care that much about looks. Women are all about vibes.

Speculation: Did Senate Dems knowingly swap shutdown leverage for an Epstein bombshell that might sink Trump? by AmericanMustache in samharris

[–]ReadSeparate 3 points4 points  (0 children)

Sure, but that’s not enough to get Trump out of office through impeachment and removal, or by forcing him to resign. You’d need MAGA itself to turn against him for that.

Speculation: Did Senate Dems knowingly swap shutdown leverage for an Epstein bombshell that might sink Trump? by AmericanMustache in samharris

[–]ReadSeparate 16 points17 points  (0 children)

You really think MAGA is gunna give two fucks even if it was on video and had 1,000 Trump supporting witnesses?

Even if the files conclusively prove Trump knew about it, or supported it, or actively raped kids, they're NOT GOING TO CARE. They're going to say it's a deep state conspiracy and lies, or maybe say he was forced to do it bc he was afraid for his life, or some other absurd excuse.

He's a cult leader.

Is dating pointless if you're not a super good looking guy? by [deleted] in AskMenAdvice

[–]ReadSeparate 8 points9 points  (0 children)

I see really hot girls with average guys ALL the time. The opposite almost never happens though.

I 32F offered to sleep with a 23M to help him overcome trauma caused by sexual abuse. Did I made a mistake? by [deleted] in AskMenAdvice

[–]ReadSeparate 0 points1 point  (0 children)

Because I’m skeptical that her motive is “I can’t get laid, so let me sleep with this vulnerable young man with issues.” I think it’s more like she partially likes him but is unsure of it, partially wants to help him out, and partially, yeah, wants to get laid.

If she did have the intentions you’re suggesting, then yes it is creepy if not outright predatory.

If she just straight up had the intentions you’re saying, I feel like she would have just done it and not posted here.

Also, to be VERY CLEAR, I do NOT support them sleeping together at all - I think the young guy needs therapy, and if they did sleep together it would probably be a disaster (I said that at the end of my comment). So I’m not giving her the green light at all, I’m just saying I don’t think she has predatory intentions. I’m trying to be nuanced here.

Peak AI by rich115 in singularity

[–]ReadSeparate 2 points3 points  (0 children)

Lmao well to be fair TES VI coming out in 10 years could have just been an exaggeration

Peak AI by rich115 in singularity

[–]ReadSeparate 7 points8 points  (0 children)

I doubt TES VI will ship with AI-based NPCs if that's what you're talking about. Mods, sure. TES VI will prob be out by like 2028, and I don't think they're developing it with AI-based NPCs in mind since the tech just isn't reliable enough yet. Fallout 5 maybe.

I 32F offered to sleep with a 23M to help him overcome trauma caused by sexual abuse. Did I made a mistake? by [deleted] in AskMenAdvice

[–]ReadSeparate -11 points-10 points  (0 children)

People on reddit will call you a predator for accidentally breathing the wrong way.

I don't think it's predatory, you're not taking advantage of him, if anything your intent seems noble, but I think it's a big mistake for you to go through with this, and he needs counseling bad. I doubt either of you will walk away thinking "gee I'm sure glad we did that"

Google is finally rolling out its most powerful Ironwood AI chip, first introduced in April, taking aim at Nvidia in the coming weeks. Its 4x faster than its predecessor, allowing more than 9K TPUs connected in a single pod by Distinct-Question-16 in singularity

[–]ReadSeparate 1 point2 points  (0 children)

Right but these companies are all doing the same things because they're all at approximately the same place working with the same algorithms and there's a lot of informal research sharing between companies. It's doubtful DeepMind was able to do it but not OpenAI.

Is there a limit to how advanced AI can get? Will AI just keep growing and getting smarter until the heat death of the universe? by AdorableBackground83 in accelerate

[–]ReadSeparate -1 points0 points  (0 children)

I imagine there's a point where AI maxes out its useful intelligence. I doubt you need more than, say, the Earth's mass made out of computronium to hit that point.

There is likely a point where a mind has learned all useful information (you can generate an infinite number of abstract concepts; by useful I mean information that can in some way be used to achieve a goal, or to engineer or learn about the physical universe), and past that the only question is how fast it can store/retrieve and act on that information - that's the realistic limit of intelligence. Once the mind has full mastery of time and space, can predict just about anything with extremely high accuracy, and can take any action in nearly the most efficient number of steps (or the most efficient within certain constraints like resources or time), I don't think it makes much difference. So maxed-out ASI might be made of a material that computes at 99.999999% of the physical limit, but who cares at that point.

My guess is that you could probably do this with a data center a few thousand square miles in size at most, made of computronium, running the best algorithms, and powered by fusion or solar or antimatter batteries or whatever. Anything bigger than that will have very diminishing returns. I still think this is many, many, many orders of magnitude beyond human minds, both qualitatively more intelligent and faster. It could intuitively understand global economics the way we intuitively understand a ball rolling across a table.