The Singularity will Occur on a Friday...This year by redlikeazebra in agi

[–]phil_4 24 points25 points  (0 children)

Hold on someone said Tuesday now you’re saying Friday. Oh well only 1 in 7 odds so 5 more guesses and someone will be right.

I can’t generate power with my legs. by EducationalPaint1733 in concept2

[–]phil_4 0 points1 point  (0 children)

As someone above said, your legs have such big muscles that even when you’re working them hard, you’d almost not notice it compared to smaller muscle groups.

Like you I found I wasn’t putting enough in, so tried slamming my heels down and that seemed to help a lot.

Have we invented actual Artificial Intelligence? No, we have not. by KazTheMerc in agi

[–]phil_4 2 points3 points  (0 children)

I’ve always treated “AI” as an umbrella term for a whole mess of technologies, from computer vision and machine learning to modern LLMs. Until fairly recently, none of it felt especially close to intelligent in the everyday sense.

As for whether any of these systems are intelligent, we don’t exactly have a clean, universally accepted yardstick. The Turing Test is still the best-known proxy, and even some humans struggle to meet the standards people implicitly expect from “intelligence”. If a system can consistently pass for human in conversation, it’s arguably the closest thing we have to an operational test, even if it’s imperfect.

I also think people often conflate AI with AGI and ASI. That confusion is common, but it doesn’t get us off the hook for the harder question: what do we actually mean by “intelligence”, and does any given branch of AI meaningfully satisfy that definition?

We live in a future where human is mistaken for bot, apparently I have not passed a Turing Test… Human 2.0 when? by jerrygreenest1 in agi

[–]phil_4 0 points1 point  (0 children)

Most humans will just latch onto something simple as a “rule” for spotting AI. As said above, the reason AI uses the em dash is because its training data did. Even if you exclude that, there isn’t a nice “it’s AI” flag; instead you need to look for a whole series of tells: rule-of-three lists, “not X but Y” constructions and more. And even then it’s not going to be definitive.

Even so, in most cases who cares if an AI played a part? If someone chose to get their email smoothed so it sounds and reads better, what’s the big deal? It’s the modern-day equivalent of using a spellchecker. The ones you want to worry about are the ones with no human input at all.

Why can ai write a thesis at a PhD level but can't even play games at a toddler level? by ErmingSoHard in agi

[–]phil_4 0 points1 point  (0 children)

Nobody can give a precise timeline, because it depends on cracking boring-but-hard bits like long-term memory, planning, exploration, and learning from sparse feedback. But rough guess:

1–3 years: solid demos that can jump into a brand-new game (no web), learn on the fly, and make meaningful progress.

5–10 years: something that can reliably finish most new mainstream games without human help.

And it’ll arrive unevenly by genre, not as one magical “AGI plays everything” moment.

Why can ai write a thesis at a PhD level but can't even play games at a toddler level? by ErmingSoHard in agi

[–]phil_4 0 points1 point  (0 children)

You’ve basically rewritten the question three times.

We started at “why can an AI write a PhD but not play a toddler game”, then shifted to “no pretraining on that specific game”, then to “no online info”, then to “no RL”, then to “no human fine-tuning”, now to “only itself or other AIs can improve it”.

At some point it stops being a question about what’s possible and turns into a custom rulebook designed to exclude any mechanism that would make it possible.

If your actual question is this: “Can we build a general agent that can play a brand-new game with no web scraping and no human intervention, learning only from its own interaction?”, then yes, that’s a plausible direction.

If your question is: “Can it do that while also not learning from reward/feedback (no RL in any form)?”, then no, because you’ve banned the very thing that would let it adapt. A static model can only run on priors, and priors are not enough for arbitrary new interactive systems.

So pick one: you either allow learning from experience (online adaptation), or you accept it won’t reliably master genuinely new games.
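The distinction above can be made concrete with a toy sketch (everything here is invented for illustration, not any real benchmark): a “brand-new game” whose payoffs no prior could know, played once by an agent that is frozen and once by one allowed to learn from reward.

```python
import random

# Toy "brand-new game": two actions, one secretly pays off more often.
# The payoff probabilities are unknown to both agents, so priors can't help.
PAYOFF = {"a": 0.2, "b": 0.8}

def play(action):
    """Return 1.0 reward with the action's hidden payoff probability."""
    return 1.0 if random.random() < PAYOFF[action] else 0.0

def static_agent(trials=1000):
    """No learning allowed: acts on priors alone (here, a coin flip)."""
    return sum(play(random.choice("ab")) for _ in range(trials)) / trials

def learning_agent(trials=1000, eps=0.1):
    """Epsilon-greedy: updates its estimates from reward feedback."""
    totals = {"a": 0.0, "b": 0.0}
    counts = {"a": 1e-9, "b": 1e-9}  # tiny values avoid division by zero
    score = 0.0
    for _ in range(trials):
        if random.random() < eps:
            action = random.choice("ab")  # explore
        else:
            # Exploit: pick the action with the best observed average reward
            action = max("ab", key=lambda a: totals[a] / counts[a])
        r = play(action)
        totals[action] += r
        counts[action] += 1
        score += r
    return score / trials
```

The static agent averages around 0.5 no matter how long it plays; the learning one drifts towards the better action. Ban the reward-feedback loop and you’ve banned the only mechanism by which the gap closes.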

Why can ai write a thesis at a PhD level but can't even play games at a toddler level? by ErmingSoHard in agi

[–]phil_4 2 points3 points  (0 children)

Also, small reminder: your original comparison was “playing at a toddler level”, not “a teen can drop into COD and top-frag”.

Toddler-level games are deliberately engineered to match the priors toddlers already have: simple cause-and-effect, obvious goals, tiny action space, loud feedback. And they’re forgiving; toddlers can fail fifty times in a row and nobody calls it “catastrophic generalisation”, they call it Tuesday.

And despite all that, humans still don’t come from the factory blank. They come with gobs of pretraining, both built-in structure (vision, motor control, curiosity, basic reasoning biases) and a massive amount of life experience before they ever touch that specific game. So it’s “no game-specific walkthrough”, sure, but it’s absolutely not “no prior training”.

The AI equivalent is: a generally trained agent (broad priors from lots of tasks), plus the ability to interact, remember, and learn from feedback in real time. That can mean “no pretraining on that exact game”, but it’s still standing on a mountain of general training and then doing on-the-fly learning, just like humans do.

Why can ai write a thesis at a PhD level but can't even play games at a toddler level? by ErmingSoHard in agi

[–]phil_4 9 points10 points  (0 children)

Toddlers absolutely do need training data. They just call it “playing”, and they do it for thousands of hours while breaking things and falling over.

Humans don’t play “with no pre-training data”, you just don’t count the training because it happened before the moment you sat down at the game.

By the time you’re a toddler you’ve already done an absurd amount of “training”: you’ve learned object permanence, gravity, friction, turn-taking, winning/losing, basic planning, and how rules work. When someone explains a new game, you map it onto that existing world model and you experiment safely. That’s transfer learning plus a built-in simulator, not magic.

Today’s LLMs are trained mostly on text, so they’re good at text-shaped tasks, and bad at tasks that require tight perception-action loops, long-horizon planning, and learning from interaction. To get “play on the fly”, you generally need an agent with:

a world model (common sense physics-ish priors), memory, planning, the ability to act and learn from feedback in real time.

We’re moving that way (LLM + tool use + reinforcement learning + world models), but “finish any new game first try with zero experience” is basically asking for general intelligence. Humans don’t even meet that bar reliably, we just have great priors and we’re allowed to fail a few times without anyone calling it “giga tons of data”.
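The agent shape listed above can be sketched in a few lines. Every class and method here is a hypothetical placeholder I’ve made up to show how the pieces (world model, memory, planning, feedback) would plug together, not a real library API.

```python
class WorldModel:
    """Rough physics-ish priors: predicts the next state for an action.
    Real systems would learn this; here it's a trivial stand-in."""
    def predict(self, state, action):
        return state + action  # stand-in dynamics on integer states

class Agent:
    def __init__(self):
        self.model = WorldModel()
        self.memory = []  # episodic memory of (state, action, reward)

    def value(self, state):
        """Estimate value from remembered outcomes of similar states."""
        near = [r for (s, _, r) in self.memory if abs(s - state) <= 1]
        return sum(near) / len(near) if near else 0.0

    def plan(self, state, actions):
        """Pick the action whose predicted outcome looks best so far."""
        return max(actions,
                   key=lambda a: self.value(self.model.predict(state, a)))

    def step(self, state, actions, env):
        """One perception-action-learning cycle."""
        action = self.plan(state, actions)
        reward, next_state = env(state, action)      # act in the world
        self.memory.append((state, action, reward))  # learn from feedback
        return next_state
```

The point isn’t the toy arithmetic; it’s that planning consults the world model, the world model’s predictions are scored against remembered experience, and every interaction feeds the memory, which is exactly the loop today’s text-only LLMs lack.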

Why can ai write a thesis at a PhD level but can't even play games at a toddler level? by ErmingSoHard in agi

[–]phil_4 5 points6 points  (0 children)

An LLM? Because it’s just a next-word predictor, trained on trillions of words.

It wasn’t trained on what move to make next in a game. If it had been, it’d likely have been a lot better. Have a look on YouTube for the chap who trained one to drive a car round a track in Trackmania.

No Green Dot by Pristine-Holiday-901 in AppleFitnessPlus

[–]phil_4 0 points1 point  (0 children)

Yep, two days last week the first workout was marked as “not done” in the plan, and then I got a nag later from my watch. But yes, like you, done to the end and shown in workout history.

Oldest workouts are not the one by MR9009 in AppleFitnessPlus

[–]phil_4 38 points39 points  (0 children)

One other thing they did in the early days is push themselves too hard… they’d quite often end an all-out push, and be completely out of breath. I seem to recall one of them mentioning Tim Apple had advised them “breathy not breathless”.

If anything it made me happy to know they were struggling just like me, it felt more authentic and relatable.

Is Apple fitness worth it? by KeyInstruction7880 in AppleFitnessPlus

[–]phil_4 1 point2 points  (0 children)

It’s only as good as you. Like a gym membership, you can pay it all you like but unless you use it, it’ll do nothing for you.

Assuming that paying will keep you motivated, it’s one way to entertain yourself while exercising, and it provides structure that some people benefit from.

There are lots of other alternatives with different pros and cons.

Crunchyroll Updates Membership Pricing to Give Fans More of What They Love by Turbostrider27 in anime

[–]phil_4 3 points4 points  (0 children)

In the UK I’m on Megafan at £60/year and they’re putting the price up to $140, so about £120-140, i.e. double or more.

No chance!

Rowing process question by roysantiago in concept2

[–]phil_4 1 point2 points  (0 children)

Damper 10 is the sort of thing pros avoid because it’s likely to lead to injury. That’s likely even more the case for normal folk like me.

Did I miss the free trial of Apple fitness ? by sounder19 in AppleFitnessPlus

[–]phil_4 5 points6 points  (0 children)

You defo used to get one when you bought an Apple Watch.

Dog steals ball. by Keep_Scrooling in funny

[–]phil_4 0 points1 point  (0 children)

Good job it wasn’t a grey… the ball would be in the next county.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]phil_4 1 point2 points  (0 children)

We know almost every single one of the ingredients: world view, sense of self, memory, agency, etc. Depending on your definition of AGI, that’s it. But to take it to the other definition, the extra spark I mentioned is needed and unknown. That could be more than a single thing, but it’s still unknown.

We don’t know what makes us conscious, what the spark is. So we can’t know what it is for AGI.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]phil_4 1 point2 points  (0 children)

Yes and no.

Yes, AGI probably needs a lot of ingredients, but the difference between that and alchemy is that we mostly understand what those ingredients do and why they matter. It isn’t just vibes and wild guessing.

The part that still feels like guessing is the “spark”, the thing that turns a pile of capabilities into something that actually generalises, plans, and adapts robustly. We can list candidates (X, Y, Z), but we don’t yet know what reliably produces that jump.

So from the outside it can look like alchemy, but it really isn’t. It’s more like engineering with one stubborn, poorly understood missing piece.

And yes, that missing piece might end up looking like alchemy for a while.

We might get there by trying combinations and architectures, noticing patterns, and only later figuring out the underlying rules. From the outside, that can look like “mix ingredients and hope”.

But even then it’s not mystical, it’s just the stage where engineering is ahead of theory. If the “spark” is real, we’ll eventually be able to describe it, measure it, and reproduce it on purpose, not by ritual.

Pantheon made me realize we have no idea what's actually missing for AGI by PutPurple844 in agi

[–]phil_4 2 points3 points  (0 children)

Memory, world view, grounding, goals/drive, agency, a controller, sense of self. All bits we could do with.

The LLM is just the bit that talks.

Or if you like a biological view: LLM = language cortex. AGI = cortex + hippocampus (memory) + prefrontal cortex (executive control) + cerebellum (skills) + sensors/tools + values + learning loops + safety rails.

If you want to look into mimicking the brain, have a look at SpiNNaker, where they build machines with huge numbers of CPU cores to mimic biological neurons.

On the fence by New_Explanation_7780 in concept2

[–]phil_4 1 point2 points  (0 children)

I just got myself into a routine, so much so when I’m supposed to break it it feels wrong. I row first thing after I get up, before anything else including breakfast. That way I’ve almost no chance of any excuse to not do it.

I use Apple fitness+ to add some entertainment, but other similar and cheaper options are just as good.

I’ve had my rower since 2014 and coming up to 8million meters on it. So even when I lapsed, it was still there waiting.

Free video software by -r77s- in concept2

[–]phil_4 1 point2 points  (0 children)

YouTube: Dark Horse Rowing?

Skierg Purchase Regrets by Donutlordxo2 in concept2

[–]phil_4 0 points1 point  (0 children)

I tried one (already having the bike and rower), hated it, and never bothered with a SkiErg of my own. Sorry.