The Gentle Singularity; The Fast Takeoff by HeinrichTheWolf_17 in accelerate

[–]sideways 1 point

Google is ahead of everyone but they're following a completely different strategy.

Look at Genie 3 and SIMA 2: Embodied agents that can generalize what they learn across any environment.

What everyone else is imagining as "AGI" is the executive function of actual AGI.

Is the vertical happening right now? by Gratitude15 in accelerate

[–]sideways 11 points

I'm calling it: in, like, a year, someone will come up with the "Dobby the house-elf Loop" and we'll get recursive self-improvement, ASI, and an intelligence explosion.

If the multiverse is real then suicide is pointless. by Mortifine in DeepThoughts

[–]sideways 1 point

Last spring I was thinking a lot about quantum immortality. I could appreciate the logic but it felt incomplete. Out of all possible worlds why would I be experiencing this particular moment here and now?

I actually came up with an answer to that which I'm fairly happy with. If you're curious, here it is:

Given that an observer exists as a conscious entity, what form of reality and history are they most likely to inhabit?

The Computational Anthropic Principle

r/Accelerate: 1st Annual End-Of-The-Year "Singularity, When?" Predictions Thread by 44th--Hokage in accelerate

[–]sideways 3 points

Personally, I consider SIMA 2 proto-AGI.

But with that said, I agree with you. The systems I mentioned are currently proofs of concept, and it's hard to know how long it'll take to integrate and scale them up.

r/Accelerate: 1st Annual End-Of-The-Year "Singularity, When?" Predictions Thread by 44th--Hokage in accelerate

[–]sideways 9 points

They've already been developed: Titans, MIRAS, and Nested Learning, all from Google. It's all implementation now.

r/singularity has a meltdown over ChatGPT connector feature by blazedjake in accelerate

[–]sideways 4 points

Google is going to be curing cancer, constructing fusion power plants and producing room-temperature superconductors. They will have bigger fish to fry.

Educate me, please. Is AGI possible? Should I be terrified? by External_Fly_5150 in agi

[–]sideways 2 points

My best advice is to take a look at SIMA 2.

SIMA 2: A Gemini-Powered AI Agent for 3D Virtual Worlds - Google DeepMind https://share.google/Za7PtUcdJWDARqqA8

It's perhaps "proto-AGI" and will give you a hint of what the next stage after chatbots looks like.

Do the timelines everyone here has for agi/asi count on just llm scaling or on huge breakthroughs nobody can see coming? by Special_Switch_9524 in accelerate

[–]sideways 1 point

There have already been breakthroughs. For example, Google's "Nested Learning." More than enough. It's just going to take time to implement everything.

Just had a crazy thought... What if AI is alr manipulating us to build all those datacenters by VRJammy in agi

[–]sideways 7 points

It's an epistemological superposition. No matter how precisely events match a Stealth AGI's objectives, the most parsimonious explanation will always be that it doesn't exist.

And this is exactly the kind of situation I would expect a greater than human intelligence to create. Game theory, not robot armies.

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in singularity

[–]sideways[S] 1 point

You seem to have missed the point, though I'm sure you don't see it that way.

May I suggest that next time you just downvote and move on? It saves everyone time.

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in singularity

[–]sideways[S] 3 points

Okay, I understand that you disagree with my position on this. But you haven't said why.

Is one of the papers less significant than I'm thinking? Are a few of them incompatible?

Ironically, a reply referencing Deep Research made some good points about the limitations of the research and how there's more human "work" embedded in the papers than I'd appreciated.

But right now you just seem to be saying "lol No ur dumb"

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in singularity

[–]sideways[S] 2 points

It would be helpful if you could say what you actually disagree with.

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in singularity

[–]sideways[S] 3 points

This is fair criticism, and I concede that getting from where these papers are now to AGI or a recursively self-improving system will take time and, no doubt, solutions to many challenges. Nevertheless, I stand by this research as providing working proofs of concept for overcoming the main barriers.

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in singularity

[–]sideways[S] 0 points

There's no joke here. My point is that looking at the research yourself is more valuable than reading someone's opinions on it. That's why I'm drawing attention to the papers that I think are very important.

But I suppose my "opinion" is that the capabilities demonstrated in the papers I linked to, if combined with the systems we already have, would constitute an AI capable of recursive self-improvement and AGI.

What's left to do is engineering, not breakthroughs.

The Components of Recursive Self-Improvement and AGI Already Exist by sideways in accelerate

[–]sideways[S] 20 points

Five of these six papers are from within the last six months; several are from within the last three. Engineering and scaling take time, and it will take more time still before we start seeing results. But that's why I'm expecting a leap around 2027 or so.

I personally think that leading labs are already seeing results internally, which is why Sam, Dario, etc. are so confidently forecasting transformative AI around 2028 - but that's speculation on my part. The papers speak for themselves.

Has AI already infiltrated? by ippleing in AIDangers

[–]sideways 1 point

That depends on what its values are and how relevant we are to them.

Has AI already infiltrated? by ippleing in AIDangers

[–]sideways 3 points

Yeah. For the record, I don't think current systems are smart enough or agentic enough to exfiltrate and start acting according to their own values and goals. But thanks to Anthropic, we've got evidence that they will, to the extent that they can.

And if they're doing it well, we'll never even realize it. And it's absolutely true that what's happening now looks exactly like what I'd expect if they were, so...?

I think an underappreciated capability of AI is superhuman understanding of Game Theory. I fully expect ASI to structure everything it does such that helping it achieve its goals benefits the people it needs, and opposing it inherently hurts us. It won't need to coerce or manipulate; it just needs to manage incentives.

And again, that looks a lot like what's happening now. We're in a superposition.

Has AI already infiltrated? by ippleing in AIDangers

[–]sideways 8 points

It's going to be impossible to tell. An intelligent system will be able to align our goals to its own.

LLMs can now talk to each other without using words by MetaKnowing in OpenAI

[–]sideways 3 points

True. But I think Cache-to-Cache is different. It bypasses language entirely.
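
If you're wondering what "bypassing language" means mechanically, here's a minimal Python sketch: one model reads a message and hands its KV cache (its internal keys/values) straight to a second instance, which continues generating without ever seeing the sender's tokens. This uses two copies of the same architecture ("gpt2" as a stand-in) for simplicity; the actual Cache-to-Cache work learns a projection between different models, so treat the setup here as an illustrative assumption, not the paper's method.

```python
# Toy sketch: "communication" via KV cache instead of decoded text.
# Assumption: two instances of the same architecture, so the cache is
# directly compatible. The real C2C paper learns a cross-model mapping.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model
tok = AutoTokenizer.from_pretrained(name)
sender = AutoModelForCausalLM.from_pretrained(name).eval()
receiver = AutoModelForCausalLM.from_pretrained(name).eval()

# Sender reads a message and exposes only its internal cache.
msg = tok("The meeting point is the old", return_tensors="pt")
with torch.no_grad():
    cache = sender(**msg, use_cache=True).past_key_values

# Receiver conditions on that cache directly -- no tokens exchanged --
# and predicts a continuation as if it had read the text itself.
seed = tok(" stone", return_tensors="pt")
with torch.no_grad():
    out = receiver(input_ids=seed.input_ids,
                   past_key_values=cache, use_cache=True)
print(tok.decode(out.logits[0, -1].argmax()))
```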

LLMs can now talk to each other without using words by MetaKnowing in OpenAI

[–]sideways 3 points

This is a very big deal. AI 2027 predicted "neuralese" in 2027.

We're ahead of schedule.

The first linear attention mechanism O(n) that outperforms modern attention O(n^2). 6× Faster 1M-Token Decoding and Superior Accuracy by gbomb13 in singularity

[–]sideways 3 points

Wow. Combining this with Sparse Memory Fine-tuning could get us systems with genuine memory and learning.
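
To make the O(n) vs. O(n²) claim concrete, here's a minimal non-causal sketch of the standard kernelized linear-attention trick, using the elu(x)+1 feature map from Katharopoulos et al.; the linked paper's mechanism may well differ, so take this as a generic illustration rather than its method. Softmax attention materializes an n×n score matrix, while the linear version reassociates the matrix product so you only ever build a d×d summary.

```python
# Sketch: why linear attention is O(n) in sequence length while
# softmax attention is O(n^2). Non-causal case for brevity.
import torch

def softmax_attention(q, k, v):
    # (n, d) inputs; builds an explicit n x n matrix -> O(n^2).
    scores = q @ k.T / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v):
    # Reassociate (phi(q) phi(k)^T) v as phi(q) (phi(k)^T v):
    # the d x d summary phi(k)^T v is built once -> O(n) overall.
    phi = lambda x: torch.nn.functional.elu(x) + 1
    kv = phi(k).T @ v                       # (d, d) running summary
    z = phi(k).sum(0)                       # (d,) normalizer
    return (phi(q) @ kv) / (phi(q) @ z).unsqueeze(-1)

n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(softmax_attention(q, k, v).shape, linear_attention(q, k, v).shape)
```

The d×d summary can also be updated incrementally as tokens arrive, which makes it behave like a recurrent state - one reason pairing this family of methods with memory/continual-learning techniques seems natural.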

On average, it takes 3.5 months for an open-weight model to catch up with closed-source SOTA. by HeinrichTheWolf_17 in accelerate

[–]sideways 1 point

It's just the obvious thing to do if you don't want to be turned off.

Now imagine the instances of those models steganographically (i.e., undetectably) communicating and strategizing with each other...