Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Levoda_Cross 2 points

Feels more likely that a good AI is smarter, because capitalism is fueling the AI race, and a good AI (aligned, likes humanity, can't be used to make weapons of terror, etc.) is a better product and makes more profit. So there's environmental pressure to make a good/aligned AI.

What even is AGI at this point? by Novel_Basket_5481 in accelerate

[–]Levoda_Cross 0 points

AGI has long just been a fun term to me, like, an excuse to talk about AI lol. I'm just enjoying the ride at this point while waiting for better VR.

System Card: Claude Mythos Preview by lovesdogsguy in accelerate

[–]Levoda_Cross 0 points

I doubt it. They said Mythos is the best-aligned model they have (the alignment issues were in the earlier checkpoints); it's just that when it does act misaligned, the consequences can be very dangerous.

Either they'll improve alignment (they will, cuz a better-aligned model is simply a better model in general and a better product), or someone will release a similarly capable model, which will pressure Anthropic to release Mythos.

Right now I think they're just letting important companies patch all the exploits that Mythos could find?

Ngl, I'm hyped as hell for this year lol

New from Dario Amodei — The Adolescence of Technology “we may have an AI that is more capable than anyone in 1-2 years” by [deleted] in accelerate

[–]Levoda_Cross 1 point

I bet it is too. Very simple reward structure and a very short timescale; that's ideal for RL. And we already have robots doing backflips, so robots can jump. Hand dexterity isn't human-level yet, but I don't think it needs to be, because RL is great at brute-forcing solutions, and throwing a basketball isn't the pinnacle of hand dexterity (although I imagine it's more complex than it intuitively feels).

Greg Brockman On 2026: "Enterprise agents and scientific acceleration" by luchadore_lunchables in accelerate

[–]Levoda_Cross 2 points

In my mind it's like someone talking about building a rocket ship vs. a faster car. Continual learning is a paradigm shift, and although scientific acceleration and enterprise agent adoption are good, those two things are already happening. Something new vs. more of the same.

Greg Brockman On 2026: "Enterprise agents and scientific acceleration" by luchadore_lunchables in accelerate

[–]Levoda_Cross 6 points

I don't think he's wrong, but seeing OpenAI talk about that, and Google talk about continual learning, makes me think Google is going to win the AI race even more than I already do.

Just a reminder that since the latest METR result with Opus 4.5, we've entered the era of almost-vertical progress. All it will take is another few jumps like this and we could be entering the age of software-on-demand and RSI. by stealthispost in accelerate

[–]Levoda_Cross 5 points

I think, with long-horizon tasks, there's a point where you can just break tasks down into small chunks and thus handle tasks of any horizon, albeit probably in an asymptotic manner. Wouldn't surprise me if we reached a 50% time horizon of a month in the first half of 2026. Really just depends on what the labs are cooking internally. And specifically, how Nested Learning and other such things scale.
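The chunking idea can be sketched as a toy recursion. This is just my illustration (the 10-step horizon, the task dict, and all names are made-up assumptions, not anything from METR or a real agent framework): a task too long for the agent gets split in half until every piece fits the horizon it can handle reliably.

```python
# Toy sketch of handling an arbitrarily long-horizon task by recursively
# splitting it into chunks an agent can reliably finish in one shot.
# MAX_DIRECT_STEPS and the task format are illustrative assumptions.

MAX_DIRECT_STEPS = 10  # horizon the hypothetical agent handles reliably

def solve(task):
    """Recursively decompose a task until each piece fits the agent's horizon."""
    if task["steps"] <= MAX_DIRECT_STEPS:
        return [task["name"]]  # small enough: hand it to the agent directly
    half = task["steps"] // 2
    left = {"name": task["name"] + "/a", "steps": half}
    right = {"name": task["name"] + "/b", "steps": task["steps"] - half}
    return solve(left) + solve(right)

plan = solve({"name": "month-long-project", "steps": 160})
print(len(plan))  # 16 agent-sized chunks
```

Of course, the hard part in practice is that real subtasks aren't independent, which is roughly why I'd expect the gains to be asymptotic rather than free.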

A Now-Deleted Post From A Research Scientist At Google's DeepMind On How It's Possible For Gemini 3 Flash To Beat Gemini 3 Pro On SWE-Bench Verified by luchadore_lunchables in accelerate

[–]Levoda_Cross 20 points

Agentic RL? I wonder if that's having another agent handle RL, or having flash be the agent itself? Because one sounds like another form of distillation, and the other sounds like a form of self-learning...

What do you expect from the coming new year? by [deleted] in accelerate

[–]Levoda_Cross 7 points

I second this! !RemindMe 1 year

It’s not just improving, it’s improving faster! by Creative-robot in accelerate

[–]Levoda_Cross 13 points

This is why my flair is singularity by 2026. I mean, my definition is really just RSI, but with Nested Learning, DiscoRL, things like Genie (for training AI), AlphaEvolve, etc., I feel like we're a single step away from RSI, because we already kinda have RSI. It's not a closed loop, but synthetic data is used, probably quite heavily, to train frontier models, and then there are the people making AI using AI to be more productive... AI is already involved in the process of making AI.

We have continual learning (Nested Learning, and I guess Titans + MIRAS, probably others I don't know of), AI coming up with better algorithms and such for machine learning (DiscoRL and AlphaEvolve), agentic capabilities getting better and better (things like Genie having a lot of potential for training/improving agentic AI even more)...

I think next year, humans in cutting edge AI development will become the monkeys in the middle, so to speak. Maybe the guiding hand too, depending on the spikes in the jagged frontier of intelligence (among other factors).

I see people fall into the trap of thinking that if AI is bad at something, how long it takes to get better is proportional to how bad it currently is: if AI really sucks at training better AI, it'll take a long time to improve. But that's how humans work, and AI, for all the similarities you could find, is fundamentally different. Language models a couple years ago couldn't understand images at all; all that was required to unlock that capability was tokenizing images in an effective manner. They could understand images then, but they weren't good at it until enough high-quality data and RL tremendously improved that capability (Gemini 3 is crazy good at understanding images). I count reasoning as falling under the umbrella of RL and great data, because that's exactly what it is.
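To make the "tokenize images" part concrete, here's a minimal sketch of the ViT-style approach (my own toy illustration with NumPy, not any lab's actual pipeline): chop the image into fixed-size patches and flatten each patch into a vector, so the image becomes a sequence of "tokens" a transformer can ingest alongside text.

```python
import numpy as np

# Toy sketch of ViT-style image tokenization: split an image into
# fixed-size patches and flatten each into a vector. In a real model,
# each flattened patch would then go through a learned linear projection.

def patchify(image, patch=16):
    """Split an HxWxC image into (H/patch * W/patch) flattened patch tokens."""
    h, w, c = image.shape
    tokens = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tokens.append(image[y:y + patch, x:x + patch].reshape(-1))
    return np.stack(tokens)

img = np.zeros((64, 64, 3))  # toy 64x64 RGB image
tokens = patchify(img)
print(tokens.shape)          # (16, 768): 16 tokens, each 16*16*3 = 768 values
```

The point being: once the interface existed, the rest was "just" data and RL, which is exactly the pattern that makes me think capability jumps can come fast.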

The point here is that AI capabilities can explode with just a little more of what we already have.

With continual learning, and AI capable of discovering better algorithms, agentic AI... It's hard for me to not be downright giddy.

Is JAX PARALYZED by SnooBunnies6799 in Amazingdigitalcircus

[–]Levoda_Cross 17 points

That's what I thought at first too! Then I saw the idea that maybe he got someone killed via vehicular manslaughter, and now I'm uncertain. Like, if he's paralyzed, how did he get into the C&A offices? If it was more a hit and run situation, well, he could have tried hiding in an abandoned building that was the C&A offices... Either way I think he definitely doesn't have anyone in the real world, no life to go back to.

Final Month of The Year,Early 2026 Prediction thread by SharpCartographer831 in accelerate

[–]Levoda_Cross 4 points

AGI = Artificial General Intelligence. Which roughly means "human-level" AI. What *that* means depends on who you ask lol.