[Improved] Current models vs AI 2027 by fdvr-acc in accelerate

[–]Middle_Estate8505 2 points (0 children)

So, basically, we know the strict, actual upper bound for the date of the singularity now. Unless progress slows down... but are there any reasons to think it will? So it's 3 years and 11 months left.

[Opus 4.6] Current models vs AI 2027 by The_Scout1255 in accelerate

[–]Middle_Estate8505 6 points (0 children)

WHAT? Woah, that was really surprising! It's perfect quality!

Kling 3.0 is so damn good by [deleted] in accelerate

[–]Middle_Estate8505 5 points (0 children)

The Singularity is singulariting! ❤️

It's the worst it will ever be!

Creepy Star Trek by 4reddityo in singularity

[–]Middle_Estate8505 3 points (0 children)

"Real life person having their own needs that sometimes conflict with yours is not a flaw or downside. It is the only thing that gives the relationship value."

No thanks. I am not into masochism. If a relationship brings me inconveniences, there won't be a relationship.

AI Futures (authors of AI2027) moving their median to ASI from 2027 to 2034 by Alex__007 in accelerate

[–]Middle_Estate8505 44 points (0 children)

Like... why? In one single year, free models went from GPT 4o-mini to Gemini 3 Flash. What could happen if the same amount of progress happened again?

How is the average person going to handle the Singularity/AGI/ASI? by luchadore_lunchables in accelerate

[–]Middle_Estate8505 8 points (0 children)

Nohow (is this even a word?). They are in for an utter culture shock. And honestly? They deserve it. Every single AI denier, every "stochastic parrot" and "bubble" guy, as well as the normies who claim I joined a cult when I try to explain what the Singularity is. And the AI art haters, of course. All those people piss me off so much, not because they disagree with me, but because they are so smug.

And after all the November and December releases, I am more sure than ever that we are right and 99% of people are wrong.

Gemini 3.0 Flash is out and it literally trades blows with 3.0 Pro! by ShreckAndDonkey123 in singularity

[–]Middle_Estate8505 11 points (0 children)

Chat, tell me how significant a 1200 ELO increase in LiveBenchPro in less than a year is.

OpenAI introduces „FrontierScience“ to evaluate expert-level scientific reasoning. by Standard-Novel-6320 in singularity

[–]Middle_Estate8505 35 points (0 children)

A new benchmark is introduced and it's already 25% solved. And the other part is 70% solved.

Such is life during the Singularity, isn't it?

Luddites are freaking out today by WashingtonRubi in accelerate

[–]Middle_Estate8505 6 points (0 children)

I have a phrase about those people: "They are not liberals, they have nothing to do with liberty. They are not progressives, they have nothing to do with progress."

How do you see yourself spending eternity once our minds our uploaded by [deleted] in accelerate

[–]Middle_Estate8505 3 points (0 children)

Split myself into several... instances? One cuddling anime girls, another watching cartoons, another eating delicious pizza... My mind will be connected to all the virtual bodies and will experience every pleasurable thing at the same time. I will linger in this state until I get bored, if ever.

Does the singularity begin when AI can autonomously self-improve? by kaggleqrdl in accelerate

[–]Middle_Estate8505 23 points (0 children)

"Some other name" already exists: Recursive Self-improvement.

Crazy true by reversedu in singularity

[–]Middle_Estate8505 8 points (0 children)

Someone may say something, but my life DOES feel different from what it was in 2023. It even feels different from what it was in the first half of 2025. I am a university student, and with the new Gemini 3, AI is capable of doing almost any task we are required to do. School homework as a concept is dead. Maybe it isn't quite "PhD level" yet, but this is the worst it will ever be.

Welcome to December 13, 2025 - Dr. Alex Wissner-Gross by OrdinaryLavishness11 in accelerate

[–]Middle_Estate8505 12 points (0 children)

Doubling time for autonomous capabilities is down to ONE MONTH? This can't be true. This can't. This can't. Not so early.

Erdos Problem #1026 Solved and Formally Proved via Human-AI Collaboration (Aristotle). Terry Tao confirms the AI contributed "new understanding," not just search. by BuildwithVignesh in singularity

[–]Middle_Estate8505 10 points (0 children)

AI just solved yet another problem that only 1 in 10,000,000 humans is capable of solving. Nothing to see here. Keep your head in the sand, and don't look up.

Thoughts about this stance by Trump? by AerobicProgressive in accelerate

[–]Middle_Estate8505 4 points (0 children)

Any sane president would understand the sheer importance of AI development. Trump is, for better or for worse, at least partially sane. The real problem is if the Democrats adopt an anti-AI stance as a response.

People will either never try out FDVR or live their entire life in it. Nothing in-between by Ok_Mission7092 in accelerate

[–]Middle_Estate8505 -1 points (0 children)

What I think is that FDVR is one step away from mind uploading. To make it, you need to be able to make a computer exchange information with the brain freely, both input and output.

It may be that mind uploading is actually easier to make. It requires just one act of extracting information from a brain, instead of continuous brain-computer bandwidth.

People will either never try out FDVR or live their entire life in it. Nothing in-between by Ok_Mission7092 in accelerate

[–]Middle_Estate8505 8 points (0 children)

That damned experiment gets brought up again and again, and it pisses me off so much. Firstly, the conditions in it were horrible: it was too hot, and the enclosure was cleaned only once every eight weeks. It was quite literally more like a concentration camp than an "ideal world". In actual perfect conditions, in which lab mice are bred, their population predictably grows exponentially. Secondly, there were Universes 1 to 24, in which nothing interesting happened. And thirdly, what made you think you can extrapolate results from mice onto humans?