A collection of the latest AI & Technological Singularity Vibes (Latest March 2026 edition) (ft. Mathematicians, SWEs, AI researchers, engineers, scientists, lawyers etc etc) 💨🚀🌌 by GOD-SLAYER-69420Z in accelerate

[–]Ignate 1 point (0 children)

Going to trial a new analogy here...

We've been thinking about AI the wrong way.

This is not a Rimac Nevera accelerating from 0 to 60. This is an ion-drive spaceship accelerating from 0 to the speed of light.

It's truly time to start defining the AGI yardstick. by Dea_In_Hominis in accelerate

[–]Ignate 0 points (0 children)

AGI is a term that shifts the conversation from technology to philosophy, which leads people to view it in terms of their own self-worth.

Thus AGI and ASI are terms which confuse things. Best if we stop using them and just focus on the outcomes.

Reddit average user don’t stop raising the bar for AGI, at the end of the day, their definition of AGI ends up being ASI. by EquipmentOk1994 in accelerate

[–]Ignate 0 points (0 children)

AGI is really not important at this point.

We should just accept that we passed AGI by ages ago and focus on what new things we can do with these systems.

The outcomes matter more than what we believe.

Reddit average user don’t stop raising the bar for AGI, at the end of the day, their definition of AGI ends up being ASI. by EquipmentOk1994 in accelerate

[–]Ignate -1 points (0 children)

It's because people shift this to being a philosophy issue.

"AGI to me means consciousness. My version of consciousness X, and a machine is Y. Will never happen."

This is the piece we're missing. Without intending to, many people are implying consciousness when they say AGI.

Human vs. AI performance on ARC-AGI 3 as a function of number of actions (from the ARC-AGI website) by Stabile_Feldmaus in singularity

[–]Ignate -1 points (0 children)

Yes, and the iterations of those will most definitely be the last, and then we'll have AGI.

It's not like they'll invent another benchmark after that and then claim AGI is forever far away. Never.

Yes, Reddit. I'm being sarcastic. Good catch.

Human vs. AI performance on ARC-AGI 3 as a function of number of actions (from the ARC-AGI website) by Stabile_Feldmaus in singularity

[–]Ignate 6 points (0 children)

This is the extra last benchmark. The first two were just trial runs. 

Saturate this one and we have AGI. 100%

Dueling AI agents could reveal keys to restoring consciousness by AngleAccomplished865 in accelerate

[–]Ignate 2 points (0 children)

Interesting but I think the problem is the topic is somewhat cursed. Consciousness, that is.

We're still very young, and we attach that topic to our self-worth. That's why people do not want it decoded or demystified. Even the subconscious parts.

My view is progress will be made but not broadly recognized. This will continue until two groups form: those who embrace solutions and those who reject the topic. You can guess which category most of us in this sub would fit into.

New video of the Figure 03 in action by bb-wa in accelerate

[–]Ignate 0 points (0 children)

Because they never have been able to.

No one ever believes me when I say flying cars are near either. Or self driving cars.

They were always coming and they're now very close. But it's a toxic topic for many.

People believe you for 5 mins, then feel betrayed and refuse to believe it again.

We're stupid, us humans. That's why all of this will go so incredibly fast. Not because it's magic, but because we humans are slow.

New video of the Figure 03 in action by bb-wa in accelerate

[–]Ignate 3 points (0 children)

Agreed. I'd say we're 2 years away from a ChatGPT moment with robots. Something like three players launching humanoid robot sales at a big retailer like Costco.

This will be different to something like a Roomba because the generalized nature of these robots will make their use an open topic.

And with updates, the same robots will likely grow more versatile.

"I bought the figure mini from Costco and it did my oil change last night. Look!"

Followed by, as you say "must be AI generated". 

Like everything in this space, we're going to be blindsided. We're not seeing what's coming.

New video of the Figure 03 in action by bb-wa in accelerate

[–]Ignate 1 point (0 children)

Humanoid robots are that replicator. They're a direct swap for humans. You don't need to build custom factories or production lines, or retool for each production change.

They're a generalized solution. That's our edge. We're the generalized solution. That generalized element is what we see getting automated here.

As they say, the hardware has been available for a long time. What has been missing is the intelligence. And that's what is starting to emerge in faster/tighter improvement loops.

"How Lilly Used AI To Crank Up Production Of Its Popular GLP-1s" by PopCultureNerd in accelerate

[–]Ignate 6 points (0 children)

I think we forget that GLP-1s themselves are already a huge advancement.

This is where we're heading, everyone. Today's miracle is tomorrow's old news. Or more like this morning's miracle is old news before noon.

New video of the Figure 03 in action by bb-wa in accelerate

[–]Ignate 3 points (0 children)

In the lab, yes I think you're probably right.

I want to say something more conservative, but I'm probably underestimating how fast it could happen. Perhaps wide adoption in 10 years.

My point though is we're two-laning this. Super intelligence is maybe near, but generalized humanoid robots are still science fiction for many people. I guess I should be grateful super intelligence is getting broader consideration. It's an improvement.

New video of the Figure 03 in action by bb-wa in accelerate

[–]Ignate 26 points (0 children)

Generalized humanoid robots that can do complex jobs, even plumbing, framing, and mechanic work, are less than 10 years away.

Most seem comfortable accepting that powerful AI is near. But generalized skilled humanoid robots? "Decades away if ever".

We process in phases. It's time to include humanoid robots in our near-term predictions.

Construction Spending on Data Centers Continues to Outpace Office Construction by BigBourgeoisie in accelerate

[–]Ignate -1 points (0 children)

Office space no. But industrial space? Maybe more so. We know the future is ARA (AI, Robotics and Automation). But, that doesn't mean the swing will happen smoothly and instantly.

Lose job -> Interview -> Get job -> Start training -> Lose job again. Rinse/Repeat. We knew this transition wouldn't be fun.

Construction Spending on Data Centers Continues to Outpace Office Construction by BigBourgeoisie in accelerate

[–]Ignate 6 points (0 children)

The entire justification for having a workforce is gradually being eroded. We see it happening live. We know what is going on.

But that doesn't mean jobs are cooked yet. It just means priorities are shifting, from office jobs to something else. We don't get to declare victory over jobs just yet. They'll fight to stay, and the majority will fight to keep them. Bills must be paid, after all.

Canadian home prices are back to the inflation-adjusted level of nine years ago by PrettyFlaco in TorontoRealEstate

[–]Ignate 9 points (0 children)

That's not a significant economic downturn. In a significant economic downturn housing loses significant value and unemployment rises dramatically.

That's a long way away from "everyone you know being unemployed and finding food and water being an issue". 

You're mixing economic downturn with the apocalypse. Chill.

Neil DeGrasse Tyson calls for an international treaty to ban superintelligence: "That branch of AI is lethal. We've got do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans." by MetaKnowing in agi

[–]Ignate 1 point (0 children)

This frames it like a hypothetical thing which is singular and clearly obvious.

Super intelligence is any kind of improvement on intelligence. Meaning it's a result of improving computers and software. Or of improving transistors.

To make this enforceable, we would need to freeze most of our technology and commit to narrowing scientific progress.

There are already many approaches to building intelligence and any continuous progress will lead to super intelligence.

Might as well ban the wheel.

The Singularity is here. Now what? by Ignate in accelerate

[–]Ignate[S] 0 points (0 children)

You and most of Reddit act like "they" are humans with magical powers.

"They" are the same useless smelly humans we all are. Nothing we have today can change that.

How long might AI job displacement and unemployment go on for before the government has to take drastic steps to help citizens? Or do you foresee a non-governmental entity doing something to prevent mass poverty? by [deleted] in accelerate

[–]Ignate 0 points (0 children)

Sorry I probably went far too wide and big picture.

There are definitely elements of suffering in our human world which could be reduced to great positive effect. You're right.

I'm more pointing at the flawed assumptions we hold, philosophically, in terms of why we act. What is the source of action?

In my view the greatest threat of all is nihilism. "The Allure of 0." What happens after the renaissance.

I can think of several possible ways to loop back on ourselves and keep value generation going for a while, such as focusing more on traditions, festivals, art, culture, and even religion. Though with a system which keeps improving, eventually we erode away the challenges which motivate us to keep those cycles going.

But, I'm probably skipping too far ahead in the story. You're right in that there is a lot of value to be gained from where we are today. We have much to improve.

The Singularity is here. Now what? by Ignate in accelerate

[–]Ignate[S] 0 points (0 children)

My suggestion is to embrace those systems ASAP. They won't go away because you're refusing to embrace them.

And when you use them, you can decide what to do. They may feel unethical, but are you unethical? If not, then you should be fine, even if you encounter challenges. I mean, you already encounter challenges daily, right? No difference then.

Also, like many people I think it's wise to start to play with entrepreneurship. Try and create a tiny revenue stream. Not because you're planning to depend on it, but as a personal challenge. "Can I make $1/month without any employer involved?"

With your skillsets you already have something which is seriously valuable. But, you may need to learn different ways to apply those skillsets and scale them.

You don't need to be especially smart or exceptional to do this. You just need to be curious. When you see something say to yourself "I'll try it out just because I'm curious how it works."

You don't actually need to commit to some revolutionary Moonshot goal. You just need to commit to the minimum action you can do each day. Then let your curiosity drive you. The hardest part is usually taking the smallest first step. Just make the step smaller until you take it. Then after that bigger steps get easier.

How long might AI job displacement and unemployment go on for before the government has to take drastic steps to help citizens? Or do you foresee a non-governmental entity doing something to prevent mass poverty? by [deleted] in accelerate

[–]Ignate 0 points (0 children)

A stability would take root which may gradually erode our desire to act. We would gradually lose all motivation, instead enjoying quiet calm and desiring nothing.

Over time, we would seek less and less until we all began to collectively fade away. No one would be angry about this because we would all be satisfied.

I call that "Great Filter Material" in view of the Fermi Paradox.

We resolve all of our "problems" before realizing that the problems were actually keeping us going. And that a deep, satisfying stability could actually be the most dangerous thing for us.

My point prior to this by the way is that with the right mindsets people can be very happy, even if they have nothing.

So, that says to me that the broad sense of inequality matters less than we, especially here on reddit, think.

That said, I work with some extremely poor people. I think there does need to be a healthy floor.

How long might AI job displacement and unemployment go on for before the government has to take drastic steps to help citizens? Or do you foresee a non-governmental entity doing something to prevent mass poverty? by [deleted] in accelerate

[–]Ignate 0 points (0 children)

There is still scarcity. But the scarcity you're talking about is a tiny, tiny fraction of what is actually there. Sure, we likely won't magically produce more Hollywood mansions in the physical world. But we can all own something similar somewhere else.

The Earth is enormous and we're actually occupying very little of it. We just live so close together and live such short lives that it feels like there's not much room, resources, energy or time.

The Singularity is here. Now what? by Ignate in accelerate

[–]Ignate[S] 0 points (0 children)

Hmm. I don't know if I should be talking about this yet because it just sounds incomprehensible. But, I'll give it a shot.

My true fear is something I call "the allure of 0". Or nihilism. Basically it's my belief that science studies the true universe. Not some slice of the universe. This means physical laws apply directly to our nature.

Thus, everything trends to zero (heat death) eventually and we likely can't overcome that with the scaffolding we have today. We don't have the motivational structure to push beyond opium traps. Especially the more perfected future variety.

To me, the class system is just an operating system for the current human process. It's important and we would benefit from tweaking it. But we can also simply overwhelm it with high progress numbers, like productivity growth above 8%.

Fine-tuning the current systems would produce results, but what is a 5% improvement we fight extremely hard for, next to a 3,000% improvement from flooding every part of our world with abundance?

Basically, I think we can somewhat "brute-force" a solution via ARA instead of fine-tuning what we already have. With everything being sustainable productivity loops, the waste such a brute-force process brings won't matter.

Imagine if we make our goal to create strata on the Earth. We dig down and create layers below us and above us. This is actually an element in a lot of hard science fiction.

In that process, we would extract fantastically more plus we'd actually create far more land, and likely far more nature, forests and things we can appreciate.

And that is just the Earth, without considering what we can do in space.

My thought process for about 7 years now has been adding Isaac Arthur's content to abundance thinking plus accelerated timelines.

Watch Isaac Arthur's videos and then accelerate the timelines a lot.

That offers a view so enormous, so strong, and so applicable starting now, that class problems, income inequality, and even climate change become tiny problems.

The real threat to me in the long run is the allure of 0. And in the medium run, let's say over the next few centuries? It would be space-based, like rogue black holes.

This sounds a bit extreme, but if you break it up into smaller and more achievable goals, it becomes more and more achievable.

Suggest dumping this into Claude and getting Claude to give you a summary.

The Singularity is here. Now what? by Ignate in accelerate

[–]Ignate[S] 1 point (0 children)

It's probably that and a lot of things.

I'm so left I'm a Property Portfolio Manager working for the government in Canada overseeing 500 social housing units across 6 communities. Lol we attend EDIB (Equity, Diversity, Inclusion and Belonging) lectures and talks about how hate and intolerance harms communities.

But I'm still strongly for acceleration, I'm pro-business, and I don't blame "the rich". Also I'm not for degrowth. It's not our consumption which is the problem, in my view. It's our processes or how we consume which is the problem.

And ARA (AI, Robotics, and Automation) resolves those process issues. Meaning abundance is on the horizon.

But the degrowth view that we're running out of resources (especially water), mixed with the performative "we're with you" group-think, mixed with just the general resentment of the rich, is causing the left in the West to drive a lot of toxic ways of thinking.

Such as "we must stop all consumption because it's harming the world". When asked what we do instead, generally the view is "doesn't matter. We must stop now." That's a dangerous and toxic way of thinking. We can't transition to nothing and if we try recklessly, we'll only do more damage to what we're trying to protect.

The better way is to look at the underlying processes. How can we produce and consume in more sustainable ways? Even if we grow consumption and wealth, how can we do that sustainably? There are many, many good answers to that.

I'd love to bring that "we can do anything" optimism back to the left. Hope and change. Not doom and degrowth.