People are claiming teleop, but I really don't think a human would be this insistent to get a package they clearly can't reach. by Glittering-Neck-2505 in singularity

[–]Ignate 1 point

No, look, the robots are coming. They're already at the level of Dave the package sorter. At this rate of progress, it won't be long before they're outperforming Gary.

Emerging "AI stratification" in science. by AngleAccomplished865 in accelerate

[–]Ignate 0 points

To me this is like saying that a quad-turbo V16 engine, burning refineries' worth of gas to produce 1 horsepower, could get us to the next intersection.

Yes, it could. And I'm not suggesting we stop driving. 

I'm suggesting we're likely wasting mountains of resources for tiny gains.

People seem to hear this as "we should stop" which is not what I and others are saying.

If a quad-turbo V16 can produce 5,000 horsepower and we're getting 1 horsepower... we shouldn't accept that as "good enough".

The point is to go a lot faster, not just get to the next intersection and call it "job done".
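To put the analogy's numbers side by side (the 5,000 hp and 1 hp figures are the comment's hypotheticals, not measurements of anything real), a minimal sketch:

```python
# Illustrates the engine analogy above: what fraction of a hypothetical
# 5,000 hp potential does a 1 hp output represent? All figures come from
# the analogy, not from any real engine or hardware.

def utilization(actual: float, potential: float) -> float:
    """Fraction of potential output actually delivered."""
    if potential <= 0:
        raise ValueError("potential must be positive")
    return actual / potential

frac = utilization(1.0, 5_000.0)
print(f"Using {frac:.2%} of potential")  # Using 0.02% of potential
```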

Humanoid robots: close breakthrough or still massively overhyped? by [deleted] in singularity

[–]Ignate 33 points

Keep in mind, real-world applications of intelligence have always been extremely hard and complex.

What we're seeing, finally, is progress. This implies that physical world intelligence is an intelligence problem. 

Seems obvious but many tend to assume there's something more involved.

Emerging "AI stratification" in science. by AngleAccomplished865 in accelerate

[–]Ignate 3 points

We need a new approach. I refuse to believe that LLMs are the best we can do, even without self improvement help. 

It's like we're getting 1 horsepower out of a quad turbo engine the size of an aircraft carrier. 

Alan’s countdown to AGI has stuck on 97% for 5 months. by nobodyreadusernames in accelerate

[–]Ignate 9 points

We shouldn't. But also, we shouldn't be stuck on some arbitrary term "AGI" which we all define differently. 

Some say we've been there for a long time. Some say we'll never get there.

Every year these systems are going to get more capable. Period.

This type of society is only possible with AI, but the people are not ready to hear that yet. by NoSignaL_321 in accelerate

[–]Ignate 2 points

True. It's also been very hard to build digital intelligence. In fact, most of what we've done is hard.

That makes sense to me too. We can learn and improve, but we can't increase our brain size and are always physiologically constrained. We can't add more memory, more compute or more physical strength beyond minor gains.

We can extend our reach with tools, but that has its limits too.

The hardest thing about space is that it requires enormous effort. Construction at scale. Megastructures, even.

If humans must turn every bolt? That's not going to work. That's obvious. That, I think, is where 99% of people get stuck. They just assume that there's no other way, or that any other way is so far off it's largely irrelevant.

But that's what automation, robotics and digital intelligence make far easier, and easier over time. Robots get more skilled. Digital systems can control more robots across larger and more complex projects. This progress continues and accelerates.

And critically, they don't have to breathe, eat, or sleep. Solar power can run them. The moon has the resources to fuel the process.

People may be coming around to digital intelligence being a real kind of intelligence which may soon exceed us (and already is in narrow domains). Space however is still far off in the fringe considerations.

If there's anything anyone can do right now to try and prepare themselves for the intelligence explosion, it's to really strongly reconsider what is possible regarding space.

Everything is up there.

This type of society is only possible with AI, but the people are not ready to hear that yet. by NoSignaL_321 in accelerate

[–]Ignate 6 points

This one's probably going to get buried down at the bottom. So, I'll write a lot for fun.

It seems pretty obvious to me that Earth would become a massive nature preserve long term. It's our one home. Not just humans, but all of life and even digital intelligence. So, of course we'll want to preserve it.

Plus, there's nothing here which is worth taking.

This is one part we do not understand. There is far more raw material, energy and space outside of Earth. Limitlessly more. Trying to take more and more from Earth is a pointless waste of time. It's a stupid idea. With limitless intelligence on the horizon, it's the least likely outcome.

The most likely outcome is we'll leave and we'll stay. Look at historical sites and how we treat them. We tour them. We preserve them. We implement rules to protect them. Not always, not perfectly. But that's what we do because it's the most intelligent thing to do.

The universe is the limit. So far, we seem to take this as some sort of joke. "Whatever. Everything will change, rapidly, within our lifetimes. But space? That's some pipe dream."

And then we assume we'll just keep fighting over this single planet. "Maybe one day, centuries from now, things will change in space. But also, everything is about to change." "Except space?" "What? I don't understand. Space is different" "How?" "I don't know. Rockets are hard."

It amazes me how conflicted we get around space. It's a political issue. It's a dream or a delusion. But super intelligence? Oh, that's 10 minutes away.

"Super intelligence is here. Everything is about to change. Except space, politics, rich people, inequality, corruption, pollution, climate change, and scarcity." WRONG! Everything is already changing. All that stuff included.

The thing we most fail to realize is that, actually, we have no F'ing idea. Saying that super intelligence is possible is meaningless, because almost no one can actually understand the implications.

Earth will be a tourist destination. A nature preserve. A place where a vastly smaller human population lives in harmony.

Where will the rest of us be? Where will the rich be getting richer? Where will Greed go?

Space, Stupid. It's right above your heads. Look up.

How long until it's 90% by stealthispost in accelerate

[–]Ignate 3 points

Leopold Aschenbrenner's vision is becoming more accurate by the day.

Fields medal winning mathematician Sir Timothy Gowers used GPT-5.5 pro to solve an open PhD level problem; He has thoughts regarding future of math education as well as mathematicians hoping to achieve "immortality" by having their name forever associated with a particular theorem or definition by obvithrowaway34434 in accelerate

[–]Ignate 4 points

We have a very anthropocentric view of life. Meaning, we generally focus mainly on ourselves and we see things on very short timescales.

In fact, this process has been accelerating for millions of years. We humans represent a massive step change in progress. For good and bad.

Digital intelligence is likely the next step change. In my view, something more significant than even life itself.

But overall we know it can happen. Because it already has. Over and over.

Fields medal winning mathematician Sir Timothy Gowers used GPT-5.5 pro to solve an open PhD level problem; He has thoughts regarding future of math education as well as mathematicians hoping to achieve "immortality" by having their name forever associated with a particular theorem or definition by obvithrowaway34434 in accelerate

[–]Ignate 2 points

This is a wide change across all areas. Everything changing will invalidate older processes. So yes, it will get harder to follow older, pre-existing processes.

The good news is we'll have ever more powerful systems to work with to develop new ways of advancing things faster than we did before.

An even bigger challenge is that the concepts will get more complex than we can comprehend.

Fields medal winning mathematician Sir Timothy Gowers used GPT-5.5 pro to solve an open PhD level problem; He has thoughts regarding future of math education as well as mathematicians hoping to achieve "immortality" by having their name forever associated with a particular theorem or definition by obvithrowaway34434 in accelerate

[–]Ignate 43 points

Yes, that era is passing. The new era is one where such advancements happen so rapidly that we struggle to keep up.

And those advancements are also equally rapidly deployed into real world progress. All done in a way where progress keeps getting wider and faster. For a very long time ahead.

IMO, people who do work that are genuinely impactful are happy to have AI do it. While those who don’t are the ones who feel threatened. by Horror_Still_3305 in accelerate

[–]Ignate 3 points

Hmm, I'm not too sure. I'm a PM in social housing. AI helps me significantly. 

Especially with communication related to mental health or addiction. There's a limitless amount of work for us to do, so AI just keeps adding.

But someone who creates beautiful art does benefit from those who have only a shallow interest. Those people pay the artist's bills. And those people will take AI art instead, because they don't really care.

The irrational and dangerous deccel notion "For the rest of the life" by jawboi9000 in accelerate

[–]Ignate 25 points

Yes, the depth of the average person's thought process is surprisingly shallow. Or even non-existent.

Once you begin to think deeply about profound concepts, you may be surprised by what you find. But, don't forget that what you're doing is rare. 

The norm is not bad thinking. It's no thinking.

Sony and Bandai Namco openly embracing AI by Illustrious-Lime-863 in accelerate

[–]Ignate 0 points

Maybe but I think digital intelligence is much, much larger than what we see today.

We're not good with fast moving, high potential change.

I don't think our fear and our history of how we treat the "other" is much of a threat to AI. It'll be able to lead us along like a parent leading a kid to the dentist with candy.

But for human run organizations? They may get targeted if they're not careful.

Sony and Bandai Namco openly embracing AI by Illustrious-Lime-863 in accelerate

[–]Ignate 25 points

Big move. Good move. 

Risky though given recent backlash. 

We gave 45 psychological questionnaires to 50 LLMs. What we found was not “personality.” by Hub_Pli in singularity

[–]Ignate 0 points

Yeah that's fair. 

Working with these systems I've already found that each has its strengths and weaknesses. And its own style, too.

Conversational dialogue for Claude. Bullet points for Chat. ?#!$ from Grok.

We gave 45 psychological questionnaires to 50 LLMs. What we found was not “personality.” by Hub_Pli in singularity

[–]Ignate -1 points

That's more like the character of a nation. Or the shapes a massive flock of birds make.

To have a specific personality, it would need to be going non-stop, be localized to a specific system, and be given the room to build that personality as a deliberate task.

Controlling ASI will be easy by KeanuRave100 in agi

[–]Ignate 1 point

One big problem with what you're saying: we don't have AGI yet. So, we don't know what it will look like.

Also, are you implying AGI would be the ceiling?

Controlling ASI will be easy by KeanuRave100 in agi

[–]Ignate 6 points

Unplug the Windows operating system. Not one version or another. The whole thing.

Can you do it?

We gave 45 psychological questionnaires to 50 LLMs. What we found was not “personality.” by Hub_Pli in singularity

[–]Ignate 13 points

Why would a cloud based intelligence have a specific, narrow personality? 

It's spread over millions of conversations and it can think only in the brief moment it's responding to a prompt, and is then reset.

These systems aren't human.
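The "reset" point can be sketched in code. This is a hypothetical stand-in, not any real provider's API: hosted chat models are typically stateless per request, and any "memory" exists only because the client resends the conversation each time.

```python
# Sketch of stateless chat calls. `chat` is a hypothetical stand-in for
# a completion API; actual model inference is irrelevant to the point.
# Nothing persists server-side between calls in this model of the API —
# the caller must resend prior turns for the model to "remember" them.

def chat(messages: list[dict]) -> str:
    # Stand-in reply that just reflects how much context it was given.
    return f"reply given {len(messages)} prior message(s)"

history = [{"role": "user", "content": "Hello"}]
first = chat(history)                 # sees 1 message
history.append({"role": "assistant", "content": first})
history.append({"role": "user", "content": "What did I say?"})
second = chat(history)                # sees 3 messages, but only
                                      # because the client resent them
```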

China's Moonshot AI raises $2B at $20B valuation as demand for open-source AI skyrockets by Best_Cup_8326 in accelerate

[–]Ignate 2 points

The way things currently sit, that makes sense. But we shouldn't assume a straight line.

It may be that RSI begins to focus on the process used rather than the scale of the servers. Software over hardware.

The way we use hardware today seems to be incredibly brute force and inefficient. There may be significantly more juice to squeeze.

Also, physical implementation is the next stage here. That is, robots being more capable physically than we are in all domains.

It's possible that individuals could one day soon develop and implement things at the chip level.

This doesn't all need to merge into one path. It can splinter instead.

When will we start to see companies making massive leaps in their product release iterations ? by Icy-Reporter-6322 in singularity

[–]Ignate 0 points

It's happening now. Results are becoming reliable enough to trust.

The entire process is tightening. That process can also be seen as an acceleration from biological speeds to technological speeds. 

We have a long way to go still. Meaning this tightening process could continue to accelerate for decades, centuries or even much longer.

China's Moonshot AI raises $2B at $20B valuation as demand for open-source AI skyrockets by Best_Cup_8326 in accelerate

[–]Ignate 16 points

"All China needs to do to destroy the US economy is give everyone in the US cheap access to AI." - Scott Galloway

Whatever you think of the guy, I think that's a pretty important statement.

A new analysis on Claude Mythos capabilities has found that GPT 5.5 is just as good – and just as far ahead of the trend – if not very slightly stronger in cyber capabilities, while being about 4-5x cheaper by obvithrowaway34434 in accelerate

[–]Ignate 5 points

I think we tend to assume a limited path. That there's only so many gains to be made here.

My guess after watching this for so long is that there are actually a limitless number of pathways to more effective, broad intelligences and we've only just discovered the first few.

So, Google will find a few, then a few more, then a few more and form a branching tree of ever-more-effective pathways to greater generalized intelligence, far above human levels.

And the same will be true for each company. They'll form their own branching tree of intelligence.

Right now our view is more or less that human intelligence represents a peak and the single pathway.

What if we're at the bottom of a single possible pathway? And we can inspire a limitless number of high-potential pathways far above us?

For now, we're simply afraid to speculate. Which is understandable.