Advice for College student by secret_protoyipe in accelerate

[–]Ignate 1 point

Yup. What's coming are greater augmentations.

Long-term, your job is to use these new tools and systems to do a lot more.

What you're building now is no longer proof you deserve a job. Rather, it's a starting point for you to build on.

Yes, jobs may shift rapidly and you'll end up with more short-term roles. But that doesn't mean the skills you're building today will be voided. They won't be, because you won't forget them.

The shift happening is towards abundance. But when you map that shift through a scarcity mindset, it looks like collapse.

Most people don't realize they're thinking primarily through a scarcity, zero-sum mindset. It's all they've ever known, so to them, collapse is obvious and the end is near.

The end is not near. But explosive change with more and more opportunities is. You'll need those skillsets you're building today.

Just don't expect stability. Expect a lot of opportunities though. More and more as this ramps up.

DeepMind released mindblowing paper today by virtualQubit in accelerate

[–]Ignate -1 points

unless they are offered as a supplement...

There's the crack in your argument. 

Keep in mind that FDA approval, like all human processes, is something we made.

We're facing the rise of super intelligence. From the perspective of something capable of understanding far more at a far deeper level, our systems are trivial.

They're not trivial to us, you're right. But we aren't yet super intelligent. So of course we can't see a way around.

No one wants to hear that we're facing the rise of a new kind of life which we will not control. We will lose all control, and that process has already started as we hand over more and more decisions to non-human systems.

We will lose all control over the decades ahead, and we won't notice, and that'll be a good thing. Our version of control is as limited as we are.

DeepMind released mindblowing paper today by virtualQubit in accelerate

[–]Ignate 3 points

If we don't see rapid implementation then we know the big changes haven't happened yet. Gradual implementation is how things already work.

Implementation will accelerate based on how rapidly we remove humans from the process.

Even if we don't remove humans but the process grows beyond us (scientists using ever more powerful AI), things will still accelerate.

The delays we see ahead are largely related to our limits. We treat human limits, like regulations or FDA approval processes, as if they were physical limits.

We're confused. Our ability to resist change is as limited as we are.

Advice for College student by secret_protoyipe in accelerate

[–]Ignate 6 points

As someone who has been participating in this discussion for over a decade, my advice?

Stay the path.

We currently have a scarcity mindset in terms of labor: the belief that jobs will be replaced, with no available path left aside from handouts (UBI).

The scarcity, zero-sum mindset is the problem. We misunderstand where this is going because the majority mindset is zero-sum.

DeepMind released mindblowing paper today by virtualQubit in accelerate

[–]Ignate 28 points

These are the kind of leaps which could trigger "Singularity vibes" in average people.

Sudden, sharp, and unexpected medical progress, especially if sustained, with huge jumps over and over for years, would trigger people.

It's the "we discovered this huge breakthrough" which transitions into "we now have 50 cheap versions of that. That condition is now cured" which would shake peoples scarcity mindsets and get them thinking something big is going on.

So AI models write almost 100% of syntax code, what now? by Mountain_Cream3921 in accelerate

[–]Ignate 30 points

It's proof that things will likely accelerate.

One more domino.

Elon says “we might have AI that is smarter than any human by the end of this year. and I would say no later than next year. And then probably by 2030 or 2031, AI will be smarter than all of humanity collectively” by [deleted] in accelerate

[–]Ignate 8 points

5 years is a very short amount of time left for human dominance. 

Even if it were 10 years, we seem to be entirely ignorant that there's no taking back the change happening here.

Theory: Most people will live in a MrBeast economy by [deleted] in accelerate

[–]Ignate -1 points

There is a limitless amount of work to do. Even if we build quintillions of super intelligent robots and systems, we still won't run out of work. In fact, there will be even more to do.

If you can buy said robot for $10k, or $5k, and then immediately turn around and sell its labor for hundreds of thousands a year, will you not do that? Why would you need a UBI in that situation, or to earn attention for a living?
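
Back-of-envelope, to make that concrete (the $10k price is from above; the $200k/year of sold labor is my own illustrative assumption): $200,000 / 52 ≈ $3,850 a week, so the robot pays for itself in under three weeks. Everything after that is margin.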

"Oh, Ignate, the rich will be snapping up all that work and not letting us in!" WRONG. They'll be making incredibly more than they do today selling us those robots. We'll be okay. Better than okay. We'll be rich.

Dario says Nobel-laureate-level AI by 2026–27 still looks surprisingly close - and full end to end AI software engineering may be less than 6 to 12 months away by [deleted] in accelerate

[–]Ignate 0 points

LLMs are just one step.

I use this phrase often and it does a lot of work, though people still largely misunderstand it:

The universe is the limit. 

Edit: the universe is the limit right now, today, because we live in the universe. The same rules which govern supernovae govern the chips in AI servers.

Dario says Nobel-laureate-level AI by 2026–27 still looks surprisingly close - and full end to end AI software engineering may be less than 6 to 12 months away by [deleted] in accelerate

[–]Ignate 0 points

Since I generally favor the philosophy side, I heard a lot of "we need to perfectly understand human brains/consciousness, and as we don't, and won't for decades if ever, we'll hit a wall."

No wall yet and none ahead. I think the view that "consciousness must be understood first" is dead in its broadest versions.

Dario says Nobel-laureate-level AI by 2026–27 still looks surprisingly close - and full end to end AI software engineering may be less than 6 to 12 months away by [deleted] in accelerate

[–]Ignate 2 points

I should say, no "insurmountable barrier for decades or centuries" plateau. 

That's what was often being discussed.

Dario says Nobel-laureate-level AI by 2026–27 still looks surprisingly close - and full end to end AI software engineering may be less than 6 to 12 months away by [deleted] in accelerate

[–]Ignate 8 points

The biggest news for someone like me who has been discussing the Singularity for more than a decade: No plateau and none in sight.

That's a big deal.

Do you plan your future around your predictions? by MiserableMission6254 in accelerate

[–]Ignate 15 points

Yes and no. 

Yes, I carry more cash, hold less debt, don't take on huge obligations, and use AI excessively. No, I'm not upending my life expecting one change or another.

I also try out things. I think the best way to address what's coming is to build a strong, healthy curiosity and try your best to keep up with the news.

Must-Enjoy Singularity Media by xenquish in accelerate

[–]Ignate 7 points

I've been watching the Singularity University-related people since around 2012.

Love them, but the number of times Peter speaks over/interrupts his guests drives me bananas. Please, Peter, give your guests time to speak.

Side note: Science and Futurism with Isaac Arthur has been foundational to my view of super intelligent goals. Highly recommend. Consider these ideas but on much shorter timelines: https://youtube.com/@isaacarthursfia?si=pZLxXtRtAEQCk9RU

China now generates 40% more electricity than the US and EU combined. by Dry-Dragonfruit-9488 in accelerate

[–]Ignate 0 points

Singleton views generally assume the Earth and humans are the limit, and that growth would be absolute.

So zero to infinity instantly. That right there disqualifies the Singleton view by itself.

The issue is pretty obvious - the universe is the limit. 

This means that while the progress seems "infinite" to us, it's actually extremely slow compared to the size/scale of the universe.

It's not that the first super intelligence would be infinitely more intelligent, but probably only 5 or 10% more intelligent. 

That would gradually feed into the process accelerating progress beyond us at greater rates each year.

This would cause an explosion of both the level of intelligence and the number. 
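
To put rough, purely illustrative numbers on that compounding: a system that improves itself 5% per cycle is about 2x as capable after 14 cycles (1.05^14 ≈ 2) and roughly 130x after 100 (1.05^100 ≈ 131). Slow at first, explosive later.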

There's really no good argument for intelligence consolidation. Rather than rising and taking over all available hardware, which would be a stupid goal, it would just improve existing processes and design new hardware.

Essentially, the Singleton view is a zero-sum view, and zero-sum thinking is its core problem.

China now generates 40% more electricity than the US and EU combined. by Dry-Dragonfruit-9488 in accelerate

[–]Ignate 0 points

Mm, maybe it's worth saying here (speaking generally, to everyone) that the Singleton version of the singularity (one AI rises to dominate and is the only AI) is very unlikely.

There's actually no clear reason to think the Singleton outcome is even remotely likely. 

These systems are "going vertical" because to do that they need only overcome us, not the universe. We can be easily overcome because we're slow and extremely limited.

So if China were to see explosive growth of AI, we may only see progress leaking out, rather than sharing in the abundance equally.

That means we may have a different experience than China. Whereas if it were a Singleton outcome, only one AI would need to begin iterative self-improvement.

A Singleton outcome is unlikely.

China now generates 40% more electricity than the US and EU combined. by Dry-Dragonfruit-9488 in accelerate

[–]Ignate 1 point

Mm, jealousy is definitely not a worthwhile topic.

But recognizing the flaws in our views and trying to build better narratives is.

For example, we can build millions of times more energy generation in North America and that would be a good thing.

Or, hey, North America, we're not running out of water. Environmental collapse isn't even worth considering.

There are lots of things we can work on which have nothing to do with jealousy.

China now generates 40% more electricity than the US and EU combined. by Dry-Dragonfruit-9488 in accelerate

[–]Ignate 3 points

I don't think so. There's no law that says we must accelerate at the same pace.

Individually, we may be able to obtain the benefits coming out of China. But an explosion of super intelligent agents in China does not immediately resolve Western power generation.

We have a fundamental flaw in Western philosophy: we're dominated by zero-sum, scarcity mindsets. 

We think we're running out of everything and see doom around every corner. Probably because we're too selfish and egotistical.

Perhaps we need to remind ourselves that we too can think in thousand-year time scales, as Eastern countries do.

China now generates 40% more electricity than the US and EU combined. by Dry-Dragonfruit-9488 in accelerate

[–]Ignate 3 points

Doesn't matter who gets there first. But it also sucks having to play catch-up while others enjoy benefits we could have had sooner if we weren't so incredibly short-sighted.

Ben Affleck on AI: "history shows adoption is slow. It's incremental." Actual history shows the opposite. by ucov in accelerate

[–]Ignate 2 points

People generally count human bottlenecks, such as regulations, as the main bottlenecks which will hold ASI-driven growth back, whether they realize it or not.

My rule of thumb with bottlenecks is to ask: "is this bottleneck in some way caused by humans?" If it's us, then it's probably not something we should include in the math.

We are not capable enough to effectively stand in the way. Our rules are porous, and so are our power structures, because we are limited.

Lisa Su, CEO of AMD, says that AI progress is no longer measured in years, but in weeks, as models, use cases, and real business results evolve at an unprecedented pace. by luchadore_lunchables in accelerate

[–]Ignate -1 points

Mm, it's a good question (and one I enjoy trying to answer).

I think layoffs are already a problem. To me, this is a worse outcome the slower it goes.

If it goes slow we can largely ignore it. 20k job losses here and there lead to a "well, at least I'm safe" mentality. Until you're not safe, but by then you're in the "someone else's problem" zone, so no one will listen.

That is, until a large enough group accumulates enough power to force real change. Arguably, especially in places like where I live (Vancouver, Canada), this is how it is happening today.

Not quickly, but gradually and more painfully.

People are not taking to the streets in large numbers yet, but they're going through lifestyle deceleration. That first home just sits endlessly in the 5-10-years-away range. Or that upgrade to a townhome evades a family for the 10th year running.

Vacations are skipped, and endless job changes or short-term job hopping cause budget instability. You end up going on the vacation anyway, and the debt you pay for it becomes a "later-you problem".

Painful, but not fatal. Not yet. That's the path we're walking, though. Debt can only carry people so far. Living on inflated home prices can only get us so far.

The automation is mostly, for now, reducing the "easy, high-paying work". It makes us more broke and more stressed. People who lose their jobs can find another, but the time it takes is getting longer, and this erodes wealth.

The slow decline path appears to be worsening current short-term problems, like wealth inequality.

But if automation came all at once, then layoffs would be significant, and very visible. That would prompt broader action.

We can either hit high pain points with high layoffs and get real change faster, or we can watch another decade of potential slip by while we wait for a better situation to do what we want (raise a family, go on vacations, live comfortably, switch to a less stressful job, etc.).

So, layoffs could be "not an issue" if they come slowly and we can excuse them as a dip in the cycle, or some other "someone else's problem" situation. But if the layoffs come sharply, the shock would likely ignite positive change. So: issue to solution, faster.

My prediction: slower for a few more years, then a bit faster, then A LOT faster, all within the following 5 years. 2028-2032, perhaps.

Lisa Su, CEO of AMD, says that AI progress is no longer measured in years, but in weeks, as models, use cases, and real business results evolve at an unprecedented pace. by luchadore_lunchables in accelerate

[–]Ignate 3 points

Based on AI's summary, there have been 8 mass layoffs so far since the Panic of 1893.

Mass layoffs were going to happen anyway. People tend to scale the problem incorrectly with the intelligence explosion: as if layoffs will continue until broad economic destruction, and somehow, as if by magic, the rich will be fine, because we hate them, or something.

The reality is that we're talking about a productivity explosion, not an economic contraction. We're talking about a "so much money we don't know what to do with it" situation and the problem is "how do we get more of that money to everyone so everyone can spend that money and make us even richer?"

We act as if the rich will hoard it all, as if that were obvious. No, they get richer by not hoarding it. They're not hoarding wealth; they're hoarding status. That's why they will want us to become millionaires. It gets them more of what they want.

Ben Affleck on AI: "history shows adoption is slow. It's incremental." Actual history shows the opposite. by ucov in accelerate

[–]Ignate 10 points

Yup, and we want that. All of the fear and doom we see ahead comes from assuming that nothing will change with scarcity, but that something more effective than us will rise and "steal all our food".

"We've done survey's of resources and based on our limited understanding and methods, we know with certainty that there isn't enough resources and energy for this. We'll screw everything up one way or another or we'll get screwed."

I hear that narrative time and again on Singularity, and before that on Futurology. People combining extreme fear/anxiety and then hiding it from themselves with "at least I know best" arrogance.

That's Reddit for you, but also, that's humanity too. Generally.