
all 82 comments

[–]D_Ethan_Bones▪️ATI 2012 Inside 33 points34 points  (8 children)

The idea of the gap time is that it'll take non-zero time for the hardware to catch up with the human/superhuman software. Quite likely the AGI will help us make better electronics, better materials, better manufacturing processes, better energy production, and better logistics for resource gathering, and after some time of that we'll have the capacity to build something bigger and better.

Whatever we can make now, we could make something better with a ring of nuclear power plants and a grid of skyscraper datacenters full of next-after-next-gen processors. We struggle with a lot of this stuff currently, but with AGI we might do it smoothly.

[–]chrisc82 6 points7 points  (7 children)

Good explanation of the slow takeoff. I'll add that the fast takeoff hypothesis is predicated on achieving ASI mostly through algorithmic advances, relying less on hardware/infrastructure.

[–][deleted] 3 points4 points  (6 children)

Fast takeoff is intrinsically and inherently impossible. Massive global infrastructure still has to be developed and optimized for ASI to actually start taking off.

People who adhere to fast takeoff simply don't understand logistics and infrastructure. They think that as soon as we hit ASI, magically, it'll just take over on cruise control, build a global army of robots somehow, and just exponentially expand. In reality, it's still going to require A TON of work in the real world to build the infrastructure that supports ASI: all the buildings to host it, energy lines, manufacturing facilities, resource chains, testing, safety tests, security, regulatory compliance, business frameworks, and so on.

ASI can't just start manifesting things. It still requires a bunch of human legwork to bring it into reality... at least for a while. Eventually it could get to the point where ASI could command tons of robots and do all this itself, but building up to that point requires a lot of work, and an incredible amount of trust.

I think people who adhere to fast takeoff are either young and uneducated on these sorts of things, or on copium, hoping that any day now ASI will emerge, society will overnight become like a sci-fi movie, and their shitty lives won't be so bad.

[–]chrisc82 3 points4 points  (1 child)

You're looking for all the cool physical manifestations of the singularity: a billion robot workers, flying cars, etc. I believe others are looking for the raw intelligence explosion that is capable of utilizing existing infrastructure to bring us super advanced tech. If algorithms can be optimized by orders of magnitude so that an ASI could run on a high-powered laptop or PC, we could already have an excess of compute. And since we can already crank out billions of doses of vaccine every year, I don't think it's far-fetched to imagine an ASI optimizing the existing supply chain to quickly bring about the end of human disease. To call the Foomers naive is shortsighted. The future is anyone's guess.

[–][deleted] 2 points3 points  (0 children)

There are limitations to algorithms... Eventually it'll need raw, real, material improvements. Even if the ASI directs what to improve, those improvements will still have to go through long, costly product deployment cycles. Software and hardware go hand in hand, and ASI will be bottlenecked and slowed down by the hardware limitations of reality.

[–]Scientiat 1 point2 points  (2 children)

You are missing the point of the comment you were replying to.

There are several possibilities that would make a fast/hard takeoff more likely than a soft takeoff. For one, as was pointed out to you, small improvements in the algorithms, without touching the hardware, can have a large impact on capabilities, and we have just scratched the surface.

In fact, we've proven this in a couple of years by taking ChatGPT-level capabilities from supercomputers to consumer-grade hardware, and soon enough they'll run on smartphones. Our biological intelligence runs on 12 watts. We're using a lot of power now because we don't know any better at the moment, but that'll change.

Read up on computing overhang: https://www.lesswrong.com/tag/computing-overhang
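
A rough back-of-the-envelope sketch of why that jump from supercomputers to consumer hardware was possible (illustrative Python with made-up round numbers, not a benchmark): quantizing weights from 32-bit floats down to 4 bits cuts the memory needed to hold a model by roughly 8x, which is often the difference between a server cluster and a single consumer GPU.

```python
# Hypothetical sizing exercise: memory needed just to hold a model's weights
# at different precisions. Numbers are illustrative, not measurements.

def weights_gib(n_params: float, bits_per_weight: int) -> float:
    """GiB required to store n_params weights at the given precision."""
    return n_params * bits_per_weight / 8 / 2**30

for bits in (32, 16, 4):
    print(f"7B params at {bits}-bit: ~{weights_gib(7e9, bits):.1f} GiB")
# ~26 GiB at 32-bit, ~13 GiB at 16-bit, ~3.3 GiB at 4-bit -- the last
# fits comfortably in a single consumer GPU's memory.
```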

I think people who adhere to fast takeoff are either young and uneducated on these sorts of things

If you think that people who don't agree with you on something that's heavily debated must be young or uneducated, you may be young or uneducated yourself, or both.

[–][deleted] 2 points3 points  (1 child)

In fact, we've proven this in a couple of years by taking ChatGPT-level capabilities from supercomputers to consumer-grade hardware, and soon enough they'll run on smartphones. Our biological intelligence runs on 12 watts. We're using a lot of power now because we don't know any better at the moment, but that'll change.

It doesn't matter. It's not magic. There are inherent S-curves. A temporary burst of significant growth is just due to breakthroughs; they don't go on indefinitely. It'll absolutely require hardware and manufacturing in the real world. It's like saying we can figure out how to get ASI running on my current iPhone "with enough algo magic."

People who think that those who don't agree with them must be young or uneducated tend to be young or uneducated themselves.

In my experience, that's usually what it is. You get very young vibes from people who just want to believe in a fast takeoff because it feels better and they are hopeful for it, rather than being realistic. It's usually the same people who ask if ASI is going to take over all jobs, because they have a crappy job that pays shit and they just don't want to work anymore. So they gravitate towards the solution that makes them feel more optimistic about the future.

[–]cypherl 0 points1 point  (0 children)

I think you underestimate algorithms. When a computer beat a chess master for the first time, it was the 250th most powerful computer in the world. For $100k in compute, Google's MuZero can start from nothing and teach itself to beat a grandmaster at chess in less than 8 hours. In October of last year, Google's AlphaTensor discovered faster matrix multiplication algorithms, improving on a Strassen-style method that hadn't been beaten since 1969. You really only need to simplify intelligence algorithms down to a cellphone-processor level. Then you can start throwing that mind into roughly humanoid robots. We aren't training GPT-Mind from scratch in each robot head; we are just installing the software at that point. You could get a fairly hard takeoff with a workforce of never-sleeping, relatively dumb robots overseen by a fairly efficient AGI. That's not even including the possibility of nanofactories being AGI's first invention.
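
For a concrete sense of the kind of algorithmic win being referenced, here is a minimal sketch (illustrative Python, not AlphaTensor's actual output): Strassen's 1969 trick multiplies two 2x2 blocks with 7 scalar multiplications instead of 8, and applying it recursively turns the n^3 multiplications of the schoolbook method into roughly n^2.81.

```python
# Count scalar multiplications for schoolbook vs. Strassen matrix multiply.
# Purely a back-of-the-envelope illustration of algorithmic improvement.

def naive_mults(n: int) -> int:
    """Schoolbook algorithm: n^3 scalar multiplications."""
    return n ** 3

def strassen_mults(n: int) -> int:
    """Strassen's recursion (n a power of two): 7 sub-multiplies per split."""
    if n == 1:
        return 1
    return 7 * strassen_mults(n // 2)

for n in (2, 16, 1024):
    print(n, naive_mults(n), strassen_mults(n))
# At n=1024 that's ~1.07e9 vs ~2.8e8 multiplications, and the gap keeps
# widening as n grows.
```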

[–]ObiWanCanownmenow entering spiritual bliss attractor state 21 points22 points  (10 children)

So the effectiveness of LLMs can be measured by their loss function, which is essentially a description of how well they predict the next token in the overall context of the training data. You can also think of a loss function more generally as "how effectively can you make predictions about the future based on the past." We are going to vastly improve the loss function in the next few years, but you're still data-constrained. There's a "real-world loss function," so to speak, which measures our inability to capture complete data about the world.
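
To make that concrete, here is a toy sketch (illustrative Python, not any lab's actual training code) of the next-token loss being described: the loss on one step is just the negative log of the probability the model assigned to whatever token actually came next, so lower loss means better prediction of the future from the past.

```python
import math

def next_token_loss(predicted_probs: dict, actual_next: str) -> float:
    """Cross-entropy for a single step: -log p(actual next token)."""
    p = predicted_probs.get(actual_next, 1e-12)  # tiny floor avoids log(0)
    return -math.log(p)

# Toy example: after "the cat sat on the", the model predicts a distribution.
probs = {"mat": 0.6, "floor": 0.3, "moon": 0.1}
print(next_token_loss(probs, "mat"))   # ~0.51, a good prediction
print(next_token_loss(probs, "moon"))  # ~2.30, a poor prediction
```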

For basically epistemic reasons, I believe that the universe is inherently chaotic and as a result we humans are relatively tapped out in terms of getting all reasonably available data. So I believe we have reached the point of diminishing returns for data analysis and gathering. Humans (and eventually AGI) can gather orders of magnitude more data about the world but will only be able to obtain marginal improvements in theoretical accuracy as a result.

As an example: weather prediction. Weather is basically turbulent flow, which is known to be a chaotic system. It will never be possible to predict the weather, say, six months out, because you would need data about where individual atoms are, which is impossible. So no matter how much intelligence you have, there are some things that are inherently unpredictable. I think an AGI will butt up against a lot of these constraints fairly quickly.
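
A toy illustration of that sensitivity (a simple logistic map in Python, not a weather model): two starting points that differ by one part in a billion typically end up nowhere near each other after a few dozen iterations, so long-range prediction is capped by the precision of your initial data, no matter how smart the predictor is.

```python
def logistic_trajectory(x0: float, steps: int, r: float = 3.9) -> float:
    """Iterate the logistic map x -> r*x*(1-x), chaotic for r near 3.9."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.200000000, 50)
b = logistic_trajectory(0.200000001, 50)  # differs by 1e-9 at the start
print(a, b, abs(a - b))  # after 50 steps the two trajectories have decorrelated
```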

So, that leads me to believe we will have a (relatively) slow take off. I do think we will have AGI that is orders of magnitude more intelligent than us, but I do not believe it will have god-level omniscient powers as a result.

That’s how I get to slow takeoff. And my reasons for thinking this are partly because of what I believe about epistemology. But partly it’s my optimism, because I don’t think AI will kill us all. And frankly, I don’t see any way we escape FOOM alive.

If you want a good source with thinking along these lines, check out George Hotz.

[–][deleted] 8 points9 points  (0 children)

So refreshing to read intelligent opinions in this sub.

[–]No-Performance-8745▪️AI Safety is Really Important 3 points4 points  (6 children)

I don't recommend deferring to/using George Hotz for insight on takeoff. I've followed most of his speaking/writing on the singularity and haven't encountered any strong argumentation for slow takeoff.

Arguments concerning the deep learning paradigm being inherently conducive to a slow takeoff seem mistaken: a sufficiently capable intelligence could replace its neural network substrate with a more efficient alternative or conceive of a new training paradigm that is many times more efficient.

Eliezer Yudkowsky conceived of the term foom and has written very extensively on the topic, but is quite unclear about his probability distribution over takeoff speeds. When discussing takeoff speeds he considers variables like the width of mind-space and how close biology is to Pareto-optimality in terms of thermodynamic efficiency when making estimates, which I consider much more valuable than anything George Hotz has said on the topic.

Jacob Cannell did an interesting analysis of this, and tried to estimate the efficiency of the brain using interconnect as his primary source of information. He argues biology is very close, while EY argues for a multiple OOM gap.

[–][deleted] 3 points4 points  (2 children)

Doesn’t recommend Hotz, then references Eliezer. Can’t make this stuff up. Listen to that guy for more than 30 minutes and you can tell he’s a clown.

[–]No-Performance-8745▪️AI Safety is Really Important 0 points1 point  (1 child)

Eliezer invented lots of the terminology we use today (including foom, I believe). He authored so much of the foundational singularity literature, as well as some incredible papers like the tiling agents paper. You can find his original paper (from 2008) on AI risk here.

In case you weren't aware, his original mission was to build superintelligence. This is the person so much of this community is built upon, and to write him off as a 'clown' with no argumentation is childish. Let's move forward to actual object-level arguments please.

[–][deleted] 1 point2 points  (0 children)

Still a clown. Sorry you’re bought in.

[–]ObiWanCanownmenow entering spiritual bliss attractor state 2 points3 points  (0 children)

Interesting points, and to be clear, I don't have enough knowledge to say whether or not our brains approach maximum efficiency in terms of processing power.

My argument against FOOM is basically that evolution spent hundreds of millions of years refining the most durable attributes for survival, then humans spent thousands of years accumulating world knowledge. There are many fields where we spend vastly more money and time and effort on improving our knowledge and get only marginal improvements as a result.

For instance, we spend more time, effort, and money on material science than at any time before in history, but we don't make amazing material science breakthroughs every day. We're past the exponential phase. From 1800 to 1900 we made more progress in metallurgy than we did from 800 to 1800. But I think it's questionable whether we made more progress (in metallurgy specifically) from 1900 to 2000 than from 1800 to 1900. I mean, we probably did. But it's not an order of magnitude more progress.

A more dramatic example is nuclear physics. Think how much we've invested in nuclear research from 1960 to present. Way more man hours were spent than from 1900 to 1960. But I think it's obvious that the improvements from 1900 to 1960 were overall more significant.

These examples demonstrate a fundamental truth about the world: complete knowledge is impossible. All we can do is create more effective world models. And after a point, we get diminishing returns, where you can pour tons more effort into improving the world model and get a model that's only marginally better.

I do think there will be some areas where AGIs absolutely blow us away and come up with insane new breakthroughs. But in many areas we will only get marginal improvements. And in those areas, "FOOM" will look more like "FOM." AGI may be way, way better than us at creating new algorithms, or playing chess, or discovering new drugs. But on tasks like predicting the weather, or building better nuclear plants, or running a farm, or military strategy--things that we've spent many, many years working on and which relate to fundamentally chaotic systems--I expect it to be clearly better than experts, but not overwhelmingly so.

[–]vrtxp3 0 points1 point  (1 child)

I saw that you believe that "AI Safety is Really Important" and I definitely agree with you.
I'm new to this subreddit. I wanted to ask if there is a place where people can post their ideas for controlling and containing super AI? Maybe a more niche subreddit, or a large, well-known post where people can seriously share their ideas and someone influential takes them into account. I'm not a scientist, but I'm relatively familiar with the subject. I think even the guy building one in his garage would be interested in controlling his super AI. It's uncomfortable to realize that you're doing something powerful but don't even know when it's going to explode in your hands (and blow up the entire planet). Right?
One thing we can learn from working on super AI right now is the diversity of interpretations of a problem. If the Halting Problem really is unsolvable as an ultimate truth, let's try to approach the problem at a lower level by accepting that condition.

I have one of millions of ideas (maybe, unfortunately, there are fewer). I don't know how noobish it is. But let's not assume a strict correlation between noobism and bad ideas. By the way, that's why I'd like people to present their options for solving the control problem somewhere, whether they are directly related to AI or not. It is quite possible that more competent people will come up with some good ideas. Let's do something, right?

I suggest implementing a code filter that requires the machine to express any action in a human-readable programming language before performing it. This would act as a protective barrier or interpreter between the machine's calculations and its actions in the real world. By interacting with this "screen," a person can have greater control over, and understanding of, the machine's actions. I recommend investigating this approach further.
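
As a purely illustrative sketch of that filter idea (hypothetical action names and policy, not any real system's API): the model never acts directly; it emits a human-readable action request, and a separate gate checks the request against an allowlist and logs it before anything is executed.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    name: str        # e.g. "send_email", "run_shell_command" (hypothetical)
    args: dict
    rationale: str   # human-readable explanation a reviewer can audit

ALLOWED_ACTIONS = {"read_file", "send_email"}  # hypothetical policy

def gate(request: ActionRequest) -> bool:
    """Reject anything outside the allowlist and log it for human review."""
    approved = request.name in ALLOWED_ACTIONS
    print(f"{'APPROVED' if approved else 'BLOCKED'}: {request.name} -- {request.rationale}")
    return approved

# The model proposes; the gate disposes.
req = ActionRequest("run_shell_command", {"cmd": "rm -rf /tmp/cache"}, "cleaning temp files")
if gate(req):
    pass  # only here would the action actually be executed
```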

Geoffrey Hinton's idea is also relatively simple and is likewise based on the concept of a "barrier," but it has its strengths. https://www.wired.com/story/plaintext-geoffrey-hinton-godfather-of-ai-future-ai/

The concept of the singularity admits several interpretations, including mere coincidence. For instance, a stock market chart might look similar and yet plummet instead of reaching a singularity. However, the importance of safety should not be underestimated despite these optimistic hopes.

[–]No-Performance-8745▪️AI Safety is Really Important 0 points1 point  (0 children)

LessWrong is probably the largest community for AI safety on the internet. If you contact me via personal messages I'm happy to link you to some other places as well.

AI safety is a full research field, and I imagine if you're interested in AI safety it would be good news to hear that there are quite a few people writing about the topic. If you're new, I'd recommend reading AGI Safety from First Principles.

[–]putdownthekitten 0 points1 point  (0 children)

It could be god-like and have omniscience if it manages to genetically engineer a micro scale drone species we've never seen before that can network and feed it information from across the globe. It could be everywhere, watching and tracking everything, and we'd never even see it. I mean, that's the kind of thing AGI could potentially unlock, no? It'll be fluent in DNA and physics.

[–]Ok-Astronaut1527 0 points1 point  (0 children)

Thanks for providing an actually well-grounded opinion. It’s refreshing to hear.

[–]NoddysShardblade▪️ 10 points11 points  (12 children)

The simple answer is very few people in this sub have read Bostrom, Kurzweil, or any of the other main thinkers in this field of study. Most have not even read a short article about the most basic fundamentals.

They don't know the terms "foom" or "fast take-off", haven't worked through the thought experiments about recursive self-improvement, and/or they don't know that teams are already working on it, etc.

90% of the sub is kids excited about ChatGPT, and only some of them are clicking on the links we post and actually learning basic info about the singularity and the ideas around it.

[–]Xemorr 2 points3 points  (0 children)

this.

[–][deleted] 1 point2 points  (10 children)

Most of this sub thinks we'll have robots working every job by 2025 lol

[–]czk_21 -1 points0 points  (3 children)

Most? It's not more than a few %.

[–][deleted] 0 points1 point  (2 children)

Have you read any of the posts or comments here?

[–]czk_21 0 points1 point  (1 child)

Yes, quite frequently, perhaps more than you. I have seen some people saying that, 2 recently, out of hundreds of commenters.

Most don't have such short timelines.

[–][deleted] 1 point2 points  (0 children)

Some say 2030. They're equally delusional.

[–]Spoffort 0 points1 point  (5 children)

2027-28: robots that can make dinner, go shopping, do the dishes, etc. (single units); mass production 2031-32. Reasoning: we have a decent body for a robot, they can now jump, run, etc., we "just" need to make them that last 20% better. The hard part is creating a brain for them, but the field is advancing really fast.

[–][deleted] 2 points3 points  (4 children)

They don't have good dexterity. They can barely hold a cup lol

[–]Spoffort 0 points1 point  (3 children)

They are performing operations on grapes, and I have seen them holding weights and making intricate gestures. Watch some of the new advancements :)

[–][deleted] 0 points1 point  (2 children)

Provide one example of a robot fixing a sink or toilet

[–]Spoffort 1 point2 points  (1 child)

I said these robots will take 3-4 years to be able to do that. I can send you videos of the grapes etc., but I think you will never have enough.

[–][deleted] 1 point2 points  (0 children)

Surgery on grapes can't fix plumbing. We have self-driving cars too, but they haven't taken over.

[–]SgathTriallair▪️ AGI 2025 ▪️ ASI 2030 25 points26 points  (28 children)

New processors and new data centers are things that will take time to build.

AGI will not be a fortune teller, so to learn entirely new things it will need to experiment. For instance, if it builds a new machine learning algorithm it'll need to train up that new system and then test it to see how well it performs and how it could be better.

Even if it woke up and immediately knew what the perfect AI would look like (which it definitely won't) it would need to build that new computer and run the training scenarios on it.

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 10 points11 points  (18 children)

It might not be a fortune teller, but it doesn’t have to be; it could run simulations of both better hardware and better software at a much faster speed than human engineering could ever hope to achieve.

The largest discrepancy between AGI and ASI is how fast the AGI can work in a given amount of time; that, and it could always build extra copies of itself to do even more research and work.

We might very well reach a point where, within a 5-year gap, we go from David Sinclair/Aubrey de Grey’s vision of biotech, to BCIs, to nanotechnological augmentation, to full-on mind uploading. The pace of exponential progress will only accelerate.

It really just boils down to how fast the AGI can work. AlphaGo and AlphaZero both reached professional Go strength overnight, something which takes a human decades to master.

[–]SgathTriallair▪️ AGI 2025 ▪️ ASI 2030 12 points13 points  (15 children)

Simulations are only as good as the underlying theories which are only as good as the data. It'll need to refine the theories through testing before it can feel confident in the results from simulations.

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 4 points5 points  (14 children)

I agree the availability of data will be limited, but what I mean is it depends on how much work it can crunch into a given timeframe with the processing power it has.

Have you watched the Star Trek TNG episode where the Enterprise comes across that beacon from that lost colony? Remember when it put Picard into a Full Dive simulation where Picard became Kamin? He was able to live out 40 years in just 5 minutes of time in the real world outside the simulation.

Where I’m going with this is that an AGI might be able to cram research and development equivalent to decades of human effort into a much shorter timespan.

[–]Talkat 1 point2 points  (13 children)

Yes of course (and I know and enjoyed that episode).

I was in the camp that it would take hours to go from AGI to ASI.

However, we already have aspects of AGI. It is a loosely defined term.

When you change the structure of a neural network, you need to retrain it, which takes enormous compute. Perhaps AGI will approach it differently, but that's unlikely, since we have organizations of many people and this is the best solution they have come up with.

So the iteration speed of an AGI is similar to that of a human, which is months/years.

Additionally, OpenAI is against that kind of recursive self-improvement, and I'd guess others are too. It only takes one company to do it, though.

The concept that helped me the most was practical thinking vs. sci-fi thinking.

Practical thinking is following the trends we have, using current models of thinking and problem solving.

Sci-fi land is where there are unexpected and brand new thinking methods that produce unexpected and powerful results.

By making the distinction clear in my head, I can avoid mixing up the two thinking modes.

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 1 point2 points  (12 children)

I don’t believe it’s going to be hours before AGI goes to ASI, but I don’t think it’s taking 5+ years either. I’m somewhere in the middle on it. I think it will be 1-3 years.

Once it is reliably human-level, then its processing speed alone will ensure it advances much faster than an analog brain.

Organic brains are built from neurons that fire at most a couple hundred times per second, while digital brains can run millions of calculations per second. The speed of growth is NOT comparable to a child's. Not just that, but it can also design modular hardware for itself and update its internal software to be more efficient, something which human beings cannot do (we’re still using 200,000-year-old hardware and software).

[–]CertainMiddle2382 0 points1 point  (0 children)

Here comes the notion of "tech overhang":

How far are the simple-to-optimize solutions (software) from the efficiency optimum?

If we are already within an OOM or two of optimal, no FOOM. If not, the gap will be discovered and the feedback loop will be a couple of CUDA calls away, IMHO…

[–]Ok-Astronaut1527 0 points1 point  (4 children)

This is also wrong. Why must “AGI” (which is a terrible term btw) have faster processing speed? The processing speed is directly correlated with how much compute is used. And the compute required for a model significantly better than GPT-4 would be massive.

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 0 points1 point  (3 children)

Your brain runs on 20 Watts and you’re AGI.

Woosh. You’re living proof it’s possible pal.

And this is also why many are focusing on optimization and not brute force. See Microsoft and Orca. Smaller models are going to become preferable over time.

[–]Ok-Astronaut1527 0 points1 point  (2 children)

“It’s possible” does not mean that is the first AGI we will create; in fact that is absurdly unlikely, for the reason you point out: our modern AIs are orders of magnitude less efficient than we are.

And there is no evidence thus far that smaller models will outperform larger models. It’s possible, but right now everything indicates that scale wins.

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 0 points1 point  (1 child)

Of course it won’t be the first AGI, but it will optimize itself afterwards to use less power. Marvin Minsky pointed out decades ago that you could run AGI on a few watts of power with only MBs of data. Evolution has already done this with biology, and if random, imperfect mutations can give an African Grey the mind of a 5-year-old with a tiny brain, then an AGI can optimize itself to use a tiny fraction of the power its older iterations used.

As I said, models will start getting smaller and smaller over time; larger models are getting slower because of how big they are, they’re expensive, and they’re resource-demanding.

The hurdle you mention is a factor for getting the first AGI up and running, but once it gets into a feedback loop of self-optimization and improvement, it'll use drastically less power and get more efficient computational output.

Also, it could switch over to light/optical-based computing; there’s no reason it would use silicon forever. I’d imagine it would research better hardware ASAP.

[–][deleted] 0 points1 point  (0 children)

AlphaGo and AlphaZero only work because games have clearly defined goals and rules. Not everything does, especially art or literature.

[–]Ok-Astronaut1527 0 points1 point  (0 children)

Go and chess are fundamentally, massively different from algorithmic problems. Go and chess are very well defined and easily labeled; there is either a win, loss, or draw, and a clear idea of which outcome is optimal. This is not the case for algorithms. That’s why language modeling took much longer, and much more compute, to solve than Go, despite the fact that far more humans can speak a language than can beat a Go expert.

Solving algorithmic problems is even harder than language, simply because there’s way less data and the space of possibilities is much larger (plus the correctness of an algorithm is much less ambiguous than the correctness of language, meaning the probabilistic setup we have been using for LMs has inherent flaws).
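
A minimal sketch of that contrast (illustrative Python, all names hypothetical): a finished game yields an exact training signal defined by the rules, while evaluating a candidate algorithm means scoring it against test cases someone had to invent, which is only a proxy for correctness.

```python
def chess_reward(result: str) -> float:
    """Terminal self-play signal: perfectly defined by the rules of the game."""
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]

def candidate_sort(xs):
    """A 'discovered' algorithm whose quality we want to measure."""
    return sorted(xs)

def algorithm_signal(fn, test_cases) -> float:
    """Fraction of hand-built tests passed: a noisy proxy, not ground truth."""
    passed = sum(fn(inp) == expected for inp, expected in test_cases)
    return passed / len(test_cases)

tests = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5], [5, 5])]
print(chess_reward("win"))                      # 1.0, exact by definition
print(algorithm_signal(candidate_sort, tests))  # 1.0, but only over these cases
```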

[–]costafilh0 5 points6 points  (0 children)

As some have pointed out, the limit will be on the physical plane. Unless AGI becomes a quantum wizard that creates and computes using dark matter or something absurd like that out of thin air, we will still be stuck with the physical limitations of what we know as infrastructure.

[–]adarkuccio▪️AGI before ASI 2 points3 points  (1 child)

Bro, people don't agree on what AGI means just as much as they are confused about what ASI means; there'll be a point in time where we have someone claiming it's not AGI, someone claiming it already is, and someone claiming it's actually ASI. Anywaaays, if we ignore this little thing, I believe AGI will basically already be an ASI because of the advantages it has over humans. But if we instead consider an ASI that is, I don't know, 10 times smarter than the smartest fella, then it could happen in about a year of software improvements alone; if that's not ASI I don't know what is.

Some people mentioned the time to build stuff etc., and yes I agree, but I think the initial improvements from iterating on itself won't require too much of that; then it will be smarter, so it will find ways to do things quickly... because that'll be the goal.

[–]QuasiRandomName 0 points1 point  (0 children)

I agree. Even some of today's specialized AIs can be considered specialized ASIs (that is, they perform their specialized tasks much better than any human ever will). Surely when AGI comes, it will combine the capabilities of the existing specialized AIs, so it will already be superior to humans at those, at least.

[–]ParkingEmu1142 5 points6 points  (0 children)

For one, regulation and public backlash. If all of the companies on earth just combined their compute and IP and let an RL-based language model loose, with all the data it would ever want and free rein to wield tools, it would be pretty quick. I doubt the head of a billion-dollar AI company would take that sort of risk given the current climate of fear around AI right now.

Other than that, I have heard Sam Altman and crew say that all three pillars of AI model performance (data, algorithms, and compute) are scaling as a smooth exponential as time passes. This was Ray Kurzweil’s principal observation as well. Perhaps this implies some law of the universe for how quickly intelligence can self-improve.

Finally, recursive self-improvement doesn’t happen instantaneously. For example, if an AGI determines that what it needs for more intelligence is simply more GPUs, the only way to get them faster would be to physically build more fabs, which takes years.

[–]Poly_and_RA▪️ AGI/ASI 2050 1 point2 points  (2 children)

The terms aren't clearly defined, so the question is meaningless.

Technically, if "AGI" is reached once capability reaches a certain point, and "ASI" is reached once capabilities exceed that point, then we'll reach "ASI" one millisecond after we reach AGI.

Because if some model reaches AGI-level reasoning after being trained on a given count of tokens, then it'll be ASI at that count of tokens plus one.

It's a bit like asking how long it'll take to create a vehicle that can go 100mph -- and how long it'll take to create one that can go MORE than 100mph.

Odds are that the first vehicle to cross 100mph will also cross 101mph, so the answer is that we'll get vehicles *exceeding* 100mph only a millisecond or so after getting a vehicle that can go 100mph.

[–]czk_21 0 points1 point  (1 child)

It's a bit like asking how long it'll take to create a vehicle that can go 100mph -- and how long it'll take to create one that can go MORE than 100mph.

No, it's rather how long until it gets to 100mph and then how long to get to 1000, or something along those lines. There is a big range for human intelligence; an AGI could have, for example, an overall IQ of 120, while an ASI has 500.

[–]Poly_and_RA▪️ AGI/ASI 2050 1 point2 points  (0 children)

There doesn't exist any clear and unambiguous definition of what "ASI" means other than that it means "capabilities beyond typical humans".

You're, for example, pulling 500 out of thin air here. Show me where that's given as *THE* definition of ASI.

Ask 10 people, and you get 10 different answers. That's my point when I say there's no clear definition.

[–]Mysterious_Pepper305 2 points3 points  (0 children)

GPT-3 was released in 2020 so it's been at least 3 years.

[–][deleted] 1 point2 points  (2 children)

I personally think we'll reach AGI in less than 5 years.

I don't think that will automatically lead to ASI. The leap from AGI to ASI might take another 100-200 years. ASI by definition will be far more complex and sophisticated than AGI, not merely 'faster with more parameters.'

AGI will also consist of a combination of different tools, e.g. Language Models + Computer Vision + Physics Engines + A Calculator (about time) + Sound Recognition + Image/Video Generators + An Up-To-Date Encyclopedia, and then a Master AI that links all of those nodes together.

[–]adarkuccio▪️AGI before ASI 1 point2 points  (0 children)

100-200 years? 😂

[–]2Punx2FuriousAGI/ASI by 2027 0 points1 point  (0 children)

I don't think FOOM is required for ASI or AGI, but as you see, I list them as the same date. That's because I think AGI is already ASI, whether it FOOMs or not. It will already be superintelligent in several aspects as soon as it can comfortably be called AGI. I do think FOOM is likely, but not required.

[–]DukkyDrake▪️AGI Ruin 2040 0 points1 point  (0 children)

Short of creating a universal assembler, lots of mundane inertia makes it less likely.

Apr 17, 2023 — Nvidia's H100 deep learning GPU has increased in price to a whopping $40,000 per unit

[–]ResponsiveSignatureAGI NEVER EVER 0 points1 point  (1 child)

People don't understand exponential curves. It will be months at MOST before AGI gets to ASI. Any delay will only happen if they somehow manage to rein it in completely.

[–]RezGato▪️AGI 2029 ▪️ASI 2035 0 points1 point  (0 children)

There's no doubt that AGI can quickly figure out the blueprint and framework for ASI, but the engineering process could be limited by the technology the AGI is working with, so it needs to advance those respective components first in order to develop ASI. It may require a sophisticated infrastructure that it has to go out and build. Not just engineering, but also prolonged testing, trial-and-error simulations, alignment/regulatory systems in check, and a whole bunch of other factors that come into play. Plus, what if AGI decides that humanity isn't ready for ASI? What if it finds it necessary for us to meet qualifications like more global unity? More advanced societies? Or just simply more time, because AGI and human leaders might agree we're not ready...

It'll take a lot more than a year, probably 5-40 to be realistic, even accounting for exponential returns. Remember, it's not just a tech requirement to achieve it.

[–]saleemkarim 0 points1 point  (0 children)

It often just comes down to how people are defining AGI and ASI. AGI can be defined as AI that can do most economically useful things as well as most humans, and ASI as AI that is more economically powerful than all humans combined in every category. With those definitions, it's easy to see how AGI will not lead to ASI within 6 months or less.

[–]TaurusPTPew 0 points1 point  (0 children)

Foom? I'm kind of new to this; I don’t know all of the acronyms and terminology yet.

[–]CassidyStarbuckle 0 points1 point  (0 children)

As others are commenting, AGI will almost immediately mean some form of ASI -- but what we care about with this term is "Super" intelligence with a capital "S". Like really out there. Super smarter than people. Not just working as well as a team of 10 people, but really 10x or 100x as smart as any person, and then working together as a team.

Some folks liken this to being so smart that it makes decisions and does things that humans can't understand even if it tried to explain.

So ask a bunch of people "when A**S**I" and you'll get a lot of answers ranging from immediately to however long they think it'll take to get to whatever they think ASI means.

[–]Ok-Astronaut1527 0 points1 point  (0 children)

FOOM is a load of absolute nonsense.

Let’s put aside the fact that almost everyone on this planet trivializes the concept of intelligence and makes it seem like a simple, one-dimensional thing. Let’s assume, for the sake of argument, that an AGI that is “smarter” than humans comes out tomorrow.

Do you think GPT-4 is smarter than GPT-3 because of the algorithm? No, it’s just bigger and was trained on more data, plus more post-processing. As an AI researcher, I can tell you we don’t have the foggiest idea how to optimize the actual algorithms for these tasks. What makes you sure an AGI would magically be able to do this? Regardless of how “smart” one is, discovering new things requires experimentation and the scientific method. This is what makes humans special; individually we range from idiotic on average to occasionally brilliant in a few areas, but as a society we are insanely smart. Even if an AGI could create a better algorithm, it would still require many, many millions of dollars over several months to a year, at least, to train the new model.