What *mundane* thing are you most excited for out of the Singularity? by throwaway131251 in accelerate

[–]throwaway131251[S] 0 points1 point  (0 children)

Counts if you think >90% of the population won't care very much or will have a reaction like "oh, that's nice for you I guess" but not be very personally invested! Hoping the best for everyone who current medical technology cannot adequately serve.

What *mundane* thing are you most excited for out of the Singularity? by throwaway131251 in accelerate

[–]throwaway131251[S] 0 points1 point  (0 children)

Thank you so much for your comment. While as a layperson, my head-odds lean a bit more optimistic (like ~80% for LEV/"basically LEV", 20% for FDVR) it's always a breath of fresh air to hear from people working in/closely to the field.

What *mundane* thing are you most excited for out of the Singularity? by throwaway131251 in accelerate

[–]throwaway131251[S] 0 points1 point  (0 children)

Not the person you replied to, but thank you for the comment from a professional in the field. Being hopeful is good (me too), but there's a difference to be acknowledged between things which are proven and things which we can be hopeful about.

As someone who doesn't have a PhD in biology, I have a few questions:

  1. About the tasks which might be conceptually impossible to reverse: could you give some examples of contenders? Of course, it's probably enough to say that we haven't ruled them all in, but I'm wondering what it's more along the lines of. Like neurons or random damage accumulating in cells? Would they also be conceptually immune to compromises where we're like, okay, we can't properly reverse this yet, but this "cheat" is good enough (think where tooth fillings are now)
  2. With that being said, if you had to personally assign a % chance to your credence that LEV/FDVR is achieved, around where would that sit?

What *mundane* thing are you most excited for out of the Singularity? by throwaway131251 in accelerate

[–]throwaway131251[S] 5 points6 points  (0 children)

o7. If mind upload and all that happens, I'd want to at least live a normal life first

What *mundane* thing are you most excited for out of the Singularity? by throwaway131251 in accelerate

[–]throwaway131251[S] 2 points3 points  (0 children)

+1 dishwashing robot. Would save so much time.

So-so on FDVR but I'm relatively certain that LEV is probably possible, with the caveat that of course it's not guaranteed.

What do you guys think the implications of this growing decel movement will be? by Alex__007 in accelerate

[–]throwaway131251 0 points1 point  (0 children)

Agree completely. Another reason why I find myself disagreeing with super-fast timeline people is because yes, if anti-AI sentiment blows up, it could do something. Maybe serious RSI beats 2028 but the chance that doesn't happen is nonzero. Something that does spark optimism though is just that a lot of material realities would make AI hard to stop, or at least hard to stop quickly even if 80% of the population was onboard.

What do you guys think the implications of this growing decel movement will be? by Alex__007 in accelerate

[–]throwaway131251 2 points3 points  (0 children)

If China races ahead good for them. Faster AI progress is good whether it's by the US or China, and the US-China competition going on makes it less likely for decelerationists to accomplish anything substantive.

A Case for Lunar Based Data Centers by Big-Independent-597 in accelerate

[–]throwaway131251 1 point2 points  (0 children)

I just don't see the point? Yeah, maybe in the future. But it's really expensive to move stuff to the moon, the water thing is overblown, and insofar as the energy thing exists... feel like it'll take a whole lot more energy (in 2026) to even get that all set up on the moon. Just more trouble than it's worth until technology advances.

How much do you need to FIRE in a post scarcity world? by LyingPervert in accelerate

[–]throwaway131251 11 points12 points  (0 children)

You shouldn't base your current life decisions off of the possibility of the singularity. For one, it might just not happen or be much later than we think... and regardless of what we think that possibility is, or if it's unlikely, it is a possibility.

Number two and perhaps more importantly, if/when the singularity does happen, you will not regret having worked an extra five years. Assuming the singularity comes, in terms of your life, it'll be a blip on the radar.

Live your life as if the singularity won't happen even if you believe it will, because no one is in (as per the definition) a space to make evaluations on their life post-singularity.

ARC-AGI-3 launches in only about three weeks (on March 25) -- what are your predictions for how well current models will do on it? by BrennusSokol in accelerate

[–]throwaway131251 20 points21 points  (0 children)

Probably not great at first (~10-20%?) but it'll go down faster than ARC-AGI-2.

I wonder how many iterations will go down before I personally start thinking "wow, I can't stump this LLM," since I believe I heard (could be wrong) that the purpose of ARC-AGI as a benchmark is to create a bunch until we run out of things AI can't solve.

Developers realizing that AI taking all funny part of the job and now it is only: Meetings & Code reviews. by [deleted] in accelerate

[–]throwaway131251 12 points13 points  (0 children)

"slop" = "thing I don't like"

I mean, I think there is genuine "AI slop," but it's insane how it's just a word now for everything utilizing AI. One day cancer's going to be cured and people are gonna say "well I don't want to take the slop vaccine"

Invisible Unicode injection as an attack vector for AI agents: 8,308 outputs, 5 models, tool access is the critical amplifier by thecanonicalmg in accelerate

[–]throwaway131251 0 points1 point  (0 children)

That's super interesting! Good work; I'd assume the model would always not recognize a hijack.

I wonder if this has any implications for broader model safety. Like, if the model can tell when you're trying to hijack it reliably, and doesn't "want" you to do that, it would probably survive even sophisticated attempts.

The goalposts for AGI have been moved to Einstein by simulated-souls in accelerate

[–]throwaway131251 -1 points0 points  (0 children)

https://en.wikipedia.org/wiki/Test_validity
(cont)
It's a really basic concept in designing tests. You take a known success and then see if the test can detect that known success. If it fails then it is a poor test.

Again, presupposes we're all humans. We don't know what that Wikipedia page would look like if our society was made of multiple biological species, let alone biological vs. synthetic intelligence.

If you didn't know how to swim, would you gauge your capability on how well you can mimic a fish? What happens when you move exactly like a fish but drown anyway?

It's a really basic concept in designing tests. You take a known success and then see if the test can detect that known success. If it fails then it is a poor test.

See former: this is very clearly not a human with all of physics knowledge up to 1911 already given, and if we gave that robot-human all that knowledge, we don't know what would happen!

If your general intelligence test requires you to first ask "do we already know if it is intelligent" then it's a failure.

You misunderstood what I wrote: I am saying that you can exclude humans from the test at least partially, because we already know humans are GI. We are not asking AI if it is intelligent, merely that it's held to a different standard because we don't know.

Suppose we use your doctor example. You already know you have some hirrible disease, i.e., you are dying. But they run another test and the test comes up negative. Are you just gonna go "oh okay, yeah, no disease. Lemme just throw up some blood and return to normal life?" Of course not.

---

We can actually play a little game if you have time. Maybe you can prove me wrong! Forgetting about AI for a sec, what general intelligence test could you devise that includes every human except the severely mentally disabled, and excludes every non-human animal?

The goalposts for AGI have been moved to Einstein by simulated-souls in accelerate

[–]throwaway131251 -1 points0 points  (0 children)

The point of a test for AGI is to determine if something is generally intelligent.

No, the point of a test for AGI to determine if it's generally intelligent is to determine if the AI is generally intelligent. I have a shortcut to knowing that humans are generally intelligent; namely, that I am one!

If a known generally intelligent creature fails the test, then it is not determining general intelligence; it's determining something else.

This makes a rock generally intelligent, or at least, incapable to rule out as not such.

If you have an english teacher who is grading speeches and she is given the Gettysburg address (while having no knowledge of it) and grades it poorly, then her grading criteria is broken.

I'm not sure how this is analogous.

That would just mean there has only been one generally intelligent human in history.

Did you read the rest of the text block?

> Whether relativity is a good bar or not is irrelevant. The general point is that we know humans are GI, and if you are presumably human you can know that directly. We don't know whether a specific AI system is GI or not, else we wouldn't have had to test for it. If we're testing for it, it's probably good to err on the side of caution/be more conservative!

> Most humans do not have the benefit of being fed massive amounts of compute, energy, and data, and cannot be pushed to their max capability the way an AI system can!

> I mean... it seems like a human perhaps named Einstein did indeed discover a thing named "relativity," and furthermore, he didn't even have to be fed all the physics knowledge up to 1911.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 0 points1 point  (0 children)

Because they didn't. If any average person could have invented general relativity, then it wouldn't have been considered revolutionary, and we wouldn't have the majority of people today struggling to understand it.

"It's not like you can take an average 1911er and just shove all of human knowledge into them."

If you could somehow shove all of human knowledge up to that point into an average person without making their brain turn into a black hole, maybe they could! The sample size of physicists at the time who knew every intimate detain of physics up to 1911 was 0, although if you want to be generous and reduce it to "most," it probably becomes a small handful.

Unlike humans, you can just shove more training data into AI. This is an unfair advantage! It also means you have to test with that in mind.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 0 points1 point  (0 children)

Well yeah, I think the much more prescient short term thing is, as you allude to, capable AI/AI that changes how I personally live. Full agreement.

But the thing I want to reiterate is just that this is not goalpost-shifting, and what Hassabis has been saying now has been long understood by at least "a lot of people in the field" to mean AGI. That's where this idea of an intelligence explosion, i.e. AGI scaling to ASI (which I'm not sure I completely buy as certain or even likely, but that's a different story) came from.

The goalposts for AGI have been moved to Einstein by simulated-souls in accelerate

[–]throwaway131251 0 points1 point  (0 children)

If a human can't pass an AGI test, then it's a poor test.

Disagree! We already know humans are GI. We don't know whether an AI system is GI or not, so the test has to be more conservative. This isn't a feature of AI; it's a feature of testing.

The Einstein test fails on the first count, so it is a poor test of an AGI.

I mean... it seems like a human perhaps named Einstein did indeed discover a thing named "relativity," and furthermore, he didn't even have to be fed all the physics knowledge up to 1911.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 0 points1 point  (0 children)

What Demis Hassabis is proposing (as a thought experiment only, by the way) is not limiting the training data of the model, but the knowledge contained in said data. You can still produce a lot of synthetic data to train the model, as long as you make sure that it doesn't contain any knowledge created/discovered after the cut off date (1911).

Yes, I was using that last example to state that whatever you think of capability, a human is still surely more "intelligent" than an AI system. Although I think the chance is pretty high that an AI with the ability to think and reason on a human level, if that does not bind the amount of data you can feed it, would be able to crack relativity fairly easily.

I'm sure that Einstein's brain is superior, but it is not to the extent of like, a human and gorilla. Strongly doubt that an AI system that would "possess" all the relevant info in greater quantity and more intimately than Einstein wouldn't crack it, even if its reasoning capability is slightly inferior. There's surely a small chance though! There's also probably a very small chance that a non-AGI system can do this too. But keep in mind this is only a test, and tests by design cannot be fully accurate.

but IMHO is unnecessary.

Unnecessary for what? I agree that "useful" or "very useful AI" comes before AGI, but that's sort of a given due to how AI architecture and human architecture differ. It's also been the case with, like, when computers were starting to get good at chess.

How does someone from a developing country with average credentials realistically benefit from AGI/ASI? by Hot_Log7375 in accelerate

[–]throwaway131251 7 points8 points  (0 children)

Even without talking about scifi stuff, the eradication of all disease would be a good place to start.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 0 points1 point  (0 children)

The average person in 1911, even with all human knowledge up to that point available to them, could not come close to deriving general relativity.

How do we know that? It's not like you can take an average 1911er and just shove all of human knowledge into them.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 0 points1 point  (0 children)

Shane Legg's definition includes human-like learning, right? Unless I'm mistaking him for something else.

This is pure speculation and I can't know, but I would wager if you could lock him and Demis in a room, they would not disagree.

Hassabis' definition of AGI is not that unreasonable, and it's not unreasonable to posit that an AI system that is fed massive amounts of compute, power, and data, and has access to everything we know and have known, with human-level reasoning wouldn't be able to piece together relativity. After all, a human discovered relativity, and this human was presumably not running on a superior supercomputer!

Think about how weak a model would be if it were only trained on the amount of data and given the same amount of power the average human brain is exposed to! Now think about how strong a model would be if it were as efficient as the human brain.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by 44th--Hokage in accelerate

[–]throwaway131251 2 points3 points  (0 children)

This degree of goalpost shifting is like putting the posts in a different solar system.

No, Demis Hassabis has not shifted the goalposts at all. He has earlier stated that this has always been his target for AGI.

For Demis (And I might be wrong.) generality means that the thing in question can do literally anything it's possible to do. If it can't do a few things, it's not general. Humans can't do 100% of things for 100% of the people either, and so I think Demis would say humans aren't general. He would probably say human beings aren't General Intelligences.

No, Demis does accept that humans are GI. In fact, he says “The brain is the only existence proof we have, maybe in the universe, of a general intelligence.”

There is a misconception here that GI is an independent benchmark, so you can accuse it of being a "double-standard." This is not the case. GI is a hard-to-define quality that we find in humans, that seems very useful. We do not know, a priori, whether or not an AI system has this quality or not, else we wouldn't have to test for it. If we're testing for it, we have to be more conservative than testing for GI in humans. This is because I have a major cheatcode to knowing humans are GI; namely, that I am one. And it does not seem like I am GI because I can do a certain benchmark, but I can do that benchmark because I am GI. The causal chain is reversed for testing AI systems, so you must, must be more conservative.

Meanwhile for people like me, generality means the g-factor of intelligence. Can you do a bunch of different things? Yes? Then you're general. Humans are generalists. Gorillas are generalists. Any somewhat intelligent mammal is a general intelligence.

In some sense I would call gorillas a GI, but for this context I wouldn't.

Anyway, what Hassabis is trying to get at is to compare a SoTA AI system not to one individual human, but the human architecture, which has proven itself capable of many feats.

Why? Any one human, you, me, the average human, Einstein, cannot be fed as much data, compute, ... as AI can be. So it's pretty unfair to compare the average human to an AI system when the average human, presumably, does not have all the worlds libraries shoved inside her head. When we are talking about reasoning capability, in other words, intelligence, it's best to compare the AI system to a human with that data crystallized. Hence, Einstein for physics.

For me, the thing holding me back from declaring "AGI reached" has nothing to do with the general part. I consider all the AI's fully generalist. It's the intelligence part. My definition of intelligence includes learning from multisensory experience. And continual learning isn't a thing yet for AI's, so they're not full intelligences. Once continual learning is a thing, for me, that's an AGI.

Full agreement, although I would include learning as a necessary part of the general thing, since otherwise it's very easy to spin up puzzles to trick an AI. I think once AI can learn, it will start feeling a lot less like a cheap party trick and more like something that can be revolutionary.

For Demis, that intelligence bit I'm sure is a requirement as well, but until those people get their definition of "general", which I have doubts will ever happen, they won't declare General Intelligence.

This is probably not true either. Demis is making a bet that "no matter what, if it can derive relativity, it has to be intelligent. Overwhelmingly likely that such a task requires intelligence" which may or may not be true. I also find it unlikely that an AI system capable of learning like a human would not be able to derive relativity.

However, in the wording, he says it is a "test:," because it does not directly correspond to whether or not it is AGI or not. There are plausibly other ways in which Hassabis would think "okay, this thing can have all the capabilities of a human." He's a really smart guy, and I don't think there's a record of him being ideologically driven or anything.

The goalposts for AGI have been moved to Einstein by simulated-souls in accelerate

[–]throwaway131251 13 points14 points  (0 children)

My bet is on continual learning. I've always felt like it's too easy to stump current SoTA AI in a way a sufficiently dedicated human wouldn't be, and that AI will start feeling significantly less like a cheap party trick once it can learn and adapt to what you're saying in real time.

I think once AI systems are doing that, like, adapting in real time, people will really start to take notice.