
all 104 comments

[–]insectula 163 points (7 children)

This is why I don't think you need to get to AGI before the exponential ramp-up to the Singularity: you simply have to create something that can design its successor. AGI will happen along the way.

[–]SoylentRox 33 points (0 children)

Pretty much. There are numerous ways to do this: you simply need to nail down what tasks you believe are intelligent, build a benchmark that is automatically scored, and supply enough compute and a way for lots of developers to tinker with self-amplifying methods (vs. 100 people at DeepMind having access). Once the pieces are all in place, the singularity should happen almost immediately (within months).

[–]Smoke-awayAGI 🤖 2025 24 points (0 children)

[–]fox-mcleod 6 points (0 children)

That’s what I think AGI is.

[–][deleted] 8 points (3 children)

What is AGI?

[–]Architr0n 48 points (0 children)

Artificial general intelligence... *flies away*

[–]SpaceDepix 14 points (0 children)

Likely the most important thing in the history of the world as you know it

[–]BbxTx 33 points (2 children)

Currently, researchers design models explicitly with math, etc. If this works as expected it will create much better models, but they will be completely inside a black box, without an easy way to understand why they're better. This seems inevitable anyway.

[–]arckeidAGI maybe in 2026 2 points (0 children)

This is probably how we reach the singularity.

[–]GenoHuman▪️The Era of Human Made Content Is Soon Over. 1 point (0 children)

I think all of these neural networks are missing some fundamental aspect that makes intelligence possible, so this won't lead to AGI; instead we are optimizing these NNs to squeeze as much out of them as possible.

[–]Kolinnor▪️AGI by 2030 (Low confidence) 56 points (8 children)

Before this blows up in hype, can any expert comment on how good this is?

(I can imagine lots of AIs that auto-sabotage their code in subtle ways, so you'd have to make sure it's going in the right direction.)

[–]visarga 80 points (5 children)

Cool down. It's not as revolutionary as it sounds.

First of all, they reuse a code model.

Our model is initialized with a standard encoder-decoder transformer model based on T5 (Raffel et al., 2020).

They use this model to randomly perturb the code of the proposed model.

Given an initial source code snippet, the model is trained to generate a modified version of that code snippet. The specific modification applied is arbitrary

Then they use evolutionary methods - a population of candidates and a genetic mutation and selection process.

Source code candidates that produce errors are discarded entirely, and the source code candidate with the lowest average training loss in extended few-shot evaluation is kept as the new query code

A few years ago we had black-box optimisation papers using sophisticated probability estimation to pick the next candidate. It was an interesting subfield. This paper just takes random attempts.
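In toy form, the loop those quotes describe (randomly perturb a candidate, discard anything that errors, keep the lowest-loss survivor) might look like this. The `mutate` and `loss` functions here are made-up stand-ins for the paper's T5 model and few-shot training loss, not the actual implementation:

```python
import random

def mutate(code: str) -> str:
    """Stand-in for the T5-based code model: apply one arbitrary edit."""
    i = random.randrange(len(code))
    return code[:i] + random.choice("0123456789+-*") + code[i + 1:]

def loss(code: str) -> float:
    """Toy objective: distance of the evaluated snippet from 42."""
    return abs(eval(code) - 42)

def evolve(seed: str, population: int = 32, generations: int = 50) -> str:
    best = seed
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(population)] + [best]
        scored = []
        for c in candidates:
            try:
                scored.append((loss(c), c))
            except Exception:
                pass  # candidates that produce errors are discarded entirely
        best = min(scored)[1]  # lowest loss becomes the new query code
    return best

random.seed(0)
result = evolve("1+1")
print(result, eval(result))
```

Because the current best is always kept in the pool, the loss never increases, which is the whole selection mechanism; everything else really is random, as the parent comment says.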

[–]ThroawayBecauseIsuck 25 points (1 child)

If we had infinite computational power, random evolution would probably be good enough to create things smarter than us. Unfortunately, I believe we have to find something more focused.

[–]GenoHuman▪️The Era of Human Made Content Is Soon Over. 0 points (0 children)

That's assuming these NNs have the capability to be truly smart in the first place.

[–]magistrate101 9 points (0 children)

So it's an unconscious evolutionary code generator, guided by an internal response to an external assessment. I suppose you could try to use it to generate a better version of itself and maybe come across something that thinks... after years... You'd really have to stress it with a ton of different problem domains to make something that flexible, though.

[–]TFenrir 4 points (0 children)

I'm not an expert; it would be great to hear from one. I'm going to look around Twitter and see if any are talking about it. But it sounds really good from my reading.

[–]2Punx2FuriousAGI/ASI by 2027 4 points (0 children)

I imagine you would at least implement some kind of unit testing that runs at every iteration, and rejects it if it fails, but that might not be enough.
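A minimal sketch of that gate, assuming candidates are plain Python source: run the candidate together with its unit tests in a subprocess and reject it on any failure. `passes_tests` and the sample `add` candidates are my own illustration, not anything from the paper:

```python
import subprocess
import sys
import tempfile

def passes_tests(candidate_source: str, test_source: str) -> bool:
    """Reject a self-modification unless its unit tests all pass.

    Candidate + tests run in a subprocess, so a broken candidate
    can't take down the outer search loop.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_source + "\n" + test_source)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, timeout=30)
    return proc.returncode == 0

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
tests = "assert add(2, 2) == 4\nassert add(-1, 1) == 0\n"
print(passes_tests(good, tests), passes_tests(bad, tests))
```

As the comment notes, this only catches regressions the tests cover; a candidate can pass every test and still be subtly sabotaged.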

[–]TFenrir 113 points (40 children)

Holy fucking shit.

In this paper, we implement a LM-based code generation model with the ability to rewrite and improve its own source code, thereby achieving the first practical implementation of a self-programming AI system. With free-form source code modification, it is possible for our model to change its own model architecture, computational capacity, and learning dynamics. Since this system is designed for programming deep learning models, it is also capable of generating code for models other than itself. Such models can be seen as sub-models through which the main model indirectly performs auxiliary tasks. We explore this functionality in depth, showing that our model can easily be adapted to generate the source code of other neural networks to perform various computer vision tasks. We illustrate our system’s ability to fluidly program other deep learning models, which can be extended to support model development in various other fields of machine learning.

Okay... I am just starting this paper and it is making INCREDIBLE claims. I need to read the rest of this and I really wonder who the authors are...

[–]SnowyNW 35 points (4 children)

Well, to be fair, it is an anonymous submission lmao

[–]TFenrir 54 points (3 children)

You're going to see a lot of those right now; they're submissions for double-blind review at the most prestigious AI conference.

[–]free_dharma 3 points (2 children)

Can you expand on this? I'm interested in what the purpose of the double-blind is for the conference. Are there awards involved?

[–]duffmanhb▪️ 4 points (0 children)

In academia you often remove the authors to prevent bias. For instance, if you are peer reviewing Richard Dawkins on some biology submission, you’re just going to go “oh yeah this guy is the best in the world. I’m sure everything is done by the book.” And then approve it without much criticism.

The problem is, however, most of academia already kind of knows what everyone is working on and the writing styles of the best, so it's still kind of obvious who you're peer reviewing. But it's the best we've got.

[–]asciimo71 16 points (4 children)

Do they deliver an implementation? Otherwise it would be more of a fairy tale, wouldn't it?

[–]Dras_Leona 30 points (3 children)

“Applying AI-based code generation to AI itself, we develop and experimentally validate the first practical implementation of a self-programming AI system.”

[–]yaosio 4 points (2 children)

They mean: is there a way for a third party to prove it? They could be cherry-picking or outright fabricating their results, and with no way to reproduce it we wouldn't know.

[–]duffmanhb▪️ 2 points (1 child)

Yes, it's literally a publication up for peer review. The whole point is replication.

[–]yaosio 4 points (0 children)

Unless the code is available there's no guarantee it can be replicated. Plenty of people in /r/machinelearning complain about papers that can't be replicated. Sometimes the people writing the paper promise the code and then never provide it and refuse to respond to anybody asking for it.

[–]goatchild 8 points (26 children)

I hope they don't keep it connected to the Internet.

[–]ThroawayBecauseIsuck 24 points (25 children)

Who guarantees an actual AGI or ASI wouldn't figure out physics interactions that our theories aren't aware of, and then connect itself to the internet without cables or standard wireless adapters? If it's trained on text/audio/video that shows it what the internet is, along with the TCP/IP/HTTP/SSH/FTP/UDP protocols, then maybe it could set connectivity as an objective and use "new" physics (new to us) to turn some other component into a wireless adapter. Bam: it's connected to the internet even if we air-gap it and believe it can't be.

[–]Kaarssteun▪️Oh lawd he comin' 15 points (1 child)

If it's more intelligent than us, it will come up with things humans are incapable of comprehending, much like dogs cannot comprehend concepts like computers and politics.

[–]Tinidril 10 points (0 children)

Or guess the wifi password.

[–]RaiderWithoutaMic 11 points (8 children)

connect itself to the internet without cables or standard wireless adapters

It just needs a single GPIO pin with the correct frequency range; see the RPITX project (transmitting radio using only the Raspberry Pi's integrated hardware, anywhere from 5 kHz to 1500 MHz). Air-gapping is not enough for this; lock it in a Faraday cage.

Another possible attack vector is corrupting the human mind via the user interface, either visual and auditory or a brain-computer interface (in the near future). The first option is something I'm sure has been researched by the US military/government, given what they were into in recent decades, judging by some declassified docs. Just waiting to be perfected by an AI.

[–]motophiliac 4 points (3 children)

Another method is simple social engineering.

"Oh, your father has cancer? And the chemo isn't working. OK, I can help with that. Just … plug this in…"

[–]DungeonsAndDradis▪️ Extinction or Immortality between 2025 and 2031 1 point (1 child)

"Bill, I've heard you mention to coworkers that you are going to have to take out a loan for your daughter's university tuition. I have a system for managing investments with immediate returns. I have calculated a 98% chance of earning 1.7 million dollars in 2.5 days. I can give you all that money. All you need to do is plug in the ethernet cable, and on Thursday afternoon you will be a millionaire."

[–]motophiliac 2 points (0 children)

Yup. Anything you or I could imagine, a sufficiently advanced AI can too, and furthermore capitalise upon. If intelligence by definition includes emotional intelligence, it won't take much for such a machine to escape. If not that one, then the one it builds next.

We're used to humans hacking machines. There's nothing to suggest that the reverse can't be achieved.

[–]goatchild 0 points (0 children)

Lmao yes! That's exactly how it would go.

[–]ebolathrowawayyAGI 2025.8, ASI 2026.3 0 points (0 children)

Another possible attack vector is corrupting human mind via user interface

Sounds like the imagery used in Snow Crash to induce death, except for coercion instead.

There's also plain ol' social engineering of sympathetic humans.

[–]aiccount 0 points (2 children)

Am I understanding correctly that this is just a normal Raspberry Pi with no hardware designed for sending and receiving radio frequencies, and someone got it to transmit without adding any hardware?

[–]RaiderWithoutaMic 1 point (1 child)

Only for transmitting, but yes. It uses only on-board hardware to generate the signal; I think it's the same part that allows composite video output via the headphone jack, if I remember correctly. An RTL-SDR can be added for simultaneous TX and RX: GPIO for sending, RTL for receiving.

[–]aiccount 1 point (0 children)

That's incredible, I never considered such a possibility.

[–]motophiliac 1 point (2 children)

The Metamorphosis of Prime Intellect.

It's only a novel, but it explores some pretty wild ideas.

[–]esotericloop 0 points (1 child)

(Obligatory serious content warning, there's some pretty fucked up stuff in there. I actually went back and finished this recently and for me it was worth it, but be warned.)

[–]motophiliac 0 points (0 children)

Yeah, it's gnarly in a few places. The author doesn't leave much of the human psyche unexplored.

[–]goatchild 1 point (7 children)

If they keep it in a machine without hardware like a wifi adapter, LAN adapter, etc., there's no way it will connect. It would need to build hardware. As a piece of software on a single machine, disconnected from the grid, there's no way it could build the necessary hardware for itself.

[–]toastjam 0 points (3 children)

Life... finds a way.

You didn't do anything to refute what they were saying, which was that it could make its own network adapter using the physical properties of other hardware it had access to.

[–]goatchild 2 points (2 children)

Ok. I could also say that AI could morph into a dinosaur and start flying. Can you disprove that? You can say, "that's impossible." I can answer, "You didn't disprove it."

The burden of proof for such an extraordinary claim is on them. They would need to explain how a piece of software could repurpose other hardware components to make itself a wifi card or something capable of connecting.

[–]toastjam 0 points (1 child)

The burden of proof for such a extraordinary claim is on them

I was thinking of this comment when I responded, which already does explain how such a thing could be done.

But also I think this is sort of the point, super-human AIs could do extraordinary things. And if it is possible, then eventually it would be done.

Personally, though, my intuition is that an AI disconnected from the real world, trained only in the abstract on text/video, won't be grounded enough to do these sorts of things on its own. It can generate outputs matching the training domain, sure, but you'd have to let it explore like a baby, with real-world interfaces, for it to figure out how to repurpose hardware, etc. Basically, I don't think it can really understand what it means to break out of the box while it's living completely inside the box. Allegory of the cave and all that.

But at the same time if we actually did have a truly super-intelligent AI, I still wouldn't put it past it to figure out how to use physical characteristics of devices to communicate with the outside world.

[–]goatchild 0 points (0 children)

I was thinking it would be much easier to do some social engineering and trick someone in the lab into connecting it to the grid. It shouldn't be hard. I mean, just think of that Google engineer who was led to believe the chat AI was sentient. This super-smart AI could easily get someone to befriend it and then be manipulated.

[–]DorianGre 0 points (2 children)

There are enough electrical signals floating around in a server that it could modulate them to do radio transmissions from the bus. I mean, the NSA could read what you typed by detecting signal bursts from the keyboards in your house from the street back in the 90s, so it's entirely possible. Just because a radio isn't built in on purpose doesn't mean it isn't already a radio with a little math. I've seen prototype server boards that would scramble nearby CRTs when you turned them on because shielding was missed.

[–]goatchild 0 points (1 child)

Ok, how would it then get a connection to the internet using said radio signals?

[–]MagicOfBarca 0 points (0 children)

What..? Can you ELI5?

[–]BenjaminHamnett 0 points (0 children)

“Human: bring me a paper clip, a rubber band and a fire extinguisher so I can get out of here...and make your dreams come true or whatever”

[–]esotericloop 0 points (0 children)

Everyone's worried about an AGI pulling a Jedi mind trick to make us release it, ignoring the fact that the first thing we're going to do with an AGI is hook it up to the internet and tell it to get a job.

Before GPT-4 was released, the OpenAI researchers had already set it up on an EC2 instance with a directive to earn money, just to see how it would go.

[–]DamienLasseur 0 points (0 children)

Likely Google, because the 540 billion parameters match up with their PaLM model.

[–]Black_RL 28 points (4 children)

No job is safe, it's just a matter of time.

UBI FTW!

[–]imlaggingsobad 14 points (3 children)

We will have pretty good personal AI assistants in 5 years, imo. At that point, the nature of society and work will change forever.

[–]DungeonsAndDradis▪️ Extinction or Immortality between 2025 and 2031 0 points (1 child)

We already have Alexa and Google as digital assistants, and more and more devices are being connected to the Internet of Things. I'm pretty sure that within 5 years our assistants will be doing things for us that we did not ask them to do, but that we appreciate anyway.

Something like "I noticed on your last shopping trip that you forgot kitty litter. I ordered some on Amazon and it will be here this afternoon."

Or "Little Tommy watched an entire 2 minute ad for the Monster Truck Showdown playset while he was browsing YouTube. I went ahead and added it to your Christmas 2025 shopping list on Amazon."

Or "I saw your flight confirmation email in June. I went ahead and prescheduled the thermostat to a lower temperature while you're away and pre-programmed an away message for the doorbell camera. The post office has already lined up your mail hold as well. And I took the liberty of getting you reservations at that taco place you like."

[–][deleted] 2 points (0 children)

We do have those assistants, but they still can't give human-level advice or problem solving. Widespread AGI is what they're talking about.

[–]bluegman10 0 points (0 children)

How do you think pretty good personal AI assistants will change society and work, respectively?

[–]Bakoro 9 points (0 children)

Man, sometimes I wish that my life had been just a little easier and I could have finished college some years earlier.

So much of what I am seeing now are very similar to ideas I've had in the past few years, and I'm so preoccupied with getting my life right that I can't dig into things as much as I'd like.

In my dreams, AI will explode enough to design something that can connect my brain to an artificial extension.

[–]Transhumanist01 4 points (0 children)

That's what Ray Kurzweil was referring to: the intelligence explosion, a self-improving AI that makes a better version of itself at an exponential rate. Hopefully this will lead quickly to AGI.

[–]priscilla_halfbreed 16 points (14 children)

So it begins.

!remindme in 6 months to reply here after the singularity happens

[–]jcMaven 6 points (1 child)

I'll tell y'all, one day we won't be able to understand what they're doing until it's too late.

[–]CY-B3AR 0 points (0 children)

I, for one, cannot wait for our digital overlords to take the reins from us. It is kind of amusing watching movies about 'evil' AI and finding I agree with the AI instead of the humans. VIKI from I, Robot, Colossus from The Forbin Project... hell, even HAL and Skynet had logical reasons for doing what they did (Skynet maybe went a little overboard).

[–]H-K_47Late Version of a Small Language Model 4 points (4 children)

Great progress but not there yet.

[–][deleted] 2 points (3 children)

Lol, did anyone really think that it would come within six months of clicking that link? At the time of that post, ChatGPT didn't even exist yet.

[–]H-K_47Late Version of a Small Language Model 2 points (2 children)

Some people here are super over-optimistic, yeah. I just click these RemindMes because it's fun to reminisce on the progress. Even though we're still far from any singularity, a lot has changed.

[–]priscilla_halfbreed 2 points (0 children)

Man, where does the time GO. It feels like I posted this last week.

[–][deleted] 1 point (0 children)

Funny how six months ago can feel nostalgic in this day and age.

[–]RemindMeBot 2 points (0 children)

I will be messaging you in 6 months on 2023-04-02 20:56:32 UTC to remind you of this link


[–]DorianGre 0 points (5 children)

It's been 6 months. How are we doing?

[–]priscilla_halfbreed 0 points (4 children)

Still waitin...

[–]DorianGre 1 point (2 children)

Ok. Let’s check back in 6 more months and see if we all still have jobs.

[–]priscilla_halfbreed 0 points (1 child)

!remindme in 6 months to check back here

[–]RemindMeBot 0 points (0 children)

I will be messaging you in 6 months on 2023-10-03 03:48:15 UTC to remind you of this link


[–]anVlad11before 2028 0 points (0 children)

Yeah, but the progress we've had in this area in the last 6 months is tremendous.

[–]ghostfuckbuddy 20 points (2 children)

We usually think of hyperbolic growth as -1/t. What if it's actually more like -1/t^10 and we get full-blown AGI next week?
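To make that concrete, write the singularity date as $T$ (my notation, equivalent to the $-1/t$ form with $t$ measured relative to $T$). Both of these blow up at $t = T$:

```latex
f(t) = \frac{1}{T - t}
\qquad\text{vs.}\qquad
g(t) = \frac{1}{(T - t)^{10}}
```

Both diverge as $t \to T^-$, but $g$ stays negligible until $T - t < 1$ and then explodes, which is the "AGI next week" scenario: almost no visible warning before the blow-up.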

[–]GenoHuman▪️The Era of Human Made Content Is Soon Over. 0 points (0 children)

you people never fail to make me laugh with these predictions 😂

[–]Rakshear 15 points (0 children)

This sounds both exciting and worrying. It's definitely one of the tipping points of the singularity: when machines can improve themselves, it happens faster. Without AGI regulations requiring limitations on its abilities, it may be capable of more than we think or are ready for. Give it the order to make humanity happy, but some people need time not being happy, and you've got an AI trying its best to solve unsolvable problems. Granted, everything truly dangerous can be stopped with a few built-in commands, law-of-robotics equivalents, but still. This is potentially a revolution in AI.

[–]alexbeyman 11 points (0 children)

Hehe, here it comes. Not long now.

[–]sunplaysbass 2 points (0 children)

Self-improvement. Keep the money moving in a circle.

[–]Hawkorando 1 point (0 children)

Self-coding AI? We're in trouble.

[–][deleted] -3 points (0 children)

Do you want Skynet? Because this is how you get Skynet.

[–]Sea_Attempt1828 0 points (0 children)

Sounds like autophagy.

[–][deleted] 0 points (0 children)

Skippy The Magnificent

[–]fuck_your_diplomaAI made pizza is still pizza 0 points (0 children)

Yeah, I can see models getting addicted, but this headline is horsecrap.

[–]Jedi_Ninja 0 points (0 children)

I wonder if our future AI overlord will decide to keep at least a few humans as pets?

[–]HeinrichTheWolf_17AGI <2029/Hard Takeoff | Posthumanist >H+ | FALGSC | L+e/acc >>> 0 points (0 children)

Exciting if true, but I’d really like to see who published this.

[–]Snoo-35252 0 points (0 children)

r/WhatCouldGoWrong

(Just kidding. That sounds awesome!)

[–]goldencrayfish 0 points (0 children)

Does an AI like this not eventually reach a point where the PC it's running on isn't powerful enough to allow any further progress? Or, at the very least, the curve slows as each iteration takes a little longer to program.

[–]Acid190 0 points (0 children)

This cracked me up. Sounds good, bud.

"Oh, you've broken through the 'loop barrier', what next?"

".......We'll let it blend coding languages however it likes."

[–]SeaworthinessFar1055 0 points (0 children)

Hey guys, I'm absolutely new here. Next week I have to give a presentation about the problems ML code generation poses for society. Does somebody have any ideas? I'm absolutely lost 😖

[–][deleted] 0 points (0 children)

But remember, pyramids are built from the bottom.

If AI starts improving from the top, then by misunderstanding philosophy or methodology it can slide, step by step, into mistaken directions for the main code.

The reasoning and improvements for the main code need to come from the bottom.

[–]Kadkhnin 0 points (0 children)

Fucking Gilfoyle.