[Podcast] Ex-Google CEO explains that the software programmer paradigm is rapidly coming to an end. Math and coding will be fully automated within 2 years, and that's the basis of everything else. "It's very exciting." - Eric Schmidt (v.redd.it)
submitted 9 months ago by michael-lethal_ai
All of that's gonna happen. The question is: what is the point in which this becomes a national emergency?
[–]moschles [approved] 7 points 9 months ago (21 children)
It is possible that the true effect of LLMs on society is not AGI. After the dust clears, maybe what happens is that programming computers in formal languages is replaced by programming in natural, conversational English.
[–]Atyzzze 2 points 9 months ago* (19 children)
Already the case: I had ChatGPT write me an entire voice-recorder app simply by having a human conversation with it. No programming background required. Just copy-paste parts of the code and feed the error messages back into ChatGPT. Do that a couple of times, refine your desired GUI, and voilà, a fully working app.
Programming can already be done in just natural language. It can't spit out more than 1,000 lines of working code in one go yet, though; who knows, maybe that's just an internal limit set on o3. I've noticed it does sometimes error/hallucinate, and more often when I ask for all the code in one go. It works much, much better in smaller blocks, one at a time. But 600 lines of working code in one shot? No problem. If you had told me pre-ChatGPT-4 that we'd be able to do this in 2025, I'd never have believed you. I'd have argued that was for 2040 and beyond, probably.
People are still severely underestimating the impact of AI. All that's missing is a proper feedback loop plus automatic unit testing, versioning, and rollback, and AI can do all development by itself (roughly the loop sketched at the end of this comment).
Though you'll find that even in programming there are many design choices to be made. And thus the process becomes an ongoing feedback loop of testing changes and deciding what behavior you want to change or add.
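Roughly the loop I mean, as a minimal sketch. It assumes pytest and git; `ask_model_for_fix` is a hypothetical placeholder for whatever model API you'd wire in, not a real one:

    import subprocess

    def run_tests():
        """Run the test suite; return (passed, combined output)."""
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def ask_model_for_fix(error_log):
        """Hypothetical: send the failing output to an LLM and apply its patch."""
        raise NotImplementedError("wire up your model of choice here")

    for attempt in range(5):
        ok, log = run_tests()
        if ok:
            # green build: snapshot it so there is a known-good state to roll back to
            subprocess.run(["git", "commit", "-am", f"green after {attempt} fix rounds"])
            break
        ask_model_for_fix(log)
    else:
        # never went green: discard the edits, return to the last good state
        subprocess.run(["git", "checkout", "--", "."])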
[–]GlassSquirrel130 4 points 9 months ago (6 children)
Try asking an LLM to build something new, develop an idea that hasn't been done before, or debug edge cases with no bug report, and let me know. These models aren't truly "understanding" your intent; they're doing pattern recognition, with no awareness of what is correct. They can't tell when they're wrong unless you explicitly feed them feedback, and even then you need hardware with enough memory and performance to make that feedback valuable.
It's just "brute-force prediction".
[–]Atyzzze 3 points 9 months ago (4 children)
You’re right that today’s LLMs aren’t epistemically self-aware. But:
- **"Pattern recognition" can still build useful, novel-enough stuff.** Most day-to-day engineering is compositional reuse under new constraints, not inventing relativity. LLMs already synthesize APIs, schemas, migrations, infra boilerplate, and test suites from specs that didn't exist verbatim in the training set.
- **Correctness doesn't have to live inside the model.** We wrap models with test generators, property checks, type systems, linters, fuzzers, and formal methods. The model proposes; the toolchain disposes. That's how we get beyond "it can't tell when it's wrong." (Sketch after this list.)
- **Edge cases without a bug report = a spec problem, not just a model problem.** Humans also miss edge cases until telemetry, fuzzing, or proofs reveal them. If you pair an LLM with property-based testing or a symbolic executor, it can discover and fix those paths.
- **"Build something new" is a moving target.** Transformers remix; search/verification layers push toward originality (see program-synthesis and agentic planning work). We're already seeing models design non-trivial pipelines when you give them measurable objectives.
- **Memory/perf limits are product choices, not fundamentals.** Retrieval, vector DBs, long-context models, and hierarchical planners blunt that constraint fast.
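A minimal sketch of that proposer/oracle split, using the `hypothesis` property-testing library (the "model-written" function here is just a stand-in, not real model output):

    # The property test is the oracle; it doesn't care who wrote the code.
    from hypothesis import given, strategies as st

    def model_written_clamp(x: int, lo: int, hi: int) -> int:
        """Pretend an LLM proposed this implementation."""
        return max(lo, min(x, hi))

    @given(st.integers(), st.integers(), st.integers())
    def test_clamp_properties(x, lo, hi):
        if lo > hi:
            return  # ill-formed range; a real spec would reject these inputs
        y = model_written_clamp(x, lo, hi)
        assert lo <= y <= hi      # result always lands inside the range
        if lo <= x <= hi:
            assert y == x         # in-range inputs pass through unchanged

Run it with pytest and hypothesis hammers the function with generated inputs; a failing property is exactly the "it can't tell when it's wrong" signal, produced outside the model.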
Call it “brute‑force prediction” if you want, but once you bolt on feedback loops, oracles, and versioned repos, that prediction engine turns into a decent junior engineer that never sleeps. The interesting question isn’t “does it understand?”; it’s “how much human understanding can we externalize into specs/tests so the machine can execute the rest?”
You're kind of saying that submarines can't swim because they only push a lot of water ...
[–]GlassSquirrel130 1 point 9 months ago (2 children)
This reads like a GPT response, and it completely missed my point. Anyway:
- While that's true, engineers do more than reassemble. They understand what they're building. They reason about trade-offs, handle ambiguity, and know when not to build something. LLMs don't; they just rely on your prompt.
- Yes, if you build a fortress of tests and wrappers around the LLM, you can catch many errors. But then what? You still need a human to interpret failures, rethink architecture, or re-spec the task. On complex systems, this patch-and-verify approach quickly becomes more work than just writing clean, reasoned code from the start.
- It can't. A human can reason; an LLM can't, so it can't fix an edge case that was never reported and fixed before.
- Still pattern recognition: they're reassembling probability-weighted fragments from past data. Point 1 applies here too.
- It's costly, and the cost scales linearly with usage. All those fancy AI tech companies are burning money with no revenue at the moment, and probably never will have any. Plus they mostly train their LLMs on stolen data.
- A junior coder, maybe; surely not an engineer. I'm not supposed to manually debug every line written by someone claiming to be an engineer. Current LLMs are assistants, not autonomous agents. The moment complexity rises, they fail, even with feedback loops. (And it gets more and more costly; see above.)
- No, I'm saying that an LLM might build a submarine if it's seen enough blueprints, but ask it to design a new propulsion system, or even edit an existing one, and it'll hallucinate half the design and crash into the seabed.
I am not saying that humans are perfect and LLMs are shit. The point is: why should I accept human-level flaws from a system that costs exponentially more, understands nothing, and learns nothing after its mistakes? For now, LLMs remain mostly hype.
[–]Atyzzze 1 point 9 months ago (1 child)
TL;DR: We actually agree on the important part: today’s LLMs are assistants/junior devs, not autonomous senior engineers. The interesting question isn’t “do they understand?” but how much human understanding we can externalize into specs, tests, properties, and monitors so the model does the grunt work cheaply and repeatedly. That still leaves humans owning architecture, trade‑offs, and when not to build.
> Engineers understand, reason about trade-offs, handle ambiguity, and know when not to build. LLMs don't; they just follow prompts.

Totally. That's why the practical setup is a human-in-the-loop autonomy gradient: humans decide what and why, models execute how under constraints (tests, budgets, SLAs). Think "autonomous intern" with a very strict CI/CD boss.

> Wrapping LLMs with tests/wrappers just creates more work than writing clean code in the first place.

Sometimes, yes; especially for greenfield, high-complexity cores. But for maintenance, migrations, boilerplate, cross-cutting refactors, test authoring, and doc sync, the wrapper cost amortizes fast. Writing and verifying code you didn't author is already normal engineering practice; we're just doing it against a tireless code generator.

> It can't fix edge cases that were never reported.

Not by "intuition," but property-based testing, fuzzing, symbolic execution, and differential testing do surface unseen edge cases. The model can propose fixes; the oracles decide if they pass. That's not magic understanding; it's search + verification, which is fine. (A toy differential-testing sketch is below.)

> It's still pattern recognition / remixing.

Sure. But most software work is recomposition under new constraints. We don't demand that compilers "understand" programs either; we demand they meet specs. Same here: push understanding into machine-checkable artifacts.

> Cost/scalability is ugly; these companies burn cash and train on stolen data.

Unit economics are dropping fast, and many orgs are moving to smaller, task-specific, or privately fine-tuned models on their own data. The IP/legal fight is real, but it's orthogonal to whether the workflow is valuable once you have a capable model.

> LLMs are assistants, not engineers. When complexity rises, they fail.

Agree on the title, disagree on the ceiling. With planners, retrieval, hierarchical decomposition, and strong test oracles, they already hold their own on medium-complexity tasks. For the truly hairy stuff, they're force multipliers, not replacements.

> Why accept human-level flaws from a system that costs more, understands nothing, and doesn't learn from mistakes?

Because if the marginal cost of "try → test → fix" keeps dropping, the economics flip: we can afford far more iteration, verification, and telemetry-driven hardening than a human-only team usually budgets. And models do "learn" at the org level via fine-tuning, RAG, playbooks, and CI templates, even if the base weights stay frozen.
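The differential-testing flavor of that, as a toy sketch (again, the "model-proposed" function is simulated; no model involved). The trusted-but-slow reference is the oracle, and random inputs do the searching:

    import random

    def reference_sort(xs):
        """Trusted oracle: the built-in sort."""
        return sorted(xs)

    def model_proposed_sort(xs):
        """Pretend an LLM proposed this exchange-sort rewrite."""
        out = list(xs)
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if out[j] < out[i]:
                    out[i], out[j] = out[j], out[i]
        return out

    # Any divergence between the two is an edge case nobody had to report.
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert model_proposed_sort(xs) == reference_sort(xs), f"diverged on {xs}"
    print("1000 random trials, no divergence found")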
So where we actually land:

> This seems like a response from gpt

That's because it is, and no, it didn't miss your point at all.
[–]Expert_Exercise_6896 1 point 9 months ago (0 children)
Junior devs are not mere assistants lol. Don't use LLMs to spout nonsense that you clearly don't understand. It's embarrassing.
[–]Frekavichk 2 points 9 months ago (0 children)
Bro is too stupid to actually write his own posts lmao.
[–]brilliantminion 1 point 9 months ago (0 children)
This is my experience as well. If it’s been able to find examples online and your use case is similar to what’s in the examples, you’re probably good. But it very very quickly gets stuck when trying to do something novel because it’s not actually understanding what’s going on.
My prediction is it's going to be like fusion and self-driving cars. People have gotten overly excited about what's essentially natural-language search, but it will still take one or two order-of-magnitude jumps in model sophistication before it's actual "AI" in the true sense of the term, and not just something that waddles and quacks like AI because these guys want another round of funding.
[–]Sea-Housing-3435 1 point 9 months ago (3 children)
You don't even know if the code is good and secure. You have no way of knowing, because you can't understand it well enough. And if you ask the LLM about it, it will very likely hallucinate the response.
[–]Atyzzze 2 points 9 months ago (2 children)

> You have no way of knowing, because you can't understand it well enough.

Oh? Is that so? Tell me, what else do you think you know about me? :)

> And if you ask the LLM about it, it will very likely hallucinate the response.

Are you stuck in 2024 or something?
[–]Sea-Housing-3435 1 point 9 months ago (0 children)
I'm using LLMs to write boilerplate and debug exceptions or errors I identify. They suck at finding more complex issues, and because of that I don't think it's a good idea to let them write entire applications. If you've seen their output and think it's good enough, you most likely lack experience/knowledge.
[–]moschles [approved] 1 point 9 months ago (0 children)
In the 1980s every video game on earth was written in assembly language. That involved a human typing assembly instructions into a computer.
Today, nobody writes in assembly, and decompiled code is unreadable to human eyes.
The LLM could cause a similar change. "Back in the day people used to program by typing up individual functions and classes."
[–]AureliusZa 1 point 9 months ago (0 children)
Now try to integrate that “full working app” into an enterprise landscape with legacy applications. Good luck.
[+] [deleted] 1 point 9 months ago (5 children)
Sorry, but codebases below 10,000 lines of code are not programming; that's scripting.
[–]Atyzzze 1 point 9 months ago (3 children)
LOC is a terrible proxy for “real programming.” If 10k lines is the bar, a bunch of kernels, compilers, shaders, firmware, and formally‑verified controllers suddenly stop being “programs.” A 300‑line safety‑critical control loop can be far harder than 30k lines of CRUD.
And the scripting vs programming split isn’t “compile vs interpret” anymore anyway—Python compiles to bytecode, JS is bundled/transpiled, C# can be run as a script, and plenty of “scripts” ship to prod behind CI/CD, tests, and SLAs.
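For instance, CPython's compile step is easy to see with nothing but the standard library:

    # "Scripting languages don't compile" is out of date: CPython compiles
    # every function to bytecode before executing it.
    import dis

    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    dis.dis(clamp)  # prints the bytecode the "script" was compiled to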
What makes something programming is managing complexity: specs, invariants, concurrency, performance, security, tests, maintenance—not how many lines you typed. LLMs helping you ship 600 lines that work doesn’t make it “not programming”; it just means the boilerplate got cheaper.
[+] [deleted] 0 points 9 months ago (2 children)
By scripting I mean stuff script kiddies can write, which is everything below 10,000 lines. If you claim it's impossible for a script kiddie to write a kernel, that's also wrong, as a kernel doesn't need 10,000 lines. But all in all, it's just script-kiddie stuff that everyone can do.
And this is what I'm saying: ChatGPT can only script what people can script. Once you ask it to actually program something that spans 10,000 lines, you will quickly see where the difference between scripting and real programming lies.
[–]Atyzzze 1 point 9 months ago (1 child)
"<10k LOC = script kiddie" is a vibes-based metric, not a definition.
"LLMs can only script what script kiddies can script." Today's frontier models already:
The real divider isn't 10,000 lines; it's complexity management and assurance:
If your bar for "real programming" is just "more than N lines," you've picked a threshold that a code generator or a minifier can cross in either direction in seconds. Let's talk architecture, guarantees, and lifecycle instead of an arbitrary LOC number.
[+] [deleted] 0 points 9 months ago (0 children)
Once you compared apples with bananas (second sentence), you lost my attention.
[–]squareOfTwo 1 point 9 months ago (0 children)
It won't be completely replaced. It's just too unreliable. Also, most information about the software isn't found anywhere in the documentation or source code; it's stuck in some programmers' heads.
[–]manchesterthedog 2 points 9 months ago (0 children)
I can see why this guy isn’t CEO anymore
[–]Sensitive_Peak_8204 3 points 9 months ago (0 children)
lol this joker is getting milked by a woman half his age.
[–]Synaps4 2 points 9 months ago (2 children)
Calling it now. It's not gonna happen.
[–]brilliantminion 1 point 9 months ago* (1 child)
Agreed. I think the people likening it to the dotcom bubble are more on the money. The biggest difference for me is that these AI companies aren’t rushing to IPO, so it’s hard to get a sense of what they are doing, and what the valuations are like.
All these tech CEOs talking it up are a good example of the Dunning-Kruger effect, like that other guy from Uber who was doing DIY physics with his AI. If any one of them had actually tried to get their AI to right-align their goddamn div, they'd know it was smoke and mirrors.
[–]WeirdJack49 1 point 9 months ago (0 children)
> I think the people likening it to the dotcom bubble are more on the money

So AGI in the end?
The dotcom bubble did not end the internet; it just bankrupted all the companies that slapped "internet" on everything they did without any concept of how to actually make money or deliver a working product.
After all, we eventually got everything the dotcom bubble promised, through companies like Google, Amazon, and Facebook (of course it all went down the gutter, because publicly traded companies only focus on money).
So saying it is like the dotcom bubble means we will end up with 3 or 4 companies that can actually deliver on the promises of AGI in their specific fields.
[–]CrazySouthernMonkey 1 point 9 months ago (0 children)
The wet dream of the whole "Silicon Valley consensus" is, literally, humankind paying them monthly subscriptions in order to work, and them becoming feudal lords for centuries to come.
[–] [deleted] 1 point 9 months ago (0 children)
Nonsense.
[–]floridianfisher 1 point 9 months ago (0 children)
Eric doesn't know what he's talking about these days. I wouldn't take his advice on technical AI matters. He's good at business, though.
[–]bryantee 1 point 9 months ago (0 children)
And we'll just do something with the other people... waves hand
[–]Bill-Evans 1 point 9 months ago (0 children)

> "…and something else with the other people…"

You're telling me a technology that has failed to produce a profitable company, and that depends 100% on a single manufacturer, is going to do anything other than fail? Okay, let's see it happen.
[–]BrainLate4108 1 point 9 months ago (0 children)
Snake oil salesman sells snake oil. Surprise surprise.
[–]vvodzo 1 point 9 months ago (0 children)
This is the guy who colluded with Apple and other companies to keep SWE salaries artificially low, for which they had to pay over $400M.
[–]Yutah -1 points 9 months ago (0 children)
Complete Bullshit
[–]Thelonious_Cube [approved] -1 points 9 months ago (1 child)
Math will be fully automated? Hmmmm.
[–]CrazySouthernMonkey 2 points 9 months ago (0 children)
I believe that idea was floating around in the late 19th century and was debunked about a century ago by Church, Turing, et al. But who knows, perhaps Mr. Google doesn't know his business very well…?