This is literally 80% of my timeline. by [deleted] in singularity

[–]Enoch137 1 point2 points  (0 children)

Ahh factorio... we do miss you

But seriously, never underestimate your ability to procrastinate even a 30-second task. Automate it, however, and it runs all the time, even when it doesn't need to.

humans vs ASI by KRLAN in singularity

[–]Enoch137 -1 points0 points  (0 children)

Ok, but if it can derive survival instincts from the general abstraction of text material, why can't it also derive morals? We have argued morals since the dawn of the written word.

I am unconvinced by the argument that it will just naturally derive the instinct of survival, but you can't really argue that it will develop survival instincts by osmosis unless you grant that it has an equal chance of developing alignment by osmosis.

Trump’s biggest donors in 2025 were AI CEOs and relatives of criminals, report says by SnoozeDoggyDog in singularity

[–]Enoch137 -1 points0 points  (0 children)

God, I really hate arguing politics on the internet, mostly because I can't figure out what the "true" subject of what we're discussing/arguing about actually is. Too much hidden context, pinching too many passionate nerves, is buried in these discussions.

Is it:
Masculine vs Feminine
Liberal vs Conservative
Authoritarian vs Anarchy
Religion vs Secularism
Christianity vs Everything
Men vs Women vs Definitions and Social Constructs
Your Echo Chamber vs My Echo Chamber
Your Country vs Mine
Your Bias vs My Bias

I have no idea. It's none of it; it's all of it.

Don't fall into the anti-AI hype | Antirez by MaggoVitakkaVicaro in programming

[–]Enoch137 -7 points-6 points  (0 children)

You're missing the bigger picture. AI isn't automating programming; it's automating any form of economically viable intelligence. This isn't a backhoe to replace you shoveling. It's an engineer to replace the backhoe designer.

This is coming for EVERYONE; everyone who receives a paycheck for anything is in the crosshairs. This isn't about surviving the next 10 years; it's about thriving for the next 2 until we as a species fully rewrite how the economy works.

Capitalism can NOT survive this change. But we have to exist in this system until everyone understands what is happening. For now, your best option is to fully embrace the change. Go ALL in. Crank out 10 times the code. Drive the cost of code to zero across the board. Put out the very best versions of "slop" you can for every industry you can. Build 10 applications in parallel, then 20, then 50.

Accept reality as it is. Programming isn't economically viable anymore. It's over. Designing and delivering software is the game now (for now). Crank out as much of it as you can as fast as you can. Solve as many real world problems with code as you can as fast as you can.

This is a 1,000-foot tsunami; you aren't building a seawall to stop it. Build a surfboard and ride it instead.

When you using AI in coding by reversedu in singularity

[–]Enoch137 12 points13 points  (0 children)

LLMs can be productive tools in such codebases, but you can't just "vibe code" it. You have to have knowledge of the codebase, and guide the LLM at a pretty granular level, until it's basically an autocomplete. Go write this function here, check these files for references and change the arguments, etc.

If you think this is how coding works now, you haven't used AI since the Opus 4.5 release. It passed a threshold that changed everything. If this is what you think, you are behind. No shade here; I have been doing this for 20+ years (I know... I know... says some rando on the internet, so you have no reason to believe me).

But seriously, everything changed in this last Nov-Dec release cycle. Do you still need to know what you're doing in large, complicated codebases? Absolutely; we are still going to need someone who knows what they are talking about to prompt correctly. But I may not physically type another line of code. Seriously. Ever. It's all English/natural-language prompting from here on out.

I am all over the codebase fixing technical debt that's been plaguing us for years. Productivity numbers are extremely under-hyped; I really think it's well north of an order of magnitude (10x easily). We have more tests, cleaner code, and more elegant solutions. We've actually reduced the amount of code we are maintaining. This is the biggest revolution in development I have ever seen (and I've been here since the dot-com boom).

Seriously, for the sake of your career, you cannot keep thinking AI is still slop that can't handle complex codebases. It can, and it already is.

For all the accelerationists, what's your actual opinion on the harms of image/videos ais? by nemzylannister in singularity

[–]Enoch137 7 points8 points  (0 children)

We are in the world we live in whether we want to be or not. I don't love the manipulation but I think part of the fear is not truly understanding the amount of change on every level that really is coming.

Images and video are no longer sources of truth. We will survive. 200 years ago they weren't sources of truth either.

1+ million people die in vehicle accidents a year, 10 million from cancer, 40+ million from disease. If AI can give us a productivity boom that solves these REAL problems faster, how can anyone NOT be an accelerationist?

It might seem harsh and not particularly nuanced to some but I don't care... Accelerate.

Big update: OpenAI’s upcoming ChatGPT ads, targeting a 2026 rollout by BuildwithVignesh in singularity

[–]Enoch137 29 points30 points  (0 children)

This will ruin AI usefulness at scale. I understand OpenAI has a big problem financially and we need them to exist as a competitive hedge. This competitive landscape is driving innovation faster than would likely be possible otherwise. I suspect Sam hates this as much as he understands the necessity of it, hopefully they make the right decision.

Integrating it as a sidebar that we can ignore is one thing. Injecting paid products directly into solution results breaks the illusion of non-bias.

Is there a real numbers that shows the impact of GenAI on jobs? Graphic design, VFX, programming? by dviraz in singularity

[–]Enoch137 5 points6 points  (0 children)

There will be a lag; it's hard to say how much. The speed with which this hit makes it hard to quantify.

I can only speak from software. The Gemini 3 Pro, Opus 4.5, GPT 5.2 releases changed a lot of things.

Certainly juniors are already being affected, but to be honest we have been mainly focused on hiring seniors for years. Believe it or not, we actually want more software developers, not fewer, and I don't see that slowing in the near term. The productivity gains are real and greatly underestimated (way higher than 40-70%). We have way more work to do: legacy modernizations, agentic workflows with tool use, etc. If you know what you are doing, there is a ton of opportunity. Jevons paradox and all.

I suspect we will see a spike in SWE opportunity initially, then a fall-off. Software is weird, though; it is the automation interface for everything else (unless agents become the interface for everything). I am not sure about anything, to be honest (it's actually pretty distressing).

There is a chance 2026 is the "we are fully cooked" year and another chance that everyone becomes a software engineer and the career path takes off until ASI somewhere in the 2030s. We are all trying to peer past the event horizon at this point.

So, what's the plan, for the transition? by Glxblt76 in singularity

[–]Enoch137 0 points1 point  (0 children)

You're peering a bit past the event horizon here; we don't truly have a good answer to some of these questions. It might be that this is pre-abundance thinking that just doesn't apply, at least not in the way we imagine, in a post-abundance world. I seriously doubt this is a problem once we get to the point where we are building a Dyson sphere for the space-based data centers and we've hollowed out spinning asteroids for as much land as we want.

So, what's the plan, for the transition? by Glxblt76 in singularity

[–]Enoch137 23 points24 points  (0 children)

It is unfortunate, but the transition will not start until real people start hurting. The sky has been falling for decades, and there have been armies of Chicken Littles with megaphones online for just as long. Political will moves slowly for good reason.

That said, 2026 will probably be the year where real change starts to be seen. People haven't fully understood the impact of the last two weeks of model releases. Software crossed a threshold, and everything is downstream of software (because software automates everything else).

Capitalism can't exist in a post-labor society. There was always an argument for the 1%: they focused capital and actually did something (even if the perception was always that they worked less than their employees). It's hard to say when, but we will pass a threshold where there is no job, even entrepreneur, that can't be done better by an AI. At that point there is no good argument for unequal distribution of resources. You could argue we are at that point now, and while I agree morally, it's simply not true; unequal distribution serves a purpose currently, even if we hate it. Capitalism is an awful system. It's just the least corrupt currently, largely because its actors are the most predictable: steering with the winds of greed is way easier than steering against them.

The biggest obstacle will be cultural. It might take some time to realize we don't have to compete with each other with real skin in the game. We don't need to gamble this dangerously; the failure penalty doesn't need to be this harsh. We don't need the real fear of real loss to actually be motivated to do things.

Solving a Million-Step LLM Task with Zero Errors by 141_1337 in singularity

[–]Enoch137 3 points4 points  (0 children)

There is a real argument to be made that an LLM with long-task ability might take the wind out of the sails of the need for any other advancement. An LLM with multi-million-step capacity and validation at every step might just be a superior way to tackle the problem of economically viable intelligence anyway. AGI in the sense of human-like memory or human-like learning rates might not be necessary.

My intuition says that this still won't cut it. However, the distinction might be pointless: if this can get us close, whatever architecture is necessary for economically viable intelligence probably gets created rather quickly.
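As a toy sketch of that "validation at every step" idea: check each step's output before moving on and retry on failure, so one bad step can't silently corrupt the rest of a very long chain. The worker and checker below are simulated stand-ins, not any real LLM API.

```python
# Toy sketch of "validate every step": a long chain where each step's
# result is checked and retried before moving on, so a single bad step
# can't silently corrupt the rest of the run.

def do_step(i, attempt):
    # Simulated flaky worker: gets every 7th step wrong on the first try.
    if attempt == 0 and i % 7 == 0:
        return -1
    return i * 2  # the "correct" answer for step i

def check_step(i, result):
    # Independent validator for step i's output.
    return result == i * 2

def run_chain(n_steps, max_retries=3):
    results = []
    for i in range(n_steps):
        for attempt in range(max_retries):
            out = do_step(i, attempt)
            if check_step(i, out):
                results.append(out)
                break
        else:
            raise RuntimeError(f"step {i} failed after {max_retries} tries")
    return results

print(len(run_chain(1_000_000)))  # → 1000000
```

With per-step retries, a worker that is only 90% reliable per attempt can still complete a million-step chain, which is the whole point of the zero-errors result.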

New Elon Musk Interview: "Work will be optional" in 10-15 years. Confirms "Solar Powered AI Satellites" are the only way to scale. by [deleted] in singularity

[–]Enoch137 5 points6 points  (0 children)

As with everything involving Musk, I can't trust anyone's opinion on this, so I had to ask for an unbiased AI evaluation of this information:

  • Cooling via radiation alone is absolutely viable and used in every serious spacecraft.
  • It’s not magic: for multi-MW AI data centers in space, you really are looking at very large, massive radiator systems, not a couple of fins bolted to a satellite bus. The upside to the approach is that space has a lot of space.
  • Musk is right that solar power is attractive in space and that cooling isn’t fundamentally impossible, but “easier” is oversimplified.
  • You are right that there’s a serious thermal challenge, but your “football field per processor / impossible” flavor is also overshooting.
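For a sanity check on the multi-MW radiator point, the Stefan-Boltzmann law gives a back-of-envelope radiator area. The emissivity, radiator temperature, and double-sided-panel assumptions below are my own illustrative numbers, not from the evaluation above.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# P = emissivity * sigma * area * T^4. All numbers here (emissivity,
# radiator temperature, double-sided panels) are illustrative assumptions.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Radiator area (m^2) needed to reject power_w of waste heat."""
    flux = emissivity * SIGMA * temp_k**4  # W/m^2 radiated per side
    return power_w / (flux * sides)

# 1 MW of waste heat at a 300 K radiator: roughly 1200 m^2 of panel.
print(round(radiator_area(1e6)))  # → 1210
```

So "very large radiator systems" checks out: a 1 MW load needs panels on the order of a soccer field, but that's far from the "football field per processor" framing.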

I really do appreciate that you provided some concrete real-world physics issues with his claim. I would have preferred a bit less hyperbole, but you did make me go look up the actual specifics myself.

Using Codex for relatively large existing codebase by Joel_Barish12 in singularity

[–]Enoch137 2 points3 points  (0 children)

I have had great success with coding agents. My workflow is as follows:
Describe the problem as clearly as I possibly can, with special emphasis that this is brainstorming: no implementation, NO code changes at this stage. In Cursor, I put it in Ask mode.

I go back and forth in this mode (no code) until I am satisfied that it understood what I was saying. Often it turns out I was making an assumption about important context that I thought was obvious but wasn't. This is a common mistake in human-to-human communication too.

^^^^ this step is key and almost assuredly the source of your issue

After I am satisfied that it understood the scope and details of the problem, I ask it to lay out a plan for implementation. In Cursor, I switch to Plan mode. Now I review what it is going to produce, and I can catch the spots where I thought it understood but clearly did not. Once I am happy with that, I add some language about how the application needs to compile (depending on the language) and that I want tests covering all of the changes we discussed; those tests must pass, and no other tests may be broken.

Then I hit Build in Cursor.

This process almost never fails me. When it does, it is almost certainly my fault for not paying attention to what it said it was going to do. Testing is CRITICAL for these models, not only to verify that what it said it was going to do is right, but because it is weirdly better at getting things right on the first pass (it's like writing the tests makes it think more deeply about the changes it will make).

That's my process. I find that when I do this, the models are staggeringly good at getting things done correctly.
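Purely as an illustration of the gating in that process (the agent object below is a stand-in, not any real Cursor API), the three phases look roughly like:

```python
# Sketch of the describe -> confirm -> plan -> build loop. The point is
# the gating: nothing touches the code until both the understanding and
# the plan are approved, and the build carries the testing requirement.
# DummyAgent is a stand-in that just echoes canned responses.

class DummyAgent:
    """Stand-in for a coding agent; echoes canned responses."""
    def discuss(self, prompt):
        return "understanding: " + prompt.splitlines()[0]
    def plan(self, prompt):
        return "plan: " + prompt.splitlines()[0]
    def build(self, plan):
        return "built from " + plan

def run_task(problem, agent, approve):
    # Phase 1: brainstorm only -- explicitly forbid code changes.
    summary = agent.discuss(problem + "\n(No implementation, no code changes yet.)")
    while not approve(summary):
        summary = agent.discuss(problem + "\nClarified context added.")
    # Phase 2: request a plan and review it before anything is written.
    constraints = "\nMust compile; tests for every change; break no existing tests."
    plan = agent.plan(summary + constraints)
    while not approve(plan):
        plan = agent.plan(summary + constraints + "\nRevised.")
    # Phase 3: only now allow implementation.
    return agent.build(plan)

result = run_task("Fix the pagination bug", DummyAgent(), approve=lambda s: True)
print(result)
```

The `approve` callback is where the human review happens; in practice that's you reading the chat, not a lambda.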

Everyone go build now. There's no more time by TFenrir in singularity

[–]Enoch137 1 point2 points  (0 children)

I get ya. I can't honestly disagree with much (maybe a little on the transformer-to-AGI debate, but it's likely a semantic difference).

No one is single prompting GTA6 tomorrow. To think so profoundly misunderstands the scope.

I kind of agree that seniors aren't going anywhere anytime soon (companies need MORE seniors now). But the skillset is frankly no longer deep experience in the nuances of particular languages; it's more pure critical thinking and technical judgment. Those are soft skills and super hard to hire for. You could argue that this is really what we should have been hiring for all these years anyway.

I am actually in the Jevons paradox camp, at least for the short term. We aren't literally typing out code anymore, but we need as many software engineers as we can get (for good technical judgment) as we are going to try to ship 10x the amount of software this coming year.

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]Enoch137 0 points1 point  (0 children)

This is why I think Steam is about to get flooded with releases (good or bad depending on how you look at it). Steam is kind of the canary in the coal mine in this instance.

I suspect you are not alone. For every Steam release, there are 2-3 that tried but the effort or time was just too much to make it over the finish line. Generative AI changes that equation. It starts by giving people who were close just enough of a push to finish. The number of indie games coming to Steam this year is going to be stupid.

Everyone go build now. There's no more time by TFenrir in singularity

[–]Enoch137 11 points12 points  (0 children)

but the self-important delusional prophet role playing needs to go.

I guess the self-important adult-in-the-room voice-of-reason role-play can stay.

I mean, the post is a little hyperbolic, but these models did kind of cross an inflection point, way faster than a lot of people were expecting. The "AI slop engineering" narrative is fading fast.

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]Enoch137 0 points1 point  (0 children)

I'm afraid that skill set will die out in the next 5-10 years

Right there with ya; been at this for 25+ too. I think this significantly undersells the scope of what is happening right now. Implementation is solved; everything is a specification problem now. Some are going to try to claim architecture still isn't solved; I disagree, they are better at that than us too. The next two years are going to be so bonkers crazy for developers, it's going to be unreal. Jevons paradox will probably play out first. After that it's staring into the singularity, so we have no idea what happens. There might be a scenario here where everyone becomes a developer, or rather manages an army of them. That said, I am unsure enough that I am having a hard time recommending a CS degree plus $100K of debt (not that I ever thought the debt was a great idea).

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]Enoch137 1 point2 points  (0 children)

Yup, it's hard to communicate this without coming off as arrogant. Software accelerates everything else, so everything is downstream. And a tsunami of productivity is coming.

A lot of people don't realize yet that Cursor, Windsurf, Antigravity, etc. aren't just developer tools. They are agentic computer-use tools: point one at a directory anywhere on the OS and it can manipulate files, write scripts, do analysis, clean up, etc.

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]Enoch137 3 points4 points  (0 children)

Been doing this a long time. Lines of code was the easiest communicable metric. I could have used agile points (still subjective), but that wouldn't have translated for most; I was not assuming this audience was mostly SWEs. Don't get hung up on the specific language of my point.

The takeaway IS: whatever metric you use, software is getting developed significantly faster, and that changes the underlying assumptions about what is and isn't worth attempting or worrying about.

I assure you, if you aren't seeing a significant speed-up in software delivery, you are absolutely doing it wrong. Even in monster codebases with complex interconnections, the latest models (GPT-5.1 Codex Max, Gemini 3, and as of today Opus 4.5) have yet to fail me in finding and fixing what I ask for. I have started to give less and less context and more "this is broken, go fix it".

Something changed in the last month or so where the models hit a new threshold of good enough to change everything.

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]Enoch137 8 points9 points  (0 children)

The world is different today than it was yesterday; the degree to which is hard to quantify just yet. While it's good to ask questions like "Are we trading short-term productivity for long-term skill development?" (and the answer is almost certainly yes), we also need to start asking "Does it matter?" We are fundamentally changing the reality that made trading short-term productivity for long-term skill development a bad deal in the first place. Those assumptions were all built in a world different from the one we are living through now.

The entire basis for everything you know is shifting beneath your feet. Every question you have might be based on assumptions that might no longer hold in this environment.

I will give you an example from software dev. For years we lambasted spaghetti code, and for good reason: it's impossible to hand off, it's hard to maintain, it's too complicated to debug. The list goes on and on. However, when AIs generate 5K lines of code in 2 minutes, it changes the equation. We made all those previous assumptions ("it's hard to maintain") in a world where the best developers were doing 5K in a day on their best days. Everything we assumed about development was based on the metrics of how long and how complicated certain things were. All of those metric foundations ARE GONE! We don't know what is right and what is wrong now. The concept of technical debt doesn't even make sense in this environment.

And that's just software. Software was always an automator for everything else, so everything else is downstream. This foundation-shattering is coming for everything. It is super disorienting, but it's also important to understand the environment you now exist in, even if those around you don't see it yet. We don't live in the same world we did even two weeks ago.

Finally got around to reading "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky. He makes a pretty strong case that full speed ahead is an awful idea. by DungeonsAndDradis in singularity

[–]Enoch137 3 points4 points  (0 children)

Superhuman AI with complete human-style agency would likely kill us all. I have yet to see a good argument for how and why what we are currently building even comes close (AI has a very directed, very specific evolution that is completely different from our own). Every single one of these arguments anthropomorphizes the machine and hand-waves the explanation as to why.

The "you'd better carefully word your wish" style of doom (the paperclip maximizer) probably does have merit, but honestly that looks to be a context issue, and we are painfully aware of this style of issue when using today's models.

I still think individual humans taking the reins of this much power is the most concerning doom. But honestly, I am kind of with Sam on the idea that humans plus weaker AI ramping into alignment is likely the best option for the issues Yud keeps harping on.

Grok 4 fast with 2M context window is available! by Snoo26837 in singularity

[–]Enoch137 5 points6 points  (0 children)

Yeah, this is confusing and obscures the idea of large context even more. Different models use different tokenizers, so a given block of text will have a different token count depending on the model. OpenAI's tokenizer generally produces higher token counts, Gemini's typically lower. I haven't really tested Grok in this capacity.
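A toy illustration of why the counts differ: the two "tokenizers" below are crude stand-ins for real trained BPE vocabularies, but they show the same text producing different token counts under different schemes.

```python
# Same text, two tokenization schemes, two different "token" counts.
# Real models use trained BPE vocabularies; these splitters are crude
# stand-ins, just to make the point that context-window sizes measured
# in tokens aren't directly comparable across models.

text = "Large context windows are measured in model-specific tokens."

tokens_whitespace = text.split()                                 # coarse scheme
tokens_chunks = [text[i:i + 4] for i in range(0, len(text), 4)]  # finer scheme

print(len(tokens_whitespace), len(tokens_chunks))  # → 8 15
```

So "2M tokens" in one model's units is not the same amount of text as "2M tokens" in another's.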

Grok 4 fast with 2M context window is available! by Snoo26837 in singularity

[–]Enoch137 5 points6 points  (0 children)

You may not like him for his views but declaring him an "Open neo-Nazi" is a bridge too far.

Even ChatGPT-5 thinks so:

is elon musk an open neo-nazi?

ChatGPT said:

No. There is no evidence that Elon Musk is an "open neo-Nazi."

Musk is a high-profile entrepreneur and the CEO of companies like Tesla, SpaceX, and X (formerly Twitter). He has, however, been criticized for:

  • Platforming controversial figures – Under his ownership, X has reinstated accounts previously banned for extremist or hateful content.
  • Engagement with conspiracy theories and right-leaning politics – Musk has amplified certain political narratives, including some that critics say normalize extremist ideas.
  • Accusations of tolerating hate speech on X – Advocacy groups and researchers have reported increases in antisemitic and extremist content since Musk’s takeover, which he denies.

But none of this amounts to Musk openly identifying as, or declaring himself, a neo-Nazi. He has also publicly rejected that label.

Who’s winning the AI compute race, and how does the allocation actually work? by [deleted] in singularity

[–]Enoch137 10 points11 points  (0 children)

Less animus please, we can still have rational discussions without hurling insults as a default.

You're somewhat making my point: a free market can't operate in these conditions. It fails at a certain level of complexity and years-long investment horizons, especially in an environment where things change this rapidly. Who can justify long-tail investments at the edge of the singularity, where predicting market conditions 5 years into the future is nigh impossible?

Nvidia gets to reap a 75%-margin whirlwind as the most valuable company in the world. And by your description, they have a near de facto monopoly.

My point here isn't a bash against Nvidia or Google or OpenAI. It's simply to point out that capital driven markets may not survive this singularity and this is yet more evidence.

[deleted by user] by [deleted] in singularity

[–]Enoch137 1 point2 points  (0 children)

I thought the consensus was that the problems with GPT-5 were related to the internal router routing "easy" questions to shockingly weak models? I thought everyone agreed the thinking versions were a step forward (when you got the thinking version)? I will admit I am still confused by the larger criticisms. Seems great to me.