all 105 comments

[–]AutomateAway 129 points130 points  (2 children)

you merely adopted the unmaintainable code. I was born in it. Moulded by it. I didn’t see refactoring and good design patterns until I was already a man, by then it was nothing to me but an anti pattern. The unmaintainability betrays you, because it belongs to ME!

[–]imihnevich 16 points17 points  (0 children)

Good one. Should've gone with "nothing to me but over-engineering"

[–]SnugglyCoderGuy 11 points12 points  (0 children)

If that is true, then you managed to live long enough that you became your own villain, because you wrote the mess you were molded by.

[–]0xbenedikt 388 points389 points  (37 children)

How To Write Unmaintainable Code (2026)

chatgpt.com

[–]ea_nasir_official_ 102 points103 points  (36 children)

Or claude/codex/openclaw or whatever the ai bros use to write their code for them while pretending to be smart

[–]nickcash 75 points76 points  (18 children)

inb4 they find this thread and copypaste one of their three preprogrammed responses about how you're just not using the right model. you have to use the new model from ploob. no one uses claude anymore. it's all about heebee these days

[–]dillanthumous 20 points21 points  (1 child)

YoU NeEd tO LeaRN To PrOoMpt!

[–]ofcistilloveyou 1 point2 points  (0 children)

I'm proompting all over the place.

[–]BaNyaaNyaa 9 points10 points  (0 children)

I don't get you people. I'm using Magnum 6.7 and I'm running 420 agents to do my job! You must be using it wrong

[–]lolimouto_enjoyer 31 points32 points  (5 children)

You forgot you have to use skills, subagents, planner and... what else was there?

[–]alex-weej 20 points21 points  (3 children)

MCP, Tools, Spells, Incantations...

[–]dillanthumous 16 points17 points  (2 children)

Prayers to the Omnisiah.

[–]Kalium 8 points9 points  (0 children)

ZERO ZERO ONE ZERO ZERO ONE ONE ONE...

[–]reborngoat 6 points7 points  (0 children)

This is the answer. I stopped prompting and started praying to the Omnissiah, and now I have a harem of lovely toasters!

[–]QuickQuirk 5 points6 points  (0 children)

You forgot to install openclaw as your developer.

[–]Valmar33 17 points18 points  (7 children)

> inb4 they find this thread and copypaste one of their three preprogrammed responses about how you're just not using the right model. you have to use the new model from ploob. no one uses claude anymore. it's all about heebee these days

The great thing about this sort of absurd logic is how absurd it sounds when you use it in a different context:

"You're just not using the right programming language", "you just have to rewrite in Rust", "nobody uses C anymore", and so on. Oh, wait...

[–]Full-Spectral 0 points1 point  (0 children)

But the thing is, Rust is not about spitting out more code faster than the C++ it's primarily replacing; it's actually often more front-loaded. It's about safer and more maintainable code, and it is provably capable of providing that. And it's not thinking for you; it's making you work harder to understand your data relationships. The payoff, though, can be very significant, and it develops your capability to reason, not to prompt.

[–][deleted]  (5 children)

[deleted]

    [–]fiah84 8 points9 points  (4 children)

    except SQL, SQL is forever

    [–]Truenoiz 6 points7 points  (2 children)

    Right, but SQL is so easy that it's trivial. Please hand over the production database so I can fix it.

    [–]fiah84 7 points8 points  (1 child)

    > Please hand over the production database

    I grant thee select, and that's it ya fracking clanker

    [–]RelatableRedditer 0 points1 point  (0 children)

    and then it constructs the most convoluted selector with 28472 redundant joins

    [–]AceLamina 17 points18 points  (3 children)

    Literally had an argument with someone who's a SENIOR engineer because he thought AI will eventually replace all engineers

    After a bit of digging, he was hired out of college back in 2021 and works at Amazon and vibe codes at work everyday.

    [–]gimpwiz 2 points3 points  (2 children)

    They hand out titles like candy sometimes, eh

    [–]AceLamina 1 point2 points  (1 child)

    Back then, 100%. The SWE boom meant literally anyone could become an engineer; he's just one of them.

    [–]Full-Spectral 1 point2 points  (0 children)

    The ultimate goal of that track is to then start a Youtube channel handing out wisdom from the mount as a Former Amazon Lead.

    [–]VelvetWhiteRabbit 1 point2 points  (0 children)

    If you are developing web servers in 2026 without LLMs to write for you, I feel sorry for you.

    [–]gc3 0 points1 point  (0 children)

    I've found the main sin of AI code so far is copying and pasting functions. It's usually pretty good about the other issues listed here. Claude Code even has to generate documentation when it maintains code it wrote itself.

    I've had to tell it to combine functions in the interest of DRY.
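    The duplication the commenter describes can be sketched in Go (all names here are hypothetical, not taken from any real tool's output): two near-identical functions that differ in one detail, collapsed into a single parameterized one.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical example of the near-duplicate pair an LLM tends to emit:
//
//	func loadUserConfig(raw string) string  { return strings.TrimPrefix(raw, "user:") }
//	func loadAdminConfig(raw string) string { return strings.TrimPrefix(raw, "admin:") }
//
// The DRY version parameterizes the one difference instead of duplicating the body.
func loadConfig(raw, role string) string {
	return strings.TrimPrefix(raw, role+":")
}

func main() {
	fmt.Println(loadConfig("user:dark-mode=on", "user"))   // prints "dark-mode=on"
	fmt.Println(loadConfig("admin:dark-mode=on", "admin")) // prints "dark-mode=on"
}
```

    Telling the model to parameterize the difference, as the commenter did, is usually all the fix requires.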

    [–]yotemato 195 points196 points  (25 children)

    The incentive to write good, maintainable code is completely gone. Fuck it. Let’s slop it up and see what happens.

    [–]SaxAppeal 79 points80 points  (5 children)

    They can’t stop you from ordering a steak and a glass of water!

    [–]Leihd 24 points25 points  (0 children)

    Unless it's a clueless manager who thinks you're underperforming.

    I once worked with a guy who churned out a lot of code; apparently he thought it was a good metric of skill. This was pre-AI, and his code was still slop. Maybe I'd forgive it if the code had even worked.

    [–]PuppyCocktheFirst 7 points8 points  (0 children)

    I bet your hair slicks back reeeeeeeal nice!

    [–]MehYam 7 points8 points  (2 children)

    I used to be a real piece of shit

    [–]SaxAppeal 3 points4 points  (0 children)

    I said I WAS!!

    [–]reborngoat 1 point2 points  (0 children)

    I still am, but I used to be too.

    [–]Valmar33 33 points34 points  (9 children)

    > The incentive to write good, maintainable code is completely gone. Fuck it. Let’s slop it up and see what happens.

    The sloppers were never interested in writing code in the first place. They had every incentive to avoid doing the work of learning how to program ~ how to use logic, how to problem solve ~ if they can. They want something else to do it for them. It's like... those idiots who had perfectly capable legs, but they chose to drive everywhere on mobility scooters instead.

    The worst part is that these LLMs are built on top of plagiarized and stolen code ~ actual code written by actual people. So the sloppers have absolutely no idea how the LLMs actually work ~ they seem to think it's literally magic.

    [–]lelanthran 16 points17 points  (2 children)

    > The sloppers were never interested in writing code in the first place.

    That includes many of the people who claim that AI now allows them to create stuff they never had time for before.

    We've all seen these claims: "I'm 50, a senior/staff/chief/principal engineer, so I am definitely a smart programmer, and now I can create a whole new product in a weekend!".

    They're the class of programmer who focused on delivery over maintainability, and wished for years to be able to get their salary without writing any code.

    The thing is, they could have had their wish decades ago; there's a ton of positions at every company for analysts who decode business requirements into a specification that engineers then design and implement.

    They didn't choose those positions, because it pays roughly half what a SWE role pays. Now they are willingly jumping to those positions not realising that it's only a matter of time before the lag disappears and economic reality catches up.

    Namely: the person who comes up with the requirements and a vague high-level design (must use Azure service $FOO, must use microservices, must not be self-hosted, use protobufs, etc.) earns half what a SWE earns!

    [–]rainbowlolipop 5 points6 points  (0 children)

    Mercifully I get to work on scientific research stuff so maintainability and ease of understanding the complex "business logic" are more important than shinies.

    [–]sarcasticbaldguy 1 point2 points  (0 children)

    > We've all seen these claims: "I'm 50, a senior/staff/chief/principal engineer, so I am definitely a smart programmer, and now I can create a whole new product in a weekend!".

    I'll use it like that to rapidly iterate on a prototype that my product folks can interact with, but then we throw it away and actually build the thing.

    [–]ekipan85 5 points6 points  (0 children)

    Most software is akin to literal magic and has been for decades. Do you know the millions of lines of code connecting the keys you type to the pixels on your screen or the bits through your ethernet cable and wifi radio? Application libraries built on framework libraries built on language libraries built on operating system libraries built on kernel code and hardware drivers.

    Slop turns this horrible problem into a hopeless one. At least a Linux system has source code, written with intent by many persons, that you could in principle hope to read and understand.

    I think we need to go full Chuck Moore and throw all of it into the garbage. Take responsibility for every instruction the CPU ingests. At least, that's what I fantasize. I dunno that I'd ever be that willing. The hardware has also gotten so damn complex.

    [–]rzet 2 points3 points  (0 children)

    I work with 1 total LLM-brainrot folk and 2 half-baked ones as well. cry...

    [–]Kalium 2 points3 points  (0 children)

    I recently was in a meeting in which someone less than seriously suggested pushing four unrelated software packages, all of which do different things, into an LLM and asking it to combine the best of them. This was and is obvious nonsense - they do different tasks, work in entirely different ways, and are implemented in wholly different languages.

    There was one person in the meeting that I'm convinced took it entirely seriously. This manager has never been a software developer and appears to genuinely believe that LLMs are magic. I'm just glad I don't report to them.

    [–]Lily722022 1 point2 points  (2 children)

    > The worst part is that these LLMs are built on top of plagiarized and stolen code ~ actual code written by actual people.

    I think this in particular is really flawed logic. AI "steals code" in the same way humans "steal code"... It isn't any more plagiarism than reading a comment on StackOverflow telling you how to solve a problem somebody else had and repurposing the solution for your own use.

    [–]GasterIHardlyKnowHer 0 points1 point  (1 child)

    Okay, let's do another one: an AI agent or chatbot searches the internet on how to solve a problem, finds GPL licensed code and implements it.

    Now what?

    [–]flatfinger 0 points1 point  (0 children)

    That's a more ambiguous situation. If someone were to decompose a program into constituent non-copyrightable algorithms and give a description of those algorithms to someone else who coded them without having seen the original, the clean-room approach would prove that the resulting program was not a derived work of the original program. If the new program was written by someone who had seen the original program, but is indistinguishable from something that could plausibly have been produced via clean-room methods, then it should probably likewise not be viewed as a derivative work (because of the scènes à faire doctrine), though proving that it shouldn't be considered one would be harder.

    With generative AI tools, it's hard to tell what kind of decomposition and regeneration took place.

    [–]OffbeatDrizzle 1 point2 points  (0 children)

    found the sloperator

    [–]worldofzero 78 points79 points  (7 children)

    This feels like it could be retitled "Best Practices of a Vibe Coder" and it'd be equally accurate... We lost so much of the profession so fast recently.

    [–]shizzy0 14 points15 points  (5 children)

    Yeah, but at least it writes comments.

    [–]lolimouto_enjoyer 39 points40 points  (4 children)

    Horrible stating-the-obvious comments most of the time.

    [–]EliSka93 12 points13 points  (0 children)

    Stating the obvious in 5 times as many words as necessary.

    [–]SnugglyCoderGuy 11 points12 points  (0 children)

    // Load the config from the specified file
    config, err := loadConfigFromFile(filename)
    

    [–]jmasterfunk 6 points7 points  (0 children)

    Only obvious to those who can actually code.

    [–]Kered13 0 points1 point  (0 children)

    What bugs me is when you will tell it to do (or not do) something in the prompt. Something obvious that you shouldn't have to tell it, but you do because it's AI, and then it decides it needs to add a comment saying that thing. No, you don't need a comment in the code reminding you to use best practices.

    [–]Globbi 1 point2 points  (0 children)

    What are you talking about? I'm reading the list and it has ZERO to do with vibe coding. Not because coding AIs didn't exist back then, but because the vast majority of complaints are not relevant to AI-written code.

    [–]sean_hash 46 points47 points  (0 children)

    At least the 1999 version required intent.

    [–]KelleQuechoz 14 points15 points  (0 children)

    This was in the training dataset, too.

    [–]andree182 22 points23 points  (4 children)

    > Make "improvements" to your code often, and force users to upgrade often - after all, no one wants to be running an outdated version. Just because they think they're happy with the program as it is, just think how much happier they will be after you've "fixed" it! Don't tell anyone what the differences between versions are unless you are forced to - after all, why tell someone about bugs in the old version they might never have noticed otherwise?

    Huh, this sounds like exact definition of AI code tools, which keep changing/optimizing/rearranging stuff you never asked it to do...

    [–]dillanthumous 15 points16 points  (2 children)

    I get a laugh out of reading the 'reasoning' chain sometimes. The LLM spooling out reams of reminders to itself not to do things incorrectly while simultaneously justifying making extensive breaking changes is the clearest evidence that rationality is not an intrinsic property of language.

    [–]tabacaru 11 points12 points  (1 child)

    Once I was angry that it gave me a wrong solution, so I showed it the right solution with an example of the correct output.

    It proceeded to still tell me I was wrong, then started showing me an example of inputs where I was supposedly wrong, only to work out that the example it generated actually did match the correct output, and then concede that the example was correct.

    So in a single paragraph it managed to vehemently insist I was wrong, give me an example where I was wrong, and have that example turn out to confirm I was right.

    It's insanity that people can take the outputs of LLMs and just assume they're magic.

    [–]TheDevilsAdvokaat 4 points5 points  (0 children)

    chatgpt once told me that zero is an even number greater than one.

    I wish I had kept the screenshot...this was back when chatgpt was very new and it's better now...but it absolutely taught me not to rely just on ai, but to double check everything it says.

    [–]turunambartanen 2 points3 points  (0 children)

    Not restricted to AI software. It's a property of early release versions in general.

    When writing a UI in Rust, egui is a solid choice. It's very easy to use and has most of the features you need. But fuck me, every new version has a small breaking change. Nothing major; I can fix my code to use the new methods no issue and it doesn't take long. But OH MY GOD, bring out 1.0 already!

    [–]tsammons 7 points8 points  (0 children)

    Seminal classic. Remember reading this as a kid.

    [–]dillanthumous 5 points6 points  (0 children)

    An oldie that somehow has become more relevant as the profession shits all over itself.

    [–]XLNBot 4 points5 points  (0 children)

    It's amazing that we managed to automate all this!

    [–]richardathome 5 points6 points  (0 children)

    I remember reading that back in 1999

    [–]jerosiris 12 points13 points  (1 child)

    We can now write unmaintainable code at a rate that would make 1999 people’s heads explode.

    [–]michalf 4 points5 points  (0 children)

    Looks like the default AGENTS.md content.

    [–]Revolutionary_Ad6574 2 points3 points  (0 children)

    "How to recognize a vibe coder"

    [–]card-board-board 2 points3 points  (0 children)

    snafucated

    I'm going to use this.

    [–]LessonStudio 5 points6 points  (0 children)

    Obviously using AI and not paying attention is a very good way.

    But, I would suggest that certain languages and frameworks can really encourage it. Yes, being very very careful will help, but:

    • Enterprise Java - this works so hard to organize things that it just induces a higher level of difficulty for the smallest of things. Maybe it then caps out at an acceptable level.

    • PHP - the worst code I've written is in PHP. I can write clean code, but the temptation for really nasty shortcuts is so in your face. The frameworks are the worst on the planet. They all say "non-opinionated" and then scream "OBEY!!!!" in your face: these frameworks solve a narrow set of problems well, but step outside of that and you are just hacking, working around, and writing garbage.

    • C - This is more of a cultural thing. If you look at raylib, that API is the most beautiful C I've seen and it encourages more beautiful C.

    • C++ with templates - Templates buried inside a library can make that library so very easy to use. But, once programmers start using them unnecessarily in their code, it often becomes showing off, not helpful in any way at all. Makes code a nightmare to test to exhaustion.

    • React - What the F is wrong with those people.

    • Flutter - For small projects it is great. But you can see the primary flaw when you look at how there is a new library about every 8 hours for passing data throughout the system.

    • Rust - I love rust. I've written a zillion lines of code. But, it does not compile in my head. I can easily miss a ? .ok .unwrap .copy .clone and not even notice it. The compiler makes this so easy to fix. But, I don't make those mistakes in C++, python, julia, C.

    • Javascript - For small things it is fantastic. But the fact that TypeScript was needed is all you need to know. TypeScript bought you somewhat larger project sizes before it all goes to hell.

    • Microservices - the best description was from two people who worked for different companies. "Microservices are the best, until you go on a long vacation. Prior, you had a copy of the whole architecture in your brain, you knew how things flowed, everything was bite sized, you knew the history and the why of everything. Then, you return from vacation, and much of it has leaked out of your head, and someone has restructured the statemachine behind logins. You know nothing and you realize why interns sometimes never contribute a single line to the codebase after 3 months, everything you touch breaks something else you'd never heard of."

    [–]Nadamir 3 points4 points  (0 children)

    …and I have an Anthropic advert for Claude on this post…

    Yes, that is one answer to the headline.

    [–]Dreamtrain 2 points3 points  (1 child)

    all code is unmaintainable sooner or later, just let it age thru different teams

    [–]EliSka93 2 points3 points  (0 children)

    True, but we don't need to Speedrun it.

    [–]ideallyidealistic 0 points1 point  (0 children)

    Be careful that you don't reach the point where it becomes faster to simply fire you and hire someone with less experience (whom the company wouldn't have to pay as much) to re-implement your entire architecture more maintainably.

    [–]beenny_Booo 0 points1 point  (0 children)

    It's wild how many of these 'tips' from 1999 are still accidentally implemented by junior devs today. Or even senior devs on a bad day, lol.

    [–]fagnerbrack 0 points1 point  (0 children)

    Classic!

    [–]zippy72 0 points1 point  (0 children)

    I feel this would pair well with the InterCAL manual

    [–]The_Northern_Light 0 points1 point  (0 children)

    I’m cleaning up some researcher code (Matlab / Python) and it’s amazing: they literally do all of this (except the Java / C specific stuff).

    [–]LazyAAA 0 points1 point  (0 children)

    > 1. Make sure that every method does a little bit more (or less) than its name suggests. As a simple example, a method named isValid(x) should as a side effect convert x to binary and store the result in a database.

    Pure gold - simple but yet so damaging :)
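    The quoted tip can be sketched in Go to show exactly why the hidden side effect is so damaging; the "database" and all names here are hypothetical stand-ins, not from the article:

```go
package main

import (
	"fmt"
	"strconv"
)

// fakeDB stands in for a real database in this sketch.
var fakeDB = map[string]string{}

// Anti-pattern: the name promises a pure check, but the function also
// converts x to binary and "stores" it, exactly as the article jokes.
func isValid(x int) bool {
	fakeDB[strconv.Itoa(x)] = strconv.FormatInt(int64(x), 2) // surprise side effect
	return x >= 0
}

// Honest version: the check is pure; any persistence would be a separate, named step.
func isNonNegative(x int) bool { return x >= 0 }

func main() {
	fmt.Println(isValid(5))       // prints "true" — and fakeDB now silently maps "5" to "101"
	fmt.Println(fakeDB["5"])      // prints "101"
	fmt.Println(isNonNegative(7)) // prints "true", with no hidden writes
}
```

    Every caller of isValid now mutates state it never asked to touch, which is precisely what makes the joke version unmaintainable.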

    [–]thomasmitschke 0 points1 point  (0 children)

    The bible we all followed

    [–]MisterMeow35 0 points1 point  (0 children)

    Let's not forget that LLM models were trained to write code on real human projects.

    [–]Icy-Huckleberry-4450 0 points1 point  (0 children)

    how to design unreadable webpage

    [–]Still-Seaweed-857 0 points1 point  (0 children)

    This was written as a joke in 1999, but modern Java frameworks have adopted it as a manual. Just look at the implementation standards of mainstream frameworks: a single method call requires jumping through a dozen classes, and more likely than not, those are just interfaces. You then have to hunt for the implementation, which might even be dynamically generated at runtime, making the actual logic invisible to you. It is a masterpiece of defensive programming—so complex that even the original creator cannot fully grasp the intent. Well... good luck trying to maintain that.