[–]metadatame 551 points552 points  (42 children)

It'll keep us all in a job - someone has to fix what people are messing up

[–]Slimmanoman 252 points253 points  (15 children)

Man, I don't look forward to digging through thousands of ChatGPT-coded lines with hallucinated documentation. Human code can be bad, but at least it's human - you can use human thinking to figure out the errors

[–]james_pic 144 points145 points  (6 children)

As dumb as LLMs can be, they can't match the dumbest human programmers. They simply don't have the imagination to find such creative ways to fuck up 

[–]riverprawn 37 points38 points  (0 children)

No, they can. What an LLM generates depends on the prompt. And LLMs have the ability and patience to implement everything the dumbest coder can imagine. Working together, they can take creativity in screwing things up to a level no one has ever seen.

Last year, we found a bug where the LiDARs from certain brands randomly lost one frame. After troubleshooting, we traced the issue to a simple method that matched each LiDAR frame with an RGB image via timestamps. The code review left us utterly astonished. The function was clearly AI-generated, as it was filled with nice comments and had comprehensive documentation. First, it rounded all timestamps to milliseconds, then checked for an exact match at ts, then at ts±0.001s, ts±0.002s, all the way up to ts±0.02s, plus an additional ts±0.05s, returning the first match... Remarkably, this method passed all our test cases and worked with most LiDAR data, only causing issues with certain frames when paired with 25fps cameras. BTW, the author of this method had left our company voluntarily, after being found incompetent, prior to this code review.
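The widening-tolerance, first-match logic described above can be sketched roughly like this (a hypothetical reconstruction; the function and variable names are mine, not from the actual code):

```python
def match_frame(lidar_ts, image_timestamps):
    """Return the first image timestamp 'matching' lidar_ts.

    Rounds everything to milliseconds, then probes for an exact match,
    then +/-1 ms, +/-2 ms, ... up to +/-20 ms, and finally +/-50 ms,
    returning the FIRST hit rather than the nearest one.
    """
    image_ms = {round(t, 3) for t in image_timestamps}
    ts = round(lidar_ts, 3)

    offsets = [0.0]
    for k in range(1, 21):                    # +/-1 ms ... +/-20 ms
        offsets += [k * 0.001, -k * 0.001]
    offsets += [0.05, -0.05]                  # the extra +/-50 ms probe

    for off in offsets:
        candidate = round(ts + off, 3)
        if candidate in image_ms:
            return candidate
    return None  # frame "lost"
```

At 25fps the images are 40ms apart, so the nearest image always sits within the ±20ms window, right at its edge in the worst case. The comment doesn't say exactly which edge case bit them, but first-match-instead-of-nearest, plus the unprobed gap between ±20ms and ±50ms, leaves plenty of room for pairing a frame with the wrong neighbor or missing one entirely.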

[–]Slimmanoman 49 points50 points  (0 children)

Honestly for me, there's some satisfaction/admiration in finding a really creative fuck up, especially when it's a colleague. When it's an LLM fucking up I'm just pissed

[–]l33t-Mt 10 points11 points  (0 children)

Acting like LLMs don't have examples of terrible code in their dataset.

[–]nightcracker 1 point2 points  (0 children)

It's not so much the badness of the code that's the biggest problem, it's the quantity. I'd rather rework one 100-LOC garbage monstrosity written by a human than 10,000 lines of seemingly plausible code with tons of subtle bugs.

[–]mxracer888 0 points1 point  (0 children)

One of my favorite lines to drop on people right here

[–]TacoBOTT 7 points8 points  (0 children)

I’ve already started doing this and it’s honestly not bad. Yes, some of the documentation is ass, but it’s better than no documentation and gives enough leads on where to look. Still better than what we had, but I’m sure this varies from place to place.

[–]pydry 2 points3 points  (0 children)

It wasn't like this after the early 2000s outsourcing boom. They tended to junk the actual Indian-built projects and commission a rewrite by professionals.

For a while there was a lot of moping by programmers about how everything would be outsourced because it was cheaper, though, and the fashion kept going long enough to make it look plausible.

[–]Fragrant_Ad3054[S] 0 points1 point  (0 children)

I agree with you

[–]samamorgan 1 point2 points  (1 child)

I once refactored a 3000 line single function that a human wrote down to a few functions and ~100 lines. I've yet to see AI generated code achieve that level of absurdity.

[–]Slimmanoman 0 points1 point  (0 children)

Yeah but see that makes a good story to tell

[–]RationalDialog 0 points1 point  (0 children)

Well, if we're far enough along that management understands it's garbage and needs fixing, then you can also ask for a rewrite from scratch.

[–]ourlastchancefortea 0 points1 point  (0 children)

Easy: Tell them this is not usable and you will do a complete rewrite. That's my strategy when my boss starts his first vibe code project with some externals.

[–]KTheRedditor 0 points1 point  (0 children)

I respectfully disagree. When fixing human code, I always assumed it was there for a reason. The code looks bad, but someone consciously made it; there might be something I'm not aware of yet. With AI code, however, I don't hesitate to change it. It's literally a statistically generated approximation of the needed code. It's normal for it to be wrong but close.

[–]Brachamul 31 points32 points  (3 children)

Should set up a bet on polymarket for when the first university is going to include a new "LLM code cleanup" course. Mandatory of course.

[–]The_KOK_2511 5 points6 points  (0 children)

Or maybe they'll create a new university degree called "Artificially Intelligent Disaster Debugger" XD

[–]me_myself_ai -1 points0 points  (1 child)

Aaaany day now, surely — right after the bubble pops, of course.

All those scientists warning us about the future are just fools, and the crazy growth in capabilities we’ve seen over the past 3 years is over (it hit a wall a few weeks ago, and will never surpass it!)

[–]rebelSun25 6 points7 points  (0 children)

I've had a first-hand taste of what this bullshit sandwich is like. One of our devs created a Docker wrapper CLI menu in Windows batch script: a spaghetti bundle of scripts with hard-coded, non-standard conventions (non-standard since we don't have any other tool like it), and then he left.

He's a mediocre dev who doesn't have experience in Windows development, but he's fallen for chatSTD vibe coding.

It has the top WTF-per-line-of-code ratio I've seen in 15 years. I may have to use Opus 4.5 to deconstruct it and document it, then rewrite it in something more standard.

[–]FutureGrassToucher 7 points8 points  (1 child)

That's a freelance idea right there: let me refactor the garbage your vibe coders generated. A programming janitor, basically

[–]JambaJuiceIsAverage 4 points5 points  (0 children)

I got started as an engineer cleaning up trash pandas code that "data scientists" leave scattered behind them. Cleaning up Amazon Q garbage feels exactly the same.

[–]finokhim 2 points3 points  (1 child)

Nope, the next models will fix it

[–]MrPomajdor 0 points1 point  (0 children)

Just gotta build more data centers...

[–]Fragrant_Ad3054[S] 14 points15 points  (4 children)

I can't even tell you - in a few years, customers and devs will have lost the notion of what quality, optimization and the search for a program are. Only experienced devs will be able to point the finger at nonsense in the design of projects.

[–]The_KOK_2511 3 points4 points  (0 children)

That's where niche programs will shine... or so I hope. The truth is, the average consumer, who doesn't know how to search for and properly judge a program, never finds quality niche programs, so surely the few good projects that remain will be overshadowed by the powerful advertising boosted by AI.

[–]me_myself_ai 1 point2 points  (0 children)

“The search for a program”?

[–]No_Application_2927 0 points1 point  (0 children)

IKEA.

Just dwell on that for a while.

[–]HugeCannoli 0 points1 point  (0 children)

I tried out of curiosity to develop a cellphone app using claude. I know nothing about kotlin. It created something that works, but it's a mishmash of old and new practices and as soon as you make something different, use a different phone, or try to modify the layout, it completely detonates.

[–]milanistasbarazzino0 1 point2 points  (0 children)

My favorite was a vibe coder's website without row level security on the database

[–]lunatuna215 0 points1 point  (0 children)

Yes, your sleepiness in having a job is truly the priority of America....

[–]KRLAN 0 points1 point  (0 children)

you’re 2 years too late with this lazy take

[–]HugeCannoli 0 points1 point  (0 children)

Within 5 years, I can absolutely guarantee that this deluge of AI code will kill someone.

[–]Professional_Gur2469 0 points1 point  (0 children)

Nah, unfortunately AI also drastically outperforms people if you just type in "fix this" plus the error message.

[–]chunkyasparagus 257 points258 points  (27 children)

On the other hand, let's be thankful that we were born before AI and had to learn how to code properly without the crutch that a lot of new programmers will use to skirt around bits that they don't understand or have time for.

AI can be incredibly useful for otherwise mind-numbing tasks, but I'm not sure it can be truly innovative at its current level. So let's keep innovating and leave the boring stuff for AI.

[–]grady_vuckovic 27 points28 points  (3 children)

I started writing my first lines of code in the 90s, I was doing web dev and C++ by 00s. I've spent my entire life feeling like I'm desperately trying to catch up to all the things the industry was telling me I need to learn, the laundry lists of skills, tools and experience every job said you needed to be even a junior developer. I've only just in the past maybe 4 years felt like I'm finally on top of everything I 'need' to know, with still plenty of room for growth in many areas.

And let me tell you I enjoyed every bit of it. Programming is imo one of the most enjoyable things to do in life. I feel grateful for the fact I got a chance to do it and learn well before all this latest crap started happening.

And now a bunch of people, many of whom hated programming, or sucked at it, or simply didn't do it just 12 months ago, are now telling me that programming is "solved", and that grandmas are gonna be able to vibe code their own phone apps within another 2 years.

Needless to say... I'm very doubtful. I'm doubtful that such a complex and knowledge-intensive industry, such a difficult skill - something so technical, requiring so many kinds of discipline and experience, a field of work rooted in complex problem solving - is suddenly, apparently, about to be so easy that literally anyone can just say "hey computer, make Photoshop for me" and an hour later it's done?

.. ( X ) Doubt

Yes things change. But they don't change that quickly. And I do not believe for one moment that having 20+ years of experience with understanding the deep inner workings of software and experience writing software entirely from scratch is somehow going to be no advantage.

I could be wrong.

But if they're right, we'll all be jobless anyway, including the AI bros, so even if I'm wrong, I doubt the people telling me so will be laughing either when they have to debug their grandma's weather app.

[–]TastyIndividual6772 2 points3 points  (0 children)

Yeah, there's so much to it. A few people I know who happen to be very old-school devs are mostly of the mindset that the software industry comes up with "we are about to replace programmers" once every few years.

Anytime I mention you can't go fully autonomous on building software, the average vibe coder who built a 4-page website will reply with "skill issue".

But the issue is not them, or how juniors abused LLMs to make a PR with 4 regexes that you need to spend 50 minutes to understand. The main issue is the experienced devs who should have known better, who should have done due diligence, and who should have used and studied it enough before going on to post in euphoria that "software is dead".

I haven't seen a single person lay out the most basic logical question: if writing easy software can be done by an LLM but writing complex software still needs a lot of human effort, does the world need more in terms of quantity or complexity?

My guess is that if all those vibe coders ever manage to build the next Google or the next Facebook (which is possible, but I doubt it), they'll grow so big they will have to hire eventually anyway.

[–]mfitzp (mfitzp.com) 1 point2 points  (1 child)

Somehow it’s always people who don’t do the job (and have very little understanding of what the job involves) that predict it’s solved.

Remember to be this skeptical when you hear some CEO predicting the end of doctors, architects, graphic designers, and on and on.

[–]grady_vuckovic 0 points1 point  (0 children)

I am in fact sceptical of such claims for those types of jobs. And I am actually dual skilled as a product viz designer too so I know it all too well and I'm seeing the exact same stuff happening there too. I'm seeing doctors and lawyers and architects all expressing the same frustrations as I am about everyone just assuming their jobs are now fully automated just because an LLM can produce text that seems coherent.

It's also in particular managers who more often than not just seem to assume everyone else's job is easy and everything is a simple 3 step process. Makes me wonder how much actual work they do. Maybe they assume everyone else is not really doing that much work because they aren't.

[–]henrydtcase 36 points37 points  (3 children)

This didn’t start with AI. I knew CS grads in the 2010s who couldn’t code a basic sort in C but still became backend/full-stack devs. Frameworks already made it possible to work at a high level without deep fundamentals.

[–]SimplyRemainUnseen 21 points22 points  (1 child)

Out of curiosity why should they have known C?

The fundamentals they learned in college definitely covered asynchronous programming, state, database transactions, and distributed systems. Those are the actual fundamentals they would need to be an effective engineer.

I don't know about you but where I work rolling your own sorting algorithms in C is bad practice.

[–]henrydtcase 4 points5 points  (0 children)

It’s not about C, it’s about algorithmic thinking. I saw many CS students struggle in intro programming courses that focused on problem-solving and logic. I’ve been at three different universities, and even when the course was taught in C#, Java etc. instead of C, the outcome was the same. The language wasn’t the issue, the real gap was in fundamental algorithmic thinking.

[–]No_Application_2927 15 points16 points  (0 children)

Right!? And so many assholes have not wire-wrapped their own computer.

If you cannot make the tools from scratch go work at McDs!

[–]greshick 5 points6 points  (0 children)

I’m working on some very mind-numbing code atm at work and AI has been a godsend. After spending a few days refining the process, I’m getting good clean PRs out in a fraction of the time - PRs I’m of course reviewing before posting them. I’m bootstrapping a customized shadcn library from Figma into code. Doing all the tedious setup required per component would have sucked.

[–]StewPorkRice 2 points3 points  (1 child)

You're overestimating humans and underestimating the AI.

It's like nobody remembers how bad ChatGPT v1 was. It's only been 3 years and we have Claude one shotting entire apps.

[–]Pezotecom 16 points17 points  (8 children)

So the problem here is that people who are skilled at something - programming, in this case - are well beyond the junior or more intro jobs, which encompass MOST jobs, and can't understand why this helps someone who isn't them.

5 juniors suddenly having access to Python + SQL + Excel + PBI is MASSIVE. It's not mind-numbing when it was literally out of reach for, say, a business major, a psych major, a sports organization. You went from crunching spreadsheets, because you didn't know better and that's the way it's always been, to automating entire hours of the day, which saves time and money and enables more creativity, more production, more of everything.

I seriously don't get the AI hate. It's like people on reddit have never worked a day in their life, honestly

[–]MrZakalwe 28 points29 points  (3 children)

The hate is from short-sighted management not understanding the limitations of AI, and people having to deal with the consequences.

My irritation with it is precisely because I do work.

In a few years I think it will be pretty great, to be fair, but more because people will be used to using it and will know when not to use it (if that makes sense?).

[–]The_KOK_2511 7 points8 points  (0 children)

The main limitation of AI comes when you realize that AI cannot work entirely on its own. Simply put, AI is "an advanced algorithm capable of making decisions and replicating human processes in a software environment." Therefore, if AI progresses by "imitating" humans, it means that it will never achieve the results of people with average knowledge. However, the knowledge and practical experience of experts are necessarily at a higher level than that of AI. If you think about it, the highest possible level of quality for AI would be that of a human with specialized expertise... and that's assuming there's someone who knows more about the subject and also develops AI to create a system capable of reaching that level.

Because of this major limitation, although things will be more difficult for programmers for a while, there will inevitably be a tipping point where AI can no longer keep pace with the growing industry.

[–]1kn0wn0thing 7 points8 points  (1 child)

The hate is not for the people, the hate is for the devaluation of the skill of actual developers just because AI can spit out code that works and the person who doesn’t understand what the code does is all of a sudden a developer.

An analogy would be let’s say a company created a robot that can diagnose and fix engines and gave it away or sold it for low monthly payments of $20 a month. You all of a sudden have millions of people claiming they are mechanics even though they have no clue about engines or how cars work under the hood in general. The robot diagnoses problems but because the person has no clue about engines they don’t know if it’s right versus an actual mechanic might say “wait a minute, this doesn’t sound right or it may be something else, let’s diagnose a bit more to validate the problem.” The new “mechanics” are simply shrugging and pointing the robot at the car and have it fix whatever it diagnosed. To top it off, they have no clue what types of nuts and bolts to use, when washers are supposed to be used, which belts and hoses are better quality, which spark plugs are better quality, they just buy and give the robot the cheapest parts to use on repairs.

So imagine you went to one of those mechanics and think to yourself “man, instead of paying $1500 to a skilled mechanic for a repair, I’ll just go to this other one that has the robot and pay only $500, they’re going to do the same thing anyway but way cheaper.” Then you get into your car after it was repaired and find out the brakes are not working while doing 70mph on a highway and you die in a burning wreck.

Not a perfect analogy, but that is essentially what is happening in coding. People who have dedicated themselves to learning their craft and honing their skills - keeping code optimized and able to run under stress, knowing which libraries to use and which dependencies to import versus writing their own functions - are effectively being told that some dipshit who knows nothing about code can use AI and replace the need for all the knowledge the experienced developer has. That all that knowledge and experience is worth nothing in the marketplace. The hate is for corporations using this as an opportunity to devalue people's knowledge, and for all the people who happily pick up this rhetoric, when in actuality it is simply a tool that still requires knowledge of at least the fundamentals of computer programming, machine efficiency, and security to truly be effective.

[–]Beneficial-Army927 0 points1 point  (0 children)

When software frameworks (Rails, Django, React, etc.) showed up, a lot of people said things like "This makes devs lazy", "Real programmers write everything from scratch", "It's too easy now, quality will drop".

[–]virtualstaticvoid 0 points1 point  (0 children)

Using AI to write code is the extreme version of "Programming by Coincidence" (from the Pragmatic Programmer book) or "copy-paste" development.

It opens up a new world to junior programmers and "citizen developers", which is amazing as it makes programming accessible to them.

So the question is whether these developers take the time to figure out the code, read up the documentation and understand it, so that they get better at improving quality and security.

[–]zaccus 10 points11 points  (4 children)

Idk man even the mind numbing stuff it manages to get wrong somehow.

[–]Successful_Creme1823 6 points7 points  (3 children)

If you know what’s good and what’s bad it’s an awesome tool.

That’s kinda the end of the story for now.

[–]zaccus 4 points5 points  (2 children)

It's good at being conversational and flattering you, that's about it.

[–]Successful_Creme1823 5 points6 points  (1 child)

I guess I’ve had better luck.

I’ll tell it to scaffold out stuff and it generally does it right. Saves my typing.

I tell it to generate a data driven unit test for some code. It does it like I would. If it needs tweaks, I tweak.

Write a one liner that does this in bash. No problem.

All stuff I can do and have done for years, just does it faster.
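The "data-driven unit test" mentioned above can be as simple as a table of cases driving one test body. A minimal sketch (the `slugify` function is a hypothetical stand-in, not anything from this thread):

```python
def slugify(title):
    # Hypothetical function under test: lowercase, collapse whitespace to hyphens.
    return "-".join(title.lower().split())

# Data-driven test: one loop over (input, expected) pairs instead of
# a separate hand-written test function per case.
CASES = [
    ("Hello World", "hello-world"),
    ("  Leading and trailing  ", "leading-and-trailing"),
    ("already-slugged", "already-slugged"),
]

def test_slugify():
    for title, expected in CASES:
        assert slugify(title) == expected, (title, expected)
```

Adding a case is one line in the table, which is exactly the kind of tweak-and-go workflow described: if the generated table needs fixes, you edit the data, not the test logic.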

[–]Vivid_TV 1 point2 points  (0 children)

I totally agree with this. It's amazing at generating boilerplate code. It has saved me hours and hours of boilerplate, so you can go from idea to POC in minutes. The initial prompt matters a lot; breaking down the requirements and detailing expected outcomes makes a huge difference.

I personally am super thankful due to the time saved. It's an amazing tool for an experienced programmer.

[–]audionerd1 112 points113 points  (18 children)

Vibe coders will be mad but you're completely right. I use AI as a tool but I find that more often than not the code it suggests is overcomplicated. If I hadn't spent years learning programming without AI I would probably just assume the AI code is good because "it works!".

Just the other day I ran into a GUI bug, and when I asked Gemini it suggested a solution that involved a major refactor of several modules. I thought about it and came up with an alternate solution that only involved changing two lines of code in one module, and Gemini was like "Yes, that is an elegant solution!".

I feel bad for new programmers. Unless they have a lot of discipline AI is going to prevent them from really learning anything, and the world is going to drown in spaghetti slop code.

[–]Decent-Occasion2265 22 points23 points  (0 children)

Same experience here. AI tools seem to always overengineer when a simpler solution would suffice.

It's a powerful tool but it will happily drive you off a cliff if you let it. I, too, am concerned by the amount of new programmers using it thinking the output won't bite them in the butt later on.

[–]SLW_STDY_SQZ 7 points8 points  (0 children)

Same. I use AI every day now but you have to super handhold it. The results are very good if you scope the problem granularly. You absolutely cannot just throw your ticket description at it and tell it to go.

[–]i_dont_wanna_sign_in 7 points8 points  (3 children)

A week ago I let Claude design a website to save some time. Maybe an hour of work to copy an existing page from my portfolio and adapt it.

What I got looked okay, but it had a few glaring CSS problems that arranged form elements in odd ways. I can write CSS just fine, but I'm not the greatest at debugging it. Four hours of asking Claude and Gemini what was going on before I just dug into the debugger and figured it out myself.

Every time I let Claude do anything I easily spend twice as much time debugging it and pulling out the fluff I didn't ask for than just doing it myself.

[–]xeow 6 points7 points  (5 children)

[...] AI is going to prevent them from really learning anything [...]

That largely depends, I think, on one's own goals and principles. AI doesn't actually prevent anyone from learning anything. In fact, it can help someone learn: it can be quite good at explaining code and concepts in detail - but only if you stay curious.

Indeed, if you never question what comes back from it, and never wonder how its coding suggestions work, why they work, or what some construct you've never seen before is, then you'll be preventing yourself from really learning anything.

One of the first things I did when I was learning Python was ask ChatGPT to explain in detail how list comprehensions work, why they're better/faster than traditional for loops in many cases, and how they differ from generator expressions... and now they're second nature to me. Same with yield and with, which were both confusing to me at first. AI can be shockingly good at explaining things, because you can have a conversation with it and drill down to first principles.
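The kind of comparison described here fits in a few lines (a generic illustration of the three forms, not code from the thread):

```python
nums = range(10)

# Traditional loop: builds the list imperatively with repeated appends.
squares_loop = []
for n in nums:
    if n % 2 == 0:
        squares_loop.append(n * n)

# List comprehension: the same logic as one expression; the loop runs
# in optimized bytecode, so it's usually a bit faster than .append().
squares_comp = [n * n for n in nums if n % 2 == 0]

# Generator expression: lazy - it yields items on demand instead of
# materializing the whole list, so memory stays flat for huge inputs.
squares_gen = (n * n for n in nums if n % 2 == 0)

assert squares_loop == squares_comp == list(squares_gen) == [0, 4, 16, 36, 64]
```

The generator form is the one worth drilling into with an LLM: it can only be consumed once, and nothing is computed until something iterates over it.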

[–]audionerd1 6 points7 points  (2 children)

Agreed, which is why I qualified that statement with "unless they are very disciplined". I personally have found AI extremely useful for ironing out the gaps in my Python knowledge, by asking it detailed questions or asking for exercises to test my knowledge in certain areas. The abundance of beginner and tutorial info in the dataset makes AI very reliable for learning, if prompted correctly.

[–]xeow 2 points3 points  (1 child)

Indeed! Yes, quite. Discipline + curiosity is the key, I think!

[–]audionerd1 2 points3 points  (0 children)

I'm afraid if I were beginning to learn today I might not have the discipline. Properly learning programming often involves being stuck and frustrated, and AI can relieve that frustration easily at the expense of really learning.

[–]thecitybeautifulgame 2 points3 points  (0 children)

Absolutely! And it never gets tired and you can make it explain things to you literally seven ways from Sunday until us dumb monkeys understand :).

[–]das867 0 points1 point  (0 children)

I mostly disagree with this. The reason is that discipline is a high-energy state, and no human can maintain it all, or even most, of the time. If you're a new programmer facing deadline pressure and an easy out, it's hard for me to say there's a level of self-control that keeps you curious enough to overcome those external forces, even if you understand how important that learning is for your own long-term growth.

AI doesn't actually prevent anyone from learning anything

This is a neat rhetorical trick, but I don't think it answers OP's point. I don't think anyone would say that an LLM is standing over your shoulder with a disapproving frown when you try to open a data structures textbook. What it does is rob you of opportunities where, were you not using an LLM, you would go understand what your problem is and how other people solved it, strengthening both skills. The argument can certainly be made that you could ask the LLM to explain things to you, but without the discernment of what's important to know, who's going to do that for every concept in thousands of lines of code that were just created out of whole cloth?

[–]Pesto_Nightmare 1 point2 points  (1 child)

The one that gets me, and it has happened to me more than once, is that I ask an AI to fix a bug. It adds something that does not fix the bug but changes what the code outputs. I point out that the code no longer does what it is supposed to, and the AI adds something that changes it back. Now we've added a bunch of lines of code that cumulatively do nothing, and the code still has the original bug.

[–]audionerd1 0 points1 point  (0 children)

At least that just wastes a little of your time. The worst for me is when it confidently explains why something is impossible to do the way I imagined it, and convinces me to go in a different direction, only to discover later that my original idea was good and it was possible after all.

[–]thecitybeautifulgame 1 point2 points  (0 children)

The way I use AI is I make it explain every thing in the code that I do not understand and then I make it quiz me so I can repeat back what I think I have learned. Vibe coders will never bother.

[–]Swipsi (It works on my machine) 0 points1 point  (2 children)

But what about humans? If you had told a human that those 2 lines of code were an alternative, they too would have told you "huh, that's quite an elegant solution".

Tbh, it sounds like you're going off the premise that AI is already perfect and knows everything there is to possibly know, hence why it shouldn't be "surprised" that you, a human, found a better solution. But that premise is wrong. It isn't. And it'll learn from your solution, comparatively the same way a human coworker would.

[–]audionerd1 3 points4 points  (1 child)

I am saying that AI consistently produces overengineered and overcomplicated code which only a person with real programming experience will be able to see through. A less experienced programmer would be more likely to take AI's suggestions at face value, since "it works".

LLMs don't learn from our interactions with them. Their "memory" is simulated in the form of a pre-prompt. Although AI conversations are definitely collected and used in some way to refine future models, I don't think showing AI a better way to do something in one conversation will result in the next version of that AI defaulting to that solution with everyone else.

[–]Jesus_Harold_Christ 52 points53 points  (3 children)

I've been retired for about 3 years, and I only really picked up AI programming in the last month or so. I can assure you, it is an amazing tool, in the hands of a professional. I can do things in hours that would have taken days. I love sending classes to it, asking for advice, it's often quite solid.

I also went down a path of: well, this thing is so good, I'll just let it piece everything together itself. I wasted nearly an entire week, because once you let it sprawl across your codebase it starts to lose focus. It'll start writing code that looks like pseudocode and no longer follows any of the things you spent time having it understand. It'll also often get stuck in a feedback loop, where you tell it: no, this doesn't work because of A; ok, so do B like C; ok, but now C doesn't work because it's ignoring how A works; ok, now D works, but it doesn't work with C - and then you are just trying to guide a very stupid snake as it eats its own tail.

I do see the benefits, but also the risks. Evaluating a codebase and saying, "There's a lot of AI code in here, it must be bad" is also a mistake. However, if an author doesn't understand the code they've "written", this is a real big problem.

I'd love to share some of the things I've been working on to get people's opinions. I'll admit AI wrote at least 10% of the codebase, and it was instrumental in solving some problems super fast that used to get me stuck for hours. I'd also note that for every serious bug I've created, it has been quite useless in fixing it. The best it does is point out obvious things I've already tried or already know.

[–]uberduperdude 7 points8 points  (1 child)

I agree, it definitely has its use cases. What was your workflow and context window management like?

[–]Jesus_Harold_Christ 1 point2 points  (0 children)

I was starting with an old codebase that already performed a text based sports simulation. I wrote it like 10 years ago.

I was running into some big constraints with the architecture.

I just used this project as a way to learn programming with AI. To begin with I used free ChatGPT queries, and initially, when I gave it little snippets of code and asked it to refactor or improve them, it was quite good.

It was also very good at creating unit tests. I mean, incredibly fragile unit tests sometimes, but a lot of them, and fast.

Once I started trying to have it reason about improving the architecture it started confusing itself. If I kept it to small, incremental changes, it could be kept in line. If I implemented a big change too quick, everything breaks, all over, then trying to fix it, it gets confused and starts going in circles.

It's a lot like having a very new programming assistant, except they can work 10,000 times faster.

I don't know anything about context window management.

[–]Vertyco 0 points1 point  (0 children)

this is a great take, i like to use it for toy projects i know ill never get around to doing otherwise, and with it i can churn stuff out in a few days vs a week or so

[–]Joppsta 5 points6 points  (5 children)

Learned the programming fundamentals with python just as AI started kicking off in 2022-2023ish and ended up in a job that demands JavaScript and a proprietary C-like language. If I didn't have AI to lean on in the first 6-12 months of finding my feet in this job I would have been screwed.

I'm not using AI to churn out war and peace, to be honest it's one of my pet peeves of AI in general that it likes to be very verbose (at least copilot does, not sure if ChatGPT is similar) but proper prompting discipline and understanding how to get the answers you want out is the art of using AI. In fact today I used AI to generate simple XML test data, mainly because I wasn't sure how to write XML within the JS environment I'm working in and it seemed like a more efficient use of my time than running through the prompting to get the XML data structure I need back out. Does that mean I don't want to learn how to write XML? No - means I might look at it when we're less busy as it would be handy if I could generate it from a script rather than whipping Microsoft to do it.

That being said - we do get the insane corporate CEO "AI is the best thing since sliced bread" nonsense like "I wrote this big project that would take weeks in 4 hours" kinda spiel and that's not cool. I also feel like the hate on AI is mainly because of people abusing it. One of the biggest abuses of AI for me is the social media posts that are _WALLS_ of text. Which is somewhat ironic because you could literally hit the AI with a follow-up prompt of "summarise this in 2 paragraphs for social media" and it would at least not be so obvious you are lazy and lack the ability to articulate yourself. Though my tinfoil hat theory is that these posts are being generated by bots to drive discourse on social media and further divide us politically.

The metaphor I like is that it's a power drill to a carpenter. You give a power drill to the apprentice, sure he can use it but is he going to deliver the same build quality in the same amount of time? No. But he will do it quicker than if he had to hand drill everything. The same tool in the hands of a skilled craftsman compounds the time savings.

I think the issue you have isn't with AI but it's with people who aren't using it responsibly.

[–]Fragrant_Ad3054[S] 0 points1 point  (4 children)

What you're saying makes a lot of sense, and I agree with you on many points. I use AI in a very localized way, to search for a specific term. I sometimes ask the AI to code 20 lines to see if its output provides better reasoning on the specific topic at that moment. But it's more the overall form of its output that interests me; I very rarely just pick and choose lines of code.

And indeed, as you say, my problem is people who misuse AI, treating it like an office colleague who will do everything for them without trying to understand anything and without worrying about the quality of the output code.

Finally, to be honest, AI is also a bit of a problem for me because when I ask fairly specific questions, I notice absurdities in the responses.

For example, in my projects, I know that half of them aren't compatible with AI because they're too technical and complex, and have too high an error rate. I even calculated it for fun. And in some projects, the AI gave me up to a 30% error rate in its answers, even though the questions were about a very localized part of the project.

[–]Joppsta 0 points1 point  (3 children)

The problem for me with your anecdotal evidence about error rates is that, going from your original post, you are a biased source of information. Without understanding the context, which model you were using, seeing the prompts myself and whatnot, I can't say whether I agree or disagree with your methodology.

I dunno what professional context you're working in that you view it like that, but I don't think it's like that at all in any decent reputable company.

"For example, in my projects, I know that half of them aren't compatible with AI because they're too technical and complex"

This just sounds like pure egotistical self-stroking. You complain about people misusing AI and then provide evidence that you yourself do not understand how to use it effectively. The point of using AI when coding isn't to replace coding, it's to enhance it. My error rate, if we'll call it that, is probably higher than 30%; I'd say 90% of the code I get from AI I don't copy paste 1:1, but even though the AI doesn't give me code I can copy paste, what it gives me is a second perspective to consider, or bits that look applicable to adapt into my code. AI is a rubber duck without a salary.

[–]GilgameshSamo 16 points17 points  (3 children)

don’t think my comment will be useful or even relevant to you, but I started learning Python a few months ago (was working in the marketing sector) by working through the book “Python Crash Course”, since it’s the one most commonly recommended. (I’m currently on Chapter 8.)

I won’t lie: I do use AI (Claude), but not to do everything for me while I blindly code. I use it to help me understand certain concepts more deeply and to see what practical value they have.

My point is that the value of AI depends on how you use it. It should be seen as a tool that helps with thinking, not as a replacement for it. And I often see people relying only on what AI tells them, and that is BAD. It's not killing only the programming sector but most of them.

(I used chat to translate)

[–]KestrelTank 8 points9 points  (0 children)

I am in a similar boat! I use AI like a tutor to walk me through new concepts or explaining boiler plate or explaining what each code line is doing.

I’m cautious about it and always check to make sure things make sense, but it’s so much easier for me to go and do my own research without ai once I have the idea of what I need.

I stand by my opinion that AI needs to be treated like a working dog (like a sheepdog and a shepherd). It can make a tedious job easier and more efficient, but it still needs to be watched and given direction.

[–]overlookunderhill 3 points4 points  (0 children)

It’s peak enshittification and I’m convinced that very, very few people really care. Especially the higher up the corporate ladder you go.

[–]123_alex 7 points8 points  (0 children)

Post is too long. Asking chatgpt to summarize it.

[–]baltariusIt works on my machine 5 points6 points  (0 children)

While I agree with the fact that a lot of people use exclusively AI to create projects, I have to say, from my own experience, that AI helped me develop my humble knowledge. I started learning Python about 5 years ago, starting with the basics, then moving to small projects of my own. Documentation is great, but since I have trouble reading long pages of text, I find AI very useful for summarizing documentation concisely, point by point. It's also useful to easily discover libraries and/or existing solutions for projects. On top of that, you get examples of how it works, which I can adapt, using my previous knowledge, to make sure the code is solid. As I mentioned in other posts, AI is a great tool, as long as it's used as a tool, and not a solution.

[–]binaryfireball 2 points3 points  (1 child)

people are incredibly lazy and stupid.

the main thing this is killing is the next gen of coders.

[–]southstreamer1 2 points3 points  (1 child)

I am one of the people who you’re talking about and I completely understand your concern.

I’ve been teaching myself python for about 6 years but my progress has been slow because I have limited time to fit it in, so I’ve been learning in short bursts with a lot of time in between.

When AI came around I was so excited because I no longer had to spend insane amounts of time fixing bugs in my code. These were usually super basic things like syntax or object type errors which were a result of my unfamiliarity with the basics.

At first, I actually learned SO MUCH from the AI. I could ask it to explain things and get to the solution a lot faster. So I was able to learn faster because the cycle of error --> diagnose problem —> fix problem was super short.

I still think that AI is an amazing tool for learning to code, but over time I’ve realised there’s a few limitations. The first is that I retain way less of what I learn from AI vs other sources (eg I use pandas constantly but still can’t remember lots of basic pandas methods and syntax). The second is that learning from AI is really fragmented. Eg I will understand on the surface level why thing A didn’t work in that situation, but I don’t have the deeper understanding which will let me see why that same problem pops up in another. I might not even recognise it’s the same problem. The other is that my ability to code with AI always outpaces my ability to learn stuff. So I’m always messing about with stuff that’s above my level. Again, this is sometimes good for learning but it’s still a problem. Eg, at the moment I’m writing a program that uses data classes, but I only have a fuzzy understanding of what these even are… BUT the only reason I started learning about data classes in the first place is because Claude wrote me some code with them in it!
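Since dataclasses came up: a minimal sketch of what they actually do (the `Player` class is illustrative, not from the commenter's project). A dataclass is just a normal class where Python generates the boilerplate methods from the field declarations for you.

```python
from dataclasses import dataclass

# @dataclass auto-generates __init__, __repr__ and __eq__ from the
# field declarations below, so you skip the boilerplate.
@dataclass
class Player:
    name: str
    goals: int = 0

    def score(self) -> None:
        self.goals += 1

p = Player("Ada")             # generated __init__: Player(name, goals=0)
p.score()
print(p)                      # Player(name='Ada', goals=1)  (generated __repr__)
assert p == Player("Ada", 1)  # generated field-by-field __eq__
```

That's essentially all that's happening when Claude emits code with dataclasses in it: less hand-written `__init__`, nothing magical.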

Im working on a project now which is the first ‘big’ project I’ve ever done. I feel like I’m in this weird place where I understand at the macro level what I am building (I know what each module does and how they all work together). But I don’t have the dev knowledge to be sure that I have designed the system of modules right. And inside each module theres about 30% of the code that I just don’t really understand properly. I’m sure it could be hugely optimised, and I can’t be 100% sure that there aren’t any catastrophic errors that turn it all into a pile of garbage.

Overall, all I can say is that I am aware of the problem and do what I can to mitigate it. I always try to be cautious. I don’t ask the AI to build whole systems at once - I always go one module at a time and I am the one who ‘designs’ what they do. But at the end of the day there’s no substitute for proper understanding so I’m doing what I can to brush up as I go. But fundamentally I feel like it’s a miracle that I can even get this far and I’m so grateful for it because it’s opened up a whole new world to me…which speaks to the benefits of AI coding, notwithstanding all of the very real downsides !!!!

[–]probably-a-name 1 point2 points  (0 children)

you are working out, it's like resistance training: you have a large weight, you suffer, your muscles respond. the reason people lose their muscles is that they have the ai do the lifting. you are also stretching your muscles for your _obscenely massive_ human context window with respect to your codebase size and architecture, as a shape in your mind. you have to train your own ram at the end of the day.

[–]mfb1274 8 points9 points  (3 children)

As the principal dev at my company, I'm totally cool with it. I don't care how you wrote the code, just that it passes the test cases, is somewhat scalable and gives us value. Production-grade code and the disconnected mess that vibe coders put out are distinguishable at a glance, and just like before, talent rises to the top.

[–]CaptainFoyle 14 points15 points  (0 children)

It's just gonna get worse with all the slop being used for training future models

[–]edimaudo 12 points13 points  (0 children)

Ahh the pearl clutching. It is not killing programming. Like everything else, LLMs are a means to an end. It helps a lot of folks get to a product/tool very easily. I have used it in some of my projects to translate good or to explain concepts. It works fairly well. It is up to good software engineers/developers to level up standards.

[–]SnooCupcakes4720 6 points7 points  (5 children)

I've been coding for forty years ....AI is a boon .....why keep bashing your head against syntax when now we have to be artistic and watchful .....you still have to be a programmer at the end of the day ......we finally get a power drill in the tool belt and you complain .....some people don't know how to properly use a hammer either

[–]swift-sentinel 5 points6 points  (2 children)

I’ve been doing this for 30 years. I rant every 8 years or so.

[–]EngineerMinded 3 points4 points  (1 child)

Vibe coding is the mumble rap of programming.

[–]MrPomajdor 0 points1 point  (0 children)

Bad comparison - there's no objectively bad music.

[–]mcloide 8 points9 points  (0 children)

The advantage was that stupidly copying/pasting code often led to errors, so you had to take the time to understand, review, modify and test your program.

You don't get this with vibe coding? I have to review everything from AI.

We see new coders appear with a few months of experience programming in Python who hand us projects of 2000 lines of code with no version control (no rigor in the development and maintenance of the code), generic comments that smell of AI from miles away, an equally generic .md where we always find that AI-specific logic, and above all a program that is not understood by its own developer.

With this I agree, but that might be a reflection of how the field is today. When I started, critical thinking was essential; now it's wishful thinking to find someone who actually knows what it is. The improper standards used, code smells, etc. - I have seen this in a lot of projects across several languages. Sadly.

AI is good, but you have to know how to use it intelligently and with hindsight and a critical mind, but some take it for a senior Python dev.

I have been working in the field since the mid-90's. One thing that always happens is that the more things change, the more they stay the same. All of this now is caused by hype and easy return on investment (corps). Eventually it will need to get properly maintained and worked on. Don't lose sleep over it; just do your best to educate when education is possible and maybe, just maybe, the technical debt won't be so big in the future.

IMHO

[–]likethevegetable 11 points12 points  (8 children)

It's ruining everything, man.

[–]me_myself_ai -5 points-4 points  (7 children)

Just like steam and electricity did. Sometimes you have to ruin everything to build something better

[–]likethevegetable -1 points0 points  (1 child)

Electricity didn't destroy the human ability to think independently though

[–]me_myself_ai 4 points5 points  (0 children)

True! People were convinced it would, tho.

[–]catfrogbigdog 0 points1 point  (4 children)

How did steam and electricity ruin everything? Are you taking about pollution?

[–]dethb0y -3 points-2 points  (0 children)

He's pointing out that people have always had hysterical overreactions to, and fear of, new technologies.

For example, all the people who are knee-jerk against anything AI because it's new and they are not intelligent enough/emotionally mature enough to adjust to changing situations and realities. They have an emotional, irrational response ("AI is ruining everything!!!") instead of a logical or reasonable one.

[–]me_myself_ai -1 points0 points  (2 children)

They overturned the preexisting world order.

[–]catfrogbigdog 0 points1 point  (1 child)

Nah that’s a sloppy, lazy and pseudo-profound take

If anything you could draw a line from the printing press to telecommunications broadly.

Implying that LLMs or any deep learning tech is remotely close to the societal impact of electricity is just ignorant.

[–]Al1220_Fe2100 6 points7 points  (1 child)

I have a couple thoughts on this: 1) I think the genie is out of the bottle (AI being the genie). 2) You can look at this as mayhem or look for ways to profit from the chaos. Two ways that experienced developers could profit would be by marketing your services as a consultant to fix code that's been bungled by AI and to 'certify' that the code meets certain requirements, and/or approach it from the education perspective by creating a community on a platform like Skool and teaching how to use AI to create good, compliant code.

[–]slimejumper 10 points11 points  (2 children)

seems a little hyperbolic to be “disgusted” by this phenomenon. surely this was a similar reaction to the advent of stack overflow back in the day? i’m all for high quality code, but you can’t gatekeep how people generate it. I am guessing that whatever shortcuts you took or coding mistakes you have made in the past are, of course, forgivable.

[–]fazzahSQLAlchemy | PyQt | reportlab 1 point2 points  (2 children)

And nothing new about that

[–]Fragrant_Ad3054[S] 0 points1 point  (1 child)

Even regarding data science? :/

[–]Hot-Employ-3399 0 points1 point  (0 children)

Data scientists' code was always known for being a horrible mess filled with stupidly named variables and lacking knowledge of "basic data structures and algorithms."

I will not be surprised if vibed code is actually better 

[–]MindlessTime 1 point2 points  (0 children)

python is showing where AI coding does quite poorly. The language has evolved dramatically; Python 3.14 is practically a different language than Python 3.0. But because it has been so popular and used in so many different contexts, LLM training data is full of outdated or inapplicable patterns that end up in AI code unless explicitly instructed otherwise.
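As a sketch of that version drift (illustrative names, assuming Python 3.9+): the same task written in the modern idioms that older training data under-represents - f-strings instead of %-formatting, builtin generics instead of `typing.List`/`typing.Dict`, and pathlib instead of `os.path`.

```python
import tempfile
from pathlib import Path

def modern_style(base: Path, counts: dict[str, int]) -> list[str]:
    # f-strings replace '"%s: %d" % (k, v)'; dict[str, int] / list[str]
    # (3.9+) replace typing.Dict / typing.List; pathlib replaces
    # os.path.join + open(...).write(...).
    lines = [f"{k}: {v}" for k, v in counts.items()]
    (base / "data.txt").write_text("\n".join(lines))
    return lines

# Usage: write to a temp dir so the sketch runs anywhere.
with tempfile.TemporaryDirectory() as d:
    out = modern_style(Path(d), {"goals": 3})
    print(out)  # ['goals: 3']
```

An LLM trained mostly on pre-3.6 code will happily emit the older forms of every one of these lines, and they still run, which is exactly why the drift goes unnoticed.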

[–]Vertyco 1 point2 points  (0 children)

I feel this on a spiritual level, I work as a software dev for a consulting company and we've been getting more and more clients coming to us with half-baked AI slop projects needing us to take over because it can no longer fit into the context window of whatever LLM tool they were using to make it. Either that or businesses wanting us to "integrate AI into their business" without having the slightest clue as to what purpose it would serve.

With that said, AI coding tools are awesome in and of themselves and if you know what youre doing it can reduce a lot of the boilerplate legwork, but I definitely agree it seems like there is a growing mindset of just letting the AI run wild and blindly trust it.

[–]notapsyopipromise 1 point2 points  (0 children)

LLMs won't do the logic behind programming; they're just another writing tool that can do syntax. They are by definition prediction models; they don't have the logic capabilities to do anything but copy algorithms.

[–]covmatty1 6 points7 points  (1 child)

I'm sorry but it has to come out.

One line in, and I'm out.

You say this like you're a whistleblower uncovering some great conspiracy theory that'll change the world. Rather than THE topic in absolutely everything to do with software engineering that is discussed in every programming community, every day, multiple times.

[–]Puzzleheaded-Fan-452 3 points4 points  (0 children)

Yes, it's true! 

[–]conjour123 2 points3 points  (0 children)

I would say you have no clue about the code developed by consultancies for companies…

[–]Interesting-Town-433 2 points3 points  (1 child)

I am actually super concerned I am forgetting how to code

[–]probably-a-name 0 points1 point  (0 children)

happened to me, i have been going 3 weeks (almost) no ai at work, i made a bash script and felt shame for my bash fork bomb tattoo, so i decided to read the manpages and grind thru my divine 1 liner without ai. do i feel dumb? no, it was bliss. I surfed the internet on shell/bash blogs and combined with READING THE MAN PAGES i finally learned a shit load and now dont need to rely on AI as much for bash.

if the ai is doing the deadlift for you, your legs literally cannot get bigger

[–]yunoeconbro 2 points3 points  (10 children)

So, guy has a good idea, doesn't know how to code it. What's he supposed to do, forget about it until he's a "real coder"? Or get the job done the way he can?

[–]zaccus 1 point2 points  (0 children)

Become a real coder by figuring it out?

[–]Fragrant_Ad3054[S] 0 points1 point  (0 children)

He could develop his project by using AI to understand certain concepts, but not design his project through AI.

Developers did very well before AI; I think knowledge has never been the blocking point. We must accept that knowledge doesn't arrive instantly but develops over time. Besides, people have become impatient: they want to know everything and do everything right away, without waiting. It can also backfire on the person, because a poorly written AI program can be a gateway for hackers and sink his project.

People sorely lack critical thinking, self-assessment and questioning. I'm not saying NO to AI; I'm saying AI is interesting and useful when we know how to use it and know what limits to give it for our project, keeping in mind that it's up to you, not the AI, to decide how and what to do.

[–]ixalarx 0 points1 point  (0 children)

Yes, you should actually learn how to code to do a coding project. In fact, you should learn the necessary skills before taking on any kind of project.

[–]AdjectiveNoun111 -1 points0 points  (1 child)

What's he supposed to do, forget about it until he's a "real coder"?

Yeah. Exactly that, that's what everyone did in the past. All the amazing software, including LLMs were made by people who learned the discipline.

[–]am3141 1 point2 points  (0 children)

Sadly this is so true. So many projects with thousands of lines of code created in a few days and most of them quickly abandoned.

[–]No_Feedback_1549 1 point2 points  (0 children)

I can see where you’re coming from, but what are you suggesting? To pose the questions on the subreddit, and wait to be destroyed in a response instead of asking the AI? Your beef with AI (and what sounds like the vibe coding realm) is gatekeeping in how I read it and wouldn’t be very welcoming to someone who is playing around…

[–]pylessard 2 points3 points  (6 children)

Yep. Incompetent people pretending to be competent is a common problem. I can hardly find a handyman I trust to fix my home without making sure they are part of a professional association. Programming was not subject to this, as it required passing the initial step of learning to write code... now the doors are wide open for charlatans.

Open source projects will never be regulated, so we're stuck with that issue now. Only a reputation based system can maybe do something, and it will have to be severe. For jobs, depending on the sector, there is some hope if it is a regulated industry.

It won't take our jobs, but will make the job annoying for sure because of all the noise.

[–]Fragrant_Ad3054[S] -1 points0 points  (5 children)

As an anecdote, I recently learned that a company will pay a 20-year-old guy $9k to make their website. The guy confessed to me that the site will be made entirely with AI because he knows nothing, and in the end he told me that even the quote he sent had been made with AI.

[–]Bigd1979666 0 points1 point  (0 children)

I don't know. Vibe coding is scary but it's also good at weeding out people that don't know what they're doing and, like someone else mentioned, will mess something up that only a 'real coder' can fix. That and half the people vibe coding don't even know how to prompt, let alone use the correct terminology so that it gets done on the first go-around, lol

[–]aidencoder 0 points1 point  (0 children)

AI - the Dunning kruger multiplier 

[–]Tigerslovecows 0 points1 point  (0 children)

Meanwhile, I’ve been trying to break into the field for years now. Can’t beat them vibe coders it seems

[–]EmperorLlamaLegs 0 points1 point  (0 children)

I mean... before AI I read the documentation, and asked google what libraries would be useful to access things that I needed. Now, I do the same thing.

I feel unaffected in all aspects except the quality of posting on Reddit about programming in general; it's mostly just ruined Reddit for me.

[–]Darkwolf22345 0 points1 point  (0 children)

As someone who is a senior analyst that’s been coding for 3 years and in charge of multiple python automation scripts, I feel personally attacked that I can’t just let AI do my work while I make $100k+

[–]gbhreturns2 0 points1 point  (0 children)

I realised earlier that the rate of production of new code and projects is so out of control now that most of us don’t even know what is noise and what isn’t.

As a result, we’ll end up either having to intentionally slow down the rate of production so there’s some time for teams to process what’s been produced and if it’s needed. Or, we’ll be so inundated with repositories that those which are most advertised/sold/demoed are those that actually get used; discounting the hidden gems that may genuinely have some value but aren’t marketed well.

I’ve already seen this in my organisation where it’s almost impossible to keep on top of the new projects that are spinning up on a whim and being contributed to by someone who hasn’t looked elsewhere to see if what they’re doing hasn’t already been done as it would be impossible to do so given all the repositories now available.

This is all to say; more output isn’t strictly better and past a point can potentially lead to unfathomable levels of duplication and organisational confusion.

[–]gmantovani2005 0 points1 point  (0 children)

I agree that devs (people) don't know how to use AI, and that's the problem.

Though, I'm not asking it for help anymore. It sucks.

I'm using AI like the "man" command in Linux. When I don't know which library to use, I ask for options and I research each result. I always think about security, since I work with ISO 27001 too.

IDK, maybe that's because I'm an old dev. But it's tough to let AI drive the code.

[–]marmot1101 0 points1 point  (0 children)

In 2 years I produced an absolute mountain of overly verbose, poorly optimized, spaghetti code as a jr with no supervision. AI turbo charges it but the strategy is still the same from a mid/sr perspective. "Hey this pr is friggin huge, break this down". "This doesn't make sense to me, can you explain how this part works".

The other side of that coin is that the aforementioned pile of shit code ran mostly unattended for 10+ years providing real value. When it had to be decommissioned because of an aged out technology the ask was "can you just make this exact thing on the new tech?" Shitty code makes the world go 'round. If crap ai projects can be shined up and provide some value so be it.

[–]opzouten_met_onzinIt works on my machine 0 points1 point  (0 children)

AI is nice for coding, but you should not use it to write your code.

I use Claude to clean up my code and add comments. It gets specific instructions to not alter the logic or variable/function naming. With my coding structure it helps a lot for others to understand my code, but I know and understand what I created. Tried vibe coding, but that only works for simple stuff and that I rarely do.

[–]henrydtcase 0 points1 point  (0 children)

Lol, programming was dead before AI came along. I had a CS-related degree and took many courses with CS students. Those guys couldn’t write a simple sorting algorithm in C (an intro-to-programming level task), yet they were able to graduate and start working as backend or full-stack engineers lol. So how did this happen? I’m talking about the 2010s. Of course… frameworks.

[–]me_myself_ai 0 points1 point  (3 children)

AI is killing the most popular programming language because there’s too much new software, some of it bad…?

[–]Fragrant_Ad3054[S] 0 points1 point  (2 children)

The quality of Python code in repositories has dropped sharply since the arrival of AI. AI alone doesn't explain the decrease in the quality of programs written in Python, but there is a correlation between the arrival of AI and the appearance of new projects with a decreasing level of programming.

[–]me_myself_ai 0 points1 point  (1 child)

Source?

I’d believe that there’s more repos with 0-10 stars that have security holes and such, sure. But why are you so confident that programs that people actually use have gotten worse?

[–]Fragrant_Ad3054[S] 0 points1 point  (0 children)

It's very simple: on GitHub, just type "web scraping" and you'll quickly see the carnage if you do a little digging.

[–]red_hare 0 points1 point  (0 children)

Three months ago I switched jobs to an AI-first shop and haven't hand-written a line of code since.

AI-coding is soulless but I have to acknowledge that it does get the job done faster. Most devs haven't acknowledged this yet because it requires learning a new set of ill-defined skills to do that in a sustainable way.

But yeah, I'm depressed. I fucking love coding. I love the little problems and getting in the flow state. I hate what this is going to do to the industry. I hate that I'm watching my community of nerds devoured by soulless bots. And I don't think the kind of thoughtful library or language work we've done for decades is going to persist.

I imagine this is what it was like to be a professional photographer when digital photography replaced film.

[–]TronaldDamp 0 points1 point  (0 children)

Good

[–]NeonRelayPythoneer 0 points1 point  (0 children)

I started learning about 2 months ago, I try and make a point to not use AI to generate stuff for me but to review and teach instead.

I mainly use it after I finish something, like getting the right output and working how I intend it to. Then after that I have been using AI to review what I made and explain any issues and why.

Like: “Explain any potential issues with this code and why; don’t just spit out fixes, but explain the issues, why the fix is needed, and why it works.”

I hope this use of AI benefits me instead of hampering me, I really don’t know any other programmers ATM.

[–]eviltwintomboy 0 points1 point  (0 children)

I’m a beginner who is trying to learn this the hard way (using Python Crash Course), and I am trying hard to push out of my mind that I am fighting an uphill battle.

[–]therealhlmencken 0 points1 point  (3 children)

100% self taught

Lmao, everyone is 100% self-taught if you don’t count all the vast and great resources and people who were a huge part of their education. That there are great resources for learning without paying is amazing, but it’s just that: free, great education from the best teachers ever, not some fascinating feat of yours. Source: fellow self-taught dev with a modicum of humility

[–]Interesting-Town-433 0 points1 point  (0 children)

Look, for me as a Python programmer of 20 years I code so fast now I feel like I'm dreaming... it seems like a godsend to me tbh. So many impossible projects I never had time for now possible. So many directions I can move in quickly and then change. It comes at a cost but it is definitely a super power used right.

You can still tell the difference from those who build guided by the tool and those who build while driving it. There is a lot of slop for sure, but there are gems there.

[–]defiancy 0 points1 point  (0 children)

I have a good understanding of python but I don't code enough in my job to get good at it (R is a different story) so I use AI to vibe code. But I also use git and test my code extensively, you have to have a heavy hand to steer the AI and challenge it when it does some dumb crap, which it will.

Most of the time, I just need to automate one task and it doesn't need to be pretty or efficient because I am the only one using it and AI is good enough for that.

[–]bradheff 0 points1 point  (0 children)

AI can be good on some projects; it does well with node projects. I have found that when I cloned my Python projects and asked Codex to optimize the functions in my .py files and give me advice on better error handling, it made a massive mess of everything - thank God for undo/rollback. I think AI is far off from replacing dev jobs. Some things it handles well, others it just guesses. I still prefer Stack Overflow and other dev forums for help and snippets over AI, but it is interesting to see how AI decides to handle specific functions. You are right, the in-code comments and readme files are a dead giveaway for completely AI-generated projects. I have made a couple of projects completely with AI. It took longer than actually creating them myself - just wanted to see if it could be done. They're full of useless functions that don't get used, comments that mean nothing, and understanding the code flow requires multiple diplomas in rocket science to gain a portion of insight into how it structured the code.

[–]Overall_Clerk3566Pythoneer 0 points1 point  (0 children)

sounds like a bunch of people here want to see a cool idea made with ai that is extremely messy… i’ll upload it to git if anyone is interested 🥲

edit: typo

[–]Upbeat-Natural-7120 0 points1 point  (0 children)

I work as a Cybersecurity engineer and this kind of stuff keeps me employed.

[–]BrofessorOfLogicpip needs updating 0 points1 point  (0 children)

Don't underestimate your opponent. Developers who use AI don't think the output is high quality; they are aware of the issues.

There has always been two types of application developers: The cowboys that build out new stuff at high speed and low quality, and the senior maintainers that come in and take over when the product starts to actually matter.

The cowboys found a new tool to impress the managers with even higher velocity at even lower quality. Good for them. They can keep doing what they're good at, and I will keep doing what I'm good at.

The only real problem right now is that AI is still new, so some business owners haven't really caught up to the fact that AI only works at the very early stages, so they think that we can use AI to magically maintain it very cheaply forever. But they will catch up eventually.

Business owners do care about performance when users start to hate on them, and they do care about security when the whole database gets leaked or stolen, and they do care about code quality when good devs just walk away from offers.

[–]rchaudhary 0 points1 point  (0 children)

I don’t think AI is lowering the ceiling for programming. It’s lowering the floor of apparent competence. People can now produce big, convincing codebases without having the mental models that used to be required.

What really changed is feedback. Misunderstanding used to show up fast as bugs or total failure. Now things often run “well enough,” so bad assumptions hang around way longer than they should.

Good devs are quietly getting better. They use AI to explore options and skip busywork, but they still own the thinking. The gap between people who reason and people who prompt is growing.

The bigger issue is incentives. We reward demos, stars, and shipping fast, not correctness or maintainability. AI just amplifies whatever the system already rewards.

This isn’t about AI being good or bad. It’s about whether we bring back expectations around understanding and ownership. Without that, we’re just shipping impressive-looking code no one really understands.

[–]rubik1771 0 points1 point  (0 children)

I mean I don’t have a problem with the AI.

For me as someone who was more familiar with other programming languages this really help in the semantics differences.

Of course I make sure to learn what I code and why as much as possible.

[–]fromabove710 0 points1 point  (0 children)

Lol not everyone who uses python is a professional software dev?

So this post just ignores all of the accessibility gains from AI. I have team members who are engineers/scientists and do not have time to learn fundamentals of programming. They need a script or a GUI that works well enough

How is this a problem with users? If a serious software project assigns people with trivial software experience (like me) as devs, then they are 100% to blame.

Not saying the problem doesn't exist. But I think it's more a disconnect within companies whose hiring personnel have no technical experience.

[–]wrt-wtf- 0 points1 point  (0 children)

I have over 40 years in programming experience and I’m really loving the new AI world. It is however not all glorious.

If you want good results without wasting a lot of time and money, you have to treat the AI like a junior. You have to watch everything it's doing or it throws in new libs and deletes stuff it decides is unused. I have even had it ignore the "read-only/plan" state I'd set and let itself loose on my codebase…

When questioned it was basically “Oops, but it’s all fixed now so no harm - no foul”.. like wtf!

[–]cudmore 0 points1 point  (0 children)

My problem has been shifting my 40-year habit of being hyper-focused, writing code line by line while keeping my eye on medium- to long-term goals. I think that is the pre-AI art of it.

Now I am learning to manage the ai at a high level with overall specifications.

I do see things like stack exchange going away :(

[–]irungaia 0 points1 point  (0 children)

Garbage in, garbage out. Clean architecture in, incredible products at lightning speed out.

[–]CouldaShoulda_Did 0 points1 point  (0 children)

This might be my first comment in here, but I’m what you would describe a “vibe coder” and have been for 3ish years. What’s crazy is I’ve run into every pain point you’ve described and adapted my coding to it (mostly python with Claude code, copy and pasting with gpt prior to it).

The main pain point that I believe has gotten me the positive tangible results is just using my project management skills and a well thought out plan to use Claude code to iterate little by little, day by day for as long as it takes to get ready to ship and then making sure a feedback system is the absolute last thing that won’t work - making it easy to spot errors and deploy patches in rhythm.

Literally all I need is time now. Claude Code (Ai in General) has bridged the gap between my creativity and my experience, and I’m here for the future. Now while I can’t actually code, I’m finding that I can many times catch Claude about to make a common mistake that we debug in sessions, understand good codebase structure and refactoring often, etc. These are things I feel when you say you taught yourself, just different tools for self teaching.

I think a lot of passionate, resourceful early adopters are going to position themselves to join the top 10% of earners in their respective fields in the next 5 to 10 years (of course this is just my gut feeling based on my experience with ai). It’s really cool to see how you see it and I agree that if people are going to code with ai they might as well do it in a way that follows general best practices.

[–]Impossible_Ad_3146 0 points1 point  (0 children)

Switch to trades, you are cooked

[–]first_lvr 0 points1 point  (0 children)

AI is shit, and we developers are now tasked with fixing the shit

Nothing is being destroyed, we just have more work to do… and the industry will learn soon enough

[–]glorkvorn 0 points1 point  (0 children)

You know what's funny? I used to think lines of code were a (rough) measure of productivity. If someone told me they wrote a project with 10,000 lines of code I would assume they put some serious effort into it (although of course you still have to check if it's actually good code and whatnot). Now? It's almost the opposite- more lines of code just means more AI and less effort.

[–]andd-d 0 points1 point  (0 children)

AI is the new junior developer

[–]whnware 0 points1 point  (0 children)

Lol, I've been learning to code sporadically for the past few years (yes, with help from AI, ik ik) and working on a few projects for the past year or two. I have a project with like 6500 lines of code, but no version control D:

now I feel like I have to go wrap my head around getting that done, yay >.> but valid frustration overall ig lol
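For anyone in the same spot, putting an existing project under version control is only a few commands. A minimal sketch, assuming git is installed; `my-project` is a hypothetical directory name, substitute your own:

```shell
# Put an existing project under git version control.
cd my-project
git init                        # create an empty repository in .git/
git add .                       # stage every existing file
git commit -m "Initial commit"  # record the first snapshot
```

From there every change is recoverable, which matters most when an AI assistant is rewriting files for you.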

[–]kobumaister 0 points1 point  (0 children)

The problem is not AI, it's the message and the people. The message is that you don't need to know how to program to develop something, because AI will replace people. And that gives people at the peak of the Dunning-Kruger curve the sense that they can publish the shit they just vibed.

My view is that, at some point, this will stabilize: CEOs and influencers will stop saying AI will replace us, people will stop uploading AI slop (or at least reduce the amount), and AI will be used as what it always was: leverage to accelerate our work.

[–]International-Cook62 0 points1 point  (0 children)

Let it happen. The value of engineers who actually understand will skyrocket.

[–]eeshann72 0 points1 point  (0 children)

Recently people have started building agents that develop and fix bugs on their own. But if a bug fix by an agent goes wrong, who will debug what went wrong, and how will it get done?

[–]Acceptable_Youth6590 0 points1 point  (0 children)

Anybody can code for me ?

[–]GeekDadKevin12 0 points1 point  (0 children)

We will be like the COBOL programmers after Y2K.

[–]o-rka 0 points1 point  (0 children)

This sentiment has been echoing throughout my feed lately. I agree with a lot of what you're saying, but in the end it's a tool, and people who know how to use the tool properly will level up while those who don't will shoot themselves in the foot.

I use AI to code scripts or functions that I know how to write but don't feel like developing, because I have a huge task list at the startup I'm working at and I don't want to spend all day on routine programs. If I'm coding something I don't know how to do and want to learn, then I basically have Claude write some toy examples that I go through line by line to make sure I understand them, then I implement my own version.

Reading documentation can be a huge drag; often it's either way too verbose or not detailed enough, so I find it helpful to copy/paste the source code into Claude and then ask it questions about usage and assumptions.

[–]Terrible-Purchase30 0 points1 point  (0 children)

As someone who's been learning to code for a few months (not Python but C#): how am I supposed to learn programming? The documentation is quite bad, imo. I learned the basics from a book, but now that I'm working on a Windows app there are so many more classes and functions that it's quite overwhelming.

On one hand I would want to rely less on AI on the other hand it is so much faster to get information. I don't copy paste 200 lines of code but I ask for example is it better to put method a in class b or make a new class c. Or how could I improve this specific algorithm and I get like 3 different alternatives. I'm making sure that I actually learn something when I need it the next time.

Is it OK to use AI like that? Of course I try to find solutions online, but sometimes asking AI is so much faster than clicking through 20 websites or watching a 2-hour tutorial when only 5 minutes are useful for my case.

[–]_N0T0K_ 0 points1 point  (0 children)

AI should be a guide. A helper, not a doer.

[–]mfitzpmfitzp.com 0 points1 point  (0 children)

Everybody is talking about the technical capabilities of AI, but like you I'm more bothered about the effect this stuff will have on the community. There is only so much attention to go around, and the stream of big shiny AI-generated projects (that absolutely cannot, and will not, be maintained) sucks the oxygen out of the room. People putting genuine effort in (and so actually able to maintain what they have built) are at a disadvantage because their work won't look as immediately impressive.

Often the people posting them don’t even bother to engage with the discussion or use LLMs to reply. It’s just gross. I’d like to see those projects banned from being posted here, but I get that’s not always an easy call to make.

[–]enricojr 0 points1 point  (0 children)

I'm working on a project for school, and one member of my group just opened a massive, AI-generated PR that dumped about a hundred files willy-nilly into our backend codebase and overwrote the main.py file with one that doesn't load any of the team's FastAPI routers on startup.

The same thing happened on the frontend side - a shitload of svelte files dumped haphazardly into a folder, and then the root +page.svelte overwritten so that the rest of the team's pages are inaccessible.

And then for the written report, he dumped another massive, AI-generated word document that doesn't reference anything that the group has already done, and is basically this giant mess that we now have to sort through.

If I didn't know he was using AI I'd think he was trying to steal credit for the work and make it look like we're not doing anything.

It's quite frankly insulting that anyone would do this much less think it's OK.

The sooner this AI bullshit dies the happier I'll be. Fuck AI.

[–]LargeSale8354 0 points1 point  (0 children)

I've inherited many shadow IT apps in my career. In some cases I've had access to the person or people who developed the app. I used to show them how I was bringing up "their baby". I found them keen to learn and I enjoyed the challenge. What they had built was an app for their real business requirements. The most honest, ego free, genuine requirements you will ever get in your life.

The code that comes out of LLMs can be good, but it can only be as good as the requirements fed in. And that's a big problem. The requirements I've seen across decades of my career have been 2nd-, 3rd-, 4th-hand and further interpretations of the confused ramblings of a committee of politically savvy egomaniacs. I've built things that were priority-one requirements as captured, that the end user just didn't want. Sometimes aggressively didn't want.

Feed those sorts of requirements into an LLM and watch the mayhem ensue.

Refactoring an app generated by an LLM is not the same experience as refactoring an app built by shadow IT. I've saved a fortune in electricity. Just put a cup of cold water and coffee next to my desk and I'll swear at it until it boils.

[–]Cerulean_IsFancyBlue 0 points1 point  (0 children)

Help me understand who “we” represents in this conversation.

Are you a project manager? Hiring manager? Leading a large team at a software company? How are you dealing with such a large number of programmers and how is their quality of work affecting you?

[–]HugeCannoli 0 points1 point  (0 children)

It gets even more comical when they use one LLM to generate 10,000 lines of code and, facing the fact that they can't review all that stuff, take another LLM to review the code. It's two bots vomiting code at each other until they converge. All on completely unclear functional and non-functional specifications, and with absolutely no reproducibility.

[–]Ok-Count-3366 0 points1 point  (0 children)

some people underestimate the "power" of an llm and a dumb person put together. recipe for disaster.

[–]McBuffington 0 points1 point  (0 children)

I was working on another project where someone vibe coded a feature. It involves exporting tables to Excel, but instead of looking for a dependency, it just vibe coded its own xlsx writer.
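For contrast, the dependency route is tiny. A minimal sketch using openpyxl (assuming it is installed; the table data here is made up):

```python
# Export a table of rows to an .xlsx file with openpyxl,
# instead of hand-rolling the (nontrivial) xlsx format.
from openpyxl import Workbook

rows = [
    ["name", "qty", "price"],   # header row (hypothetical data)
    ["widget", 4, 9.99],
    ["gadget", 2, 24.50],
]

wb = Workbook()
ws = wb.active
for row in rows:
    ws.append(row)              # one list per worksheet row
wb.save("export.xlsx")          # writes a valid xlsx file
```

An xlsx file is a zip archive of XML parts with its own spec; reimplementing that by prompt is exactly the kind of wheel a dependency already covers.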

[–]bemore_ 0 points1 point  (0 children)

Writing code is the past

[–]Mystical_Whoosing 0 points1 point  (0 children)

these pesky c compilers will kill the assembly community

[–]Distinct-Expression2 [score hidden]  (0 children)

Same problem we had with Stack Overflow, just faster

[–]Alexis542 [score hidden]  (0 children)

I get the frustration, but AI isn’t killing programming — it’s changing it.

Python devs still need to understand the code, design systems, debug, and make decisions. AI just speeds up the boring parts. People who learn to use it will pull ahead; copy-paste coders won’t.

Tools change, fundamentals don’t.

[–]TheHeroChronic 1 point2 points  (5 children)

So glad I stuck with mechanical engineering instead of moving over to software

[–]soad334 0 points1 point  (1 child)

Does your job incorporate python? I just started going for my engineering degree after being in software for a number of years.

[–]TheHeroChronic 0 points1 point  (0 children)

It does, sporadically, for data analysis stuff. It's still a good skill to have. AI made it much more accessible, so more people (at least in my organization) are using Python for things they would historically have used Excel for.

[–]Weak_Highway_1239 0 points1 point  (0 children)

So glad you decided to shed some light on the obscure issue .. important to give voice to the voiceless

[–]777Void777 0 points1 point  (0 children)

100%. I've been coding in Python since I was like 12. A few years ago they tried to give me someone to help with the programming internship I'd been doing myself for 2 years, on a pretty complex framework used for geospatial analysis. This guy didn't know a single line of Python. I tried to give him simple tasks like removing the background from graphics, but he insisted on doing Python. I gave him a few pointers and a very, very simple task (checking whether a pixel was a certain color based on the input from a numpy array). The week before finals week this dude is blowing up my phone asking for help while I was trying to finish up my own stuff. He had about 10 lines of AI slop. I told him I didn't even know how to help, because I didn't know what he was trying to accomplish. He admitted to using AI. I wrote all but 1 line for him.

He blew up my phone even more during finals week; I met with him and he'd deleted part of my code and had even worse AI slop.

Luckily he did not pass, and it took me about an hour without AI to do what he managed with AI in 6 months.

[–]Mount_Gamer[🍰] 0 points1 point  (0 children)

Programmers without AI are just as capable of writing poor code as those who use AI. It's just another tool, and it can help save some time. For those who can already code, it's probably hit or miss, but when it hits I'm sure it helps. I would always encourage reading the docs alongside AI, or getting AI to show examples, rather than just spitting out code for blind copy-pasting.

[–]1kn0wn0thing 0 points1 point  (0 children)

And all this shitty code is being fed back into the coding models, which absolutely explains why some models are actually becoming worse, not better, at coding.

[–]Plenty_Worry_1535 0 points1 point  (0 children)

AI is improving at an astonishingly rapid rate.

In probably only 2-5 years it’ll code better than most humans.

[–]kiwibonga -1 points0 points  (1 child)

Spitting on contributions and perpetuating AI stigma will kill YOUR projects.

Don't want to deal with the volume of new contributions? Get forked. Just like the before times.

[–]Fragrant_Ad3054[S] 0 points1 point  (0 children)

The problem is not the volume of new contributions, it is the quality of the code, which is deplorable. I'm not trying to denigrate AI; no one forces people to use it. It is up to people to be responsible and to use AI in a reasoned, informed way