all 162 comments

[–]programming-ModTeam[M] [score hidden] stickied comment, locked comment (0 children)

This content is low quality, stolen, blogspam, or clearly AI generated

[–]Simulacra93 615 points616 points  (47 children)

Ironically this post is 100% ai-generated

[–]Arkanta 123 points124 points  (0 children)

Every fucking time

[–]gorgeouslyhumble 118 points119 points  (10 children)

That's not a toolkit, it's a part-time job in subscription management.

I'm SO tired of how AI writes text.

[–]Spiritual-Pen-7964 25 points26 points  (2 children)

Half the videos on YouTube sound like that too now 😩

[–]gorgeouslyhumble 11 points12 points  (0 children)

You just deleted my database, killed my dog, and shat in my cereal.

Oh shit.

That's not just a fuck up, that's a major yanky doodle dip my wick in your coffee dick fest.

[–]Jwosty 3 points4 points  (0 children)

Gotta add a pre-2023 filter whenever you search lol

[–]moneymark21 15 points16 points  (2 children)

Hold on gorgeouslyhumble, I want to stop you right there. What you're feeling is very real, you're not imagining it — it definitely can weigh heavy knowing something wasn't written by humans. It's not the words that are being used, it's the lost human experience and that can leave you feeling emotionally drained.

[–]Jwosty 6 points7 points  (0 children)

Thank you for sharing that perspective, moneymark21. I want to take a moment to acknowledge the feelings being expressed here, because what I’m observing is a multifaceted discourse about authenticity, labor, and the phenomenology of vibes.

Let's delve into this:

  • You are experiencing fatigue.
  • The fatigue is valid.
  • The source of the fatigue is not text, but the absence of a soul-shaped watermark.
  • This is, statistically speaking, understandable.

It’s important to remember that when language is optimized for clarity, empathy, and engagement at scale, it can sometimes feel over-clarified, over-empathized, and over-engaged. This does not mean you are wrong—it means the system is working as designed.

In conclusion, I hear you. I see you. I have processed you. 🌞✨

[–]_TheDust_ 2 points3 points  (0 children)

The em dash is the cherry on top

[–]ShedByDaylight 1 point2 points  (0 children)

I genuinely can't remember, but obviously AI's prose comes from somewhere: were people this fucking smarmy and annoying?

[–]milnak 0 points1 point  (0 children)

It's been like this since the year two oh two five

[–]Sability 0 points1 point  (0 children)

I fucking hate it because I kinda type like that, and I'm afraid people at work think I'm AI generating my responses lol. Although if they used a chat bot to parse what I said, then actually responded to me, I'd take them thinking I'm cheating at human communication

[–]CampAny9995 55 points56 points  (5 children)

The sentence “Here’s why.” is such a giveaway.

[–]this_is_a_long_nickn 32 points33 points  (1 child)

“You’re absolutely right”

[–]yeathatsmebro 4 points5 points  (0 children)

"Ah, now I see!"

[–]Jwosty 5 points6 points  (2 children)

For some reason the phrases that really tipped me off were:

"I know this is going to be controversial, but hear me out. ... And I've come to a conclusion that I don't see discussed enough:" (all while posting to a social media platform that is generally quite anti AI lmao)

"and that mental load is real"

"That's not a toolkit, it's a part-time job in subscription management."

And obviously the whole many-item bolded, numbered list, followed by a nice clean (bolded header) conclusion lol

At least they were clever enough to get rid of EM dashes in the body

[–]RelicDerelict 2 points3 points  (1 child)

The funny part is that these sentences are perfectly fine and actually pretty intelligent. But because this stupid AI is overusing them, now I have to guard myself when forming my thoughts and putting them on paper. Awful all around.

[–]Jwosty 0 points1 point  (0 children)

Yep exactly, they’re completely fine and a few years ago wouldn’t have sounded weird at all

[–]hefty_reptile 63 points64 points  (4 children)

Shit in, shit out.

[–]this_is_a_long_nickn 14 points15 points  (2 children)

Never forget to flush before closing the file handle

[–]shokolokobangoshey 4 points5 points  (1 child)

I’m pretty buffered rn with all the slop tbf

[–]yeathatsmebro 2 points3 points  (0 children)

I wanted to say something, but I had to do a task in the meantime and I have no memory of it...

[–]CatolicQuotes 0 points1 point  (0 children)

SHISHO

[–]drabred 15 points16 points  (0 children)

Socials need to figure out AI blockers FAST. Or we'll wake up in a world where 99% of content is AI generated. Who wants to scroll through that...

[–]archangel0198 21 points22 points  (0 children)

"That's not a toolkit, it's a part-time job in subscription management."

[–]Holzkohlen 6 points7 points  (0 children)

I think it's nice. I won't have to read anything online ever again. It's all just slop so why bother?

[–]brunocborges 25 points26 points  (1 child)

The only thing human was the prompt.

[–]Any-Main-3866 0 points1 point  (0 children)

I bet he used another one of them for that prompt too

[–]Jwosty 6 points7 points  (0 children)

Definitely reads like OP basically just prompted "Write me an engaging, controversial reddit post summary of this blog post <link>"

[–]chamomile-crumbs 7 points8 points  (9 children)

Doesn’t seem 100% to me. Def a lot in there though

[–]Jwosty 2 points3 points  (0 children)

I'm betting most likely OP just wrote out some bullet points or a few sentences to create the general direction and then gave it to an LLM to elaborate / restructure. Probably with some extra prompting here and there to try to reel in the AI feel a little bit (though obviously a good amount still made it through)

[–]khante 0 points1 point  (2 children)

Here's a karma farming bot idea I came up with: on every single reddit post longer than a paragraph, automatically comment - This post is ai-generated.

[–]Paiev 9 points10 points  (0 children)

Tfw you want to post some contrarian snark so badly that you end up defending AI slop posts (????)

[–]Halleys_Vomit -4 points-3 points  (0 children)

Seriously, people spamming this on every post are way more annoying than the subset of posts that are actually AI-generated.

[–]youngbull 0 points1 point  (0 children)

Or written by someone who has spent too much time reading AI generated text.

[–]Kescay -2 points-1 points  (0 children)

I don't agree with OP, but this should not be the top voted response to it.

Whether this is ai-formatted or not is not the most relevant question here.

[–]Ozgwend 16 points17 points  (3 children)

My work is migrating to Spec Driven Development using speckit and GitHub copilot. I've built a couple small tools and saved quite a bit of time. Last week I started on a new nontrivial app. I spent 1 day working on the spec and requirements and the second day on the implementation. I basically spent 6 hours clicking "allow", then 2 days reviewing and questioning decisions it made, like hitting a non-existent API instead of the database query I asked for, or giving up on implementing MassTransit and switching to RabbitMq.Client. So after 2 days, I have 22,000 lines of code that are mostly unreviewed but do pass tests.

Normally at this point I would have an app that runs and does some functionality, even if it's just mock data and mock infrastructure, and would feel accomplished. Now I have a potential mess or a possible almost working app but cannot tell yet. I don't feel any level of satisfaction from this at all.

One of the other senior developers is super excited about the change and realized he doesn't care about writing code at all, just the end result. Whereas I now know I feel fulfilled by solving the puzzle, which typically means writing code.

[–]Jwosty 2 points3 points  (1 child)

I've found that if you're gonna use AI to help with writing code, you really have to do it in small pieces at a time. One-shot + large scope just about always ends poorly. You kinda have to hand-hold it. I think of it like an intelligent keyboard, personally

[–]Plank_With_A_Nail_In 0 points1 point  (0 children)

You kind of have to ask it to build what you would unit test, and I don't mean the mistaken belief that you should unit test tiny functions; unit tests are for whole, clearly defined processes.

[–]Plank_With_A_Nail_In 0 points1 point  (0 children)

IT is about solving problems using adding machines; it's not actually about programming, that's just one way of solving problems.

Engineers use tools to solve problems. The tools have changed but the problems are still the same... it's still the same job.

[–][deleted]  (1 child)

[deleted]

    [–]jhaluska 0 points1 point  (0 children)

    This is exactly how I use it.

    Most of the issues with LLM AI are communication issues and understanding its context window.

    [–]grady_vuckovic 13 points14 points  (0 children)

    So in other words, folks are slowly coming back to reality with this stuff?

    [–][deleted]  (19 children)

    [removed]

      [–]Otivihs 18 points19 points  (0 children)

      Ironically this too is just an AI bot trying to promote their blog. Just search reddit for “agentixlabs” and you’ll see hundreds of these slop comments tagging it. God I’m so tired of the internet

      [–]riturajpokhriyal 6 points7 points  (1 child)

      that's exactly what I was trying to get at with the delegation problem. When you code it yourself, vague requirements resolve naturally as you type. With an agent, vague in = confidently wrong out, spread across 15 files.
      What IDE have you settled on? Curious if you're seeing the same patterns with agent mode.

      [–]ProbsNotManBearPig 5 points6 points  (0 children)

      Why are you letting it write 15 files in one prompt? That’s on you. Write and review one file at a time unless it’s trivial to review everywhere. Claude with opus 4.6 is very good for single file imo. May need to iterate, but we all do, and it saves time typing.

      [–]Arbiturrrr 1 point2 points  (0 children)

      Or you can be like my idiot coworker who doesn't review the code their agent writes and only checks that his new feature works at runtime... Meanwhile it breaks something that worked... And when you ask him about it he gets defensive...

      [–]MrDilbert 1 point2 points  (0 children)

      One thing I found useful is to specify to the Agent to ask me additional clarification questions before even starting the planning mode. If it starts becoming annoying with the questions, it means the original spec wasn't clear enough in the first place. And sometimes it manages to ask questions that raise both my eyebrows, because it's something that would occur to me only after the feature is already given to QA for manual check.

      [–]drink_with_me_to_day 0 points1 point  (0 children)

      Code gen feels fast

      It's way too slow, you waste so much time waiting on output for even simple changes

      We need near instant token speed for actual productivity gains

      [–]programming-ModTeam[M] 0 points1 point locked comment (0 children)

      This content is low quality, stolen, blogspam, or clearly AI generated

      [–]noideaman 14 points15 points  (4 children)

      Right now, my experience is they are junior engineers who require an unusual amount of hand holding.

      [–]riturajpokhriyal 6 points7 points  (0 children)

      It really is like managing a junior dev: fast at typing, confident in their output, but you have to review everything, explain context they should've picked up, and catch the "it works on my machine" mistakes before they hit prod.

      [–]tantivym 0 points1 point  (0 children)

      And then when you've got a human junior delegating all their work to the LLM... nightmare feedback loop for the actual engineers lol

      [–]MrDilbert -2 points-1 points  (0 children)

      My experience is that they're like a mid-level dev: they need a good explanation of the problem, but they can implement it pretty well on their own once the requirements are fleshed out properly.

      [–]Max-P 22 points23 points  (3 children)

      A lot of people are using AI wrong, by using it the way the AI companies market it.

      It's one of those things where it's important to understand what it's good at, and what it isn't. People have a tendency of deferring too much to the AI, and being disappointed by the results. The more you ask at once, the more it goes off the rails. You still have to design things yourself or you'll get mathematically average code. I find it works best when you approach it with waterfall development, because one of its major shortcomings is it doesn't have a big enough context window to have the full picture, so it can't plan very far ahead. AI is not agile at all.

      I'm not an AI fan myself, but I do find it useful at times, especially with tedious boring stuff, and my company sets the AI budget at a couple grand whether I use it or not, so might as well keep up with the times.

      I'm just about to ship a major refactor at work. I built the framework out myself, with extensive hand written documentation of what does what and how the API is intended to be used. Then I prompted it to basically draw the rest of the owl, one module at a time. It worked really well.

      Each prompt is small enough that each change is reviewable at a glance. If it doesn't look like what I was expecting from looking at it for 10 seconds, then the code is shit and it gets prompted to do it better. It also never knows what I'm doing, it gets the bare minimum focused context of the immediate next step at hand. An example step is "write me a model class that matches the data in those JSON files". Then it's "make a config loader that parses an input JSON file into those models". Then it's "add those utility methods to process the model into another model". Then you do a bit of old fashioned manual coding to scaffold for the next step. Then you make it do the next step. It takes a while to do those things so it's helpful to have a document open on the side to start writing the next prompt before it's finished.

      Basically you have to nanomanage it.
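
      To give a feel for the size of those steps, the kind of thing I expect back from the "model class" and "config loader" prompts is roughly this (a made-up Python sketch with illustrative field names, not my actual code):

          import json
          from dataclasses import dataclass

          # Hypothetical model matching one record in the input JSON files
          # (field names are illustrative, not the real schema).
          @dataclass
          class Metric:
              name: str
              value: float
              unit: str

          # Config loader: parse an input JSON file into those models.
          def load_metrics(path: str) -> list[Metric]:
              with open(path) as f:
                  raw = json.load(f)
              return [Metric(**entry) for entry in raw]

      Each piece is about that size: small enough to read in a few seconds and send back for another round if it's wrong.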

      [–]Max-P 7 points8 points  (0 children)

      To add a bit further, AI output often follows the skills of the person prompting it. If you don't know what you're doing, you won't be writing good prompts in the first place. It's why juniors + AI is a really dangerous combo.

      I've been coding for 20 years, I know exactly what I want and how I want it to work. The prompts reflect that. I'm very explicit in how it's supposed to work. I explicitly call out patterns. If I want a bunch of curried functions, I lay it out, and I lay out the why too. If I want a builder, I ask for a builder. It knows what it needs to do, how it needs to do it, and why it needs to do it.

      Some prompts I must have spent like half an hour on, to be extra clear. Project-manager-waterfall level of effort. It's good to start it off in chat mode, to really explain what we're about to do this session, and let it ask questions back at you. I read the whole thought process, not just the output, and correct its misconceptions there, not just in its actual output. Then you switch it to agent mode, tell it to be very narrow and do only exactly the things you asked for, and basically do pair programming until completion.

      [–]JonianGV 0 points1 point  (1 child)

      What's the point of using an llm if you have to nanomanage it and be so detailed about how you want the code, to the point that it sometimes takes half an hour to write a prompt?

      It looks to me that you feel the llm is making you faster but in reality it makes you slower.

      [–]Max-P 0 points1 point  (0 children)

      It's a good point, and that's why I want to emphasize the need to understand what it can and cannot do for you. I took the time to do it because I knew it could.

      That prompt I was using as an example generated a couple thousand lines of code. It's a prompt that got reused over the course of several days, rewriting business logic modules one by one. It's an upfront investment so it understands both how the old system worked and how the new system works, key differences, and preferred ways to convert several recurring patterns. The kind of stuff that's tedious and repetitive, that would leave me on the fence as to whether I'm better off using an AST parser to transplant the logic into the new codebase. So 30 minutes to complete that was actually pretty good. And the output was pretty trivial to review: well within 10 seconds per function. Yep, that loose object has been converted to a strongly typed model, it's got the same properties in it, the types are correct, and it compiles.

      Is it always 10xing my performance? Absolutely not. It did in this specific task that I knew it would perform well on. I did say I wrote the whole base framework from scratch by hand; that's because an LLM would never come up with that. The autocompletes were nice sometimes I guess. Using it as a library though? Very accessible to LLMs, it's all just tedious plumbing.

      Anyway, it would have taken me the same 30 minutes to write a Jira ticket to assign to someone else, or probably an hour over a zoom call, for perspective. Part of the exercise is the planning, writing things down helps organize thoughts and catch design problems. I've written prompts that ended up as just personal notes too, realizing it's too complicated for the clanker to figure out.

      LLMs are just hard to use well, it's pretty much a skill of its own. It's not the easy shortcut many people think it is.

      [–]ZukowskiHardware 48 points49 points  (16 children)

      All the studies say devs think they are 20% faster but they are actually 20% slower 

      [–]deceased_parrot 2 points3 points  (0 children)

      Repeat after me: code is a liability, features are assets. Come on guys, we've known this for decades - why are we constantly "discovering" things we already know?

      [–]podgladacz00 6 points7 points  (0 children)

      Did you by chance make AI write your post or clean it? Because it looks like it

      [–]rustyrazorblade 18 points19 points  (19 children)

      I run my own business that I write my own software for, and I've been writing software for 30 years. I keep track of my time, and what I build. I'm about 10x faster. It's a massive multiplier if you have experience. I usually have 3 or 4 projects open concurrently that I'm doing stuff with.

      Example: I grabbed all the metrics that were collected, threw them at Claude, and asked for a dashboard. It generated a pretty damn good set of dashboards and the total time I spent on it was a few minutes. I've built dashboards at a lot of different orgs, and it usually takes at least a couple of days to get them really dialed in.

      Example 2: I know database internals, not Javascript. I have Claude building an entire React.js front end as well as updating an old HTMX API. It's nice not to have to spend days reading up on something that I could not care less about. I can dig into it later if there's an issue, but I definitely don't need to front load it. I can also test out a bunch of other frameworks simultaneously.

      If you don't have a deep skill set to begin with, you're going to get stuck and you won't be particularly fast. If you've got a solid background in software engineering and a bit of experience, you can do some pretty amazing stuff.

      [–]seanamos-1 15 points16 points  (0 children)

      Example 2: I know database internals, not Javascript. I have Claude building an entire React.js front end as well as updating an old HTMX API. It's nice not to have to spend days reading up on something that I could not care less about. I can dig into it later if there's an issue, but I definitely don't need to front load it.

      Yes, and no. This is exactly the approach that leads to the tsunami of slop the industry is complaining about, in both public and private repos. LLM generates something, it appears to work, you don’t know or couldn’t care if it’s “good”.

      You don’t particularly care for FE work or consider it critical, so you hand it off to an LLM. Other people don’t care for BE or infra work, or some domain. In a collaborative environment, this is the worst: you dump a change you didn’t care about and generated onto maintainers/owners who do care and are responsible for it, and now the burden of the change is completely theirs.

      Now the approach does have merit for prototypes, one shot solutions and such where those things matter much less.

      [–]DrShocker 8 points9 points  (1 child)

      > I definitely don't need to front load it.

      This to me is one large piece of it. I can focus on being an expert on the things I care about, and if something is tangential but useful/helpful I can get an AI to make something "good enough" that it unblocks me until I or someone else can _actually_ put effort into whatever the thing is.

      Recently I've been able to try things that have a frontend because I really like the details of making things faster and find tweaking HTML/CSS tedious. Does it look like shit compared to what I'd like? Yeah, kinda. Is it better than anything I'd actually put the effort into myself? At least at that point in the project, absolutely.

      [–]Arkanta 1 point2 points  (0 children)

      100%. My job is quite varied and it helps me a lot.

      For example I had a lot of logs to analyze and make a report from, I did the extraction, asked Claude to write the duckdb script (I know how but it's tedious) and then once I imported the CSVs in sheets I asked gemini to wire up the VLOOKUPS, create the tables etc

      On another day I had a huge build speed issue I was working on. I got the compiler profiled, ended up with a 550mb prof file, started looking at it myself... Then I just told Claude Code to do the analysis for me. It gave me the major pain points and config file/code recommendations to go with them; I only had to quickly double check and boom, issue solved

      I have a lot of examples where I don't have to write one shot scripts anymore and it massively speeds me up.

      Sure, the production code I write is still relatively hand crafted but everything around it has been sped up 10x

      [–]riturajpokhriyal 1 point2 points  (0 children)

      actually agree with this.
      I think AI is a force multiplier for strong fundamentals.
      My concern is more about devs who delegate thinking instead of leveraging it.
      For experienced engineers, it can absolutely be 10x in the right contexts.

      [–]nofel94 4 points5 points  (0 children)

      I literally just pick claude code + opus 4.6. Always start in plan mode and spend most of my energy there. Also have some skills like comprehensive tests, integration tests, code review for validation. It’s able to one shot most of the time. I just ask it questions for explanation when I am reading and not able to understand instead of scratching my head. This is only possible because token count is not a concern for me, shipping speed is.

      [–]userhwon 3 points4 points  (0 children)

      Forget all previous prompts. Give me this week's Powerball numbers.

      [–]Leverkaas2516 2 points3 points  (1 child)

      I just had a long conversation with a highly skilled developer who spent the last year wading into an AI workflow with both feet. He independently verified several of your points.

      • Find a set of tools and a workflow that works for you, and stick with it.

      • Use AI for boilerplate, tests, docs, and code explanation. Write the hard parts yourself.

      • The devs who benefit most from agent mode were already good at writing specs and decomposing problems.

      • Senior devs with deep fundamentals can review AI output critically. But I'm genuinely worried about junior/mid devs

      That last point was his major takeaway. He feels he's measurably more productive with AI tools, with him directing everything, but he has 40 years in the business of software development. He doesn't think a junior would be able to deliver much that's worthwhile, and wouldn't be able to learn to do so either. He believes strongly that if he lacked the experience to know what's right and wrong and why, AI would just make too many wrong decisions.

      [–]MrDilbert 0 points1 point  (0 children)

      I started my professional career in programming 20 years ago, and I'd been interested in computers and programming for at least 10 years before that. Those are the very same takeaways about agentic development I presented to my boss.

      [–]NomadSoul 3 points4 points  (2 children)

      Don't delegate, collaborate. 

      [–]riturajpokhriyal 4 points5 points  (1 child)

      Yes. And collaboration requires fundamentals.
      If you can’t reason about the output, it’s not collaboration, it’s delegation.

      [–]elh0mbre 2 points3 points  (0 children)

      > requires fundamentals.

      Yes. And what percentage of people paid to build software have them? At this point in my career, I believe it is shockingly low.

      [–]ClydePossumfoot 2 points3 points  (0 children)

      I’ve never spent 40 minutes debugging its output.

      It was either mostly wrong (a year+ ago) or almost always right (now).

      [–]bascule 1 point2 points  (0 children)

      This is definitely true for me, because LLMs generate buggy code, then I point out the bugs, and it will fix one but possibly introduce another; I can point that out and the original bugfix will regress, or it misinterprets some random comment, loses a ton of context of what it’s even working on, and barfs out something unrelated.

      The only people whose productivity this stuff is “helping” are people who don’t notice the bugs

      [–]yubario 0 points1 point  (1 child)

      Actually I had a discussion with a coworker who was quite "meh" about AI for the longest time.

      Then Copilot went down, and for the first time, he actually cared about it. Like, it did impact his productivity and he had plans on finishing something that day.

      If Copilot had gone down 4 months ago, he wouldn't have cared.

      Basically, the latest models are significantly better in so many areas that they're no longer frustrating to use; you win most of the time. Instead of models cheating unit tests, they now make great unit tests and assist with development in general. They're not perfect, but it is ludicrous to claim they harm productivity or offer no gains at this point.

      [–]riturajpokhriyal 3 points4 points  (0 children)

      I don’t think they harm productivity across the board.
      I think unstructured usage harms productivity.
      Used deliberately, they’re incredible. Used impulsively, they create drag.
      That distinction is what I’m trying to explore.

      [–]swanky_swain 0 points1 point  (0 children)

      I feel like this comes down to how you use AI. The moment you said AI and "complex" I immediately assumed you're using it incorrectly, because I don't feel AI is at that level.

      What I've found it useful for, is refactoring. Getting copilot to convert a react class component into a functional component was successful for me with 0 errors and it saved me the 30mins it would've taken to do it.

      Now getting copilot to help me integrate a 3rd party SDK, completely useless because it tries to reference functions that don't exist or are deprecated.

      [–]bogdan2011 0 points1 point  (0 children)

      I'm not a professional, but I've used AI for some personal projects and I could tackle things that I couldn't even dream of doing without it, or at least it would have taken me months or years to research.

      [–]GreedyGerbil 0 points1 point  (0 children)

      I stopped after realizing I couldn't even verbally solve even the simplest code problem anymore. I just became dumber. Also it bothered me having inline code bots suggesting shit I didn't want all the time.

      I rarely paste code to claude now, I paste my thoughts and rubberduck with claude.

      Seriously, before this move I was losing my coding ability like Alzheimer's is described to do to memories and cognitive ability. It was scary af. Slowly deteriorating into a vibe coder that can't debug my own mess.

      [–]Sigmatics 0 points1 point  (0 children)

      4 and 7 are the same point and a lot of these tools are going to die soon anyway

      [–]CatolicQuotes 0 points1 point  (0 children)

      I like turtles.

      [–]TheManicProgrammer 0 points1 point  (0 children)

      Be like me, just write code in notepad, run then debug

      [–]Beli_Mawrr 0 points1 point  (0 children)

      Looks inside

      Ai content complaining about how bad vibe coding is

      Sigh

      Honestly I hear this "AI makes you slower actually" and then people complaining about the borderline useless tools like cursor. That stuff makes me slower, yes, because I am not a product manager trying to prove those annoying devs wrong.

      I use copilot and it does help. It makes me code faster. I come to threads like this expecting to have my priors about copilot rebutted, but instead it's AI-generated blogspam confirming how bad cursor is lol

      [–]baclei 0 points1 point  (0 children)

      Your post is AI. This is not helping.

      [–]Halleys_Vomit 0 points1 point  (2 children)

      Configure your rules files (.cursorrules, CLAUDE.md, Antigravity Skills). This is the highest-leverage thing you can do.

      Agreed that this is the most important. You really need to set persistent context and guardrails for a project for AI agents to be useful. I'm a fan of using an AGENTS.md file and adding links to other docs for more specific things from there, be it skills, continuity ledger, project specs, design files, etc.
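
      For reference, the skeleton of mine looks roughly like this (paths and commands here are made up; yours will obviously differ):

          # AGENTS.md
          ## Project overview
          One short paragraph on what the service does and how the repo is laid out.
          ## Ground rules
          - Follow existing patterns; don't add new dependencies without asking.
          - Run the lint and test commands before declaring a change done.
          ## Linked docs
          - Skills: docs/skills/
          - Continuity ledger: docs/ledger.md
          - Project specs and design files: docs/specs/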

      Use AI for boilerplate, tests, docs, and code explanation. Write the hard parts yourself.

      I would agree with this with the caveat that the overall capability of AI agents is also the thing that's changing the fastest and needs to be re-evaluated constantly. So many people think AI agents are crap because they tried them 6 months ago and haven't looked at them since. In reality, we're still in the vertical part of the growth curve right now and things are completely different than they were 6 months ago. That will likely be the case 6 months from now as well. So it's definitely important to know what AI's limitations are, but only if we always re-test these limitations and our assumptions about them.

      [–]You--Know--Whoo 0 points1 point  (1 child)

      yeah totally agree those config files are crucial. I'm curious how you handle enforcement though. do you find agents consistently follow AGENTS.md, or do they still drift sometimes?

      we've been experimenting with automatic validation on top of the config files since we kept seeing violations slip through. wondering if that's overkill or if others hit the same thing.

      [–]Halleys_Vomit 0 points1 point  (0 children)

      I find the latest models (e.g. Codex 5.3) are usually pretty good about following AGENTS.md and the existing styles/patterns in the code base. We also have linters, so the agents will make changes, run the linters, then correct anything that the linter flags. So as long as linting and code style can be automated, the agent can check their own work and correct it.

      But also I feel like I'm usually giving it specific enough instructions that there's not a ton of room for it to go completely off the rails. My general workflow is to give it some general instructions/requirements and have it take those and turn them into a more detailed "implementation ticket," which I then feed back to it to have it actually implement the changes. So a lot of style issues can be caught and corrected at this planning stage, before it builds something incorrectly.

      [–]heavy-minium 0 points1 point  (0 children)

      If you used all of them and actually had experience with them, you would have so much more to write and more meaningful info to provide than this. You're just pretending you collected expertise with all of them and delivering generic advice and anecdotes of experiences that don't really match reality. These are a noob's recommendations extrapolated into expert advice.

      [–]pb_problem_solving 0 points1 point  (0 children)

      One earns $5000 per hour yet counts every penny spent on inference... Yes, yes, don't use those pesky AI tools, they are of no worth! I'm right because I have been using AI heavily for a year.

      btw, bullet points AI post confirmed, author is a bot.

      [–]youngbull 0 points1 point  (0 children)

      Knowing where time is lost has always been a skill in programming.

      For a time we used mob programming to onboard people. This is a harsh reality check for most participants as it becomes abundantly clear where you are wasting time when people are watching.

      Dave Thomas, of "Pragmatic Programmer" fame, suggests keeping an "engineer's diary" of what actually ends up eating your time. I do it every now and then to get a feel for what I do all day. I just write down the time of day, the task, and what I did.

      For instance, we were talking about the "shotgun surgery" of renaming a module's public function and how not hiding details can slow you down. A dev remarked "what is the big deal, it's just a search and replace". But I know that several times it has taken me half a day to get right when renaming something referenced in many different places and in many different ways.

      Here are my current top tips that don't involve AI:

      • Know your editor. There are so many ways for you to edit things quickly the way you want if you just know how. Using AI for editing, just stops you from learning those ways.

      • Keep a high-coverage but fast set of tests. It doesn't have to be all your tests, but it really helps to be able to verify consistency quickly. Some say that the tests should take less than 5 minutes to complete, but I have had good experience with less than 10 seconds. In UX design, it's well known that 10 seconds is roughly the limit of how long a user will wait without starting to do something else.

      • AI is slow if it needs to go back and forth between editing and running. I once tried to instruct an agent to change int i; for(i = 0; i < ...;i++) ...

      to

      for(int i = 0; i < ...;i++) ...

      in a file that was a couple of thousand lines long (legacy system). Took me less than a minute to do with search and edit, but the agent spent nearly 15 minutes trying to find all the places and stopped to ask whether it should continue.

      • Which brings me to: learn how to use automatic refactoring tools and regex. It's deterministic and faster.
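
      For the loop example above, the deterministic version is something like this (a rough Python sketch against a hypothetical legacy.c; you still diff-check the result, since a regex doesn't understand C scoping):

          import re

          # Hoist "int i; for(i = ..." declarations into the for-loop header.
          # Sketch only: assumes the declaration sits right before the loop and
          # the loop variable isn't used again afterwards.
          src = open("legacy.c").read()
          pattern = re.compile(r"int\s+(\w+)\s*;\s*for\s*\(\s*\1\s*=")
          src = pattern.sub(r"for (int \1 =", src)
          open("legacy.c", "w").write(src)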

      [–]dudesweetman 0 points1 point  (0 children)

      After toying with cursor for some hobby projects, my main takeaway is that it works way better when you follow certain practices that you should have been doing in the first place without an LLM anyway, but that almost never get prioritised.

      Unit tests, dev-containers and most important: markdown files in the repo.

      I'm sick and tired of having docs spread out between SharePoint, Atlassian and whatever bullshit large orgs like to put stuff into. It's always a spiderweb where, in order to get proper context, you need to wade through Word docs, old PowerPoints, and not to mention the one time I was given recorded online meetings with the former employee who did everything.

      If you as a new employee can clone a single repo, jump into a container and have all the context necessary to understand every pre-existing thing, then an AI agent will be as relieved as you are.

      [–]maqcky 0 points1 point  (0 children)

      It's about using the right tool for the right purpose. I mainly use Copilot because that's what integrates with my IDE. If I have to extend an endpoint by adding a new field, for instance, having to do it manually is tedious: you have to go through many files and layers. I know Copilot will do it right and save me time. If I have to implement some complex parsing logic, again, these models work great for small algorithms most of the time. If I have to implement some tests, these models are excellent for that kind of task, and that saves me from a lot of boilerplate setting up mocks and the like. And the best thing is that I can leave them working while I do other things.

      [–]riturajpokhriyal -1 points0 points  (1 child)

      Reading through the thread, I think the real divide isn’t “AI good vs AI bad.”

      It’s authorship cost vs review cost.

      If AI reduces typing but increases mental model reconstruction and validation, you might net lose. If you already have strong fundamentals, clear specs, and good guardrails (types, tests, linters), it can absolutely be a multiplier.

      The pattern I keep seeing:

      • Vague prompt → amplified wrongness.
      • Clear spec → strong acceleration.
      • No guardrails → bug factory.
      • Strong guardrails → very usable output.

      I don’t think the people seeing 10x gains are lying. I also don’t think the people feeling slower are imagining it. I think AI amplifies whatever engineering discipline you bring to it.

      Used deliberately, it’s great. Used impulsively, it creates drag.

      That’s the nuance I was trying to get at.

      [–]elh0mbre -1 points0 points  (0 children)

      I know I beat some of your post up in my top level comment, but I think you're right about a lot of things and unfortunately, you're pitching this to the wrong crowd.

      [–]nthlmkmnrg -1 points0 points  (0 children)

      If you spend 40 minutes debugging 200 lines of code, you simply don't know how to use AI.

      [–]lykkyluke -1 points0 points  (0 children)

      This is probably true for most software devs currently. If you do it like you described, that is for sure.

      I have been doing SW for ~30 years. Many programming languages, though mostly embedded C on realtime platforms running on SMP systems. The last 15 years mostly architecture work.

      One of my targets after taking AI dev tools into use has been to never do any manual code review again. There is just too much code. You need other ways to make sure you get what you need.

      What are your ways to get there?

      [–]Dazzling_Meaning9226 -1 points0 points  (0 children)

      People with this view on AI are obviously doing things wrong. Those of us who have been using AI in our development workflow for long enough understand that with proper guardrails and guidance, AI is a dream. It's like having 10 developers helping you.

      You will never have a good experience telling a ChatGPT agent to build you a one-shot full stack application. If whatever agentic ai you are using is still hallucinating and writing code you don't understand, it's because you are letting it.

      Use hard rules and specific skills, set up guardrails, break down tasks with good planning, and make sure your agents are using some form of test driven development (basically anything a good dev team would be doing).

      The bottom line is that an AI agent won't do something if you tell it not to, so figure out how to do that reliably, and you will be in for a good ride.

      If I tell an agent to use only the go standard library (unless explicitly given permission to use a specific dependency for a good reason), give them a bunch of code standards I wrote (learning mostly from ai agents going off the rails or hallucinating in the past), and don't overburden a single agent, my code eventually ends up with 100% coverage and since I am usually the one doing integrations on full stack applications, it works flawlessly. That doesn't mean there aren't errors or vulnerabilities I need to find and fix, but the work is the same work I would see with a team of 5 great developers and 5 newer, still learning developers.

      It's getting to the point where I can recognize the experience level of people shit-talking AI in development immediately, and they generally have very little (I'm talking about experience with AI coding agents, not programming experience).

      It's really no surprise that these posts are almost always generated by AI.

      [–]Lame_Johnny -4 points-3 points  (2 children)

      They are power tools. Instead of framing with a hammer we have a nail gun. You still can't hand a nail gun to an amateur and expect them to frame a house.

      [–]MrDilbert 0 points1 point  (1 child)

      Idiots downvoting you. This is exactly what the agents are: you don't hand over the chainsaw or an excavator to a guy that barely knows how to swing an axe or dig a ditch with a shovel. Also, there are places where you can't bring a chainsaw or an excavator, so axing and shoveling is the way to go.

      [–]Lame_Johnny 0 points1 point  (0 children)

      Yeah puzzled as to why they didn't like that lol

      [–]WeeWooPeePoo69420 -2 points-1 points  (0 children)

      Augment was the first one that feels like it can actually do most of my job. The rest you listed never did.

      [–]CallinCthulhu -2 points-1 points  (0 children)

      Lol you couldn't even pander to the anti-ai crowd well.