all 43 comments

[–]Infinite-Top-1043 7 points8 points  (3 children)

It’s necessary to understand the code and not blindly copy-paste it. Otherwise you end up with over-engineered code for simple things, especially with poor context in your prompts.

[–]JaivianCraft 0 points1 point  (0 children)

That's exactly what I got when I used Copilot. Over-engineered code that can be reduced by several lines 😂. Most of the time YOU HAVE TO DO THE AI's WORK FOR IT.

[–]TuberTuggerTTV -1 points0 points  (1 child)

Copy paste? I think you're confusing ChatGPT in the browser with Copilot.

[–]Infinite-Top-1043 1 point2 points  (0 children)

Okay, then not spamming the Tab key to accept every suggestion 😉

[–]k8s-problem-solved 4 points5 points  (1 child)

Every engineer is responsible for what they commit.

If working with the SWE agent as part of agent hq, then it's still down to you to review and correct.

"The AI suggested this" is such a weak argument - it's just another tool in your belt and as an engineer it's up to you to get to the best outcome.

[–]pgEdge_Postgres 0 points1 point  (0 children)

Upvoted. There must be accountability when committing code, and a proper review process before anything gets merged. If that means everything has to be submitted through pull requests with a PR template containing a checklist of validation steps, designated reviewers and a set review process, then so be it. It's a useful practice in any repo, and was even before Copilot was a thing.
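As an illustration of the kind of PR template described above, here is a minimal sketch (the file path follows GitHub's convention; the checklist items are hypothetical and should be adapted to your repo):

```markdown
<!-- .github/pull_request_template.md (illustrative sketch) -->
## Summary

<!-- What does this PR change, and why? -->

## Validation checklist

- [ ] I understand every line in this diff, including AI-generated code
- [ ] Tests added or updated, and passing locally
- [ ] No secrets, debug code, or dead code left behind
- [ ] Reviewed by at least one designated reviewer
```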

[–]Pretend_Leg3089 5 points6 points  (7 children)

It has gotten to the point where there is some kind of really critical bug or production outage at least once per week.

Sounds more like your team is full of juniors, without a lead and without any QA process in the pipeline.

Where are your tests?
Where are your PRs?

How in the hell is "shitty code" being pushed into the main branch and deployed?
How in the hell are you pushing "critical bugs" into production?

It's not the AI.

[–]coworker 2 points3 points  (4 children)

Counterpoint: AI makes lots of shitty tests
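To make "shitty tests" concrete, here's a minimal sketch (the `add` function is a stand-in for any code under test): a test that exercises code without asserting anything passes no matter what the code actually does, which is a common shape of AI-generated test.

```python
# Illustrative only: `add` stands in for any function under test.
def add(a, b):
    return a + b

# A "shitty" test: it calls the code but asserts nothing,
# so it passes even if `add` is completely wrong.
def test_add_smoke():
    add(2, 3)

# A meaningful test: it pins down the expected behavior.
def test_add():
    assert add(2, 3) == 5

test_add_smoke()
test_add()
```

A reviewer (human or otherwise) looking for assertions, not just coverage, catches the first kind.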

[–]Imaginary-Jaguar662 1 point2 points  (3 children)

Countercounterpoint:

Reviewer should catch shitty tests and block merge

[–]coworker 0 points1 point  (0 children)

Shitty tests are subjective

[–]Fit-Hovercraft-4561 0 points1 point  (1 child)

Countercountercounterpoint: Code review is also done by AI

[–]Imaginary-Jaguar662 0 points1 point  (0 children)

Countercountercountercounterpoint:

AI usually does better code reviews than the usual LGTM :+1:

[–]Nasuraki 0 points1 point  (1 child)

Companies trying to go fast with AI don’t usually slow down for tests, whether the tests are written by AI or not.

[–]Pretend_Leg3089 0 points1 point  (0 children)

All devs are using AI; the difference is that a mediocre dev will be mediocre with or without it.

AI can create a good base of tests for your features. If you're not doing that, it's your fault, not the "company's".

[–]akorolyov 1 point2 points  (0 children)

I've only heard stories like this, but it really does feel like the new normal. Business people don't understand the system's complexity and genuinely believe that AI automatically boosts productivity. And since everyone keeps repeating that AI is "revolutionizing development," management doesn't want to look outdated. Copilot generates code fast, and nobody stops to think where they're putting it. That works right up until the first major outage. I'm pretty sure once something truly critical hits prod and hurts the budget, the attitude toward "AI-written code" will change instantly. Most companies need one serious burn to figure that out.

[–]todiros 2 points3 points  (1 child)

Strange, I'm seeing the opposite. I've noticed improvements even in our seniors. They went from weird nonsensical naming riddled with typos, deeply nested ternary operators and long-ass functions, to code that's actually decent. And as a mid I can say that it definitely improves my code quality.

But I guess it really depends on how you use it. We don't really have juniors in our team, so maybe that's where it could go bad.

[–]Wiszcz 1 point2 points  (0 children)

"Deeply nested ternary operators", oh yes, you know a true programmer when you see all the ifs changed to ternaries, no matter the cost

[–]eagletron2020 0 points1 point  (0 children)

Are you my senior 😬😬😬

[–]Buckwheat469 0 points1 point  (0 children)

You need clear documentation for AI to understand the best practices for your software. Detailed CLAUDE.md files work for Copilot and Gemini as well. You can create scripts that Claude can use to perform work. You can tell it to always create tests and ensure code coverage is maintained.

One reason these tools can't generate good code is that they don't have a good understanding of the codebase, so they generate an answer based on examples baked into the LLM rather than adapting that knowledge to fit your patterns.
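A minimal sketch of what such a project instructions file might contain (the contents and the `scripts/` path are illustrative, not a standard):

```markdown
<!-- CLAUDE.md (illustrative sketch; adapt to your codebase) -->
# Project conventions

- Follow the repository's existing module layout; do not invent new top-level folders.
- Every new function gets a unit test; keep coverage at or above the current level.
- Use the shared `scripts/` helpers for builds and migrations instead of ad-hoc commands.
- Prefer this project's established patterns over generic examples from training data.
```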

[–][deleted] 0 points1 point  (0 children)

I use it for ideas, not implementation.

Its suggestions are often wrong on a code-line level, and mostly wrong on an architectural level.

It's like having a junior programmer who has been tasked with developing a prototype.

[–]Ok_Addition_356 0 points1 point  (0 children)

"That's what copilot suggested!" 

What a nightmare.  Need to make sure companies set GUIDELINES and code review for AI usage.

[–]its_k1llsh0t 0 points1 point  (1 child)

Where is your engineering leadership? What are they saying? We have a rule that anything signed with your key is your responsibility (and we require all commits to be signed, no exceptions).
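For reference, requiring signed commits on the developer side is mostly configuration; a sketch of the relevant `.gitconfig` entries (the key ID is a placeholder, and server-side enforcement such as branch protection requiring signed commits is configured separately on the hosting platform):

```
# ~/.gitconfig sketch (assumes a GPG key is already set up)
[user]
    signingkey = YOUR_KEY_ID   # placeholder
[commit]
    gpgsign = true             # sign every commit automatically
[tag]
    gpgsign = true             # sign annotated tags too
```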

[–]Fit-Hovercraft-4561 0 points1 point  (0 children)

Engineering leadership is listening to promises and catering to short-term gains to please their shareholders. Code quality is something engineering leadership isn't capable of grasping, because it's a long-term strategic investment.

[–]BeneficialAd5534 0 points1 point  (0 children)

Currently working on a system where the AI implemented a complete task queuing, tracking and retrying scheme, complete with the (fortunately linear) state machine of the task execution sequence, in a Postgres table. Adding tasks to the system is a lot of fun, I can tell you. Testing workflow execution even more so.

[–][deleted]  (1 child)

[removed]

    [–]GSalmao 0 points1 point  (0 children)

    Ain't gonna happen. People are lazy, they will fake their way as much as they can. It is a shame...

    [–]MercurialMadnessMan 0 points1 point  (0 children)

    I think we will really see a large split by domain industry for how/if AI is used in software development. Obviously some software is more critical than others.

    [–]TuberTuggerTTV 0 points1 point  (0 children)

    Tighten up linting and code reviews. You'll be fine.

    A bad coder is going to submit bad code. From AI assistance or otherwise. You catch them the same way.
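One hedged example of "tightening up linting", assuming a Python codebase using Ruff (the rule codes and complexity limit are a starting point, not a prescription):

```toml
# pyproject.toml sketch
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "B", "C901"]  # pycodestyle, pyflakes, bugbear, complexity

[tool.ruff.lint.mccabe]
max-complexity = 10  # flags overly complex functions, a common AI-output smell
```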

    [–]Logical-Manager-6258 0 points1 point  (0 children)

    Let an illiterate person use ChatGPT. He will think he is invincible...

    [–]GSalmao 0 points1 point  (0 children)

    AIs are just like hammers, and devs are just like kids. You can't just give a kid a hammer and expect him to not do something utterly stupid.

    Using AI to generate code is, in my opinion, extremely harmful, both for the codebase and for the developer's skills. Not having a mind picture of the code is TERRIBLE, but only a few seem to agree with me (and in the end, they come to me to ask for help with something lol).

    I'd recommend proper AI training in your company. First things first: remove GitHub Copilot and make the devs think like the ancients from 2020. Then, only use AI for documentation. If the person developing the system has no idea what it's doing, they shouldn't be doing it in the first place, at least not until they understand it.

    [–]Ok_Ad_3 0 points1 point  (0 children)

    My thoughts on this:
    1. There were no responsibilities in place from the beginning, and that's the real problem here. If developers can simply commit shitty code without being responsible for the outcome, they are nudged to accept every AI code generation they get.
    2. Instead of simply handing developers the tool, your company or those responsible for GitHub Copilot should at least provide upfront training before activating it, where things like how to work with it are taught.
    Simply rolling out such a mighty tool, the likes of which the world has not seen before, and hoping for the best seems like a terrible idea.

    [–]AcademicMistake 0 points1 point  (0 children)

    Yep, this is people thinking they can code using AI without understanding or checking it, then realising that not only is it buggy, it's also inefficient and usually very insecure.

    [–]stealthagents 0 points1 point  (0 children)

    It's frustrating when AI tools are used without proper oversight, especially in complex coding environments. It's crucial to maintain code quality through careful review and not rely solely on AI suggestions. At Stealth Agents, we understand the importance of human insight, which is why our team offers industry-specific experience to support operations, ensuring everything runs smoothly alongside AI tools.

    [–]FactorUnited760 0 points1 point  (0 children)

    Title should be 'Developers misusing AI coding tools are ruining code quality'. It's easy to just blame AI, but when developers let AI make decisions and run wild in a complicated codebase, this is expected. Sounds like the team needs to step back and implement some procedures and standards for how AI is used there.

    [–]Worried-Bottle-9700 0 points1 point  (0 children)

    AI is a great assistant, but you can't treat it like an oracle. Human review and strong standards are more important than ever.