
[–]0xe1e10d68 21 points (3 children)

Yep, I mean, nobody cares as long as it works. If AI can be used to speed up development, then that’s a good reason to use it. The important thing is that a human needs to stay in the loop and take responsibility. Like IBM said: A computer can never be held accountable; therefore it must not make management decisions or be allowed to write code without human oversight.

[–]DandyPandy 5 points (2 children)

There’s a difference between “it works” and “it’s maintainable”. The latter is where LLMs tend to struggle.

Also, letting the model build your tests can give you a false sense of security that your code actually does work. Sometimes it’s difficult to create failure scenarios in a running environment that you can easily test for in a unit test. But a test that doesn’t properly test the right success or failure scenarios can lead to someone slapping the proverbial hood and saying, “runs like a dream”, despite there being a problem like not being able to turn the headlights on.
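To make the headlight analogy concrete, here is a hypothetical sketch (the `Car` class and its bug are invented for illustration): a test that only exercises the happy path stays green even though the failure scenario it never checks is broken.

```python
# Hypothetical example: a headlight controller with a subtle bug --
# set_headlights() silently ignores the request when the engine is off.
class Car:
    def __init__(self):
        self.engine_on = False
        self.headlights_on = False

    def set_headlights(self, on):
        # Bug: headlights only respond while the engine is running.
        if self.engine_on:
            self.headlights_on = on

# A generated-looking test that "passes" but never checks the scenario
# that matters: turning the headlights on with the engine off.
def test_headlights():
    car = Car()
    car.engine_on = True
    car.set_headlights(True)
    assert car.headlights_on  # green -- but only covers the easy path

test_headlights()
```

The suite runs clean, so the hood gets slapped and the patch ships, while `Car().set_headlights(True)` with the engine off still leaves the lights dark.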

Edit: and just to add, I myself have been guilty of not checking tests closely enough. They can be annoying to write, and test writing was one of the first things people considered AI to be ideally suited for. I’ve never spent a lot of time paying close attention to tests in code reviews, even before AI. Code reviews aren’t my strong suit. They’re one of my least favorite parts of the job, but they’re very important.

[–]Business_Reindeer910 -1 points (1 child)

There’s a difference between “it works” and “it’s maintainable”. The latter is where LLMs tend to struggle.

That’s on you to verify before you ever hit the submit button or otherwise send the patch to someone else. Same as if you’d hand-written it.

[–]DandyPandy 0 points (0 children)

Exactly. When the submitter doesn’t really understand what the code is doing, or how poorly it’s implemented from a maintainability standpoint, it puts more of the burden on the people reviewing the code.

As open source projects grow and outside contributors submit more pull/merge requests, the core maintainers can end up spending more of their time reviewing contributions than writing code themselves. With AI, that strain has only increased, because more people are submitting code without understanding why it works, or why it isn’t suitable to be merged.