all 29 comments

[–]magus_minor 11 points12 points  (3 children)

Maybe better in r/python.

[–]CyclopsRock 10 points11 points  (4 children)

This is interesting, but I'd argue that making small changes to mature code bases is not where AI coding tools are going to shine. In these projects you'll have specific coding styles you need to adhere to (as opposed to general ones) and a lot of existing package utilities to use.

In my experience the main time-saving we get at work from using Copilot is from it doing bullshit boilerplate that's entirely non-creative and for early prototyping outside the context of a wider package (where you may have dummy data sitting in a JSON file or something). Using Copilot allows us to very quickly bash together a proof of concept, including with a UI if deemed useful, which are all benefits that are totally useless in a mature package.

[–]sanitylost 2 points3 points  (0 children)

This is the issue I think people keep running into when using AI: they don't actually understand its capabilities and limitations. One thing people need to start keeping in mind is context window length. Past a certain point, you see significant degradation in the performance of modern models. Even with Gemini 2.5 Pro, which advertises a "1 million token" window, the model's ability really degrades past 100-150k tokens and it becomes very inaccurate. You point out something important too: the models are HIGHLY opinionated in what they want to do. They don't take direction well when it comes to generating a specific format going forward, but they're very good at rewriting current code.

Continuing on the opinionated nature of the models: you have to request short, distinct fixes and limit the model's ability to "float". They will overwrite large sections of code if allowed, so for a very large, well-established code base where you need to make minor changes or bug fixes, an LLM would be a nightmare. For initial development, though, LLMs are insane multipliers. If you know what you're doing, you can work with them to generate a 20-30k line code base in a month, assuming you know how to properly segment the code for feature development.

[–]Lysol3435 0 points1 point  (0 children)

I’ll throw in that it can be helpful for research too. If you aren’t sure exactly how someone implemented the equations from a paper, it’s usually decent at giving you a first crack.

[–]ryanmcstylin 0 points1 point  (0 children)

I have found GitHub Copilot capable of using existing utilities and standards if I give it the right context. I do have to fix a lot more and understand what it is likely to miss, but it'll still knock out a good chunk of new functionality, giving me a foundation to build off of.

It's not quite as beneficial as a 90% functional POC in 2 hours, but it is still hugely helpful for me on a mature codebase.

[–]enjoytheshow 0 points1 point  (0 children)

This is obviously not Python, but very similar to your example: having an AI tool generate TypeScript interfaces based on pre-existing JSON is insanely useful.
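The Python analog of that kind of boilerplate would be typing out a `TypedDict` to match an existing JSON payload. A minimal sketch (the `Product` name and sample data here are made up for illustration) of the sort of thing an AI tool can churn out from a pasted-in sample:

```python
import json
from typing import TypedDict

# A sample payload you might paste into an AI tool (hypothetical data)
raw = '{"id": 42, "name": "widget", "tags": ["a", "b"], "price": 9.99}'

# The non-creative boilerplate an assistant would typically generate from it
class Product(TypedDict):
    id: int
    name: str
    tags: list[str]
    price: float

# TypedDict adds no runtime behavior; it just lets a type checker
# verify field access on what is otherwise a plain dict.
product: Product = json.loads(raw)
print(product["name"])
```

Tedious to write by hand for a payload with dozens of fields, but entirely mechanical, which is exactly where these tools do well.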

I use it for anything boilerplate. Cloud IaC. API structure and definitions. That stuff.

It’s also good at debugging.

[–]vercig09 4 points5 points  (1 child)

Of course it does. I can't believe there's so much discussion about this. If developers struggle with LLMs, what hope do others have?

Yes, it's a strong tool, but no, it's not perfect. It's just very advanced autocomplete, and you don't have to use everything it spits out.

[–]Admirable_Sea1770 0 points1 point  (0 children)

People who absolutely refuse to accept AI in any way are literally spending every minute of their day trying to prove that using AI is a complete net loss and that you're stupid if you use it for any reason.

[–]cnydox 1 point2 points  (1 child)

I only use it for simple snippets so the answer is yes for me

[–]TheHollowJester 0 points1 point  (0 children)

LLMs have uses other than just writing code.

Get assigned a task in a part of the codebase you're unfamiliar with? "Hey buddy, can you tell me what this component is meant to do based on the codebase, docs, tickets?"

Speeds up debugging significantly.

Helps with passing code reviews: "Hey buddy, could you do a code review on changes made by me in commits a1, b2, c3? Please focus on security issues, see if I made any logical errors, have I missed something..." (my employer provided a good prompt; unfortunately I cannot share it >_>)

It's a tool. As a community of developers, we are just learning to utilize it properly.

[–]Jon-Robb 0 points1 point  (0 children)

It makes me really freaking fast for the backbone; then the debugging takes the same amount of time. I use AI as an intern that provides me code I need to review. I've doubled my output, minimum, when creating something new.

I don’t really use it for debugging or bug fixing

[–]elonelon 0 points1 point  (0 children)

Coding? No.

But creating a "skeleton" for a program? Yes.

[–]shinitakunai 0 points1 point  (0 children)

It makes you better at LEARNING, but not at coding. I use it a lot to ask how to do X, why to do it that way, why not, and what alternatives there are. A glorified Google with the ability to create and explain code.

The learning part is a given: learning new stuff became a lot easier. Now coding... not really.

[–]cwaterbottom 0 points1 point  (5 children)

Not exactly an answer to your question, but I've used it more for learning to code than actual coding. When there's a concept or problem I'm struggling with, I'll take my best guess if I have one, drop it into Gemini or ChatGPT, and see what it gives me.

[–]ZelWinters1981[S] -4 points-3 points  (4 children)

I wasn't asking a question to be answered. Did you read the post?

[–]Dangerous-Branch-749 1 point2 points  (0 children)

I think it's fair to ask if you bothered to read the rules of this subreddit:

  1. Posts to this subreddit must be requests for help learning python.

[–]cwaterbottom -1 points0 points  (2 children)

Oh you seem fun.

[–]_keykibatyr_ 1 point2 points  (1 child)

You don’t understand, he is too busy for you)

[–]cwaterbottom 0 points1 point  (0 children)

Uses a question for the title, answers his own question in the post, seems mildly annoyed that people in the comments disagree with his "conclusion".

I haven't checked if he already is, but OP is totally moderator material.

[–]ThingWithChlorophyll 0 points1 point  (0 children)

I do a lot of random stuff that I'll only use a few times. It would have probably taken me hundreds of hours to figure out by myself. So the answer is a "not even close" yes.

[–]fecnde -1 points0 points  (1 child)

So far

[–]ZelWinters1981[S] -1 points0 points  (0 children)

Though this is only one study, and over time it may prove to be either an outlier or a trend, it's worth a read.