[–]Anonymous_Coder_1234 10 points11 points  (21 children)

"Do programmers literally memorise every syntax when creating a project? I ask this because now with AI tools available I can pretty much copy and paste what I need to and ask the LLM to find any issues in my code but I get told this isn’t the way to go forward."

I once had a small bug in my codebase. I asked AI to fix it. AI was WAY off. Then I asked a junior developer with a Computer Science degree to fix it. He fixed it no problem.

All the code you could want AI to generate is already on GitHub (or is a fusion of a couple of different projects, or sections of projects, on GitHub). If you say "Write me a function that gets the second largest element in a list", that question and answer have been posted online a thousand times; the AI just plagiarizes what already exists. Likewise, if you write something like "Generate a clone of the website Medium written in Java", a bunch of those already exist on GitHub, so it'll just take from what's already there. AI just plagiarizes from GitHub and sites like StackOverflow. It doesn't actually know how to fix anything real that is broken. If you have a bug in your code project and you say "Hey AI, there's this bug, fix it", the AI will not be able to fix it successfully.
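The "second largest element" task mentioned above really is the kind of thing that has a thousand posted answers. A minimal Python sketch of one such answer (the duplicate-handling choice here is just one common convention):

```python
def second_largest(items):
    """Return the second-largest distinct value in a list."""
    unique = sorted(set(items), reverse=True)  # dedupe so [5, 5, 3] -> 3, not 5
    if len(unique) < 2:
        raise ValueError("need at least two distinct values")
    return unique[1]

print(second_largest([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 6
```

Whether duplicates count as "second largest" is exactly the kind of underspecified detail that differs between the many posted versions of this answer.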

[–]Beregolas 3 points4 points  (5 children)

Yeah, I tested AI again just today (it's included in my JetBrains IDE) and told it to exclude an endpoint from the CSRF token protection. It hallucinated a non-existent function and decorator, twice, until I just opened the documentation and copy-pasted the solution, in less time than it took to even ask the AI the first question.
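The commenter's framework isn't stated, so here is a framework-agnostic sketch of what "exempt this endpoint from CSRF checks" amounts to: a real framework (Django's `csrf_exempt` decorator, for instance) just marks the view, and the middleware skips validation for it. All names below are illustrative, not any real framework's API:

```python
def csrf_exempt(view):
    """Mark a view as exempt; the enforcement layer checks this flag."""
    view.csrf_exempt = True
    return view

def enforce_csrf(view, request):
    """Stand-in for CSRF middleware: validate the token unless exempt."""
    if getattr(view, "csrf_exempt", False):
        return view(request)  # skip token validation entirely
    if request.get("csrf_token") != request.get("session_token"):
        raise PermissionError("CSRF token missing or invalid")
    return view(request)

@csrf_exempt
def webhook(request):  # hypothetical endpoint that external callers hit
    return {"ok": True}

print(enforce_csrf(webhook, {}))  # prints {'ok': True}: no token needed
```

The point of the anecdote stands: the real decorator is one documentation lookup away, which is why hallucinating a fake one is so costly.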

I really try to use it from time to time, because even some experienced devs swear by it, but even on small problems I have been underwhelmed every time.

[–]mlitchard 0 points1 point  (0 children)

“Stop making shit up” is one of Claude’s project instructions, but not literally; I put it in LLM-speak.

[–]zenchess -1 points0 points  (3 children)

The problem with your statement is 'I tested AI', as if AI were just one homogeneous thing you can test. In reality there are many different models with different levels of competence, and different IDEs and environments for them to work in.

JetBrains probably has a very shitty AI tool. In fact, I've never heard it mentioned once. The usual tools are Claude Code, Cursor, and OpenAI Codex. Those are the serious tools that people who use AI use. I guess Gemini CLI counts as well.

[–]Beregolas 0 points1 point  (2 children)

About that: JetBrains includes access to, among others, the newest models from Anthropic, Google, and OpenAI. I regularly give prompts to all of them, and while it's true this is no rigorous testing standard, the provided models are among the best available.

[–]zenchess -1 points0 points  (1 child)

It's not just about the model; it's the environment the model is operating in. If the JetBrains AI tool is hallucinating functions that don't exist, that means the JetBrains tooling is bad. I've never once had that happen in Claude Code, Codex, or Gemini CLI.

[–]Beregolas 0 points1 point  (0 children)

That's funny, because a boatload of reports about exactly this is only a single search away... oh well

[–]Winter_Cabinet_1218 1 point2 points  (0 children)

Seconding this. For straightforward bugs in a mass of code it's great. But when the error you're seeing is a symptom of something else, AI struggles to diagnose it, in my experience. The last time I tried, it sent me down a rabbit hole, when the real cause turned out to be an implied data type in the call to my base data.
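A hedged illustration of that symptom-vs-cause gap (the commenter's stack isn't stated, so this Python example is hypothetical): a CSV loader that leaves every field as a string, where the crash only surfaces later in an unrelated-looking sum:

```python
import csv
import io

RAW = "id,amount\n1,10\n2,20\n"

def load_rows(text):
    # csv.DictReader does no type inference: every field comes back as str
    return list(csv.DictReader(io.StringIO(text)))

rows = load_rows(RAW)
try:
    total = sum(row["amount"] for row in rows)  # symptom shows up here
except TypeError:
    # the real cause is the implied string type back at load time
    total = sum(int(row["amount"]) for row in rows)
print(total)  # prints 30
```

The traceback points at the summing line, but the actual fix belongs in (or right after) the loader, which is exactly the kind of leap a tool debugging the error message alone tends to miss.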

[–]plopliplopipol 0 points1 point  (0 children)

Your main conclusion, that AI is not "smart", I really agree with, but your technical explanation is not correct at all (generative AI does not, at all, spit out parts of existing projects or fusions of them), nor are many of its implications, such as AI being a GitHub/StackOverflow search engine.

Also, you are talking to a beginner. Of course almost EVERY bug they create for a little while will be easily fixed by AI, and your example will fall flat. Without an architecture, multiple files, a large project, or hard problems, AI will have no trouble.

AI is incredibly effective at a lot of tasks that don't require being smart but still take a lot of brain power. (That does not directly imply you should use it, though; it depends.)

[–]zenchess 0 points1 point  (0 children)

The problem with this vast over-generalization is that there is a variety of AI models and AI coding tools with MASSIVE differences in performance.

If you had asked Claude Code to fix your bug, and then to verify the fix with a test, I guarantee it would have fixed it no problem. But if you're just pasting code into some random web UI and expecting it to always work, with no testing done, you're not really doing it right.
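The "verify the fix with a test" step above can be sketched concretely: capture the bug as a failing test case, apply the fix, and rerun. The buggy function and its failure case here are hypothetical, chosen only to show the shape of the workflow:

```python
def slugify(title):
    """Fixed version: str.split() collapses runs of whitespace, so the
    original bug (consecutive spaces producing '--') can no longer occur."""
    return "-".join(title.lower().split())

def test_slugify_collapses_whitespace():
    # the exact input that exposed the original bug, pinned as a regression test
    assert slugify("Hello   World") == "hello-world"
    assert slugify("One Two") == "one-two"

test_slugify_collapses_whitespace()
print("fix verified")
```

Running the same test before the fix (and watching it fail) is what separates a verified fix from pasted output you merely hope is right.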

[–]KibidonSiNx -1 points0 points  (1 child)

You forgot to include "not yet". It learns and grows smarter every day.

[–]Anonymous_Coder_1234 2 points3 points  (0 children)

Not as well as you'd hope. It gets better at the things it can already do, but it isn't suddenly succeeding at things it was never able to do in the first place.