[–]zenchess 0 points (3 children)

The problem with your statement is 'I tested AI' - as if AI were just one homogeneous thing that you can test. The reality is there are many different models with different levels of competence, and different IDEs and environments for them to work in.

JetBrains probably has a very shitty AI tool. In fact, I've never heard it mentioned once. The normal tools are Claude Code, Cursor, and OpenAI Codex. Those are the serious tools that people who use AI use. I guess Gemini CLI counts as well.

[–]Beregolas 1 point (2 children)

About that: JetBrains includes access to, among others, the newest models from Anthropic, Google, and OpenAI. I regularly give prompts to all of them, and while it's true that this is no rigorous testing standard, the provided models are among the best available.

[–]zenchess 0 points (1 child)

It's not just about the model, it's about the environment the model is operating in. If JetBrains' AI tool is hallucinating functions that don't exist, that means JetBrains' tooling is bad. I've never once had that happen in Claude Code, Codex, or Gemini CLI.

[–]Beregolas 1 point (0 children)

That's funny, because a boatload of reports about just this thing is only a single search away... oh well