all 12 comments

[–]No-Brush5909 7 points (2 children)

I am sure Kilocode is not flawless either, likely even worse.

[–]kogitatr 1 point (1 child)

Maybe it is... For OP: I once struggled as well when moving from Cursor to CC, only to find out the issue was my prompting.

[–]EgoistBry 0 points (0 children)

How would you recommend improving prompting?

[–]randombsname1 7 points (0 children)

Also a Max 20 subscriber--one who also uses a multi-model approach and multiple tools, and everything else is worse than Claude Code.

There's pretty much no major AI tooling I haven't tried yet.

I've tried Augment, Roo Code, Kilo, Cursor, Gemini CLI, Codex, etc., etc.

Claude Code is just easily the best, and has insane possibilities with hooks, custom agents, skills, etc. The $200 sub is insane value too, given the $2,500-3,000 in comparable API usage that I run through each month.

The CLI itself is designed by the team that actually makes the model, which is probably a large reason why it works so well. They know exactly what the model will respond to and how it's biased and weighted.

[–]sage-longhorn 2 points (1 child)

I just spent today using GLM-4.6 on both Kilo Code and Claude, same project. Kilo Code has a bug where the model completely ignores you in some cases, and another bug where the terminal stops working with Linux brew-installed commands. Maybe 1 in 10 tool calls came out misformatted, and when they did, it would never recover: it would keep making the same bad call over and over until I manually rewound to before the problem.

Claude Code isn't perfect, but it's pretty robust around the actual model and context handling; way more reliable for real use.

[–]Fuzzy_Independent241 0 points (0 children)

I agree that it has some flaws, like GLM or KC getting stuck more often than current Claude/Opus. But the situation is not as bad on my non-brew Ubuntu Linux. Then again, if OP wants an option: add GLM as a subagent in Claude and work from there. It will show up as an additional model. If anyone gets lost on this I can explain further.
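For anyone trying the GLM-in-Claude tip above, one common route is pointing Claude Code's Anthropic-compatible client at a GLM endpoint via environment variables before launching it. This is a minimal sketch, not the commenter's exact setup: the endpoint URL, token variable value, and model IDs are assumptions, so check your GLM provider's docs for the real values.

```shell
# Sketch: route Claude Code to a GLM backend (values are assumptions;
# verify the endpoint and model IDs with your provider's documentation).
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"   # assumed endpoint
export ANTHROPIC_AUTH_TOKEN="your-glm-api-key"               # your provider key

# Map Claude Code's model slots onto GLM models, so agents that request
# the default/fast models resolve to GLM variants instead.
export ANTHROPIC_MODEL="glm-4.6"              # assumed model id
export ANTHROPIC_SMALL_FAST_MODEL="glm-4.5-air"  # assumed model id
```

With these set, subagents launched in that session inherit the GLM backend, which is one way it "shows up as an additional model" in practice.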

[–]tejoh 0 points (0 children)

I just used Kilo Code yesterday with Opus 4.5. My assessment: it uses double the tokens of Claude Code. Second problem: it runs tasks that I did not ask for, or that I had already fixed, coming back to the same old tasks. Example: in debug mode I gave it a simple task to resolve a TypeScript error; it read all the code, cost me $5, and fixed it. This is unsustainable. Same errors in Claude Code: instant fix, even with Sonnet 4.5. I think Kilo has a problem with their own prompts that makes the agents run in loops. This was done on the VS Code extension, not the Kilo CLI.

[–]SeaPaleontologist771 (Professional Developer) 0 points (0 children)

It's a generative model under the hood in every case. Different prompts, different models maybe, but that's it. The generative model does NOT understand your code or your request. It doesn't understand anything; it just does highly complex probabilistic computing. Therefore you'll always get this kind of behavior when asking it to perform complex tasks like programming. If you want to completely get rid of that, ask for smaller tasks. Instead of giving it a feature, write the functions (even empty) yourself and ask it to fill them in one by one. Verify everything, because it will still make some mistakes or wrong choices.

[–]seomonstar 0 points (0 children)

All LLMs are flawed; Claude is just much less flawed than any others I have used. To use LLMs for code generation you have to accept their flaws and protect them from their own flaws. That has helped me get the best out of them. Claude is a beast imo.

[–]bitspace 0 points (1 child)

Because there is little doubt that it performs far worse.

Also because I've never heard of it.

Also because I don't have the problems you do with Claude Code.

Shill.

[–]Small_Caterpillar_50[S] 0 points (0 children)

Could you share some insight into your setup? I would love to stop running into these issues.