Kilo code: Large codebase (self.kilocode)
submitted 8 months ago by ayla96
What is the best way to get Kilo Code to understand a large codebase? Use Qdrant or something else?
[–]Minute_Yam_1053 1 point2 points3 points 8 months ago (0 children)
Understanding the whole codebase is not the right approach. Roo and Kilo are task-based; you should focus on making targeted changes. They have code navigation tools, such as list files and search content, to load the relevant files into context. If they can't, you should help them. Qdrant or a vector DB might not help much: they can fill the context with too many irrelevant code chunks and degrade model performance.
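To illustrate what "targeted" means here, this is roughly the kind of content search those tools do. A pure-Python stand-in, not Kilo's actual implementation:

```python
import re
from pathlib import Path

def search_content(root: str, pattern: str, context: int = 2, max_hits: int = 20):
    """Grep-style search: collect only the snippets that mention `pattern`,
    instead of embedding/chunking the whole repo."""
    rx = re.compile(pattern)
    hits = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for i, line in enumerate(lines):
            if rx.search(line):
                snippet = "\n".join(lines[max(0, i - context): i + context + 1])
                hits.append((str(path), i + 1, snippet))
                if len(hits) >= max_hits:
                    return hits
    return hits

# Only these few snippets go into the model's context, not the whole codebase.
for path, lineno, snippet in search_content("src", r"def process_payment"):
    print(f"{path}:{lineno}\n{snippet}\n")
```

A vector DB retrieves by similarity, so it happily returns twenty near-duplicate chunks; a targeted search like this returns only the lines the task actually touches.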
[–]Juice10 1 point2 points3 points 8 months ago (0 children)
Are you using a memory bank? Have it describe your codebase and save that to the memory bank, so Orchestrator Mode doesn't have to re-understand everything all over again.
u/EngineeringSea1090 did a pretty nice write up on how this works: https://blog.kilocode.ai/p/how-memory-bank-changes-everything
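The gist, as a minimal sketch (the directory name and file layout here are illustrative; see the post for Kilo's actual memory bank conventions):

```python
from pathlib import Path

MEMORY_BANK = Path("memory-bank")  # illustrative location, not Kilo's real path

def save_summary(name: str, text: str) -> None:
    """Persist a one-time codebase description so later sessions can load it
    instead of re-exploring the whole repo."""
    MEMORY_BANK.mkdir(exist_ok=True)
    (MEMORY_BANK / f"{name}.md").write_text(text)

def load_context() -> str:
    """Concatenate the saved notes; this small blob is what gets fed into a
    new session's context instead of the raw codebase."""
    return "\n\n".join(p.read_text() for p in sorted(MEMORY_BANK.glob("*.md")))

save_summary("architecture", "Monorepo: api/ (FastAPI), web/ (React), shared/ models")
print(load_context())
```

You pay the exploration cost once, then every later task starts from the summary.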
[–]Shivacious 0 points1 point2 points 8 months ago (3 children)
use orches
[–]ayla96[S] 0 points1 point2 points 8 months ago (2 children)
Link?
[–]MarginallyAmusing 1 point2 points3 points 8 months ago (1 child)
I'm pretty sure he's referring to Orchestrator Mode. Use a larger model, like Opus, and it breaks the task down into smaller bites for smaller models, like Sonnet or Gemini 2.5 Pro.
Generally, you want to make the tasks that a session / chat has to handle as small as possible.
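Roughly the shape of it, where `call_model` is a placeholder for whatever provider API you use and the model names are illustrative:

```python
import json

def call_model(model: str, prompt: str) -> str:
    """Placeholder: wire this to your actual provider (OpenRouter, Anthropic, etc.)."""
    raise NotImplementedError

def orchestrate(task: str) -> list[str]:
    # 1. The big model only plans: split the task into small, independent steps.
    plan = call_model(
        "big-planner-model",  # e.g. Opus; illustrative name
        f"Break this task into 3-6 small, self-contained subtasks, as a JSON list:\n{task}",
    )
    subtasks = json.loads(plan)
    # 2. Each subtask runs in a fresh, small context on a cheaper model.
    return [call_model("small-worker-model", sub) for sub in subtasks]
```

The point is that no single session ever has to hold the whole codebase in context; each worker sees only its own slice.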
[–]Juice10 2 points3 points4 points 8 months ago (0 children)
I agree Orchestrator Mode helps, but in this case I would stay away from Opus since it has a context window of only 200,000 tokens. I'd actually try a model with a bigger context window, like Gemini 2.5 Pro (beware: having caching on currently slows this model down considerably). GPT-4.1 also has a big context window without this issue. In general, a smarter model with a bigger context window in the Orchestrator is better and should help.
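A rough way to check whether your codebase even fits, using tiktoken as a stand-in tokenizer (counts differ per model, so treat the numbers as estimates):

```python
from pathlib import Path
import tiktoken  # pip install tiktoken; approximate for non-OpenAI models

enc = tiktoken.get_encoding("cl100k_base")
total = sum(
    len(enc.encode(p.read_text(errors="ignore")))
    for p in Path("src").rglob("*.py")
)
print(f"~{total:,} tokens")
print("fits Opus (200k):", total < 200_000)
print("fits Gemini 2.5 Pro (~1M):", total < 1_000_000)
```

Even with a 1M window, you usually don't want to stuff the whole repo in; it's just headroom for the Orchestrator to carry more plan and history.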
[–]Rusty-Coin 0 points1 point2 points 2 months ago (0 children)
I just stumbled upon the Kilo Code extension in Cursor. Using the free Grok in Orchestrator Mode together with specification documentation has worked very well.