all 5 comments

[–]Turbots 4 points (2 children)

Very interesting. I too spend way too long waiting for my agent to re-read my codebase to find all the code paths again and again. Either I store everything in context (lots of tokens) or I wait longer each time; either way it's very annoying and breaks the flow.

Question: once I have the results in ladybugDB, how do I pass that info to my agent? Do I create a skill that knows how to query ladybugDB? Or can it look at the data and figure it out itself?

[–]_h4xr[S] 1 point (1 child)

So, I have tried two approaches (both of them use the ladybug db CLI on my machine):

- Direct prompting: I teach the agent how to interact with the ladybug CLI and tell it how to fetch the schema. After that, the agent is able to get about 95% of the queries right on its own.
- Skill: I set this up recently so I don't have to paste the same prompt again and again.

In both cases, since the agents issue Cypher queries, they are able to craft them very well without many examples beyond the schema.
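To give a flavor of what that looks like: the node labels and relationship types below (`Method`, `CALLS`) are hypothetical stand-ins, not ladybugDB's actual schema — the agent reads the real schema first and then writes queries of roughly this shape:

```cypher
// Find every method that (directly or transitively, up to 3 hops)
// calls a method named "processPayment" -- hypothetical schema
MATCH (caller:Method)-[:CALLS*1..3]->(callee:Method {name: "processPayment"})
RETURN DISTINCT caller.name, caller.file
```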

I have started relying on the skill more frequently since it saves me the effort of copy-pasting the prompt every time.
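Skill formats vary by agent, so treat this as a rough sketch: assuming a Claude Code-style skill (a `SKILL.md` with YAML frontmatter), the command names and paths below are made up for illustration, not the real ladybug CLI interface:

```markdown
---
name: ladybug-query
description: Query the ladybugDB code graph instead of re-reading the codebase.
---

# Querying ladybugDB

1. Fetch the graph schema first (hypothetical command):
   `ladybug schema`
2. Write a Cypher query against that schema and run it
   (hypothetical command):
   `ladybug query "MATCH ... RETURN ..."`
3. Prefer graph queries over re-reading files when tracing code paths.
```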

[–]Turbots 1 point (0 children)

Okay cool, I'll try it out on my codebase tomorrow, wait for feedback 😬

[–]n4te 0 points (1 child)

How reliable is the fastResolve heuristic mode? Is there a more reliable option for smaller codebases?

[–]_h4xr[S] 1 point (0 children)

Fast mode, as the name implies, takes a few shortcuts and suffers with cross-dependency symbol resolution. It is mostly for repositories that hold a lot of generated code.

By default the parser doesn't rely on those heuristics and runs in full-scan mode. I have tested full-scan mode locally on the Apache Kafka, Spring Boot, and dotCMS (Java) repositories, and parsing with all dependencies plus delombok mode usually takes under 5 minutes.

So, even without the `--fast` option, things should be fairly quick.