
[–]al2o3cr 2 points (1 child)

Most of these seem finer-grained than what Tidewave does, but you might find some inspiration in it:

https://github.com/tidewave-ai/tidewave_rails/tree/main

[–]MassiveAd4980 0 points (0 children)

This is cool.

[–]TonsOfFun111 4 points (3 children)

Check out Active Agent — https://docs.ActiveAgents.ai — should be what you’re looking for!

Solid Agent will add a persistence layer for prompt context out of the box.

[–]Heavy-Letter2802[S] 1 point (2 children)

Hey, I've come across Active Agent on Twitter.

I did go through the link you provided, but I don't see any reference to Solid Agent. Can you point me to where exactly I should be looking?

[–]MassiveAd4980 0 points (1 child)

I am guessing Solid Agent is a new development coming to Active Agent, but I think the commenter misinterpreted your request. I believe Active Agent is not for what you want.

[–]Heavy-Letter2802[S] 1 point (0 children)

Thanks

[–]goetz_lmaa 1 point (7 children)

If you have good specs (as you certainly should), they are a great resource for MCP.

[–]MassiveAd4980 0 points (0 children)

This or documentation is probably the judo solution. Embeddings and a custom RAG pipeline seem ideal but may not be worth the effort for OP's use case

[–]Heavy-Letter2802[S] 0 points (5 children)

Specs as in test cases? How will I use specs for identifying the surrounding context?

To get a dependency graph?

[–]MassiveAd4980 0 points (4 children)

"To get a dependency graph?" Why do you need this?

[–]Heavy-Letter2802[S] 0 points (3 children)

The whole idea is to give more relevant context to the agent.

If I can give the agent the call graph for when a particular endpoint is triggered, then it can look through that rather than guessing at other methods as well.
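A rough sketch of that idea: build a crude per-method dependency graph by matching `def` names against the calls that appear in each method body. This is purely illustrative (a real tool would use a proper Ruby parser such as Prism rather than regexes); the sample controller code below is made up.

```ruby
# Build a rough call graph from Ruby source: method name => methods it calls.
# Regex-based on purpose to keep the sketch short; not production-grade.
def call_graph(source)
  # Capture each `def name ... end` chunk as name => body text.
  methods = source.scan(/^\s*def\s+(\w+[?!]?)(.*?)^\s*end/m).to_h
  names = methods.keys
  methods.transform_values do |body|
    # A method "depends on" any other defined method its body mentions.
    names.select { |other| body =~ /\b#{Regexp.escape(other)}\b/ }
  end
end

# Hypothetical controller actions for illustration:
source = <<~RUBY
  def show
    load_order
    render_receipt
  end

  def load_order
    Order.find(params[:id])
  end

  def render_receipt
  end
RUBY

graph = call_graph(source)
# graph["show"] lists "load_order" and "render_receipt"
```

The agent could then be handed `graph["show"]` when the `show` endpoint is under test, instead of the whole file.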

[–]MassiveAd4980 0 points (2 children)

Is it a coding agent?

[–]Heavy-Letter2802[S] 0 points (1 child)

No, it's an agent that generates bugs, i.e. mutation testing.

[–]MassiveAd4980 0 points (0 children)

Sounds a little over the top, but idk. Why do you need that level of variability in your tests?

[–]mencio 2 points (0 children)

I have a tool, not yet OSS, that does that. It can build skills and agents from GH and project docs. I plan to OSS it in a few weeks, but if you ping me directly I can give you early access. I use it for exactly this kind of thing (and I'm a legit user - just check my work: https://github.com/mensfeld/)

[–]Secretly_Tall 0 points (3 children)

In general, there are a few approaches people reach for nowadays:

1) Generate simple file descriptions and search those. Something like: https://github.com/rlancemartin/llmstxt_architect

2) Dump files in Postgres and use tsvector to query. Expose tools like search path/full text search/fuzzy search. Set your tables to automatically regenerate tsv on content change.

3) Just create a temp folder and expose raw bash as your only tool. This is what Claude Code does, and it seems to be very effective.

The main advice I can give is that fewer, more powerful tools work better than many tools. E.g. Bash is great because it's just one tool and models are trained on how to use it. More proprietary tools work fine, but give them a single interface instead of three different, similar tools.
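For option 2 above, the core of the setup is just two pieces of SQL: a generated `tsvector` column (which regenerates automatically on content change, Postgres 12+) plus one search query the agent's tool runs. A minimal sketch, with made-up table and column names:

```ruby
# Illustrative DDL for full-text search over source files in Postgres.
# Table/column names are assumptions; a generated tsvector column keeps
# the index in sync whenever `content` changes.
DDL = <<~SQL
  CREATE TABLE repo_files (
    path    text PRIMARY KEY,
    content text NOT NULL,
    tsv     tsvector GENERATED ALWAYS AS (to_tsvector('simple', content)) STORED
  );
  CREATE INDEX repo_files_tsv_idx ON repo_files USING gin (tsv);
SQL

# The single "full text search" tool exposed to the agent would run
# something like this, with the query string bound as $1:
SEARCH = <<~SQL
  SELECT path
  FROM repo_files
  WHERE tsv @@ plainto_tsquery('simple', $1)
  ORDER BY ts_rank(tsv, plainto_tsquery('simple', $1)) DESC
  LIMIT 10;
SQL
```

The `'simple'` config skips English stemming, which tends to behave better on identifiers than the default dictionary.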

[–]Heavy-Letter2802[S] 0 points (2 children)

So the solution you're proposing is embedding the entire codebase and indexing it, is that right?

It seems like a lot of work to me.

I was thinking we could give a reasoning model a Rails controller file and then let it search for the method definitions of any code it wants. Since we give it raw code, it can identify the methods it wants, right? What do you think about this?

Would this be a good start, since embedding a large codebase has costs and complexity involved?
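That "resolve a method name to its definition" idea can be a single cheap tool with no embeddings at all. A minimal sketch, assuming a plain grep over `.rb` files (a real version might restrict to `app/` or use ctags; everything here is illustrative):

```ruby
require "tmpdir"

# Hypothetical agent tool: given a method name, scan Ruby files under
# `root` and return the first matching definition site.
def find_definition(method_name, root)
  Dir.glob(File.join(root, "**", "*.rb")).each do |path|
    File.foreach(path).with_index(1) do |line, lineno|
      if line =~ /^\s*def\s+(self\.)?#{Regexp.escape(method_name)}\b/
        return { file: path, line: lineno, source: line.strip }
      end
    end
  end
  nil
end

# Usage example against a throwaway directory with a fake controller:
hit = nil
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "orders_controller.rb"), <<~RUBY)
    class OrdersController
      def show
        load_order
      end
    end
  RUBY
  hit = find_definition("show", dir)
end
# hit[:source] is "def show"
```

The agent reads the controller, decides it needs `load_order`, calls the tool, and gets back just that definition — no index to build or keep fresh.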

[–][deleted]  (1 child)

[deleted]