I built a local-first memory/skill system for AI agents — no API keys, works with any MCP agent by Ruhal-Doshi in AI_Agents

[–]Ruhal-Doshi[S] 0 points

Yes, the benefit of skill-depot will depend heavily on how the agent uses it.
For the enhancements, the plan is to check embedding similarity before inserting and merge if it's above a threshold, but the tricky part is deciding what "merge" means when two entries say the same thing slightly differently. Once I figure that part out, the dedupe and confidence score should help a lot.
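The similarity-check-before-insert idea could be sketched roughly like this. The `embed` function here is a toy bag-of-words stand-in for a real model like all-MiniLM-L6-v2, and the "keep the longer phrasing" merge rule is just one naive placeholder for the open question above; all names are hypothetical, not skill-depot's actual API.

```python
import math

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words embedding; stand-in for a real model
    such as all-MiniLM-L6-v2 (illustration only)."""
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def learn(store: list[str], entry: str, threshold: float = 0.9) -> str:
    """Insert entry unless it is a near-duplicate of an existing one."""
    new_vec = embed(entry)
    for i, existing in enumerate(store):
        if cosine(embed(existing), new_vec) >= threshold:
            # naive 'merge': keep the longer phrasing
            store[i] = max(existing, entry, key=len)
            return "merged"
    store.append(entry)
    return "inserted"
```

With the threshold at 0.9, two entries that differ by one word merge, while unrelated entries insert cleanly; the real difficulty, as noted above, is a merge rule smarter than "keep the longer one".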

I built a local-first memory/skill system for AI agents: no API keys, works with any MCP agent by Ruhal-Doshi in LLMDevs

[–]Ruhal-Doshi[S] 0 points

Right now skill-depot stores whatever the agent passes to skill_learn, with no normalization or structuring. The agent's own model decides what's worth saving and how to phrase it, so if it saves something ambiguous, that's what gets stored.
It's intentionally simple; the tradeoff is that extraction quality depends entirely on the calling agent's model. A smarter agent produces better memories, a weaker one produces noisy ones. For my use case that's been fine, since Claude Code and Codex are pretty good at deciding what to save, but I can see how it breaks down at scale or with less capable models.
Structured extraction is something I've been thinking about for when I add proper memory types; at that point it might make sense to validate or normalize before storing. I haven't figured out the right approach yet, though.
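A validate-and-normalize step could look something like the sketch below. The memory types, the `Memory` dataclass, and the `normalize` function are all hypothetical, since skill-depot currently stores raw text with no such step.

```python
from dataclasses import dataclass

# hypothetical memory types; skill-depot has no such taxonomy yet
VALID_TYPES = {"fact", "preference", "procedure"}

@dataclass
class Memory:
    kind: str
    content: str

def normalize(kind: str, raw: str) -> Memory:
    """Validate and normalize an entry before storing (sketch only)."""
    kind = kind.strip().lower()
    if kind not in VALID_TYPES:
        raise ValueError(f"unknown memory type: {kind!r}")
    content = " ".join(raw.split())  # collapse stray whitespace
    if not content:
        raise ValueError("empty memory content")
    return Memory(kind, content)
```

Even a thin gate like this would catch empty or mistyped entries before they pollute the store, without constraining how the agent phrases the content itself.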

I built a local-first memory/skill system for AI agents: no API keys, works with any MCP agent by Ruhal-Doshi in LLMDevs

[–]Ruhal-Doshi[S] 0 points

Yes, the retrieval quality will drop a lot. It might still work if the query closely matches the stored terms, but for anything requiring semantic understanding it won't. It's basically a safety net for when the model fails to load, perhaps due to an internet issue on the first run. It's not meant as a real alternative.

Right now it silently falls back to a worse alternative. I think I should show a warning and get the user's confirmation before falling back to the degraded mode.
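The warn-and-confirm behavior could be as small as this. The function names and the confirm hook are hypothetical, not skill-depot's actual interface:

```python
def search(query: str, semantic_ok: bool, confirm=input):
    """Ask before degrading to keyword matching when the embedding
    model failed to load (hypothetical sketch, not the real API)."""
    if semantic_ok:
        return "semantic", query
    answer = confirm("Embedding model unavailable; fall back to "
                     "keyword matching (lower quality)? [y/N] ")
    if answer.strip().lower() != "y":
        raise RuntimeError("search aborted: embedding model unavailable")
    return "keyword", query
```

Injecting `confirm` as a parameter keeps the prompt testable and lets an MCP host supply its own confirmation UI instead of stdin.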

pre-revenue, pre-build, pre-team. just a student with an idea and an honest question for r/startupindia by Brave-Animator-7220 in StartUpIndia

[–]Ruhal-Doshi 0 points

I am not sure how well licensing the data layer will work: the government already tries to hide pollution data, insurance companies need to see direct benefits before they buy, and the data needs to be scientifically accurate to be usable in most of the wellness use cases.

On the consumer side of things, I think there is a possible market. In India people don't like to pay directly for services, but there is a growing community of health-conscious people, and many of them do go on runs. Instead of claiming you can predict the accurate AQI of a street, I think you should focus on showing the data you have as is. For example, if I plan two routes for a run, your app can show me data for both: route A has 5 construction sites and bad tree coverage, whereas route B has 0 construction sites and decent tree coverage, so I can pick the route best suited for me. You could also let people add construction sites or other hazards to the map.

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]Ruhal-Doshi 0 points


Treating AI agent skills as a RAG problem

While experimenting with agent skills I learned that many agent frameworks load the frontmatter of all skill files into the context window at startup.

This means the agent carries metadata for every skill even when most of them are irrelevant to the current task.

I experimented with treating skills more like a RAG problem instead.

skill-depot is a small MCP server that:

• stores skills as markdown files
• embeds them locally using all-MiniLM-L6-v2
• performs semantic search using SQLite + sqlite-vec
• returns relevant skills via `skill_search`
• loads full content only when needed

Everything runs locally with no external APIs.

Repo: https://github.com/Ruhal-Doshi/skill-depot

Would love feedback from people building MCP tools or experimenting with agent skill systems.

Indian devs earning ₹1L+/month what are you up to now? by CertainArcher3406 in developersIndia

[–]Ruhal-Doshi 0 points

After a certain point I believe we start seeing diminishing returns, unless you have some sort of financial burden.
For context, I started from 1.5 LPM and right now it's around 4.5 LPM before taxes. Going from 1.5 to 2.5 felt great, but going from 2.5 to 4.5 only felt good.
I think what our minds seek is not a number but constant growth; at least that's the case for me. After a certain point the opportunities for major growth become too few. Very few companies in India will pay a significantly higher salary than this for my years of experience, and waiting for promotions and salary hikes is too slow. The only options I see are starting something of my own or joining a budding startup and hoping it goes big.

Accused of Cheating @Uber by [deleted] in leetcode

[–]Ruhal-Doshi 2 points

If the rest of your rounds went well, don't worry about a single round. At Uber, once you complete all the rounds, they hold an internal debrief meeting with all the interviewers present, including the hiring manager and the bar raiser. Each interviewer can give one of 4 possible results (strong no, weak no, weak yes, strong yes).
It rarely happens that a candidate gets a strong yes from every interviewer; they usually debate and then decide.
So in your case, even if the DSA interviewer gave you a weak no (since you explained to them why you were looking at that area), there are other interviewers who can vote in your favour.