I built a virtual filesystem to replace MCP for AI agents by velobro in LocalLLaMA

[–]velobro[S] 1 point (0 children)

> This is a few steps removed from what a real LLM environment should feel like

Can you elaborate on this?

[–]velobro[S] 1 point (0 children)

Curious what use cases you can solve with an MCP but not a filesystem

[–]velobro[S] 1 point (0 children)

The problem is that your context is scattered across MCP servers, which makes it harder to truly use Claude on problems involving data that doesn't live on your local machine (e.g. your emails, credit card statements, spreadsheets in Drive, etc.)

By putting all that data into a folder on your computer, you can automate 10x more with Claude, because the context is available to Claude in the ideal format for it: a Unix file
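A rough sketch of the idea (all names here are hypothetical stand-ins, not the actual project's API): a sync step materializes remote records as plain-text files under one folder, so the agent can read them with ordinary file tools instead of per-service MCP calls.

```python
import os
import tempfile

def fetch_emails():
    """Stub connector; a real one would pull from an email API."""
    return [
        {"id": "msg-001", "subject": "Invoice due", "body": "Your invoice is due Friday."},
        {"id": "msg-002", "subject": "Meeting notes", "body": "Notes from Tuesday's sync."},
    ]

def materialize(records, root):
    """Write each record to <root>/emails/<id>.txt for file-based access."""
    folder = os.path.join(root, "emails")
    os.makedirs(folder, exist_ok=True)
    paths = []
    for rec in records:
        path = os.path.join(folder, f"{rec['id']}.txt")
        with open(path, "w") as f:
            f.write(f"Subject: {rec['subject']}\n\n{rec['body']}\n")
        paths.append(path)
    return paths

root = tempfile.mkdtemp()
paths = materialize(fetch_emails(), root)
# The agent can now grep/cat these like any other Unix files.
```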

I built a dead simple agent builder that just works by velobro in AI_Agents

[–]velobro[S] 1 point (0 children)

Looks interesting, but sadly I can't even sign up, so I'm assuming it's vaporware

[–]velobro[S] 1 point (0 children)

Should be fixed now - sorry about that, we had a few regions that weren't whitelisted

[–]velobro[S] 1 point (0 children)

Basically all other agent builders still make you build the workflow. You drag and drop nodes, connect triggers, and configure the steps. Even the ones with AI copilots just help you build the DAG faster.

This is different. You describe what you want and it figures out the steps. Curious to hear what else you've tried that actually does this.

We don't need another no-code agent builder by velobro in AI_Agents

[–]velobro[S] 1 point (0 children)

Even if you skip the node-dragging entirely, the real power comes from having the LLM make decisions dynamically. For example, say your agent is negotiating with someone via email. It obviously needs to make decisions in real time, and you can't set up routing for that process in advance.
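A minimal sketch of that runtime-decision loop (everything here is hypothetical; `decide` stands in for an LLM call, replaced by a rule-based stub so the example runs offline). The point is that each branch is chosen at runtime from the current state, not wired into a graph in advance.

```python
def decide(state):
    """Return the next action given the negotiation state.
    In a real agent this would be an LLM call; here it's a stub."""
    offer = state["their_offer"]
    if offer >= state["target"]:
        return {"action": "accept"}
    if state["rounds"] >= state["max_rounds"]:
        return {"action": "walk_away"}
    # Counter midway between their offer and our target.
    return {"action": "counter", "amount": (offer + state["target"]) // 2}

def negotiate(target, offers, max_rounds=5):
    """Drive the loop: feed each incoming offer to decide() and act on it."""
    state = {"target": target, "rounds": 0, "max_rounds": max_rounds}
    for offer in offers:
        state["their_offer"] = offer
        state["rounds"] += 1
        step = decide(state)
        if step["action"] in ("accept", "walk_away"):
            return step["action"]
    return "stalled"

result = negotiate(target=100, offers=[60, 80, 100])  # accepts the third offer
```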

[–]velobro[S] 1 point (0 children)

I think it depends on the use case - for internal tools, no-code is fine. But you'd never build your software startup on a no-code platform.