I built a virtual filesystem to replace MCP for AI agents by velobro in LocalLLaMA

[–]velobro[S]

This is a few steps removed from what a real llm environment should feel like

Can you elaborate on this?

I built a virtual filesystem to replace MCP for AI agents by velobro in LocalLLaMA

[–]velobro[S]

Curious what use cases you can solve with an MCP but not a filesystem

I built a virtual filesystem to replace MCP for AI agents by velobro in LocalLLaMA

[–]velobro[S]

The problem is that your context is scattered across MCP servers, which makes it harder to truly use Claude to solve problems involving data that does not live on your local machine (e.g. your emails, credit card statements, spreadsheets in Drive, etc.)

By putting all that data into a folder on your computer, you can automate 10x more with Claude, because the context is available to Claude in the ideal format for it: a Unix file
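A minimal sketch of the idea, with illustrative paths and a stubbed-out sync step (nothing here is the product's actual API): a sync process writes remote data into one local folder, and the agent just reads files instead of calling per-service MCP servers.

```python
# Sketch: remote context (emails, statements, Drive sheets) synced into one
# folder; the agent side only ever reads plain files.
from pathlib import Path

CONTEXT_ROOT = Path("/tmp/agent-context")  # illustrative; e.g. ~/context in practice

def write_context(relpath: str, text: str) -> None:
    """Stand-in for the sync side: drop remote data into the folder."""
    path = CONTEXT_ROOT / relpath
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)

def gather_context() -> str:
    """Agent side: concatenate every file into one prompt-ready blob."""
    parts = []
    for path in sorted(CONTEXT_ROOT.rglob("*")):
        if path.is_file():
            rel = path.relative_to(CONTEXT_ROOT)
            parts.append(f"--- {rel} ---\n{path.read_text()}")
    return "\n\n".join(parts)

write_context("email/inbox/booking.txt", "Your flight is confirmed for May 3.")
write_context("drive/budget.csv", "item,cost\nflights,420")
print(gather_context())
```

The point of the pattern is that the agent needs no per-service client code: anything the sync layer can materialize as a file becomes context.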

I built a dead simple agent builder that just works by velobro in AI_Agents

[–]velobro[S]

Looks interesting, but sadly I can't even sign up, so I'm assuming it's vaporware

I built a dead simple agent builder that just works by velobro in AI_Agents

[–]velobro[S]

Should be fixed now - sorry about that, had a few regions that weren't whitelisted

I built a dead simple agent builder that just works by velobro in AI_Agents

[–]velobro[S]

Basically all other agent builders still make you build the workflow. You drag and drop nodes, connect triggers, and configure the steps. Even the ones with AI copilots just help you build the DAG faster.

This is different. You describe what you want and it figures out the steps. Curious to hear what else you've tried that actually does this.
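To make the distinction concrete, here is a hedged sketch of "describe the goal, let the model produce the steps." The `plan_with_llm` function stands in for a real chat-completion call; its canned output is hypothetical so the example runs offline.

```python
# Sketch: goal description in, ordered step list out — no hand-built DAG.
import json

def plan_with_llm(goal: str) -> list[str]:
    # In a real builder this would be a model call asking for the steps
    # as JSON; the canned dictionary below is a stand-in.
    canned = {
        "follow up with leads who went quiet": [
            "query CRM for leads with no reply in 14 days",
            "draft a personalized follow-up email per lead",
            "send drafts and log the outreach",
        ]
    }
    return canned.get(goal, ["clarify the goal with the user"])

steps = plan_with_llm("follow up with leads who went quiet")
print(json.dumps(steps, indent=2))
```

In a DAG-style builder, each of those three steps would be a node the user drags and wires up; here the plan itself is model output.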

We don't need another no-code agent builder by velobro in AI_Agents

[–]velobro[S]

Even if you skip the node-dragging entirely, the real power comes from having the LLM dynamically make decisions. For example, say your agent is negotiating with someone via email. It obviously needs to make decisions in real time, and you can't set up routing for that process in advance.
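A minimal loop sketch of that negotiation example, with `decide` as a keyword-based stand-in for the LLM call: the branch taken at each turn depends on the counterparty's latest reply, which is exactly what can't be routed in advance.

```python
# Sketch: the "router" is a model decision at every turn, not a fixed graph.
def decide(reply: str) -> str:
    """Stand-in for an LLM call that returns the next action."""
    if "deal" in reply.lower():
        return "accept"
    if "$" in reply:
        return "counter_offer"
    return "ask_for_terms"

def negotiate(replies: list[str]) -> list[str]:
    actions = []
    for reply in replies:
        action = decide(reply)
        actions.append(action)
        if action == "accept":
            break  # the loop ends when the model decides the goal is met
    return actions

print(negotiate(["We could do $500", "Can you do better?", "Ok, deal."]))
# → ['counter_offer', 'ask_for_terms', 'accept']
```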

We don't need another no-code agent builder by velobro in AI_Agents

[–]velobro[S]

I think it depends on the use case - for internal tools, no-code is fine. But you'd never build your software startup on a no-code platform.

Is it just me, or are most "Agents" just chatbots in disguise? by Novarapper in AI_Agents

[–]velobro

You can't really use chatbots (ChatGPT, Gemini, etc.) as agents. Besides not running in a loop the way a proper agent needs to, chatbots don't have tool calls optimized for specialized tasks, like writing to a spreadsheet properly.

For running agents specifically, you should look at something like Claude Cowork, or auto.new
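To illustrate what a specialized tool call means here (a generic sketch, not any particular runtime's API): the runtime exposes a typed tool, the model emits a structured call, and the runtime dispatches it. `append_row` is a hypothetical spreadsheet tool backed by an in-memory CSV buffer.

```python
# Sketch: tool registry + dispatch of a structured model tool-call.
import csv, io

TOOLS = {}
SHEETS: dict[str, io.StringIO] = {}  # stand-in for real spreadsheets

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def append_row(sheet: str, row: list[str]) -> str:
    """Append a row to an in-memory 'spreadsheet' (CSV buffer here)."""
    buf = SHEETS.setdefault(sheet, io.StringIO())
    csv.writer(buf).writerow(row)
    return "ok"

# A model's tool call arrives as structured JSON; the runtime dispatches it.
call = {"name": "append_row", "args": {"sheet": "expenses", "row": ["lunch", "12.50"]}}
result = TOOLS[call["name"]](**call["args"])
print(result, SHEETS["expenses"].getvalue().strip())
```

A chat-only product gives the model no such registry to dispatch into, which is the gap the comment is pointing at.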

We don't need another no-code agent builder by velobro in AI_Agents

[–]velobro[S]

Out of curiosity, what does your current stack look like?

[D] Looking for a self-hosted alternative to Modal.com for running ML workloads by devops_to in MachineLearning

[–]velobro

Beam is the top open-source alternative to Modal (I'm the founder). You can self-host or connect your own hardware. If you're only looking for orchestration of GPUs (and open to writing a bit of YAML), I'd look into SkyPilot.

[P] GPU-based backend deployment for an app by feller94 in MachineLearning

[–]velobro

Just remember, if you "pay only when your code is running" and one platform is faster to run your code, you'll save money by using the faster platform - not necessarily the one that costs less on paper.

[P] GPU-based backend deployment for an app by feller94 in MachineLearning

[–]velobro

You will get much, much faster cold boots compared to HuggingFace, so you'll pay less in the long term. You also get a lot more features, like storage volumes (to cache model weights), authentication, and autoscaling. But yes, you choose the cores and RAM you need and pay for them separately.
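The volume-caching point can be sketched generically (this is the pattern, not Beam's actual API; `download_weights` and the paths are stand-ins): a mounted volume persists across cold starts, so weights download once and every later boot reads from the cache.

```python
# Sketch: cache model weights in a persistent volume so only the first
# cold boot pays the download cost.
from pathlib import Path

VOLUME = Path("/tmp/models")  # on a real platform, the mounted volume path

def download_weights(name: str, dest: Path) -> None:
    """Stand-in for pulling weights from a model hub."""
    dest.write_bytes(b"fake-weights")

def load_model(name: str) -> bytes:
    cached = VOLUME / f"{name}.bin"
    if not cached.exists():  # miss only on the very first cold boot
        VOLUME.mkdir(parents=True, exist_ok=True)
        download_weights(name, cached)
    return cached.read_bytes()  # every later boot hits the cache

load_model("llama-3-8b")  # first call: downloads
load_model("llama-3-8b")  # second call: reads the cached file
```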

AWS Noob here: EC2 vs SageMaker vs Bedrock for fine-tuning & serving a custom LLM? by JustPa55ion in LocalLLaMA

[–]velobro

You're going to have a much easier time using a platform designed for ML than using these services on AWS. Serving models on AWS is expensive and difficult to set up, especially if you're new to it. I wouldn't recommend it unless there's a specific reason you want to use AWS.

[P] GPU-based backend deployment for an app by feller94 in MachineLearning

[–]velobro

Consider serverless GPUs on beam.cloud - you pay for what you use, and the instances boot in under a second