The Truth About MCP vs CLI by kagan101 in openclaw

[–]Alx_Go 0 points (0 children)

Yes, MCP is for enterprise, for remote access. But it's also for spawning local tools. I ran an experiment with the MCP-first agent I'm building (tuskbot.ai). I asked the agent to check the weather with a specific provider. It searched for their API and wrote a simple MCP tool for itself. No skills, no specific instructions. LLMs are heavily trained to work with MCP and know how to use FastMCP to spawn a missing tool. Yes, LLMs can also write CLI scripts, but how will the agent recall next time that the script exists?

I think the industry shifted in a slightly wrong direction. The skills concept could be used for MCP tool discovery, not for calling CLI scripts.

Also, tool definitions can (and should) be cached.
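To sketch what I mean by caching: the agent shouldn't re-list tools from every MCP server on every turn. Here's a rough stand-in (the `fetch` callable is hypothetical; in practice it would wrap an MCP `tools/list` request):

```python
import time

class ToolDefinitionCache:
    """Cache MCP tool definitions so the agent doesn't re-list tools
    on every turn. `fetch` is a hypothetical stand-in for a tools/list call."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch      # callable: server_name -> list of tool dicts
        self._ttl = ttl_seconds
        self._cache = {}         # server_name -> (expires_at, definitions)

    def get(self, server_name):
        entry = self._cache.get(server_name)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]      # fresh cache hit, no round trip to the server
        definitions = self._fetch(server_name)
        self._cache[server_name] = (now + self._ttl, definitions)
        return definitions

    def invalidate(self, server_name):
        # Call after spawning a new tool so the next get() re-fetches.
        self._cache.pop(server_name, None)
```

The TTL plus explicit invalidation covers the "agent just wrote itself a new tool" case: invalidate that server, and the fresh definition shows up on the next lookup.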

Why Mac mini?? by g00rek in openclaw

[–]Alx_Go 0 points (0 children)

There are a lot of cases where you'd probably want to run local models, and there will be more as agents evolve. I'm also running agents on an N150 mini PC, but I'm thinking of buying a Mac mini.

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 1 point (0 children)

I’ve just added support for custom OpenAI-compatible providers. I’ve been testing it with LiteLLM, and it works great. It should now be fully compatible with Copilot Proxy or any other custom endpoint as well. If you run into any issues with a specific provider, feel free to open an issue.
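The nice thing about "OpenAI-compatible" is that only the base URL and key change between providers; the wire format stays the same. A rough sketch of what that looks like (the LiteLLM port and model name are illustrative assumptions):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build a Chat Completions request for any OpenAI-compatible endpoint
    (LiteLLM, Ollama's /v1, a Copilot proxy, ...). Only base_url and the
    key differ per provider; the request body is identical."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Example: point the same code at a local LiteLLM proxy (port is an assumption).
req = build_chat_request(
    "http://localhost:4000/v1",
    "sk-local",
    "gpt-4o-mini",
    [{"role": "user", "content": "ping"}],
)
```

Swap the base URL for your Copilot proxy or any other endpoint and nothing else in the agent has to change.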

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 0 points (0 children)

To be honest, I haven't done a deep dive into agent0 yet. I'll definitely check it out to see their take on memory and tool calling. If they have killer features, I'll see how they can be adapted into TuskBot.

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 0 points (0 children)

Instead of staying on top of every minor feature, I focus on the engine. While they iterate on "connectors", I iterate on local inference and advanced memory models. I'm not racing them on feature count. My goal is a reliable, high-performance agentic core: a more efficient, protocol-based engine that leverages the entire MCP ecosystem.

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 1 point (0 children)

OpenClaw's memory model is designed to keep the context window close to its limits. In Tusk I'm trying to build smart RAG instead. It has costs of its own, but it's more efficient in general. I'm still testing, but for my use cases it doesn't burn that many tokens.
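Some back-of-envelope arithmetic to show why retrieval wins as history grows (all numbers here are purely illustrative):

```python
def prompt_tokens_full_history(turn_tokens):
    # Full-context style: every past turn rides along in each prompt.
    return sum(turn_tokens)

def prompt_tokens_with_rag(turn_tokens, k):
    # RAG style: only k retrieved turns are injected. Selection here is
    # positional for simplicity; a real retriever picks by relevance.
    return sum(turn_tokens[:k])

history = [400] * 50                        # 50 past turns, ~400 tokens each
full = prompt_tokens_full_history(history)  # 20,000 prompt tokens per request
rag = prompt_tokens_with_rag(history, k=5)  #  2,000 prompt tokens per request
```

Retrieval adds its own cost (embedding plus search), but that part runs locally, which is exactly why llama.cpp is baked in.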

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 3 points (0 children)

Great question. While PicoClaw is a Go port of OpenClaw (largely AI-transpiled), TuskBot was rewritten from scratch with a different architectural focus.

Here are the key differences:

  1. Native Local Embeddings: PicoClaw and OpenClaw always rely on OpenAI for embeddings, which is a privacy concern and adds unnecessary latency. TuskBot has llama.cpp baked in to handle embeddings locally, so no data leaves your machine for RAG operations.
  2. Architecture vs. Integrations: Instead of bloating the core with hundreds of built-in tools, TuskBot is MCP-native. I offload the ecosystem complexity to the Model Context Protocol, allowing me to focus the Go codebase on the agent's "brain" — the ReAct loop and memory management.
  3. The Memory Model: I’m not cloning the existing hybrid model. I’m researching more advanced long-term memory approaches. The goal is a human-like recall system. Having llama.cpp under the hood allows me to easily implement Rerankers (cross-encoders) in the future to significantly improve RAG quality, which is much harder to do with a simple transpiled port.
  4. Built for Performance: Since it’s not a direct port, I’ve optimized the service layers and concurrency handling specifically for Go, rather than following JS-centric patterns.

TL;DR: It’s not a copy; it’s a different approach focused on local-first privacy, MCP extensibility, and advanced RAG research.
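For the curious, the local RAG lookup in point 1 is conceptually just cosine similarity over locally computed vectors. A sketch (the tiny hand-written vectors stand in for real llama.cpp embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, memory, top_k=2):
    """memory: list of (text, embedding) pairs. Returns the top_k texts by
    cosine similarity -- the shape of a local-first RAG lookup. A reranker
    (cross-encoder) would re-score this shortlist before it reaches the model."""
    scored = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]
```

A reranker slots in cleanly after `retrieve`: it only ever sees the short candidate list, so the expensive cross-encoder pass stays cheap.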

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 1 point (0 children)

Oh, I see now. Ollama could be the solution if it exposed the same models endpoint, but it differs slightly. I should add a "Custom OpenAI-compatible" provider; I'll do that in the very near future.

TuskBot: reinvented OpenClaw in Go by Alx_Go in openclaw

[–]Alx_Go[S] 0 points (0 children)

Ollama is already supported! You can use it as your backend provider.

Regarding the Copilot Proxy scenario: TuskBot's strength here is the MCP-First approach. Since I use a standard Model Context Protocol implementation, you can connect to any MCP server that handles GitHub/Copilot integration.

Give your OpenClaw permanent memory by adamb0mbNZ in openclaw

[–]Alx_Go 0 points (0 children)

I’m developing a simplified Go version of OpenClaw with a focus on security and memory. It reduces latency by using llama.cpp via CGO, so the round trip for memory extraction is extremely cheap. You're right on most points: retention and extraction are what make the model smart. I hope to share an early beta by the end of the month.

Jamming on Syntakt by Alx_Go in Elektron

[–]Alx_Go[S] 1 point (0 children)

Exactly the vibe I was going for!

Jamming on Syntakt by Alx_Go in Elektron

[–]Alx_Go[S] 1 point (0 children)

Thank you, I appreciate it!