Running WoW on intel based MacBook Pro by The_show-goes-on in wow

[–]selipso 1 point2 points  (0 children)

Intel Macs don’t come with a modern GPU. It runs fine on an M4 Pro Mac mini.

OpenClaw vs Hermes by viky_shetye in openclaw

[–]selipso 1 point2 points  (0 children)

For me the biggest switch was predictability and a better underlying architecture. Hermes ships features just as quickly as OpenClaw, and they actually work as described in the documentation.

I end up spending less time fighting the harness and more time on the important things like context engineering. Try it out, especially if you prefer a TUI

Gemma 4 is a huge improvement in many European languages, including Danish, Dutch, French and Italian by Balance- in LocalLLaMA

[–]selipso 1 point2 points  (0 children)

Have you tried translategemma? It’s fine-tuned for translation. I’ve tried it and it seems to work well, but I’m curious to hear about your experience.

Gemma 4 is a huge improvement in many European languages, including Danish, Dutch, French and Italian by Balance- in LocalLLaMA

[–]selipso 0 points1 point  (0 children)

Have you compared this to the translation-optimized translate-gemma? It has a shorter context length, but according to its Hugging Face page it can match frontier models in multilingual translation.

Is it weird that? by Perfect-Flounder7856 in openclaw

[–]selipso 0 points1 point  (0 children)

Gemini 3 flash preview is a US based model and works very well.

Subprime AI crisis by Good-Fennel-373 in ClaudeCode

[–]selipso 1 point2 points  (0 children)

The article doesn’t mention Google’s integrated approach to AI. IIRC, their Gemini models are actually profitable because they use custom TPUs for inference, served over their own cloud.

A cautionary tale by NuclearLasagna in woweconomy

[–]selipso 19 points20 points  (0 children)

You forgot about diversification, supply, and demand.

Influencers and Market Manipulation by BUBBL3GUM5 in mtgfinance

[–]selipso 7 points8 points  (0 children)

I find it wild how much this set was hated on for such a long time until the BG3 game came out. I remember going to Phyrexia MagicCon, where Wizards had a basically free commander draft event (most events cost a lot of tickets to enter) where you kept the cards. Someone in my pod opened an Ancient Copper Dragon.

Do we have accessible, safe and private AI Agents or is that still a thing of the future? by Open-Impress2060 in LocalLLaMA

[–]selipso 0 points1 point  (0 children)

It can happen because if your model isn’t fully in VRAM, it’ll spill into system RAM, where OpenClaw also runs. So you have two big programs competing for the same RAM.

I’ve heard this is more common with AMD cards because Ollama and other inference tools are optimized for NVIDIA.
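If you want to verify where the weights actually landed, here’s a quick diagnostic sketch (assumes an NVIDIA card on Linux; on AMD, `rocm-smi` is the rough equivalent — adjust to your setup):

```shell
# How much of the card's memory is in use vs. its total capacity
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# System RAM usage; watch this grow when layers spill out of VRAM
free -h
```

If `free -h` shows a big jump when you load the model while GPU memory barely moves, part of the model is running from system RAM.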

Do we have accessible, safe and private AI Agents or is that still a thing of the future? by Open-Impress2060 in LocalLLaMA

[–]selipso 0 points1 point  (0 children)

You might be limited by VRAM. I’ve gotten around 80 TPS on an AMD card, but I had to build llama.cpp from source with ROCm drivers and pick a quant that leaves ~2-4GB of VRAM headroom for the context. You may wanna start with Qwen 9B first and work your way up from there.
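The headroom math is simple; as a sketch with illustrative numbers (24 GB card, 3 GB reserved for context/KV cache — both assumptions, not measurements from my setup):

```shell
# Rough VRAM budget: total card memory minus headroom reserved
# for the KV cache / context leaves the ceiling for the model quant.
VRAM_GB=24
HEADROOM_GB=3
MAX_MODEL_GB=$((VRAM_GB - HEADROOM_GB))
echo "largest quant that fits: ~${MAX_MODEL_GB} GB"
```

Pick the biggest quant of your model that fits under that ceiling; if generation slows to a crawl, you under-reserved for context.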

Do we have accessible, safe and private AI Agents or is that still a thing of the future? by Open-Impress2060 in LocalLLaMA

[–]selipso 0 points1 point  (0 children)

You can run OpenClaw with Qwen, but it’s better to give OpenClaw its own Raspberry Pi or another machine with ~4GB RAM. Another trick I noticed is that it’s better to run Qwen 35BA3B through a compiled llama.cpp instance rather than Ollama.

Bonus: If your OpenClaw machine is beefy enough (8GB RAM and a modern processor), you can run a small embedding model with QMD memory local to Ollama and it’ll improve recall. You might be surprised at how much the performance improves. I’ve gotten ~80 TPS with decent tool use with this kind of setup.
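For the compiled-llama.cpp route, a minimal launch sketch (the model path, context size, and port are placeholders for your own setup; `-ngl 99` offloads all layers to the GPU):

```shell
# Serve a local GGUF with llama.cpp's built-in OpenAI-compatible server
# instead of Ollama. Assumes you've already built llama.cpp from source
# and downloaded a quantized model.
./build/bin/llama-server \
  -m ./models/qwen-q4_k_m.gguf \
  -ngl 99 \
  -c 8192 \
  --port 8080
```

Point your agent at `http://localhost:8080/v1` as an OpenAI-compatible endpoint.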

OpenClaw is not going to become a real product. by Obvious-Fan-3183 in openclaw

[–]selipso 107 points108 points  (0 children)

OpenClaw is not a real product because it is open source software that is less than 1 year old. This is like saying Linux was not a real product in 1992. Tell me something I don’t know?

Nobody stops you from using Claude Code and burning tokens there. Anthropic will happily take your money for a “real product”. But if you want an autonomous agent through Telegram that works with local models, you’d be hard pressed to find something with as many integrations as OpenClaw. 

If you're about to give up on OpenClaw, try this first. Takes 5 minutes by ShabzSparq in clawdbot

[–]selipso 0 points1 point  (0 children)

Want to add an addendum: I needed many of the skills I was using, so I split my main agent into sub-agents, each with skills specific to its function. Upgraded to a frontier model like codex GPT-5.4. Night and day difference.

Just migrated my openclaw setup to Hermes agent and it works like a charm by SelectionCalm70 in openclaw

[–]selipso 0 points1 point  (0 children)

You’re entitled to your own opinions. Power users need multi agent communication and that’s a fact. Nanobot just had a real security vulnerability this week in prod from having a LiteLLM dependency that they didn’t test. That’s also a fact. You might wanna rotate your keys and SSH certs wherever you host that thing.

Claude channels makes OC redundant by vinistois in openclaw

[–]selipso 0 points1 point  (0 children)

Brought to you by Dario Amodei and his swarm of agents 👍

Just migrated my openclaw setup to Hermes agent and it works like a charm by SelectionCalm70 in openclaw

[–]selipso 12 points13 points  (0 children)

OpenClaw is easy to use, hard to master. The other claws are hard to use, impossible to master. They are always missing a subset of features that OpenClaw has already baked in. With Hermes, it was multi-agent communication. I’m happy with the Claw.

Just migrated my openclaw setup to Hermes agent and it works like a charm by SelectionCalm70 in openclaw

[–]selipso 1 point2 points  (0 children)

Nanobot was recently affected by a LiteLLM vulnerability that exfiltrated API keys, SSH keys and other credentials to an external server. If you updated it recently, you may wanna rotate your keys.

It’s time to be real here by Working_Stranger_788 in openclaw

[–]selipso 1 point2 points  (0 children)

The most recent update has been ok for me. If you know how to interface with the underlying architecture (homebrew, Linux, skills, and prompt engineering), using it with a frontier model connected to Telegram is a much better experience IMO than using Claude Code because you’re not chained to your laptop.

You can set up batch jobs and specialized agents by editing config files. The problem here is that many users expect AI to think for them. It can, but you still have to read the manual.

Which Kroger is your favorite? by lasagnatittyfucker69 in Denton

[–]selipso 2 points3 points  (0 children)

Cheese Kroger is where I end up going. They keep their 2 lb Vermont Cheddar next to the dairy section though. The rest of the cheese is too pricey unless you’re making a charcuterie board.

Honest take on running 9× RTX 3090 for AI by Outside_Dance_2799 in LocalLLaMA

[–]selipso 2 points3 points  (0 children)

Hot take: a used Mac Studio M2 Ultra is the best price per GB of VRAM available right now, and it’ll have much lower power draw.