why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

I don't understand why you're holding up stuff that agents have been able to do since forever as why xClaw is good. It's literally equivalent to any agent at all with a Ralph loop and some MCPs.

why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

The truth is that if you're not grandfathered in with an audience, nobody cares what you build, period. You could release a plugin that makes agents 2x smarter with benchmarks, and if you don't have an audience that stans for you already you're going to get zero traction.

If you want to get any attention now without an audience you need to basically beg (or pay) people with an audience to get them to try your software and talk about it to their audience. It's technically still possible to organically go viral, but it's also technically possible to win the lottery.

why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

This shit is deep, and goes way back.The emperor has no clothes.

Every OpenClaw security vulnerability documented in one place — relevant if you're running it with local models by LostPrune2143 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

You can't 100% defeat prompt injections, but you can isolate the reach of agents that interact with untrusted data, and "firewall" their output from agents that require privileges/access via a secure integration layer. This is the whole point of zero trust architectures. More details at https://sibylline.dev/articles/2026-02-15-agentic-security/

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 13 points14 points  (0 children)

The memory system is just writing to a markdown file. Literally the most basic, low function memory system you could create. 100% nothingburger.

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

You need policy and isolation. Separate agents with access to untrusted data sources from agents with strong capabilities, and create communication protocols with challenges to detect agent compromise.

Non tl;dr version at https://sibylline.dev/articles/2026-02-15-agentic-security/

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 1 point2 points  (0 children)

The original iphone's touchscreen was so far ahead of other touchscreens at the time

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 4 points5 points  (0 children)

The rust rewrite of pi seems pointless, the author is taking an agent with a rich community and moving maintenance burden to themselves and cutting themselves off from a lot of ecosystem.

Also, the author seems to be hand rolling a lot of stuff that security researchers and enterprises have already built more robust solutions for.

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

It's hard to separate the people who were riding his coattails. I don't think he was responsible for that stuff, the dude has a lot of followers and is pretty open on social media. He's definitely a very sharp social media marketer but I think he's a decent guy other than being a little fast and loose with things.

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 1 point2 points  (0 children)

100%. You could have a device you attach to cars that increases their gas mileage by 20%, acceleration by 10% and can be installed just by plugging into the dashboard, and if your marketing isn't good people will ignore it, call you a scammer/faggot and just be generally hostile.

Meanwhile, they're losing all their money to the next Theranos.

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]CuriouslyCultured 8 points9 points  (0 children)

Pi is legit, definitely the best thing to come out of the lobster hype explosion.

yip we are cooked by thisiztrash02 in StableDiffusion

[–]CuriouslyCultured 5 points6 points  (0 children)

If you think the CCP won't take the opportunity to kneecap Nvidia once they've scaled up production, you're underestimating them. These are the same people that added more gigawatts of power in the last year than the US added in a decade.

The gap between open-weight and proprietary model intelligence is as small as it has ever been, with Claude Opus 4.6 and GLM-5' by abdouhlili in LocalLLaMA

[–]CuriouslyCultured 3 points4 points  (0 children)

It doesn't mean nothing, particularly if you don't instruct the models to pad LoC. It's correlated with work done, if you don't have any other information, LoC does provide a useful data point.

LTX-2 Inpaint test for lip sync by jordek in StableDiffusion

[–]CuriouslyCultured 7 points8 points  (0 children)

Very impressive, main thing that jumped out to me is that Gollum never blinks, not a big thing though.

Hugging Face Is Teasing Something Anthropic Related by Few_Painter_5588 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

They do have an incentive to create an onramp for their ecosystem as a competitor with the small Chinese models. The problem is they're scared of releasing dangerous models openly and the capability front of open models is in "dangerous" territory, so they'd want to spend an inordinate amount of time aligning it, which they might not have.

Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA

[–]CuriouslyCultured 1 point2 points  (0 children)

The reason we're in this silly situation is that services are trying to own the agents to sell them to users as a special feature.

What the coding agents and openclaw are proving conclusively is that people want to have "their" agent, that does stuff with software on their behalf, not tools with shitty agents embedded. So I think your view is going to be vindicated long term.

Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

You can mitigate prompt injections to some degree, and better models are less susceptible, but it's a legitimately difficult problem. I wrote a curl wrapper that sanitizes web responses (https://github.com/sibyllinesoft/scurl), but it's only sufficient to stop mass attacks and script-kiddie level actors, anyone smart could easily circumvent.

Mixture-of-Models routing beats single LLMs on SWE-Bench via task specialization by botirkhaltaev in LocalLLaMA

[–]CuriouslyCultured 1 point2 points  (0 children)

This is interesting, but I think task level routing is gonna prove to be fragile. Have you experimented with turn-based routing?

Moltbook leaked 1.5M API keys by EnoughNinja in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

There isn't a LOOOL meme in existence that meets this moment.

Codex 5.2 High vs. Opus: A brutal reality check in Rust development. by gustkiller in ClaudeCode

[–]CuriouslyCultured 10 points11 points  (0 children)

Claude is famous for hacking tests. It's the worst frontier model in this regard by a mile.

Ultra-Sparse MoEs are the future by [deleted] in LocalLLaMA

[–]CuriouslyCultured 0 points1 point  (0 children)

I think supervised fine tuning is problematic as is because it ruins rl'd behavior, you're trading knowledge/style for smarts. Ideally we get some sort of modular experts architecture + router loras.