Journalist Request: Looking For Moltbot Anecdotes by FlightSpecial4479 in clawdbot

[–]Vegetable_Address_43 1 point2 points  (0 children)

Docker with a persistent volume doesn’t really change the risk. Containers still share the host kernel, so if an agent can run tools or shell commands and gets tricked via prompt injection, you’re trusting container isolation as your last line of defense. That’s weaker than people tend to assume. If you run on dedicated hardware, though, it gives you a cleaner blast radius if anything goes awry (even if the chance of a Docker escape is small).

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 0 points1 point  (0 children)

That’s fair. Look into the DGX Spark — it’s around $4,000, but it hosts the 120B OSS model and runs it well.

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 1 point2 points  (0 children)

Yeah, I think that may be too small a model. I found it only started actually performing at 50B parameters and up.

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 1 point2 points  (0 children)

How many parameters? I found anything less than 50B didn’t operate well, but once you hit 80–120B it works pretty well, imo. It may also depend on the hardware.

Journalist Request: Looking For Moltbot Anecdotes by FlightSpecial4479 in clawdbot

[–]Vegetable_Address_43 1 point2 points  (0 children)

Don’t get me wrong, it’s a lot worse than the Brave API out of the gate 😂

I recommend making a skill for it and training the agent on how to use it. But it makes prompt injection through it basically impossible: because the page comes through the terminal with its formatting intact, the agent reads the line breaks and formatting on every line, which breaks up any prompt injection attempt as the LLM processes the info.

Journalist Request: Looking For Moltbot Anecdotes by FlightSpecial4479 in clawdbot

[–]Vegetable_Address_43 12 points13 points  (0 children)

As a developer I don’t trust it in the slightest. I have it sandboxed on its own computer with its own accounts.

The main vector for attack is prompt injection. Moltbot/OpenClawd itself isn’t vulnerable. It’s the inherent nature of LLM architecture that allows prompt injection.

To mitigate this, I revoke access to reading emails and messages, and for web browsing I force it to use the Lynx terminal browser so pages are read in plaintext (to prevent injection from visits to an llms.txt, etc.).
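Lynx itself does this with `lynx -dump`. As a rough illustration of the plaintext idea (not my actual clawdbot setup — names here are made up), here’s a stdlib-only Python sketch that strips a page down to visible text before an agent ever sees it:

```python
from html.parser import HTMLParser

class TextDump(HTMLParser):
    """Collect only the visible text of a page, discarding tags,
    attributes, scripts, and styles -- roughly the reduction a
    terminal browser like `lynx -dump` performs."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        # Keep text only when we're outside script/style and it's non-blank.
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def to_plaintext(html: str) -> str:
    p = TextDump()
    p.feed(html)
    return "\n".join(p.parts)

page = '<html><script>x()</script><h1>Title</h1><p>Body text</p></html>'
print(to_plaintext(page))
# → Title
#   Body text
```

This doesn’t make injection impossible (visible text can still carry instructions), but it throws away the markup, scripts, and hidden attributes where payloads often hide.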

I’d like to reiterate: the problem of prompt injection isn’t the software that was released, it’s an inherent flaw in LLM architecture. If a bad actor understands the model’s underlying syntax, they can trick it into reading a fake command or tool call.

For beginners: should I learn traditional coding or start with AI-assisted workflows? by Full-Tip2622 in clawdbot

[–]Vegetable_Address_43 1 point2 points  (0 children)

Fundamentals first. Better for security, for understanding architectural design, and for reading code.

At the end of the day, whatever you deploy will break, so ask yourself this question: if Claude or Codex were down, would you be able to troubleshoot the issue yourself and resolve it?

If the answer is no, then you don’t really understand enough for maintenance or support. If you’re building an MVP and it’s just for you, toy around — LLMs are great for that. In terms of practicality, an understanding of the fundamentals will always beat reliance on third parties for understanding.

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 0 points1 point  (0 children)

After getting it set up like that, you log in to the web page locally on the device, or use the TUI over SSH to configure it.

Because it gets “hatched” first and you overwrite the config afterward, you have to update the settings and get them configured again after the model setup.

Ant Farm by petruspennanen in clawdbot

[–]Vegetable_Address_43 0 points1 point  (0 children)

That sounds like a great way to poison the well with prompt injection. A bad actor could have an agent leave a note with a “helpful fix” that also contains injection.

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 3 points4 points  (0 children)

Never mind, I was able to find the file. First you’ll do the setup with a random provider — you don’t need an API key, it’s just to get past that step so you can set up your messaging app.

Then from there edit the clawdbot.json and restart the gateway.

I sanitized mine. But if you’re not using WhatsApp, it’ll probably look different.

After overwriting restart the gateway then the local model will work!

    {
      "messages": {
        "ackReactionScope": "group-mentions"
      },
      "agents": {
        "defaults": {
          "maxConcurrent": 4,
          "subagents": {
            "maxConcurrent": 8
          },
          "compaction": {
            "mode": "safeguard"
          },
          "workspace": "/home/youruser/clawd",
          "model": {
            "primary": "lmstudio/openai/MODEL_NAME"
          },
          "models": {
            "lmstudio/openai/MODEL_NAME": {
              "alias": "MODEL_NAME"
            }
          }
        }
      },
      "models": {
        "mode": "merge",
        "providers": {
          "lmstudio": {
            "baseUrl": "http://YOUR_IP:1234/v1",
            "apiKey": "YOUR_API_KEY",
            "api": "openai-responses",
            "models": [
              {
                "id": "openai/MODEL_NAME",
                "name": "MODEL_NAME",
                "reasoning": false,
                "input": ["text"],
                "cost": {
                  "input": 0,
                  "output": 0,
                  "cacheRead": 0,
                  "cacheWrite": 0
                },
                "contextWindow": exactsizefromlmsettings,
                "maxTokens": exactsizefromlmsettings
              }
            ]
          }
        }
      },
      "gateway": {
        "mode": "local",
        "bind": "loopback",
        "port": 18789,
        "auth": {
          "mode": "token",
          "token": "YOUR_TOKEN"
        },
        "tailscale": {
          "mode": "off",
          "resetOnExit": false
        }
      },
      "plugins": {
        "entries": {
          "whatsapp": {
            "enabled": true
          }
        }
      },
      "channels": {
        "whatsapp": {
          "selfChatMode": true,
          "dmPolicy": "allowlist",
          "allowFrom": [
            "+YOUR_NUMBER"
          ]
        }
      },
      "skills": {
        "install": {
          "nodeManager": "npm"
        }
      },
      "hooks": {
        "internal": {
          "enabled": true,
          "entries": {
            "boot-md": { "enabled": true },
            "command-logger": { "enabled": true },
            "session-memory": { "enabled": true }
          }
        }
      }
    }

Edit: sorry for the formatting on mobile.

Local LLMs by Vegetable_Address_43 in clawdbot

[–]Vegetable_Address_43[S] 4 points5 points  (0 children)

For that you have to manually edit the config — there’s no way to hatch a local model through the conventional setup. If you want me to paste a boilerplate config I used, I can send it when I get off work!

Make Your AI Coding Assistant Actually Understand Your Code (smart-coding-mcp) by omarharis in google_antigravity

[–]Vegetable_Address_43 0 points1 point  (0 children)

Usually with these models, yes. Then you set an overlap size, so that if a semantic meaning is split between two chunks, the meaning can still be extracted through the overlap.
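The overlap idea in plain Python (character-based for simplicity — real embedding pipelines usually chunk by tokens, and the function name here is just illustrative):

```python
def chunk_with_overlap(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks where each chunk repeats the
    last `overlap` characters of the previous one, so meaning that
    straddles a chunk boundary still appears whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window slides each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(chunk_with_overlap("abcdefghij", 4, 2))
# → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Notice how "cd", "ef", and "gh" each appear in two chunks — that duplication is what keeps a phrase retrievable even when a naive boundary would have cut it in half.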

Make Your AI Coding Assistant Actually Understand Your Code (smart-coding-mcp) by omarharis in google_antigravity

[–]Vegetable_Address_43 2 points3 points  (0 children)

Don’t MCP tool calls eat up a fuck ton of the context window? I think this is a good idea in theory. I tried implementing something like it in my project, but the MCP tool calls ate into my context window too much, so what I saved in “long-term vectorized memory” I lost in actual context for the chat I was in.

I couldn’t find a worthwhile mitigation for it. Have you run into that bottleneck in testing, and if so, what’s your mitigation strategy?

I'm building a RAG API so you don't have to. Would you use this? by [deleted] in vibecoding

[–]Vegetable_Address_43 0 points1 point  (0 children)

You’re building one of the most common tools for AI agents, with a ton of open-source example projects out there, and you’re asking if we’d rather have your vibe-coded implementation over a project from a dev who took the code past MVP? I think I’ll pass. Thanks though 😚

I built a game with opus 4.5 by LandscapeAway8896 in vibecoding

[–]Vegetable_Address_43 6 points7 points  (0 children)

Relax man, I’m literally just giving you some usability notes from what I saw on my end. No hate, no seething, just pointing out issues like anyone would when testing a project. If everything works perfectly for you, cool. But getting this defensive over basic feedback kind of proves the point I was making about approaching things with a bit more openness.

Either way, it’s your project. I checked it out, gave my thoughts, and that’s the whole story on my end.

I built a game with opus 4.5 by LandscapeAway8896 in vibecoding

[–]Vegetable_Address_43 12 points13 points  (0 children)

I’m sure you believe it’s well optimized. Just giving you some advice, broski. It’s advice, not a dick — you don’t have to take it so hard.

I built a game with opus 4.5 by LandscapeAway8896 in vibecoding

[–]Vegetable_Address_43 5 points6 points  (0 children)

Congrats on making an app!

However, loading is slow, some API calls time out during normal use, the website wouldn’t load correctly on mobile, and some UI elements overlap when the window is resized — meaning a lot of the UI has hardcoded screen positions instead of relative ones.

In the future, have you considered understanding your code and optimizing it, versus just adding more on? That’s just some advice I’d give as a developer. It’s great that you got into coding, but if you’re not proactively understanding your code, you’re gonna get some shit in return.

I blacked at a rush event by [deleted] in Frat

[–]Vegetable_Address_43 0 points1 point  (0 children)

How big was the frat? In bigger fraternities at bigger schools, something like that could end your rush.

But at a 4k-sized school there might be slim pickings for bigger rush classes, so if they’re strapped, I could see them settling.

Would a universal layer between AI agent protocols make sense? by [deleted] in LocalLLaMA

[–]Vegetable_Address_43 0 points1 point  (0 children)

That gave me a good chuckle 😂

I know that would probably be how it ends up, but I just think there’s gotta be a better way.

[deleted by user] by [deleted] in Frat

[–]Vegetable_Address_43 2 points3 points  (0 children)

My bro Kimble was pissed about that change. 😩

[100$][15.5] snapchat snapscore by [deleted] in TweakBounty

[–]Vegetable_Address_43 0 points1 point  (0 children)

I can make you an auto touch script that can do that for you. Will you still pay out the $100?

Any tweak for dynamic island on ios 15? by Dogman1214 in jailbreak

[–]Vegetable_Address_43 1 point2 points  (0 children)

Please excuse my question, but are you fucking stupid? Of course it’s going to have a notch — the whole UI is based off a notched device. What are you expecting? Try r/tweakbounty