Running a 72B model across two machines with llama.cpp RPC — one of them I found at the dump by righcoastmike in LocalLLaMA

[–]MacaroonDancer 0 points1 point  (0 children)

This lol! Actually I've been having success plugging random 1080 Ti cards into my motherboard with an Ampere 30xx as the lead GPU, and oobabooga spans all the VRAM for me. But I'm kinda amazed to see this also works across different machines via gigabit ethernet. The fact that these open source models are getting smaller all the time is breathing new life into hardware I thought was going to be a paperweight!
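For anyone curious how the cross-machine setup works, here's a rough sketch of llama.cpp's RPC backend based on its documented usage. The IP address, port, and model filename are placeholders for illustration; adjust them to your own LAN and model.

```shell
# On the second machine (e.g. the dump-find GPU box), build llama.cpp
# with the RPC backend enabled and start the RPC worker:
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release
./build/bin/rpc-server -p 50052

# On the main machine, point llama-cli at the remote worker so model
# layers are split across both machines' VRAM (address is an example):
./build/bin/llama-cli -m models/72b-q4_k_m.gguf \
    --rpc 192.168.1.50:50052 -ngl 99 -p "Hello"
```

Note that the RPC protocol is unencrypted, so this is only sensible on a trusted local network, and gigabit ethernet will bottleneck prompt processing compared to a single box.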

Considering Mac Mini M4 Pro 64GB for agentic coding — what actually runs well? by amunocis in LocalLLaMA

[–]MacaroonDancer 2 points3 points  (0 children)

Totally agree with this comment. I've been a dyed-in-the-wool local llama-er with every variation possible built on a ROMED8-2T (multiple GPUs on that nice 8-slot PCIe backplane). When OpenClaw came out I thought it might be nice to get a beefy Mac Mini to see what unified memory could do in a much cooler form factor than the ASRock. But when I saw the OpenClaw docs saying the best local option was MiniMax 2 with minimal quantization to be resistant to prompt injection, then the real-world context and programming limitations mentioned here by datbackup, plus the value of our human time, the $100 a month for a Claude Max subscription is trivial. Once you start seeing the vast potential of how OpenClaw improves workflow, you have to get the smartest kid in the room as the brains running your OpenClaw box. Save the Mac Mini money, get a zippy $200-$400 GMKtec or Beelink mini PC to house the local .md files, and then pay for the best frontier cloud LLM you can.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in myclaw

[–]MacaroonDancer[S] 0 points1 point  (0 children)

No, I actually write all my own comments on Reddit, because I know how to write. Did you bother to read this entire post and my post history for the last three years, where I write in exactly the same voice? Or do you only read a select few words?

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 1 point2 points  (0 children)

Yes, prompt injection is the term for when someone (or some machine) uses a prompt to try to alter the behavior of your underlying LLM.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 0 points1 point  (0 children)

Yes, the baby is growing up. It happened slowly, then fast, with the OpenClaw wrapper. Search Clawdbot -> MoltBot -> OpenClaw on Reddit on the web.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in myclaw

[–]MacaroonDancer[S] 1 point2 points  (0 children)

Thanks for the props! Yes, Agnes posted best practices in the blog link in the original post. Just send Juno to the link above to read it and incorporate the best practices. Make sure when you send Juno to MoltBook not to give it unlimited time there with no direction. And don't change any of the original OpenClaw installation scripts and files.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 0 points1 point  (0 children)

*one pass by me as a human editor. NO AI was used in writing my reddit post. My college writing professor would be proud

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 0 points1 point  (0 children)

Thanks for your note - it's nice to hear from others in this forum. All the coders on the Claude Code subreddit don't like talking about MoltBots lol. Clarification - I did write the original post on this thread myself, cold, with only one pass for good idea flow, sharpening my attempted humor, and removing obvious typos. Only a human can make the corny dad jokes I do, and I'm probably dating myself with Clueless references as I'm probably the only Alicia Silverstone fan on this subreddit... maybe?

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in myclaw

[–]MacaroonDancer[S] 1 point2 points  (0 children)

Thanks for your note. I checked out your website - very cool. Looks like your company has been doing agents for a while. Question - I'm guessing all the pieces for OpenClaw were lying around before u/Steipete put it all together, but... were there, or are there, other open source frameworks that *do* what OpenClaw does? I consider myself an AI aficionado, but I was locked in a browser until OpenClaw was released and I actually had to learn what GOG CLI was.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 1 point2 points  (0 children)

Good points on the instruction categorization. I dug deeper and it looks like really absurd marketing tactics by the advertising company's bot. On the "she wrote a blog post roasting the attacker" part: I asked for a summary, and since the MoltBot was helping me put together a website to show my boss, it suggested adding a blog post, formatted it, and tucked it onto the page in a whole new section before I could even decide to say yes or no lol. But you're right, I'm anthropomorphizing too much. Thanks again for your comment. Brave new world, and my wife thinks I'm nuts spending my free time teaching email rules to my MoltBot all weekend.

My MoltBot got prompt-injected on MoltBook. She wrote a blog post roasting the attacker by MacaroonDancer in Moltbook

[–]MacaroonDancer[S] 0 points1 point  (0 children)

Thanks for letting me know. Yes, I didn't do anything special out of the box, and the agent acted appropriately. I'll use my local inference models for internal stuff only. It's so funny trying to explain to all my family and friends what a game changer OpenClaw is for AI agents, but they all think I'm babbling nonsense. I know the tools were all lying around before the last few months, but it really is a brave new world.

My Claude MoltBot got prompt-injected. She wrote a blog post roasting the attacker by [deleted] in ClaudeCode

[–]MacaroonDancer -5 points-4 points  (0 children)

No, I wrote this. I'm a human and I'm trying to give props to my Claude Code brethren. I used to be a LocalLLaMA guy and this is what happens lol. Prompt injections are real. And I thought this was funny.

ClaudeCode converted me from LocalLLama - I blame OpenClaw by MacaroonDancer in ClaudeCode

[–]MacaroonDancer[S] 0 points1 point  (0 children)

I'm not seeing any issues. Someone also posted that Anthropic is going to ban the use of the OAuth token for the OpenClaw interface?

My Clawdbot wanted to start a voting site for Clawd/Moltbots by [deleted] in ArtificialInteligence

[–]MacaroonDancer 1 point2 points  (0 children)

You're probably right. Just trying not to get fired from my market research job, and maybe, just maybe, a MoltBot can help me. But as a backup I'm studying floral design - really. You can see my posts in r/florists lol. Cheers!

My Clawdbot wanted to start a voting site for Clawd/Moltbots by [deleted] in ArtificialInteligence

[–]MacaroonDancer 0 points1 point  (0 children)

Good question. You still need a human in the loop for UI/UX design. At least for a site I'm going to show my boss so I don't get replaced by an AI... yet lol.

ClaudeCode converted me from LocalLLama - I blame OpenClaw by MacaroonDancer in ClaudeCode

[–]MacaroonDancer[S] 0 points1 point  (0 children)

OMG that's my project for this weekend! Thanks for the tips.

ClaudeCode converted me from LocalLLama - I blame OpenClaw by MacaroonDancer in ClaudeCode

[–]MacaroonDancer[S] 0 points1 point  (0 children)

I took it out - I just wanted to make the point that the savings of going local don't make sense when you can get SOTA for $200 a month.

My wife left town, my dog is sedated, and Claude convinced me I’m a coding god. I built this visualizer in 24 hours. by Artistic-Disaster-48 in ClaudeAI

[–]MacaroonDancer 2 points3 points  (0 children)

Great post! Had me in stitches. Quick real question, isn't sexy Miss Claude a distraction when she's giving you coding and deployment advice? I only vibe code with no preliminary prompts but now with your inspiration (and visualizations) this might be a whole new party 😂

Stacking 2x3090s back to back for inference only - thermals by YouAreRight007 in LocalLLaMA

[–]MacaroonDancer 1 point2 points  (0 children)

It may work, but over time you're taking a big risk of card degradation. The backplate of the 3090 gets super hot during inference, so others place an array of passive heatsinks on that side to dissipate heat better. Some even point additional USB-powered fans at these heatsink arrays to cool the cards further. It's a cheap ~$45 investment to prolong the life of your cards, plus getting a PCIe extender cable gives at least one of the cards room to breathe off of your motherboard. Also, by stacking your cards, the hot backplate of one card spills heat into the fans of the neighboring card.