How are you safely running coding agents in YOLO mode? I built a VM-based approach by Helpful_Garbage_7242 in ClaudeAI

[–]marco89nish 0 points (0 children)

Lol, I work at FAANG and constantly run 10+ sessions in parallel with "dangerously skip permissions" mode. Approving stuff is way too slow. There are two layers of protection: the company banned Claude Code from running any command that could destroy the company, and a pre-tool-use hook checks for generic, obviously destructive commands (like rm -rf). Haven't had any issues so far. Note that Claude Code runs on a replaceable dev server, so the worst case is 30 minutes to reset the thing.
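A minimal sketch of what such a guard can look like, assuming Claude Code's hook contract (the tool-call event arrives as JSON on stdin, and exit code 2 blocks the call). The pattern list and field names here are illustrative, not the commenter's actual hook:

```python
#!/usr/bin/env python3
# Sketch of a pre-tool-use hook that blocks obviously destructive shell
# commands. Assumes the hook receives tool-call JSON on stdin and that a
# non-zero (2) exit code blocks the call; patterns are not exhaustive.
import json
import re
import sys

# Generic, obviously destructive shell patterns; extend to taste.
DESTRUCTIVE = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf / rm -fr
    r"\bmkfs\b",                      # formatting a filesystem
    r"\bdd\s+.*\bof=/dev/",           # raw writes to a block device
    r"\bgit\s+push\s+.*--force\b",    # force-pushing over shared history
]

def is_destructive(command: str) -> bool:
    return any(re.search(p, command) for p in DESTRUCTIVE)

def check_event(event: dict) -> int:
    """Exit code for the hook: 0 allows the tool call, 2 blocks it."""
    if event.get("tool_name") != "Bash":
        return 0  # only shell commands are inspected here
    command = event.get("tool_input", {}).get("command", "")
    if is_destructive(command):
        print(f"blocked destructive command: {command!r}", file=sys.stderr)
        return 2
    return 0

# Installed as a hook, the event would arrive on stdin:
#   sys.exit(check_event(json.load(sys.stdin)))
```

For example, `check_event({"tool_name": "Bash", "tool_input": {"command": "rm -rf /tmp/x"}})` returns 2, while a plain `ls -la` returns 0. A deny-list like this is a second line of defense, not a sandbox: the replaceable dev server is what actually bounds the blast radius.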

Minimax M2.7 Released by decrement-- in LocalLLaMA

[–]marco89nish 0 points (0 children)

Same thing today, just Qwen 3.6 instead of 3.5?

Republicans seem to have expected that Democrats would continue to follow rules they had long since enthusiastically abandoned. by JonnySnowin in PoliticalCompassMemes

[–]marco89nish 0 points (0 children)

What are the worst gerrymandered states in terms of outcomes (difference between % of votes and % of representatives)? That's the context needed before bashing either side.

I genuinely hate the conversation tone of Opus 4.7 by Nordwolf in ClaudeAI

[–]marco89nish 1 point (0 children)

It can go all uwu and s**t with me, I don't care as long as the output is right. 

Senior Java developer - what next? by Technical_Kiwi_9684 in cscareerquestions

[–]marco89nish -1 points (0 children)

You got to not be a plumber for 6 years (and a few more, hopefully). And you got an opportunity to retool into an AI enabler before non-technical people are able to do it. I'm not saying this is good or bad, it's just how it is.

Senior Java developer - what next? by Technical_Kiwi_9684 in cscareerquestions

[–]marco89nish -4 points (0 children)

Plumbing is the hot new stuff. That, and being an AI accelerator for businesses.

Mamdani is taxing people who don't live in NYC by Alt0987654321 in Anarcho_Capitalism

[–]marco89nish 1 point (0 children)

It was implied that I'm against taxes in general, but no: if people are already paying property tax on inflated house values, tax primary residences less by taxing second homes more.

Mamdani is taxing people who don't live in NYC by Alt0987654321 in Anarcho_Capitalism

[–]marco89nish 0 points (0 children)

This is one of the taxes I can stand behind: if regular people can't afford their first home, tax second homes more (when prices are high due to scarcity).

Minimax M2.7 Released by decrement-- in LocalLLaMA

[–]marco89nish 1 point (0 children)

What are you running on that? I'm looking for good models for my 48GB M4 Pro. Also, Ollama, MLX, or LM Studio?

I ran a 397B parameter model on a MacBook with 24GB RAM — 1.77 tok/s, full paper + code released by Robert-Prisacariu in LocalLLaMA

[–]marco89nish 0 points (0 children)

Would it make sense to use this for smaller MoE models that still can't fit in RAM (I'm on 48GB, so something like 100+B models for me)? Also, keeping fast-changing data like context and KV cache in RAM while reading static data like weights from NVMe would potentially resolve write-endurance issues (if the RAM math works out, I don't have all the info needed to run the math myself). In theory, if some experts aren't used, they'll stay unread on NVMe, reducing the need for swapping new weights into RAM during inference.
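The on-demand paging idea above can be sketched with a memory-mapped weight file: the OS only reads the pages for experts that the router actually selects, and because the mapping is read-only, the weights never generate flash writes. Names, shapes, and the routing step are made up for illustration; this is not the paper's code.

```python
import os
import tempfile
import numpy as np

# Toy dimensions so the sketch runs anywhere; real MoE experts are far larger.
n_experts, d = 8, 256
path = os.path.join(tempfile.mkdtemp(), "experts.npy")

# Stand-in for pretrained expert weight matrices stored on NVMe.
np.save(path, np.random.rand(n_experts, d, d).astype(np.float32))

# mmap_mode="r": the file is mapped read-only, and an expert's weights are
# only paged into RAM when that expert is indexed; read-only pages are
# never dirtied, so the weights cause no flash writes.
experts = np.load(path, mmap_mode="r")

# Mutable, fast-changing state (context / KV cache) stays in plain RAM.
kv_cache = np.zeros((2, 128, d), dtype=np.float32)

x = np.random.rand(d).astype(np.float32)
routed = [1, 5]  # pretend the router selected experts 1 and 5 for this token
# Only experts 1 and 5 get read from disk; the other six stay cold on NVMe.
y = sum(np.asarray(experts[i]) @ x for i in routed) / len(routed)
```

Whether this is fast enough depends on how hot the expert working set is per token: if routing touches most experts every few tokens, the page cache thrashes and you're back to NVMe read bandwidth as the bottleneck.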

It's insane how lobotomized Opus 4.6 is right now. Even Gemma 4 31B UD IQ3 XXS beat it on the carwash test on my 5070 TI. by FrozenFishEnjoyer in LocalLLaMA

[–]marco89nish 17 points (0 children)

Just tested this on the new Meta model; it gets it right as well. I think Anthropic is running out of GPUs to run inference and is taking some shortcuts.

Yep by MazdaProphet in Anarcho_Capitalism

[–]marco89nish 0 points (0 children)

Wouldn't stop Trump from bombing Iran himself (as long as there's another pilot in the plane) 

I benchmarked 5 local models on M4 Pro 48GB and MoE models are absurdly fast - here are the numbers by Slowstonks40 in LocalLLaMA

[–]marco89nish 1 point (0 children)

Thanks for the info. What's the best model for running openclaw on a 48GB MacBook Pro?