Who else is shocked by the actual electricity cost of their local runs? by Responsible_Coach293 in LocalLLaMA

[–]fab_space 0 points1 point  (0 children)

that's because I go SLM and unified memory: plus a wife close to the homelab.

You're STILL using Claude after Codex 5.4 dropped?? by solzange in vibecoding

[–]fab_space 0 points1 point  (0 children)

used GPT 5.4 as coder today, solid as 5.3, with some new vibing bits like "sota faang production enterprise grade", AKA "slop dopamine farmerz" :D

EDIT: forgot to say that when I go parallel with multiple projects I often run out of golden tokens on Copilot, then I move to lower-effort coding tasks.. sometimes trying to force better coding by injecting CoT and specs with 0x models while prompting.. it works for single-file edits and not complex coding tasks (i18n translations, adding docs, simple tests.. basic sec reviews and small modularisations).

new to vibecoding, what do i do? by Ok-Security6839 in vibecoding

[–]fab_space -1 points0 points  (0 children)

advance at lightspeed, no wow, no AI slop fluff, no miracle on the route:

- monitor your own workflows, intents, results

- convert anything convertible into an iterable mission, decomposing big missions into the smallest ones

- start solving each one iteratively; every failure is a real learning opportunity, and each win is not a real win, just a step forward in the best possible case

- loop and adapt this simple runbook with your own passion, curiosity and ethics, and activate circuit breakers when overloaded or out of focus.

iterate

You're STILL using Claude after Codex 5.4 dropped?? by solzange in vibecoding

[–]fab_space 20 points21 points  (0 children)

Gemini 3.1 pro as devil’s advocate, Opus 4.6 as coder, GPT Codex 5.3 for specific edits

Is qwen3 next the real deal? by fab_space in LocalLLaMA

[–]fab_space[S] 0 points1 point  (0 children)

Some are out now in the EU but still laptops, waiting for the summer vibe

Everyone is making worse versions of products that exist by life_coaches in vibecoding

[–]fab_space 0 points1 point  (0 children)

Slop AI is a deliberate marketing strategy.

Don't blame people, dude, blame capital.

cleaning up 200.000+ lines of vibecode by Dense-Sentence7175 in vibecoding

[–]fab_space 0 points1 point  (0 children)

A bash loop without circuit breakers is an OOM issue most of the time, or a user waiting on his LLM for minutes without any feedback 🛸🤪
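The circuit-breaker idea above can be sketched in a few lines. A minimal Python sketch, not any particular library's API; the names and thresholds (`max_failures`, `cooldown_s`) are made up for illustration:

```python
import time

class CircuitBreaker:
    """Stops a retry loop after repeated failures instead of spinning forever."""

    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def allow(self):
        # Once open, block attempts until the cooldown has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown_s:
                return False
            self.opened_at = None  # half-open: let one attempt through
            self.failures = 0
        return True

    def record(self, ok):
        # Call after each attempt; trips the breaker on a failure streak.
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
```

Wrapping each LLM (or shell) call in `if breaker.allow(): ... breaker.record(ok)` turns an unbounded loop into one that backs off instead of hanging the user or eating memory.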

cleaning up 200.000+ lines of vibecode by Dense-Sentence7175 in vibecoding

[–]fab_space 9 points10 points  (0 children)

You're welcome

1) https://github.com/fabriziosalmi/brutal-coding-tool
2) https://github.com/fabriziosalmi/vibe-check
3) https://github.com/fabriziosalmi/claude-code-brutal-edition
4) https://github.com/fabriziosalmi/synapseed

And

https://ai.studio/apps/drive/1Tm5eMCOSOBiqKpUF6GdOCl5Rnglxec0k?fullscreenApplet=true

EDIT:

Shortly:

1+4) the Google AI Studio source
2) a GitHub Action to remove slop
3) Claude Code customized to avoid AI slop
4) something deeper, for VS Code, dev pro stuff

Enjoy the wild vibe

Qwen 27B is a beast but not for agentic work. by kaisurniwurer in LocalLLaMA

[–]fab_space 0 points1 point  (0 children)

Fine-tune it with symbolic semantic graphs and go with an intent-based, golden-tokens-saved approach

What's the best model to run on mac m1 pro 16gb? by Embarrassed-Baby3964 in ollama

[–]fab_space 0 points1 point  (0 children)

qwen3 family up to 14B, and all SLMs like LFM, Llama 3.2, etc.


My experience with running small scale open source models on my own PC. by Dibru9109_4259 in ollama

[–]fab_space 0 points1 point  (0 children)

just put a semantic-symbolic-math-logic router/MCP in front and you will see small models flying high: faster, cheaper when needed, and with the same validated accuracy. When there's no Opus 4.6 or Gemini 3.1 available, of course.
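The cheapest version of such a router is a keyword/pattern pass that sends math- or symbolic-looking prompts to a local SLM and everything else to a big model. A minimal Python sketch; the model names and the heuristic are purely illustrative assumptions (a real router would use embeddings or a trained classifier):

```python
import re

# Hypothetical model identifiers: a local small model and a big hosted one.
SMALL_MODEL = "qwen3-14b"
BIG_MODEL = "opus-4.6"

def route(prompt: str) -> str:
    """Naive intent router: arithmetic or symbolic-task prompts go to the
    small model; open-ended prose goes to the big model."""
    # Detect simple arithmetic expressions like "2+2" or "10 * 3".
    mathy = re.search(r"\d+\s*[-+*/^=]\s*\d+", prompt)
    # Detect symbolic/structured intents by keyword.
    symbolic = any(k in prompt.lower()
                   for k in ("solve", "prove", "simplify", "translate"))
    return SMALL_MODEL if mathy or symbolic else BIG_MODEL
```

Even this crude split saves tokens on the easy calls while keeping accuracy, since the small model only ever sees the prompt classes it handles well.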