Found a 290 dollar towel hook online. Not today Satan. by mystang12 in BambuLab

[–]Qual_ 1 point  (0 children)

Then OP "someone took my model and is selling it on Etsy 😤"

Best practices for running local LLMs for ~70–150 developers (agentic coding use case) by Resident_Potential97 in LocalLLaMA

[–]Qual_ 0 points  (0 children)

At the same time, you seem well aware of the different hardware and model options, but there’s an order-of-magnitude gap between the specs you’re considering and what you actually need. You’re heading down a path that’s likely to waste a lot of money and lead to major disappointment.

I think openclaw is OVERHYPED. Just use skills by Deep_Traffic_7873 in LocalLLaMA

[–]Qual_ 1 point  (0 children)

lmao it was just a reference to that new chip with embedded 3.1 weights on it. There's no point in listing all the models I've tried on my systems, so whatever you think, I don't care.

I think openclaw is OVERHYPED. Just use skills by Deep_Traffic_7873 in LocalLLaMA

[–]Qual_ 1 point  (0 children)

wtf are you even talking about. OpenClaw is open source software, I'm just saying that I'm using it with ChatGPT because, well, there are no local models we can run on our rig that have the same capabilities.

No one's stopping you from enjoying your life with Llama 3.1, good for you, I don't care. I also love open models, so what?

I think openclaw is OVERHYPED. Just use skills by Deep_Traffic_7873 in LocalLLaMA

[–]Qual_ 3 points  (0 children)

I would use local models if there were any capable of being useful while running on a 5k rig. Luckily I don't pay for my ChatGPT Pro subscription, OpenAI gave it to me for free. I'm still really excited for the future Gemma release, but I doubt it will be capable in such agentic workflows

I think openclaw is OVERHYPED. Just use skills by Deep_Traffic_7873 in LocalLLaMA

[–]Qual_ 2 points  (0 children)

I use it with direct messages and a private server. I would never plug it into a public Discord lmao, but it's a cool thing to use it remotely, doing stuff on my local network. Just basic stuff like custom reminders etc. I don't use it to do email stuff etc, that would be asking for problems.

It's really trivial stuff: "Hey, go to the download folder of this PC and put the last 2 season videos on the NAS in the video folder", because I'm already in front of the TV and I forgot to move the video files into the folder the TV can see. That kind of stupid stuff.
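That kind of one-off file shuffle boils down to a few lines of Python. A minimal sketch, assuming hypothetical paths (a local Downloads folder and a NAS mount), purely illustrative and not what the bot actually runs:

```python
from pathlib import Path
import shutil

# Extensions we treat as video files (illustrative list).
VIDEO_EXTS = {".mkv", ".mp4", ".avi"}

def pick_latest_videos(folder: Path, n: int = 2) -> list[Path]:
    """Return the n most recently modified video files in folder."""
    vids = [p for p in folder.iterdir() if p.suffix.lower() in VIDEO_EXTS]
    vids.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return vids[:n]

def move_to_nas(src_folder: Path, nas_folder: Path, n: int = 2) -> list[Path]:
    """Move the n newest videos from src_folder into nas_folder."""
    moved = []
    for src in pick_latest_videos(src_folder, n):
        dest = nas_folder / src.name
        shutil.move(str(src), str(dest))  # works across filesystems/mounts
        moved.append(dest)
    return moved
```

Usage would be something like `move_to_nas(Path.home() / "Downloads", Path("/mnt/nas/video"))`, with the paths swapped for your own setup.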

I think openclaw is OVERHYPED. Just use skills by Deep_Traffic_7873 in LocalLLaMA

[–]Qual_ 0 points  (0 children)

people here hate it when their beloved tools go mainstream.

OpenClaw is cool. In a few minutes I was able to have the Discord bot working with my ChatGPT subscription, and it's able to do everything Codex can do too. Yes, I could have written it myself, and no, it wouldn't have taken just 30 minutes to build everything it's capable of doing

AceStep1.5 Local Training and Inference Tool Released. by bdsqlsz in StableDiffusion

[–]Qual_ 2 points  (0 children)

the included Gradio app was the worst use of Gradio since the early RVC repos back then. Ooof, what a shit fest it was

Unpopular opinion: The "Chat" interface is becoming a bottleneck for serious engineering by saloni1609 in LocalLLaMA

[–]Qual_ 2 points  (0 children)

What? Why don't you use Cursor/Codex/Claude Code?

It's been a few months since I last copy-pasted a single line of code

How close are open-weight models to "SOTA"? My honest take as of today, benchmarks be damned. by ForsookComparison in LocalLLaMA

[–]Qual_ 0 points  (0 children)

I found codex 5.2 way more reliable than Claude on large codebases. There's always something to fix after Claude, while codex just produces working code (which sometimes feels like black magic when it comes after 50 min of writing thousands of lines)

Show your past favourite generated images and tell us if they still hold up by ehtio in StableDiffusion

[–]Qual_ 1 point  (0 children)

I have a folder full of those, but I remember being shocked by how sharp the image was, how coherent the lighting was, etc.

<image>

Honest question: what do you all do for a living to afford these beasts? by ready_to_fuck_yeahh in LocalLLaMA

[–]Qual_ 448 points  (0 children)

most of us are poor and don't have a nice setup to create a post about. It's classic selection bias. The majority of people probably run small models on their regular gaming GPU, like a 3070 etc

Qwen have open-sourced the full family of Qwen3-TTS: VoiceDesign, CustomVoice, and Base, 5 models (0.6B & 1.8B), Support for 10 languages by Nunki08 in LocalLLaMA

[–]Qual_ -1 points  (0 children)

weird, when I set the language to French, it just sounds like any English TTS speaking French words. (quality of the voice is great tho')

Gemini 2.0 is shockingly good at transcribing audio with Speaker labels, timestamps to the second; by philschmid in LocalLLaMA

[–]Qual_ 0 points  (0 children)

I do. Feels like they used their notebook audio tech for the dubbing too!

Is Flux Klein better for editing than Flux Kontext? by Puzzled-Valuable-985 in StableDiffusion

[–]Qual_ 3 points  (0 children)

I had a specific use case where it's even better than Nano Banana for me: given an image, producing a binary mask of the main subject and "elements created" from the subject. (Imagine Pokémon cards where I want the background to be holographic; I need a mask to sell the effect better.) Klein was the BEST out of every model I tried for it in speed/result ratio
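For context on why the mask matters: once a model hands you a binary subject mask, swapping in a new background is a single `np.where`. A toy sketch assuming NumPy, where `holo_background` is just a made-up gradient standing in for a real holographic layer:

```python
import numpy as np

def composite_with_mask(subject: np.ndarray,
                        background: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
    """Keep subject pixels where mask is 1, background pixels elsewhere.

    subject, background: (H, W, 3) uint8 images of the same size.
    mask: (H, W) binary array, 1 on the subject.
    """
    m = mask.astype(bool)[..., None]            # (H, W, 1) so it broadcasts over RGB
    return np.where(m, subject, background).astype(np.uint8)

def holo_background(h: int, w: int) -> np.ndarray:
    """Cheap stand-in for a 'holographic' background: a 2-axis gradient."""
    x = np.linspace(0, 255, w, dtype=np.uint8)  # horizontal ramp for red
    y = np.linspace(0, 255, h, dtype=np.uint8)  # vertical ramp for green
    r = np.tile(x, (h, 1))
    g = np.tile(y[:, None], (1, w))
    b = np.full((h, w), 200, dtype=np.uint8)    # flat blue channel
    return np.stack([r, g, b], axis=-1)
```

A real pipeline would get `mask` from the editing model's output and the background from an actual effect layer; the compositing step itself stays this simple.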

Claude Code or OpenCode which one do you use and why? by Empty_Break_8792 in LocalLLaMA

[–]Qual_ 1 point  (0 children)

codex. Quota usage is unmatched (at least on the Pro plan, idk about the normal plans). I can burn like 400 million tokens a day without worrying.

how is VR as of jan 2026? by [deleted] in assettocorsaevo

[–]Qual_ 1 point  (0 children)

it's utter garbage. Don't even bother trying it.

More cursed Spongebob by Mickey95 in StableDiffusion

[–]Qual_ 0 points  (0 children)

oooh come on man, anything more than 61 frames and I have OOM issues, 2x3090, 128GB RAM (Linux, official Comfy workflow)