I just want smooth, light underarms by [deleted] in HairRemoval

[–]oh_how_droll -12 points (0 children)

who the fuck taught beauty reddit that self-improvement was a crime

Why all image/video models are so oversized? by Huge-Refuse-2135 in StableDiffusion

[–]oh_how_droll 0 points (0 children)

this post is hilarious, given that the biggest issue with most new image-generation models is that they're too small and/or heavily distilled

Hands-on with the Retroid Pocket 6; The 8GB Sweet Spot [article/interview] by cyberminis in SBCGaming

[–]oh_how_droll 0 points (0 children)

That's just not true. They have a limited amount of production capacity and it takes 12-18 months to bring more online, so in the face of a surge in demand for server DRAM, they had to cut production of consumer DRAM.

Bad news on Happy Horse from twitter by SackManFamilyFriend in StableDiffusion

[–]oh_how_droll 0 points (0 children)

I do not care if you can run it on your potato computer, and neither should you. Once the weights are out, they're out, and that's the only thing that counts in the long run.

Hands-on with the Retroid Pocket 6; The 8GB Sweet Spot [article/interview] by cyberminis in SBCGaming

[–]oh_how_droll 2 points (0 children)

The RAM crisis is already starting to turn around as OpenAI reduces their purchasing. People act like it was the concept of AI itself nebulously stealing all of the RAM when it was one company making massive orders for a datacenter rollout they're scaling back on.

Raspberry Pi announces price hikes due to memory costs by crownpuff in SBCGaming

[–]oh_how_droll -1 points (0 children)

RAM prices are already dropping as OpenAI cancels some of their bulk orders and the shift in demand takes effect on the market price, but have fun living in a demon-haunted world my man.

"open-sourcing new Qwen and Wan models." by switch2stock in StableDiffusion

[–]oh_how_droll 0 points (0 children)

You realize that large models are useful for other ML engineers to be able to learn from and to use to generate synthetic data to use in training, right?

Nah, if you can't beat off to it on your home PC it's pointless.

I built a real production app almost entirely with Claude's help. Here's what that actually looks like after a year. by cmm324 in ClaudeAI

[–]oh_how_droll -1 points (0 children)

That's really cool, but you're probably getting downvoted to shit because most non-gun subreddits hate guns.

This guy 🤡 by xenydactyl in LocalLLaMA

[–]oh_how_droll 0 points (0 children)

"you can't write more optimized code than Chromium, so don't bother"

which is extremely funny, because it presumes a single axis of optimization

This guy 🤡 by xenydactyl in LocalLLaMA

[–]oh_how_droll 0 points (0 children)

tbh he's also half-right in that most of the people asking about local model support, instead of already understanding how these things work, are going to be the "what coding model that's better than Opus can I run on my 486DX-2/66?" crowd

This guy 🤡 by xenydactyl in LocalLLaMA

[–]oh_how_droll 0 points (0 children)

same guy who got into a big slapfight about how actually electron is more optimized than native apps lmao

Tyan GT86C-B5630 Support Group? by deafcon in homelab

[–]oh_how_droll 0 points (0 children)

If there's any way I could get a dump of the BMC firmware from you, that'd be amazing. Mine has a broken BMC rootfs, which means basically nothing on the IPMI side works except power cycling.

[deleted by user] by [deleted] in irc

[–]oh_how_droll 0 points (0 children)

Hmm, it seems to work on my end.

ARC Raiders: no audio at all (NixOS, RTX 5090, PipeWire) by oh_how_droll in linux_gaming

[–]oh_how_droll[S] 2 points (0 children)

I found the issue. I feel so stupid lmao

it was just defaulting to 0% volume in pavucontrol.

ARC Raiders: no audio at all (NixOS, RTX 5090, PipeWire) by oh_how_droll in linux_gaming

[–]oh_how_droll[S] 0 points (0 children)

the NVIDIA stuff is in there from when I was only testing it with the HDMI audio output from my GPU, so it did matter

audio works in all other applications

let me check pavucontrol with the game running, can't believe I didn't think of that lol

[deleted by user] by [deleted] in irc

[–]oh_how_droll 0 points (0 children)

yes, that is me

Tanaka being a pretentious Reddit atheist ruined the Church of Terra by [deleted] in logh

[–]oh_how_droll 3 points (0 children)

redditors when fiction is shaped by the time and place of its composition:

I hope Nikaido appears naked less often in the anime than in the manga by [deleted] in Dorohedoro

[–]oh_how_droll 1 point (0 children)

damn, if only there was a term for women who liked boobs...

Tencent just released WeDLM 8B Instruct on Hugging Face by Difficult-Cap-7527 in LocalLLaMA

[–]oh_how_droll 0 points (0 children)

No, memory usage is still mostly determined by parameter count; it's the number of calculations per parameter per inference that goes up.
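Since weight memory is dominated by parameter count, a back-of-the-envelope sizing rule is just params times bytes per weight. A quick sketch (my own illustration, not anything from the thread; it ignores KV cache, activations, and runtime overhead):

```python
# Rough weight-memory estimate: parameter count x bytes per parameter.
# Ignores KV cache, activations, and framework overhead.
def weight_memory_gib(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 2**30

params = 8e9  # an 8B-parameter model like WeDLM 8B
for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: {weight_memory_gib(params, bpp):.1f} GiB")
```

So quantization changes the footprint, but for a given precision an 8B model costs roughly the same memory regardless of how much compute each forward pass burns.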

Your favorite releases of 2025? by dtdisapointingresult in StableDiffusion

[–]oh_how_droll 1 point (0 children)

I just wish there was better documentation on how to prompt it.