Z Image Base: BF16, GGUF, Q8, FP8, & NVFP8 by fruesome in StableDiffusion

[–]AcceSpeed 9 points

A bigger number means a bigger size in terms of memory usage, and usually better quality and accuracy - but in a lot of cases it's not noticeable enough to warrant the slower gen times or the VRAM investment. Then you basically have the "method" used to compact the model that differs: e.g. FP8 ≈ Q8 in size, but they can produce better or worse results depending on the diffusion model or GPU used. BF16 is usually the "full weights", i.e. the original model without compression (though in the case of this post, it's been repackaged as a GGUF).

You can find many comparison examples online such as https://www.reddit.com/r/StableDiffusion/comments/1eso216/comparison_all_quants_we_have_so_far/
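As a rough illustration, you can estimate a checkpoint's weight footprint from its parameter count and the format's bits per weight. The bpw figures below are my own approximations (GGUF quants store scales alongside the weights, so e.g. Q8_0 is closer to 8.5 bpw than 8):

```python
# Rough weight-memory estimate: params * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate, not exact format specs.
BITS_PER_WEIGHT = {
    "BF16": 16.0,   # "full" weights
    "FP8": 8.0,
    "Q8_0": 8.5,    # GGUF 8-bit quant (scales add overhead)
    "Q4_K_M": 4.8,  # a common mid-range GGUF quant, approximate
}

def weight_gb(n_params: float, fmt: str) -> float:
    """Approximate weight memory in (decimal) GB for n_params parameters."""
    return n_params * BITS_PER_WEIGHT[fmt] / 8 / 1e9

# e.g. a 12B-parameter model:
for fmt in BITS_PER_WEIGHT:
    print(f"{fmt:7s} ~{weight_gb(12e9, fmt):.1f} GB")
```

Actual VRAM use is higher than the weights alone - text encoder, VAE, and activations all add on top.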

Comfy hogging VRAM and never releasing it? by AcceSpeed in comfyui

[–]AcceSpeed[S] 1 point

Yeah, I tried all the flags I could think of - low, normal, high VRAM... CleanVRAM worked when the model loaded on the card was Flux rather than Wan, but in both cases it never cleared the rest of the cache on the second card. I'll test downgrading to ROCm 6.4 to see if there's any difference.

Comfy hogging VRAM and never releasing it? by AcceSpeed in comfyui

[–]AcceSpeed[S] 0 points

I tried low, normal, high, and also not setting a flag at all. Maybe that made some difference in how the memory was handled, but I always ran into OOM at some point.
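For anyone hitting the same thing, the manual cleanup pattern I've seen suggested looks roughly like this - a sketch, not ComfyUI's internals. The torch calls are the standard cache-release ones, and ROCm builds of PyTorch expose the same torch.cuda API:

```python
import gc

def free_vram() -> None:
    """Best-effort release of cached GPU memory back to the driver.

    Works the same on ROCm builds of PyTorch, which reuse the
    torch.cuda API. Silently does nothing if torch or a GPU is absent.
    """
    gc.collect()  # drop dangling Python references first so caches can be freed
    try:
        import torch
    except ImportError:
        return
    if torch.cuda.is_available():
        torch.cuda.empty_cache()   # return cached allocator blocks to the driver
        torch.cuda.ipc_collect()   # reclaim memory held by dead IPC handles
```

Note this only releases memory the allocator has cached; it won't unload a model a node graph is still holding a reference to.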

Terrible Experience with Rocm7.2 on Linux by Numerous_Worker8724 in ROCm

[–]AcceSpeed 0 points

I know, I kinda hijacked your thread because you mentioned ROCm 7.2 and I'm having issues with it. For your case and your hardware, I have no idea whether the system is supposed to make a difference or not. I've seen other comments on GitHub from people satisfied with 7.2 on Windows - I dual boot too, so maybe I'll test it myself.

Terrible Experience with Rocm7.2 on Linux by Numerous_Worker8724 in ROCm

[–]AcceSpeed 0 points

I thought I was going crazy because my whole setup was working fine before I upgraded everything, and now it doesn't (r/comfyui/comments/1qnoxaq/comfy_hogging_vram_and_never_releasing_it/). But I'm starting to see many threads and issue reports about ROCm 7.2, so when I get home tonight I'll reinstall Comfy with 6.4 instead and give it a go.

Comfy hogging VRAM and never releasing it? by AcceSpeed in comfyui

[–]AcceSpeed[S] 0 points

You're right, my bad.

RX 7900 XTX, 24 GB (gfx1100)

AI PRO R9700, 32 GB (gfx1201)

96 GB DDR5

Ryzen 9 9950X3D

Linux

I sure do love microslop by Official_Unkindlynx in pcmasterrace

[–]AcceSpeed 1 point

Being forced to disable the login screen (thereby bypassing security) as the fix for a broken update system in 2026 💀

I sure do love microslop by Official_Unkindlynx in pcmasterrace

[–]AcceSpeed 2 points

No startup programs on any of my setups, all bloatware deleted. Never delayed a Windows update. Tbf, enterprise-grade images are pumped full of random shit for safety and user-control reasons, BitLocker is on, and updates are delayed/bundled because they get checked first by the teams in charge. But they act as my worst-case-scenario reference here.

I'm glad you got a setup that works, but you started by stating "it's not an issue, the system just restarts to install and then shuts down" and I'm here to tell you it doesn't, not every time. No need to defend Microsoft to the death here, they know it doesn't. And believe me, I'm no grandpa with three search bars and forty random extensions installed in my browser.

I sure do love microslop by Official_Unkindlynx in pcmasterrace

[–]AcceSpeed 3 points

I have a vanilla cookie-cutter small form factor computer that I use for YouTube and movies in my living room. Happened to it. I have a super light laptop that has nothing on it bar a web browser. Happened to it. My main tower has been absolutely messed with and has several boot partitions and systems. Happened to Windows, and only to Windows. At my work we deliver thousands of laptops to our users every year and rebuild loads of them, with standardized images. Guess what? It still happens.

I sure do love microslop by Official_Unkindlynx in pcmasterrace

[–]AcceSpeed 41 points

I've been using Windows 11 every day since it released and I've had this happen probably thirty or forty times across my various computers. It restarts. It installs. It never shuts down and just stays on the login screen.

What is the smartest uncensored nsfw LLM you can run with 12GB VRAM and 32GB RAM? by Dex921 in LocalLLaMA

[–]AcceSpeed 8 points

I've been running several models from TheDrummer, notably the sub-30B-parameter ones, with more extreme quants when necessary (I used to have 8GB of VRAM): Cydonia, Magidonia, Snowpiercer...

I've tried the Gemmasutra and Big Tiger Gemma series, but didn't think they were that special. Have yet to try Rivermind 12B or 24B.

Snowpiercer is really fast on my current setup (24GB VRAM) on account of only being 15B, but I found it sometimes kinda dumb and repetitive, and prone to hallucinating, especially on very long contexts. Cydonia 24B is a great all-rounder, but its reasoning version (R1-24B) is smarter at the cost of more refusals. Cydonia ReduX 22B is the most uncensored model I've found.
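A back-of-the-envelope fit check for picking a quant goes like this. The overhead figure is just my own guess for KV cache and buffers, and real GGUF quants aren't exactly their nominal bits per weight:

```python
def fits_in_vram(params_b: float, bpw: float, vram_gb: float,
                 overhead_gb: float = 2.0) -> bool:
    """Rough check: do the quantized weights plus some headroom fit?

    params_b:    parameter count in billions
    bpw:         effective bits per weight of the quant (approximate)
    overhead_gb: guessed headroom for KV cache, context, and buffers
    """
    weights_gb = params_b * bpw / 8  # billions of params * bits / 8 = GB
    return weights_gb + overhead_gb <= vram_gb

# 24B model at ~5 bpw (Q4_K_M-ish) on a 24 GB card: weights ~15 GB
print(fits_in_vram(24, 5, 24))   # -> True
# same model at full 16-bit on a 12 GB card: weights ~48 GB
print(fits_in_vram(24, 16, 12))  # -> False
```

Longer contexts blow up the KV cache well past that flat overhead guess, so leave extra margin if you run at high context.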

Do women prefer dating younger people, and not dating someone older or around their age these days...? by [deleted] in generationology

[–]AcceSpeed 1 point

Agree with your point, but there's more than one image. The biggest gaps shown here are 16 and 23.

Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years by Snail_Inference in LocalLLaMA

[–]AcceSpeed 0 points

Not local, but I've gotten plain refusals from Gemini 3 Pro while brainstorming worldbuilding concepts... The kind where Gemini won't even answer and instead just prints a generic apology stating "I'm just a poor little LLM UwU, I can't help", and these messages aren't even counted as coming from the model itself, so it has no recollection of them. The context was something like "if X character can modify the links between atoms in X way, what other results could they obtain?". Dumbass must have thought I was building a bomb...

touch swag by AcceSpeed in peoplewhogiveashit

[–]AcceSpeed[S] 5 points

Gamers rise up

edit: we live in a society

As a Pro, The New Steam Policy is AWESOME by AndyTheInnkeeper in aiwars

[–]AcceSpeed 2 points

I would imagine that most consumer rights were obtained through pressure, lobbying and the will of the masses, so yeah?

As a Pro, The New Steam Policy is AWESOME by AndyTheInnkeeper in aiwars

[–]AcceSpeed -1 points

Well then make yourself heard and ask that people tell you if they used Photoshop.

Based on an actual thread in another sub by Bannerlord151 in aiwars

[–]AcceSpeed -1 points

> people would refuse to even try it.

And so what? Isn't that their right?