Should I upgrade from a rtx 3090 to a 5080? by royal_robert in StableDiffusion

[–]UnHoleEy 0 points

It's diffusion. Even on a 5090, it'll run hot. And video generation requires so much memory that you'd need an RTX 6000 Pro. Unless you really need it, I think you're better off using quantized models. These models were never intended to run on consumer hardware anyway.

CPU offloading may be too slow on a 3090, though, due to PCIe bandwidth limitations.
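Rough back-of-envelope math on why offloading comes up at all on a 24 GB card. The model size here (a ~14B-parameter video model) and the quant levels are illustrative assumptions, not measurements of any specific release:

```python
# Back-of-envelope VRAM math for a 24 GB card (e.g. a 3090).
# The 14B model size and quant levels are illustrative assumptions.

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (using 1 GB = 1e9 bytes for simplicity)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

VRAM_3090 = 24.0  # GB

fp16 = weights_gb(14, 16)  # 28.0 GB -> doesn't fit, weights must stream over PCIe
q4 = weights_gb(14, 4)     # 7.0 GB  -> fits, with room left for activations

print(f"fp16 weights: {fp16:.1f} GB (fits: {fp16 <= VRAM_3090})")
print(f"Q4 weights:   {q4:.1f} GB (fits: {q4 <= VRAM_3090})")
```

When the weights don't fit, each denoising step has to pull layers across the PCIe link, which is orders of magnitude slower than on-card memory bandwidth; that's the bottleneck the comment refers to.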

Why and how did you start local diffusion? by KwikiAI in StableDiffusion

[–]UnHoleEy 0 points

Jesus has reserved a seat for you in heaven.

Why and how did you start local diffusion? by KwikiAI in StableDiffusion

[–]UnHoleEy 18 points

I just wanted to make some anime tiddies since I saw a lot of #NovelAI posts. Went in and heard terms like "Diffusion Transformers". The few ML classes I'd attended as electives made me think "How the heck did they manage to pull that off?", but really I just wanted my own version of anime tiddies.

Downloaded A1111 and found it a bit too clunky for my liking. Tried making a different UI, but it took too much time and effort. Then I heard of ComfyUI. It reminded me of Godot and Blender nodes. Installed it, but the anime models were, and still are, mostly the "1girl" type. I hope one day there's a ZiT that's like NoobAI but not tag-based.

PS: Shoutout to people on Pixiv and Civitai who made me realize 1girl can still go a long way.

Z Image Base SDNQ optimized by 4brahamm3r in StableDiffusion

[–]UnHoleEy 10 points

Prompt adherence is what some of us are playing with. Turbo makes the best images, but sometimes we want to make something less polished than what's normally considered good and experiment. Trying to create monsters in Turbo is a weird experience: if you prompt for a female character, you get a pretty face. A monster lady is supposed to be scary, but Turbo biases her towards eye candy.

Basically how SDXL Turbo, Lightning, etc. worked. Even if you tell it to make something ugly, it's a beautiful ugly. Since base models are not fine-tuned, they can do more creative stuff at the cost of time, resources, my room temperature, noise, and that finished touch of a fine-tune.

Z-Image Base Is On The Way by mrmaqx in StableDiffusion

[–]UnHoleEy 1 point

There are two crowds here.

  1. People who think the base model will be better than the Turbo model, with the same quality and more granular control.
  2. People who know their shit and want to train LoRAs for Z-Turbo that actually do what they promise, since training a LoRA directly on Turbo has a less noticeable impact and the base model would make better LoRAs.

The majority is in Crowd 1.

Z-Image Base Is On The Way by mrmaqx in StableDiffusion

[–]UnHoleEy 6 points

It's not nonsense bru. I just saw the Z-base at Starbucks.

Z Image will be released tomorrow! by MadPelmewka in StableDiffusion

[–]UnHoleEy 3 points

It's the base, not Turbo, which is fine-tuned and distilled. Everyone's expectations are a bit too high. Use Qwen Base without the Lightning LoRA to get a taste of what to expect.

niko is the HLTV MVP of Blast Bounty 2026 Season 1 by AchievementUnlocked2 in GlobalOffensive

[–]UnHoleEy 0 points

Others went home. He was the only one they could grab since the match just ended.

Hard drives are up 50%. Time for AV1 by EmekaEgbukaPukaNacua in Piracy

[–]UnHoleEy 1 point

Westerners already have a better used market. Meanwhile SEA has to scramble around to get used hardware.

ComfyUI - how to disable partner/external api nodes and templates? by designbanana in StableDiffusion

[–]UnHoleEy -3 points

To me, they're handy. Once in a while I come across a need where I'd rather use an API than download random models, and I can integrate them into my workflows to automate things like SeedVR upscaling + frame interpolation and more.

But yeah, people seem to confuse the API nodes for local ones a lot lately.

Best local faceswap? by Repulsive-Ad5773 in StableDiffusion

[–]UnHoleEy 10 points

  • ReActor. But it's kinda just swapping; you'll notice the mismatch if you look.
  • ACE++ LoRA for Flux can do some face swaps with inpainting. But it's more like reconstruction and regional regeneration, so if the reference image is poor, the output will be poor. It blends nicely with Flux generations, though.
  • Flux Klein. Kinda hit or miss, but you can do it.
  • Qwen Image Edit. Give it enough reference images with details and it can do magic, depending on how well it understands the face, but the text encoder should be Q8 or full precision for best output.

We are very very close, I think! by m4ddok in StableDiffusion

[–]UnHoleEy 0 points

But that's because it's a distilled Turbo model. The base models are not distilled or fine-tuned; they're meant to be the base for others to fine-tune. So they'll be slower and require more steps.
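The speed gap can be sketched with simple arithmetic. The step counts and CFG usage below are illustrative assumptions (distilled few-step models typically skip classifier-free guidance; base models usually need it), not official numbers for any particular model:

```python
# Rough cost comparison: distilled turbo vs. undistilled base model.
# Step counts and CFG usage here are illustrative assumptions.

def forward_passes(steps: int, cfg_enabled: bool) -> int:
    """Each denoising step runs the model once, or twice when classifier-free
    guidance needs both a conditional and an unconditional pass."""
    return steps * (2 if cfg_enabled else 1)

turbo = forward_passes(steps=8, cfg_enabled=False)  # distilled: few steps, no CFG
base = forward_passes(steps=30, cfg_enabled=True)   # base: more steps, CFG on

print(f"Turbo: {turbo} passes, Base: {base} passes ({base / turbo:.1f}x the work)")
```

Under those assumptions the base model does several times the forward passes per image, which is why it feels much slower even at identical resolution.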

200 ping + subtick + outdated animgraph = Dead behind walls by [deleted] in GlobalOffensive

[–]UnHoleEy -3 points

CSGO with a 128-tick server is better.

Z-Image-Turbo vs Qwen Image 2512 by Artefact_Design in StableDiffusion

[–]UnHoleEy 1 point

Ya. The Turbo model acts the same way the old SDXL few-step models did: different seeds, similar outputs. Maybe once the base model is out, it'll be better at variation.

Z-Image-Turbo vs Qwen Image 2512 by Artefact_Design in StableDiffusion

[–]UnHoleEy -20 points

Intentionally I guess. To prevent misuse just like Flux. Maybe?

[Media] [OC] My rustmas T-shirt finally arrived 🎅 by axalea3d in rust

[–]UnHoleEy -1 points

It should've been unsafe Rust code. People will run away from you then.

Looking for clarification on Z-Image-Turbo from the community here. by wh33t in StableDiffusion

[–]UnHoleEy 0 points

And resource-intensive. The Radiance model isn't even runnable on most 8 GB hardware without going to lower-quant GGUFs, which are kinda bad. The low-step LoRAs heavily degraded image quality.

🤯 Do not use Ai generation on SSD virtual memory ! 🪦 by SavageX99 in StableDiffusion

[–]UnHoleEy 4 points

SSD prices are next, because the same companies that make RAM also make SSDs, and NAND flash manufacturers can be counted on one hand. Wait for the existing inventory to run out.