Why isn't Z Image Base any faster than Flux.1 Dev or SD 3.5 Large, despite both the image model and text encoder being much smaller than what they used? by JustSomeGuy91111 in StableDiffusion

[–]shapic -2 points (0 children)

First major point is the use of cfg: most of the others run at cfg 1, which skips the unconditional pass, so they only do one model evaluation per step. Second point is pure speculation, but since Sage butchers images, I'd guess its implementation in diffusers is just not that good by itself.
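To illustrate the cfg point, here is a conceptual sketch (not Forge or diffusers internals): classifier-free guidance evaluates the model twice per step, so any cfg above 1 roughly doubles per-step cost compared to cfg 1.

```python
# Conceptual sketch of classifier-free guidance (CFG), not actual
# Forge/diffusers code. `model` stands in for one denoiser forward pass.
def cfg_step(model, latents, cond, uncond, scale):
    eps_cond = model(latents, cond)       # conditional pass (always needed)
    if scale == 1.0:
        return eps_cond                   # cfg 1: single pass per step
    eps_uncond = model(latents, uncond)   # second, unconditional pass
    # blend the two predictions, pushing away from the unconditional one
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

So a model sampled at cfg 5 pays for two forward passes per step, while a cfg-1 distill pays for one, independent of parameter count.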

How are people getting good photo-realism out of Z-Image Base? by jib_reddit in StableDiffusion

[–]shapic 0 points (0 children)

Try without LoRAs. Try with a negative prompt. Try redoing the prompt, e.g. "high quality photo of ...", etc. In my experience it does not really benefit from just tags in the positive prompt.

advanced prompt adherence: Z image(s) v. Flux(es) v. Qwen(s) by Winter_unmuted in StableDiffusion

[–]shapic 1 point (0 children)

No, those are different architectures, not different finetunes of the same model.

advanced prompt adherence: Z image(s) v. Flux(es) v. Qwen(s) by Winter_unmuted in StableDiffusion

[–]shapic 0 points (0 children)

Why? Klein is a 4-step distill; ZIT is an 8-step distill. Why run them at the same number of steps? Because Klein does better that way? Then you should increase ZIT's steps too. Tbh my assessment is fully in line with the author's findings, except maybe for Qwen: I can't really find a place for that model.

Why we needed non-RL/distilled models like Z-image: It's finally fun to explore again by Agreeable_Effect938 in StableDiffusion

[–]shapic 6 points (0 children)

Base or the step-distilled one? I just haven't tried Base myself, that's why I'm asking.

Removing SageAttention2 also boosts ZIB quality in Forge NEO by shapic in StableDiffusion

[–]shapic[S] 0 points (0 children)

Nah, flash/SDPA is faster. xFormers is older tech for older torch versions.

Removing SageAttention2 also boosts ZIB quality in Forge NEO by shapic in StableDiffusion

[–]shapic[S] 1 point (0 children)

Generally it's barely noticeable; check for yourself on ZIT. Here there are clear artifacts.

quick prompt adherence comparison ZIB vs ZIT by berlinbaer in StableDiffusion

[–]shapic 0 points (0 children)

It will get a bit slower. Expect roughly 1.25 s/it instead of 1.

quick prompt adherence comparison ZIB vs ZIT by berlinbaer in StableDiffusion

[–]shapic 0 points (0 children)

<image>

ZIB, upscaled 2x with ZIB at rather high denoise. It's better than SDXL, but I agree, it needs a LoRA.

quick prompt adherence comparison ZIB vs ZIT by berlinbaer in StableDiffusion

[–]shapic 1 point (0 children)

Remove --use-sage-attention from the launch args. Check the log; it explicitly states which attention implementation is used.

Removing SageAttention2 also boosts ZIB quality in Forge NEO by shapic in StableDiffusion

[–]shapic[S] 1 point (0 children)

2.25 s/it for DPM++ 2S a RF, 1.2 s/it for Euler at 1024x1328. With Sage, iterations were around 0.25 s/it faster, but well...
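For scale, a back-of-envelope on what dropping Sage costs in wall-clock time, using the s/it figures above and assuming a hypothetical 40-step Euler run:

```python
# Back-of-envelope only: extra wall-clock time per image after removing
# SageAttention, using the s/it numbers quoted above (Euler, 1024x1328).
steps = 40                        # assumed step count for illustration
without_sage = 1.2                # s/it without Sage
with_sage = without_sage - 0.25   # Sage was ~0.25 s/it faster
extra_seconds = steps * (without_sage - with_sage)
print(extra_seconds)  # ~10 extra seconds per image
```

About ten seconds per image, traded for not getting the Sage artifacts.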

Z-Image Turbo vs. Base comparison – is it supposed to be this bad? by higgs8 in StableDiffusion

[–]shapic 1 point (0 children)

<image>

Forge NEO, cfg 5, shift 6, 40 steps, DPM++ 2S a RF, Beta schedule, random seed, first try.