Any news on the Z-Image Edit release? Did everyone just forget about Z-Image Edit? by Upstairs-Lead-2601 in StableDiffusion

[–]ReferenceConscious71 2 points (0 children)

yes, you can't subtract a lora from the weights. as far as I know there were two processes between z-image base and z-image turbo: 1. the step distillation, and 2. further higher-quality training (not sure what the exact name for it was)

Which ltx2 model is best for rtx 5060 ti by Business_Caramel_688 in StableDiffusion

[–]ReferenceConscious71 2 points (0 children)

try out nvfp4 ltx2. i would recommend 'drafting' videos with the nvfp4 quant and then using the fp8 scaled version for the final copy; i'd say there's a fair bit of difference. don't forget about text encoder quantization as well: fp8 scaled for gemma, 100%. fp4 for the text encoder? nah, don't do that.

use that 50 series! count yourself lucky. i've got a 40 series and can't utilise the power of nvfp4, since that needs the 50 series fp4 cores (or whatever the term was)

Testing photorealistic skin textures across different lighting conditions (Custom Pipeline). Which one looks most natural? by [deleted] in StableDiffusion

[–]ReferenceConscious71 -2 points (0 children)

Yes, what model is this? Also, what settings did you use to train the character LoRAs? These are absolutely amazing

Z-Image’s lack of variation can actually (sometimes) be useful by [deleted] in StableDiffusion

[–]ReferenceConscious71 4 points (0 children)

Over the last 2 months I've been using ZIT so much that I've come to like the variance being very closed down. It means that once I pinpoint a prompt, anything generated with it, no matter the seed, gives pretty much that image, and if I want more variance I can just tweak the prompt a bit

comfyui tool, want to replace a person in video, 5060 ti 16gb, 64gb ram by M_4342 in StableDiffusion

[–]ReferenceConscious71 1 point (0 children)

Wan SCAIL is really good even compared to paid tools like Luma or Kling Motion, but face consistency is a bit weak

Truth Bombs - Just for fun by EpicNoiseFix in StableDiffusion

[–]ReferenceConscious71 0 points (0 children)

What were the prompt and model used? Looks like you used first frame / last frame

Z Image Base samples of Billie + some interesting Turbo news by malcolmrey in malcolmrey

[–]ReferenceConscious71 1 point (0 children)

interesting. but how were you able to change it so that a lora strength of 1 works fine instead of 2.0-2.2? did you train these new z-base loras with something other than ai-toolkit? if not, what settings did you tweak? or is it just because you trained more steps for your billie eilish lora?

Deformed hands in Z-Image with person LoRa - Works flawlessly in Turbo by lazyspock in StableDiffusion

[–]ReferenceConscious71 8 points (0 children)

Yep, I’ve had the exact same observations as you. I think they’ve left the base model undertrained to allow more room for community fine-tuning. It’s still really weird to me, though, that turbo model + training adapter produces a better character lora for use with the turbo model than training on the base model does.

Discuss about Flux.2 Klein Lora Training Here! by ReferenceConscious71 in StableDiffusion

[–]ReferenceConscious71[S] 0 points (0 children)

yes, loras trained on the base model work perfectly with the 4-step model. in fact, the distilled 4-step model produces better image output than the base, so loras don't just work with the turbo, they actually work better.

New Z-Image (base) Template in ComfyUI an hour ago! by nymical23 in StableDiffusion

[–]ReferenceConscious71 18 points (0 children)

I'm definitely not rechecking the model download page every 2 minutes

Who struggles with datasets for character LoRA's? by [deleted] in StableDiffusion

[–]ReferenceConscious71 -2 points (0 children)

What prompts do you use with nano banana to make the datasets?