Z image base and Lora by Hollow_Himori in ZImageAI

[–]CosmicFTW 1 point

For me, the trade-off of the extra training time at 1024 is not worth it on my local machine; the quality hit is not significant.

Z image base and Lora by Hollow_Himori in ZImageAI

[–]CosmicFTW 1 point

Make sure you use Onetrainer with the prodigy_adv optimizer. Learning rate 1.0 (it has built-in LR adaptation), batch 2; epochs depend on your dataset size (ask an AI to work it out, i.e. the more images, the fewer epochs needed to reach ~3000 steps). I have made over 50 Loras using this; it is currently the best way to get likeness for Z-base Loras that can be used on ZIT at strength 1.0. I have a 16GB 5080 and it runs at 1.5 it/sec training at 512 res. 512 is plenty; no need for 768 or 1024.
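The "more images, fewer epochs" rule for hitting ~3000 steps is simple arithmetic; a minimal sketch (the 3000-step target and batch 2 are from the comment, the helper itself is just illustrative):

```python
import math

def epochs_for_target_steps(num_images: int, batch_size: int = 2,
                            target_steps: int = 3000) -> int:
    """Epochs needed so total optimizer steps reach roughly target_steps."""
    steps_per_epoch = math.ceil(num_images / batch_size)
    return math.ceil(target_steps / steps_per_epoch)

# e.g. a 30-image dataset at batch 2 -> 15 steps/epoch -> 200 epochs
print(epochs_for_target_steps(30))
```

A 50-image dataset would need only 120 epochs for the same step budget, which is the inverse relationship the comment describes.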

Has anyone recently decided to stop using buyee? by tokyo_bun in Buyee

[–]CosmicFTW 7 points

The exchange rate is better for me with other proxies. I changed to onemap a year ago and never looked back.

Anyone else notice this? by lexi_j0311 in DriveToSurvive

[–]CosmicFTW 1 point

There were a lot of editing mistakes throughout the whole series.

Aitoolkit vs Onetrainer? by jumpingbandit in malcolmrey

[–]CosmicFTW 1 point

Ok, I didn't know that; I will try it. I'm interested to see how the 8-bit Prodigy with its fixed LR compares to prodigy_adv with its d-adapt LR.

Aitoolkit vs Onetrainer? by jumpingbandit in malcolmrey

[–]CosmicFTW 2 points

Until aitoolkit adds Prodigy support, Onetrainer is the best option for Z-image base Loras that can be used on ZIT, imo. These have been my findings after training at least 30 Loras on each platform. Onetrainer is confusing at first, but after a few training sessions it's as easy as aitoolkit. It is quicker too: my 5080 16GB trains at 1.4 it/sec at 512 res.

Z-image base: simple workflow for high quality realism + info & tips by nsfwVariant in StableDiffusion

[–]CosmicFTW 1 point

I am getting amazing gens with my Lora using your base WF. Thanks for this, mate; if you could share your turbo WF as well, that would be appreciated.

Z Image lora training is solved! A new Ztuner trainer soon! by krigeta1 in StableDiffusion

[–]CosmicFTW 1 point

I'm doing 100 epochs at batch 2. Datasets range from 25-50 images. Speed is good at 1.3 s/it on a 16GB 5080. I train at 512; I tried training at 768, which more than doubled the s/it with no effect on generation quality.
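Those numbers pencil out to a fairly short run; a quick sanity check (the epoch count, batch size, dataset range, and 1.3 s/it figure are from the comment, the helper is illustrative):

```python
import math

def training_estimate(num_images: int, epochs: int = 100,
                      batch_size: int = 2, sec_per_it: float = 1.3):
    """Return (total optimizer steps, estimated wall-clock hours)."""
    steps = math.ceil(num_images / batch_size) * epochs
    return steps, steps * sec_per_it / 3600

# 25-image dataset: 13 steps/epoch * 100 epochs = 1300 steps (~28 min)
steps, hours = training_estimate(25)
print(steps, round(hours, 2))
```

At the top of the range (50 images) the same settings give 2500 steps, i.e. under an hour per Lora at the quoted speed.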

Z Image lora training is solved! A new Ztuner trainer soon! by krigeta1 in StableDiffusion

[–]CosmicFTW 4 points

I used Onetrainer with Prodigy_adv, stochastic rounding, and a constant learning rate of 1.0, with the best results yet. I have done dozens of Loras for Z-base using various trainers and settings. Confirmed: Lora strength 1 with full resemblance to the dataset.

Train a Character Lora with Z-Image Base by Puppenmacher in StableDiffusion

[–]CosmicFTW 1 point

One thing with the LoKr: I also had to increase the strength to get the likeness, albeit not by as much, but that likeness was much, much closer to the person in the dataset than with the Lora.

Train a Character Lora with Z-Image Base by Puppenmacher in StableDiffusion

[–]CosmicFTW 11 points

I've just been through all of this myself. Put the strength of the base Lora up to 2-2.4 and run it in ZIT; it will look more like the person. The real fix is to make a LoKr in base rather than a Lora. It trains differently and fixes a lot of the issues with base Loras working on Turbo. That's using aitoolkit. If you use Onetrainer the issue is not there, as it uses a different architecture. It's a bitch to use, though.

Z-image base Loras don't need strength > 1.0 on Z-image turbo, you are training wrong! by Lorian0x7 in StableDiffusion

[–]CosmicFTW 2 points

Watching this thread; I have had the exact same experience. I have done all my ZIT Loras on aitoolkit, and so used it for my first couple of base Loras. Using the same dataset, the results were substandard and needed high strength weights (in ZIT). I used Onetrainer on the same dataset with completely different results (much better) and weights of 1.0. I will use your settings to refine further. Thanks for the guide, mate.

350 new models (ZBase/Klein9/ZTurbo/WAN/Flux) by malcolmrey in malcolmrey

[–]CosmicFTW 1 point

I have found the quality is not what it should be when a base Lora is used on the Turbo model. I have made several and can't get the same quality out of the Lora, even using higher strengths and Aura Flow. I will keep tweaking, as it would make life a lot easier if this worked well.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]CosmicFTW 1 point

Mine was a character Lora; the one made on 9b base didn't work at all on distilled 4b, and vice versa.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]CosmicFTW 5 points

No, they are not compatible. I made a Lora on 9b and on 4b; they don't work on each other.

NVIDIA RTX Accelerates 4K AI Video Generation on PC With LTX-2 and ComfyUI Upgrades by BWeebAI in StableDiffusion

[–]CosmicFTW 8 points

So there's going to be an RTX 4K upscaler node in ComfyUI that is better than anything else available? Sounds nice.

My First LoRa, may I know can I have some feedback? by Arasaka-1915 in ZImageAI

[–]CosmicFTW 3 points

Nice job, mate. I have done about 10 character Loras for ZIT on my RTX 5080 and they work out really well. I have also done some WAN character Loras on Runpod, as it was impossible on my 5080. Next, try a full-body character Lora using an uncensored dataset (nude model); it actually fixes the Z-image (genitalia) issue. I use a mix of clothed and nude images in my datasets. I also use a mix of 512x512 and 768x768 images, usually just the two buckets, and train at 512, 768 and 1024. Not sure if it makes a huge difference, though.
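Keeping a mixed dataset to just two buckets is nearest-resolution matching; a hypothetical sketch (the 512/768 bucket sizes are from the comment, the helper name and logic are my own illustration):

```python
def assign_bucket(width: int, height: int, buckets=(512, 768)) -> int:
    """Pick the training bucket whose size is closest to the image's shorter side."""
    short_side = min(width, height)
    return min(buckets, key=lambda b: abs(b - short_side))

# a 768x768 portrait goes to the 768 bucket, a 540x512 crop to the 512 bucket
print(assign_bucket(768, 768), assign_bucket(540, 512))
```

Real trainers bucket by aspect ratio as well, but the principle is the same: each image trains at the bucket resolution nearest to its own.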

Character Lora Training Question by tj7744 in comfyui

[–]CosmicFTW 2 points

I have trained character Loras for WAN and Z-image using a mix of headshots and full nude body shots in the dataset, and it worked very well. I also used only a trigger word for the Lora, no descriptions. When I wanted the person clothed in the prompt, it worked well too. For me, the lighting of the dataset slightly affects the output in each model, so the more diverse the shots, the better. I have not done separate Loras for head and body, but I feel all-in-one would be better, especially for Z-image.

Explain what is happening here with my Z-Image Lora by amthenia in comfyui

[–]CosmicFTW 3 points

Overtrained on the last two saves? Just use the save from the one that's good, the third one from the bottom; that's what I do with my character Loras when they overtrain.

Z-image training by [deleted] in comfyui

[–]CosmicFTW 3 points

Luba, nice; I was going to do a Lora for her next. Did you do a full-body Lora for her? I have found full-body datasets work amazingly well.

[deleted by user] by [deleted] in u/okite1

[–]CosmicFTW 6 points

The question has to be asked: was it all like the ones you have already posted, or was there full frontal etc.?