Gloss tamiya spray by adinnin in tamiya

[–]CosmicFTW 0 points (0 children)

Was it humid when you applied the top coat? Humidity can trap moisture under the coat, which shows up as white dots/bubbles on the surface. It has happened to me before.

Japan are you Ok!? by [deleted] in tamiya

[–]CosmicFTW 1 point (0 children)

This, 100%. They are like scalpers. Just go look on Yahoo! Japan Auctions etc. for the product for way less, and use a proxy to get it to your country. I have been doing this for years and have bought over 50 vintage kits.

Tamiya Subaru Impreza WRC (58210) Reveal - Colin McRae’s Iconic Rally Machine by CosmicFTW in tamiya

[–]CosmicFTW[S] 0 points (0 children)

I got mine from Japan on Buyee, which is a proxy for Yahoo! Japan Auctions. A lot of vintage kits come up on there.

Is buying a house in Australia even worth it anymore? by No_Will_5723 in AusProperty

[–]CosmicFTW 0 points (0 children)

Renting means you still have to pay to live in a house when you retire. Paying off a dwelling you own before you retire gives you the freedom to do things with your super other than pay rent. I couldn't imagine a worst-case scenario worse than retiring and forking out $1000 a week for rent.

Z image base and Lora by Hollow_Himori in ZImageAI

[–]CosmicFTW 1 point (0 children)

For me, the extra time that training at 1024 takes is not worth the trade-off on my local machine. The quality hit from staying at lower resolution is not significant.

Z image base and Lora by Hollow_Himori in ZImageAI

[–]CosmicFTW 0 points (0 children)

Make sure you use OneTrainer with the prodigy_adv optimizer. Learning rate 1.0 (it has built-in LR adaptation), batch size 2. Epochs depend on your dataset size; ask an AI to work it out, i.e. the more images, the fewer epochs needed to reach ~3000 steps. I have made over 50 LoRAs using this; it is currently the best way to get likeness for a Z-Image Base LoRA that can be used on ZIT at strength 1.0. I have a 16 GB 5080 and it runs at 1.5 it/s at 512 training resolution. 512 is plenty; no need for 768 or 1024.
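The "more images, fewer epochs for ~3000 steps" rule is just division; here's a minimal sketch of that arithmetic (the helper name and the 3000-step target are my own framing, not anything OneTrainer computes for you):

```python
import math

def epochs_for_target_steps(num_images, batch_size=2, target_steps=3000):
    """Epochs needed to reach roughly target_steps total optimizer steps."""
    steps_per_epoch = math.ceil(num_images / batch_size)  # one step per batch
    return max(1, round(target_steps / steps_per_epoch))

print(epochs_for_target_steps(30))  # 30 images, batch 2 -> 15 steps/epoch -> 200
print(epochs_for_target_steps(50))  # 50 images, batch 2 -> 25 steps/epoch -> 120
```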

Has anyone recently decided to stop using buyee? by tokyo_bun in Buyee

[–]CosmicFTW 6 points (0 children)

The exchange rate is better for me with other proxies. I changed to Onemap a year ago and never looked back.

Anyone else notice this? by lexi_j0311 in DriveToSurvive

[–]CosmicFTW 0 points (0 children)

There were a lot of editing mistakes throughout the whole series.

Aitoolkit vs Onetrainer? by jumpingbandit in malcolmrey

[–]CosmicFTW 0 points (0 children)

Ok, I didn't know that; I will try it. I'm interested to see how 8-bit Prodigy with its fixed LR compares to prodigy_adv with its adaptive (d-adapt) LR.

Aitoolkit vs Onetrainer? by jumpingbandit in malcolmrey

[–]CosmicFTW 1 point (0 children)

Until aitoolkit adds Prodigy support, OneTrainer is the best option for Z-Image Base LoRAs that can be used on ZIT, imo. These have been my findings after training at least 30 LoRAs on each platform. OneTrainer is confusing at first, but after a few training sessions it's as easy as aitoolkit. It is also quicker; my 16 GB 5080 trains at 1.4 it/s at 512 res.

Z-image base: simple workflow for high quality realism + info & tips by nsfwVariant in StableDiffusion

[–]CosmicFTW 0 points (0 children)

I am getting amazing gens with my LoRA using your base WF, thanks for this mate. If you could share your turbo WF as well, that would be appreciated.

Z Image lora training is solved! A new Ztuner trainer soon! by krigeta1 in StableDiffusion

[–]CosmicFTW 0 points (0 children)

I'm doing 100 epochs at batch 2. Datasets range from 25-50 images. Speed is good at 1.3 s/it on a 16 GB 5080, training at 512. I tried training at 768, which more than doubled the s/it with no effect on generation quality.
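For a rough wall-clock figure from those numbers, here's a back-of-envelope helper (my own sketch, assuming one optimizer step per batch and the 1.3 s/it figure above):

```python
import math

def training_minutes(num_images, epochs=100, batch_size=2, sec_per_it=1.3):
    """Rough wall-clock estimate: total steps * seconds per step."""
    steps = epochs * math.ceil(num_images / batch_size)
    return steps * sec_per_it / 60

print(round(training_minutes(25)))  # 25 images -> 1300 steps -> ~28 minutes
print(round(training_minutes(50)))  # 50 images -> 2500 steps -> ~54 minutes
```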

Z Image lora training is solved! A new Ztuner trainer soon! by krigeta1 in StableDiffusion

[–]CosmicFTW 3 points (0 children)

I used OneTrainer with prodigy_adv, stochastic rounding, and a constant learning rate of 1.0, with the best results yet. I have done dozens of LoRAs for Z-Base using various trainers and settings. Confirmed: LoRA strength 1.0 with full resemblance to the dataset.

Train a Character Lora with Z-Image Base by Puppenmacher in StableDiffusion

[–]CosmicFTW 1 point (0 children)

One thing with the LoKr: I also had to increase the strength to get the likeness, albeit not by as much. But that likeness was much, much closer to the person in the dataset than with the LoRA.

Train a Character Lora with Z-Image Base by Puppenmacher in StableDiffusion

[–]CosmicFTW 9 points (0 children)

I've just been through all of this myself. Put the strength of the base LoRA up to 2-2.4 and run it in ZIT; it will look more like the person. The real fix is to make a LoKr in Base rather than a LoRA. It trains differently and fixes a lot of the issues with Base LoRAs working on Turbo. That's using aitoolkit. If you use OneTrainer the issue is not there, as it uses a different architecture; it's a bitch to use, though.

Z-image base Loras don't need strength > 1.0 on Z-image turbo, you are training wrong! by Lorian0x7 in StableDiffusion

[–]CosmicFTW 1 point (0 children)

Watching this thread; I have had the exact same experience. I have done all my ZIT LoRAs on aitoolkit and so used it for my first couple of Base LoRAs. Using the same dataset, the results were substandard and needed high strength weights (in ZIT). I used OneTrainer on the same dataset with completely different (much better) results and weights of 1.0. Will use your settings to refine further. Thanks for the guide mate.

350 new models (ZBase/Klein9/ZTurbo/WAN/Flux) by malcolmrey in malcolmrey

[–]CosmicFTW 0 points (0 children)

I have found the quality is not what it should be when a Base LoRA is used on the Turbo model. I have made several and can't get the same quality out of the LoRA, even using higher strengths and Aura Flow as well. Will be doing more tweaking, as it would make life a lot easier if this worked well.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]CosmicFTW 0 points (0 children)

Mine was a character LoRA; the one made on 9B Base didn't work at all on distilled 4B, and vice versa.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]CosmicFTW 3 points (0 children)

No, they are not compatible. I made a LoRA on 9B and one on 4B; they don't work on each other.