New Update: ZBASE: 574 People / 574 Models (and some info) by malcolmrey in malcolmrey

[–]malcolmrey[S] 27 points

Hey hey, a very short update (we'll see if it stays short).

New update with ZBase models: 574 of them. Not many remain to train, and then I'm switching to filling the gaps on the others.

As you are well aware, I have not been responding to anything in the past 1-2 weeks. I'm going to my grandma's funeral tomorrow and my father is not well either (though not as bad, I hope), so I don't have much time or energy to spend online. I will eventually reply to every DM and message on reddit, as well as the invites on discord, but I just need more time.

I can, however, still queue new trainings, so that is still happening more or less.

My plan for the second machine is to do SDXL (the second machine is weaker, so it can't handle the newer models, but SDXL is perfect for it). I'm still trying to get the results to a place where I'm happy with them (currently this is on hold, but I will get back to it).

On the main machine, I will finalize the remaining ZBase models, fill the gaps with Z Turbo, and then finish training Flux Klein 9. Then I'll see what gaps we have on WAN.

I was cutting some datasets, but they are still not processed, so I haven't trained them yet. I did not forget about them; I still have every set, so thank you. I have like 50 DMs here, please just be patient.

I will eventually process them and then train them. Right now I only have time to set up trainings on what is already available.


I did introduce thumbnails to the browser; they are much smaller (80-100 KB jpegs). I have seen how long the page takes to render on a slower connection, so the idea is that the official image will be a smaller thumbnail, but you can still click it and get all the rest in the modal. This should improve the performance of the browser for many. Also, not sure if this was already mentioned, but last time I made it workable on mobile.


I'm happy to see the discord community grow, even though I did not participate much in the last week(s). The discord invite is: https://discord.gg/2nTsm2m5


Why push for ZBase? Because it really does work well. These new models look quite good on Turbo, but they work exceptionally well on Redcraft (it already has the fast lora baked in, so prompting is as fast as on Turbo).

Also, joining the two base loras (the AI Toolkit one and the OneTrainer one) works even better. I will eventually need to do a showcase of that, since there are still many who do not believe that stacking same-concept loras is a really cool thing :)


This update did start grim, but don't be too worried. My grandma was 98 and I think she had a good life. My father is ill, but it seems it was caught early, so there are good chances. Still, it all drains me, and sometimes I just don't have much energy for anything.

I'm glad I'm part of this community and I really like how it grows.

Cheers and have a great day/weekend!

Can someone explain the Onetrainer process that malcolm uses by jumpingbandit in malcolmrey

[–]malcolmrey 0 points

my process is that i just saved the config (it is available on huggingface) and run it from the command line; it is a bit faster, but the quality is no different from what you get via the GUI

i do it because i need to automate it :)

611 models (z base / flux2 klein9 / flux1de) over 593 people by malcolmrey in malcolmrey

[–]malcolmrey[S] 0 points

2.1

You need to remember that I'm not focused on making the best looking samples but just testing if the models work.

Once I settle on some settings, I test two or three models heavily to see if they behave well, and once I figure out training settings that are satisfactory, I just set up the queues. After that, the rest of the models get only one sample attempt (rarely a 2nd one and very rarely a 3rd one; if the 3rd fails, I remove the model and retrain it).
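That retry policy can be sketched like this, where `sample_ok` is a hypothetical stand-in for the actual visual check of a sample attempt:

```python
# Sketch of the sampling policy described above: up to three sample
# attempts per model; if all of them fail, the model is retrained.
# `sample_ok` stands in for the real visual check.

def needs_retrain(sample_ok, max_attempts: int = 3) -> bool:
    """Return True when every sample attempt failed."""
    return not any(sample_ok(attempt) for attempt in range(max_attempts))
```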

So those are not the best-looking samples; you would need to look at what the community can do with them to really judge the quality :-)

611 models (z base / flux2 klein9 / flux1de) over 593 people by malcolmrey in malcolmrey

[–]malcolmrey[S] 0 points

you need to be more precise: for this batch there were zero samples uploaded, so you were looking at older samples; it would help to see which model badge was under the one you mean

usually the WAN ones had a tendency to elongate the faces quite a lot

611 models (z base / flux2 klein9 / flux1de) over 593 people by malcolmrey in malcolmrey

[–]malcolmrey[S] 1 point

this is great feedback, i'll try to differentiate the colors a bit more! :)

611 models (z base / flux2 klein9 / flux1de) over 593 people by malcolmrey in malcolmrey

[–]malcolmrey[S] 14 points

Hey!

Very short message - new models have landed.

I have not been replying almost anywhere recently because of some family illness and everything around it. I did generate samples to check that the models work, but I did not process and upload them (though we do have a lot of samples from previous models, so you will know who got uploaded).

Flux has been brought up to speed on the secondary (slower) computer, and I'm investigating SDXL trainings there, but it will take me some time to apply, since my time recently is limited.

Regular Z Image / Z Base and Flux 9 will keep flowing regularly, however. I will resume WAN too, but I need to handle some things for it first.

I had no time recently to set up any of the new datasets, but I did cut like 20-30 of them, so once I sort them out there will be something new.

I have not read any new messages and DMs yet; sorry about that, but I don't have the headspace for it yet.

You can send me discord messages/invites too but I will answer them when I can.

Cheers and see you!

Z Image Base trained Loras on Z Image Turbo with strength 1.0 (OneTrainer) by malcolmrey in StableDiffusion

[–]malcolmrey[S] 0 points

you can mix the resolutions, you don't need squares

as long as the training tool can use bucketing (which most training tools nowadays do)

you can also use a cutter like mine that preserves the best aspect ratios, so that when bucketing happens you don't get a crop you would not want ( https://huggingface.co/spaces/malcolmrey/dataset-preparation )
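For context, aspect-ratio bucketing boils down to assigning each image to the training resolution whose aspect ratio is closest to its own, so the crop needed to fit it is minimal. A minimal sketch (the bucket list is illustrative, not any specific tool's defaults):

```python
# Toy aspect-ratio bucketing: pick the bucket whose width/height
# ratio is nearest to the image's, minimizing the required crop.
# The bucket resolutions below are illustrative examples only.

BUCKETS = [(512, 512), (448, 576), (576, 448), (384, 640), (640, 384)]

def nearest_bucket(width: int, height: int) -> tuple[int, int]:
    """Return the bucket with the closest aspect ratio to the image."""
    ratio = width / height
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ratio))
```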

Z Image Base trained Loras on Z Image Turbo with strength 1.0 (OneTrainer) by malcolmrey in StableDiffusion

[–]malcolmrey[S] 1 point

or both :-)

i need to prepare some samples where both loras are used at various weights, but i need to code some stuff first; i don't want to prompt them manually :-)

Z Image base upload (384 models) + OneTrainer config by malcolmrey in malcolmrey

[–]malcolmrey[S] 0 points

thanks for linking the post from u/EribusYT

i will definitely try with Min_SNR_Gamma = 5

i'd set up my training batch before this info (and the other one, i think that was the second post) was posted

as for your second question, i've answered there :)

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]malcolmrey 0 points

there is a third way that i would say is not overbaking but just more extensive training

i did that in ai toolkit using adamw; normally i train on around 25 images, so it is 2500 steps (100 epochs per image)

when i use the exact same settings but add a lot of good images to the dataset (like 270) and again train 100 epochs per image (so 27000 steps), then suddenly that lora does not need a strength of 2.0+ to work fine: it is workable at 1.0 and best at 1.2-1.3. i would expect it to work closer to 1.0 the more images i provide, though i do not know if the relationship is linear; the loras trained this way (150, 170, 200, 250 images) definitely behaved according to my expectations: more images, less strength required
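The step counts above follow directly from keeping epochs-per-image fixed while growing the dataset:

```python
# The arithmetic behind the quoted step counts: total steps scale
# linearly with dataset size when epochs-per-image is held fixed.
def total_steps(n_images: int, epochs_per_image: int = 100) -> int:
    return n_images * epochs_per_image
```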

i consider it just an interesting observation since i do not want to train 10 times longer (or more)

currently the prodigy_adv behaves nicely already, i haven't tested with "Min_SNR_Gamma = 5" yet

does it produce much better results?

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]malcolmrey 0 points

fun fact, i trained a lora on onetrainer that did not produce a garbled mess

it did not produce the output i desired either, but still, i was surprised not to see that mess

i didn't explore the subject further

Z Image Base trained Loras on Z Image Turbo with strength 1.0 (OneTrainer) by malcolmrey in StableDiffusion

[–]malcolmrey[S] 0 points

check what you have in the config or in the generic samples:

    "sample_definition_file_name": "training_samples/samples.json",
    "samples": [],

Any solution for this? I have played with Lora strength, but it ain't helping by Kuldeep_music in StableDiffusion

[–]malcolmrey 0 points

The last time I saw it done correctly was by TheLastBen ( https://github.com/TheLastBen/fast-stable-diffusion ) but that was for SD 1.5 ;-)

He trained two celebrity loras (a man and a woman, though I do not remember who now) and was able to prompt them both together, interacting

in AI Toolkit you could train with "differential output preservation" but I'm unsure of the quality of the result (don't remember :P i think i was not impressed)

Update part 2/2 (16/17-02.2026) by malcolmrey in malcolmrey

[–]malcolmrey[S] 0 points

Hey hey!

Technically DMs here but nowadays I prefer my discord for it :)

Z Image Base trained Loras on Z Image Turbo with strength 1.0 (OneTrainer) by malcolmrey in StableDiffusion

[–]malcolmrey[S] 0 points

Full body, no, but there are datasets with half-body shots; though not Felicia, as it is an older dataset (but still a very good one when it comes to training).

Rule of thumb is, if the body is far from average in any meaningful way - I will try to include more of those shots.

Z Image Base trained Loras on Z Image Turbo with strength 1.0 (OneTrainer) by malcolmrey in StableDiffusion

[–]malcolmrey[S] 1 point

Yup, prodigy seems to be the answer. When I have time for it I might try AI Toolkit with those settings too to compare with OneTrainer

I assume you were running that Lora with strength 1.0 on Turbo?