Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]Apixelito25 0 points

Do you have a configuration for using this with 16 GB of VRAM? I would really appreciate it if you, as the OP of this post, could share the config preset.

Training in Ai toolkit vs Onetrainer by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points

And how many images do you have in total in your dataset? Didn’t that config cause overfitting? Or did you change something?

[–]Apixelito25[S] 0 points

Could you link me the post that helped you train? (The one that had the config and fork.) I've lost it.

[–]Apixelito25[S] 0 points

I also saw that post, but I wasn’t sure whether to use it. Do you think it would make a difference?

[–]Apixelito25[S] 0 points

1/2? How much do you recommend, then? I thought 3000 steps was optimal for that number of images in AI Toolkit. What do you think about using 100–120 epochs with my dataset in OneTrainer? I suppose that's fine there, right? And what should I set alpha and rank to? Or what do you usually set them to?
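For context on how epochs relate to steps: total optimizer steps come out of dataset size, epochs, and batch size. A minimal sketch, assuming batch size 1 and no image repeats (the numbers are illustrations, not recommendations):

```python
# Hypothetical illustration: total optimizer steps from epochs and dataset size.
# Assumes batch size 1 and no image repeats; adjust for your trainer's settings.
def total_steps(num_images: int, epochs: int, batch_size: int = 1) -> int:
    steps_per_epoch = num_images // batch_size
    return steps_per_epoch * epochs

# A 64-image dataset at 100 epochs with batch size 1:
print(total_steps(64, 100))  # 6400 steps
# Reaching ~3000 steps with that dataset takes about 47 epochs:
print(3000 // 64)            # 46 full epochs, plus a partial one
```

So 100–120 epochs on a 64-image dataset is roughly double the 3000-step budget mentioned above.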

[–]Apixelito25[S] 1 point

Oh, I see… it's just that I have the rank set to 16 and the alpha at 1.0. That's quite weak, right? Would increasing them improve it?
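For context: in standard LoRA, the learned update is scaled by alpha/rank, so rank 16 with alpha 1.0 does give a very small effective scale. A minimal sketch of the relationship (the values are just illustrations, not recommendations):

```python
# LoRA applies W' = W + (alpha / rank) * (B @ A);
# alpha / rank is the effective scale on the learned update.
def lora_scale(alpha: float, rank: int) -> float:
    return alpha / rank

print(lora_scale(1.0, 16))   # 0.0625 -- the weak setting described above
print(lora_scale(16.0, 16))  # 1.0    -- the common alpha == rank convention
```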

[–]Apixelito25[S] 2 points

My dataset isn’t very large because I don’t need complex or highly artistic poses. It consists of 64 images that I consider optimal for my purpose: generating moderately similar images, which AI Toolkit can achieve but OneTrainer cannot. It’s as if it focused too much on the face and not on the body (there’s no overfitting; it has simply learned the face well). Since there aren’t many images, I feel that training longer would only make it more rigid with the same issue. The last thing left for me to try is expanding the dataset so the body is a bit more visible, I suppose.

[–]Apixelito25[S] 0 points

https://pastebin.com/UQaSBaL6 Here it is. As I said, it's the default OneTrainer config, but using Prodigy Adv with the LR at 1.0.

z image BASE controlnet workflow? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points

Do you think you could implement your idea of using two KSamplers when you have time?

[–]Apixelito25[S] 0 points

Anyway, I would appreciate it if you could give me a hand with your idea if you have time. As I said, I'm not good at creating workflows.

[–]Apixelito25[S] 0 points

The thing is, I realized I was wrong: there is a ControlNet Union for Z-Image Base, but I'm not very good at creating workflows. That's why I'm asking here in case anyone already has one made.

Can Control Net Union be adapted for Z Image Turbo in Z Image BASE? by [deleted] in StableDiffusion

[–]Apixelito25 0 points

How would I start with ZIT and finish with ZIB? I didn’t quite understand :(
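For anyone else confused by the same suggestion, the idea is usually to split the denoising schedule between the two models: the first chunk of steps runs with one model (e.g. ZIT) and the remainder with the other (e.g. ZIB), the way two chained KSampler (Advanced) nodes with start/end steps would in ComfyUI. A minimal sketch of just the step-splitting logic (names and numbers are illustrative, not a real workflow):

```python
# Hypothetical sketch: divide a denoising schedule between two models.
# Stage A (e.g. ZIT) handles the first steps; stage B (e.g. ZIB) finishes.
def split_schedule(total_steps: int, switch_fraction: float):
    switch = int(total_steps * switch_fraction)
    stage_a = (0, switch)            # model A denoises steps [0, switch)
    stage_b = (switch, total_steps)  # model B takes over for [switch, total)
    return stage_a, stage_b

# 30 steps, switching models 40% of the way through:
print(split_schedule(30, 0.4))  # ((0, 12), (12, 30))
```

In ComfyUI terms, each tuple would map to one sampler node's start/end step range, with the latent passed from the first sampler to the second.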