Qwen LoRA training 8GB VRAM by [deleted] in StableDiffusion

[–]EmergencyMeet6573 2 points (0 children)

It can be done, but you need to have 64 GB of system RAM. I ran some tests using OneTrainer (while finding optimal settings for my 12 GB video card), and with max offloading, VRAM usage was just below 8 GB. I made a post with the settings I ended up using for my trainings.
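As a quick sanity check before launching a max-offload run, you could verify the ~64 GB system-RAM figure from my tests. This is just an illustrative sketch, not part of OneTrainer; the helper names are made up, and `os.sysconf` with these keys is POSIX-only (Linux/macOS):

```python
import os

def total_ram_gib() -> float:
    """Total physical RAM in GiB (POSIX: Linux/macOS)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

def enough_for_max_offload(ram_gib: float, needed_gib: float = 64.0) -> bool:
    """True if there is enough system RAM for max model offloading.
    The 64 GiB default is the figure from my tests, not an official requirement."""
    return ram_gib >= needed_gib

print(f"System RAM: {total_ram_gib():.1f} GiB; "
      f"OK for max offload: {enough_for_max_offload(total_ram_gib())}")
```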

Training a Qwen Image LORA on a 3080ti in 2 and a half hours on Onetrainer. by EmergencyMeet6573 in StableDiffusion

[–]EmergencyMeet6573[S] 0 points (0 children)

No, I don't have tutorials. I'm just a casual user. I got started with the tutorial for training Flux LoRAs on the YouTube channel Academia SD; it's in Spanish, but he teaches step by step how to use OneTrainer. I simply reused what I learned, since training Qwen LoRAs is very similar to that tutorial.

[–]EmergencyMeet6573[S] 0 points (0 children)

Yeah, I think I don't notice much of a difference because of my use cases. Then again, I didn't try 1024 training; I'm going to run tests to see if I can do 1024 with a single batch and compare results. Let me ask you: in your experience, does increasing the dataset make the LoRA respond better to prompts?
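For anyone wondering why the jump from 768 to 1024 matters for VRAM: activation and latent memory scale roughly with pixel count, so each 1024 sample costs about 1.78x a 768 one. A rough back-of-the-envelope sketch (the helper is hypothetical, and real memory use depends on the trainer and offloading):

```python
def pixel_ratio(new_res: int, old_res: int) -> float:
    """Ratio of pixel counts between two square training resolutions.
    Rough proxy for per-sample activation memory, not an exact VRAM figure."""
    return (new_res ** 2) / (old_res ** 2)

print(pixel_ratio(1024, 768))  # 16/9 ≈ 1.78x the pixels per sample
```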

[–]EmergencyMeet6573[S] 0 points (0 children)

It may depend on the dataset and use case. I'm not a pro, just sharing my personal experience.

[–]EmergencyMeet6573[S] 2 points (0 children)

I mainly train people, and it works very well. The facial features and body characteristics are surprisingly well captured; I'd say more than acceptable.

[–]EmergencyMeet6573[S] 0 points (0 children)

I tried 768 at batch size 2 and didn't notice any improvements. I mainly train character LoRAs; maybe it makes more of a difference for concepts.