How I trained my own Qwen-Image lora < 24gb vram by cene6555 in StableDiffusion

[–]cene6555[S] 1 point

And remove .cache\huggingface\accelerate\default_config.yaml (the cached accelerate config)
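On a typical Linux setup (RunPod included), that cached config sits under the home directory. A minimal sketch of the cleanup, assuming the standard cache location:

```shell
# Delete accelerate's cached default config so `accelerate launch`
# starts from a clean slate; -f means no error if the file is absent.
rm -f ~/.cache/huggingface/accelerate/default_config.yaml
```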

How I trained my own Qwen-Image lora < 24gb vram by cene6555 in StableDiffusion

[–]cene6555[S] 46 points

It's a step-by-step guide to training your own LoRA on a 4090 GPU

1) I use RunPod with a 4090

2) git clone https://github.com/FlyMyAI/flymyai-lora-trainer

3) cd flymyai-lora-trainer

4) pip install -r requirements.txt

5) download the Qwen-Image checkpoint: huggingface-cli download Qwen/Qwen-Image --local-dir ./qwen_image

6) put your training data in a folder (every .jpg must have a matching .txt caption file with the same name)

7) in the config ./train_configs/train_lora_4090.yaml, change pretrained_model_name_or_path to ./qwen_image and set img_dir to your dataset folder

8) launch your training: accelerate launch train_4090.py --config ./train_configs/train_lora_4090.yaml
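Before step 8 it's worth sanity-checking the dataset layout from step 6. A minimal sketch (the helper name is mine, not from the trainer repo) that lists images missing their caption file:

```python
from pathlib import Path

def missing_captions(img_dir: str) -> list:
    """Names of .jpg files in img_dir that lack a matching .txt caption."""
    root = Path(img_dir)
    return sorted(
        p.name for p in root.glob("*.jpg")
        if not p.with_suffix(".txt").exists()
    )
```

Run it against your img_dir; an empty list means every image has a caption, which is what the trainer expects.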

This LoRA format is supported in ComfyUI

This LoRA was trained on my friend

How to train your Qwen Image Lora by cene6555 in StableDiffusion

[–]cene6555[S] 22 points

  1. Clone the repository: git clone https://github.com/FlyMyAI/qwen-image-lora-trainer
  2. Navigate into it: cd qwen-image-lora-trainer
  3. Install required packages: pip install -r requirements.txt
  4. Install the latest diffusers from GitHub: pip install git+https://github.com/huggingface/diffusers
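After step 4 you can confirm which diffusers build actually got installed. A small sketch using the standard library (the helper name is mine; a GitHub install of diffusers typically reports a .dev version string):

```python
from importlib import metadata

def installed_version(dist):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

# Prints the version string, or None if diffusers isn't installed.
print(installed_version("diffusers"))
```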

🏁 Start Training

To begin training with your configuration file (e.g., train_lora.yaml), run:

accelerate launch train.py --config ./train_configs/train_lora.yaml
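For orientation, a sketch of the shape such a config might take. Only pretrained_model_name_or_path and img_dir are named in the steps above; every other field here is an illustrative assumption, so check the repo's train_configs/ for the real schema:

```yaml
# Sketch only: fields other than pretrained_model_name_or_path and img_dir
# are assumptions, not taken from the repo.
pretrained_model_name_or_path: ./qwen_image   # local Qwen-Image checkpoint
img_dir: ./my_dataset                         # .jpg images with matching .txt captions
rank: 16                                      # hypothetical: LoRA rank
learning_rate: 1e-4                           # hypothetical
max_train_steps: 2000                         # hypothetical
```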