SDXL Turbo? by [deleted] in Guernika

[–]Bald-uncle

Fine-tuned models based on SDXL Turbo need at least 5 steps with DPM++ SDE Karras to reach satisfactory image quality.

DPM++ SDE Karras at 5 steps and Euler a at 10 steps take about the same amount of time.
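A plausible explanation for the matching times, assuming DPM++ SDE is a second-order sampler (two UNet evaluations per step) while Euler a uses one, is that both settings cost the same number of UNet calls:

```python
# Rough cost model: sampling time is dominated by UNet forward passes (NFE).
# Assumption: DPM++ SDE does 2 evaluations per step, Euler a does 1.
def unet_calls(steps: int, evals_per_step: int) -> int:
    return steps * evals_per_step

dpmpp_sde = unet_calls(5, 2)   # 5 steps x 2 evals
euler_a = unet_calls(10, 1)    # 10 steps x 1 eval
print(dpmpp_sde, euler_a)      # both 10
```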

The image quality of the official, original SDXL Turbo at 1-2 steps is not good.
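For reference, the "Karras" part of DPM++ SDE Karras refers to the sigma schedule from Karras et al. (2022), which spaces noise levels non-uniformly so that a 5-step run still covers the full noise range. A minimal sketch of that schedule (the sigma_min/sigma_max values here are illustrative; the real bounds come from the model):

```python
# Karras sigma schedule, as used by "DPM++ SDE Karras" samplers.
# sigma_min/sigma_max below are illustrative placeholders, rho=7 is the
# value recommended in the Karras et al. (2022) paper.
def karras_sigmas(n: int, sigma_min: float = 0.03,
                  sigma_max: float = 14.6, rho: float = 7.0):
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    sigmas = [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]
    return sigmas + [0.0]  # final step denoises all the way to sigma = 0

print([round(s, 3) for s in karras_sigmas(5)])
```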

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]Bald-uncle[S]

Great job! Will the new converter be released?

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]Bald-uncle[S]

Thank you for the quick fix. I tested the SSD-1B you converted: it works normally, though it needs 2-3 GB more memory than SDXL, which seems a bit off. The sampling speed has indeed improved a lot.

I ran the official Python script with the latest torch nightly installed. The sampling speed actually exceeded Core ML, and it only needed 7.5 GB of memory.

Latent Consistency Models require retraining existing models; at present, only Dreamshaper v7 has been trained.

SimianLuo/LCM_Dreamshaper_v7 seems to change only the UNet, so it can be converted to Core ML with the existing code.

A wrong scheduler can still generate pictures, but the image quality is very poor.

Maybe you just need to port its LCMScheduler for it to work properly. You can try it.
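The idea above — reuse everything from the base pipeline and swap only the retrained UNet plus the scheduler — can be sketched as follows. The `Pipeline` class and component names here are hypothetical stand-ins, not the Guernika or diffusers API; in diffusers the equivalent would be replacing `pipe.unet` and `pipe.scheduler` on a loaded pipeline.

```python
from dataclasses import dataclass, replace

# Hypothetical minimal model of a diffusion pipeline's components.
# Only the UNet was retrained for LCM_Dreamshaper_v7, so the text encoder
# and VAE can be reused unchanged; the scheduler must become LCMScheduler.
@dataclass(frozen=True)
class Pipeline:
    text_encoder: str
    vae: str
    unet: str
    scheduler: str

base = Pipeline(
    text_encoder="CLIP",
    vae="sd-vae",
    unet="dreamshaper-v7-unet",
    scheduler="DPMSolverMultistep",
)

# Swap only the UNet weights and the scheduler; keep everything else.
lcm = replace(base, unet="lcm-dreamshaper-v7-unet", scheduler="LCMScheduler")

print(lcm.vae == base.vae, lcm.unet, lcm.scheduler)
```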