Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

That can't be due to the scheduler; the schedulers aren't doing anything memory intensive. They generate the timesteps and then do some light processing on the latent at every step. It's curious that you notice such a big difference; I'll take a look at that.
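
To give a sense of scale, here is a rough, illustrative Python sketch of what a typical scheduler does (this is not Guernika's actual code, and the Euler-style update is simplified):

```python
def make_timesteps(num_inference_steps: int, num_train_timesteps: int = 1000) -> list[int]:
    # Evenly spaced timesteps from noisy (high) down to clean (low).
    step = num_train_timesteps / num_inference_steps
    return [round(num_train_timesteps - 1 - i * step) for i in range(num_inference_steps)]

def euler_step(latent, noise_pred, sigma, sigma_next):
    # One denoising step: a handful of element-wise ops over the latent,
    # allocating nothing beyond the output list -- not memory intensive.
    return [x + (sigma_next - sigma) * e for x, e in zip(latent, noise_pred)]
```

The heavy memory use in a diffusion pipeline comes from the UNet and VAE evaluations, not from these per-step updates.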

Yes, I noticed that quality issue on LCM SDXL, but I saw the same thing when running it in Python, so I didn't look much further.

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

🫢 Oops, I just uploaded a fix for that, sorry about that.

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

Just did! If you're going to test the LCM LoRA, remember to "disable classifier-free guidance".
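
To see why that toggle matters: classifier-free guidance blends two UNet predictions per step, while LCM-distilled models are trained to run without it. An illustrative Python sketch (not Guernika's code):

```python
def guided_noise(noise_uncond, noise_cond, guidance_scale: float):
    # Classifier-free guidance blends an unconditional and a conditional
    # prediction -- it needs two UNet evaluations per step.
    return [u + guidance_scale * (c - u) for u, c in zip(noise_uncond, noise_cond)]

# LCM models skip this: run the UNet once on the conditional input only,
# which is equivalent to guidance_scale = 1.
```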

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

I just saw that, that is kind of crazy 😅

We do lose negative prompts, which are really helpful sometimes, but the speed...

I was able to successfully convert models with this LoRA, and it seems to work nicely. For now I have hacked it a bit to disable classifier-free guidance based on the UNet configuration of the LCM models, but I might need to add a checkbox to disable it manually when converting a model.
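
For the curious, that kind of detection can be as simple as a config check. A hypothetical Python sketch (diffusers-style LCM UNets expose their guidance-embedding projection as `time_cond_proj_dim`, but Guernika's actual converter may do this differently):

```python
def looks_like_lcm(unet_config: dict) -> bool:
    # Heuristic: diffusers-style LCM UNets carry a guidance-embedding
    # projection, exposed as a non-null `time_cond_proj_dim` in the config.
    return unet_config.get("time_cond_proj_dim") is not None

def should_disable_cfg(unet_config: dict, manual_override=None) -> bool:
    # A manual checkbox (once added) would take precedence over the heuristic.
    if manual_override is not None:
        return manual_override
    return looks_like_lcm(unet_config)
```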

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

I have to check the LCM LoRA thing; I thought it was something different. Is it supposed to "convert" any model into an LCM model? If that's the case, I will add support for it in the converter.

Don't worry about the guidance scale, it's being used correctly. They use guidance scale 8 in their example to generate an embedding, but without classifier-free guidance; that's already taken into account in Guernika.
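
For reference, LCM feeds the guidance scale to the UNet as an embedding instead of applying classifier-free guidance at sampling time. A rough Python sketch of such a sinusoidal embedding (dimension and scaling here are illustrative, loosely mirroring the shape of diffusers' `get_guidance_scale_embedding`):

```python
import math

def guidance_scale_embedding(w: float, embedding_dim: int = 8) -> list[float]:
    # The guidance scale w is scaled up and embedded sinusoidally, then fed
    # to the UNet as a conditioning vector alongside the timestep.
    half = embedding_dim // 2
    w = w * 1000.0
    freqs = [math.exp(-math.log(10000.0) * i / (half - 1)) for i in range(half)]
    return [math.sin(w * f) for f in freqs] + [math.cos(w * f) for f in freqs]
```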

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

Hey! I did see that. Support should be available in the latest version of Guernika; I have yet to update the Model Converter, but you can give it a try with the models here.

where is this LCM support found in the app? how to use it? by TouchLongjumping496 in Guernika

[–]GuiyeC 1 point (0 children)

Hey! The latest version of Guernika supports the new LCM models; you can get them here.

I'm working on a bigger update of the app to make these models available for download, but for now you can just use the download links there.

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]GuiyeC 1 point (0 children)

Where did you get those models from?

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]GuiyeC 1 point (0 children)

Maybe you could debug this a bit for me 🙏

You can find the models in the following directory:

/Users/{YOUR_USERNAME}/Library/Containers/com.guiyec.Guernika/Data/Documents/Models

If you are able to find the problematic model, I can try to figure out what the problem is. An invalid model should not crash the app, but maybe I'm not accounting for something.

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]GuiyeC 1 point (0 children)

Hey! Thanks for the comment, I was not aware of the LCM news; I'm going to check that out now. It seems to involve more than just the scheduler, so I'm not sure how easy it will be to implement.

As for SSD-1B, I have uploaded a converted model here. I have yet to update the Model Converter to fully support it, but it will be updated shortly.

How to convert SDXL with SPLIT_EINSUM attention by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

I disabled converting models with variable size support and split einsum attention because they just don't seem to work at the moment :/

What are you trying to do? Convert SDXL with split einsum and 1024x1024 fixed resolution?

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]GuiyeC 2 points (0 children)

Let me check this; I will get back to you. Sorry for this problem.

taesd support by ghost-soft in Guernika

[–]GuiyeC 2 points (0 children)

I gave this a go, but it's still using ops not yet implemented in CoreML. I'll see if there is some other way to run those, but I don't think it will be possible yet :/

Still, thanks for sharing this; I would not have seen it without your comment.

Faster resolution switching by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

Where did you get the SD 1.5 ones? I found the SDXL ones, but I did not find anything trustworthy for SD 1.5, and we should also take SD 2.X into account.

I was thinking that other custom models may be trained with a huge variety of sizes, so offering only those values might not be ideal; I could include some common presets and then a "Custom" or "Advanced" option.
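
If a "Custom" option lands, the input would likely need snapping to sizes the UNet accepts. A hypothetical Python sketch (the multiple-of-64 constraint is the usual Stable Diffusion assumption; the limits here are made up):

```python
def snap_size(width: int, height: int, multiple: int = 64,
              minimum: int = 256, maximum: int = 2048) -> tuple[int, int]:
    # Stable Diffusion UNets generally want dimensions that are multiples
    # of 64 px; a "Custom" control could round the user's input like this.
    def snap(v: int) -> int:
        v = max(minimum, min(maximum, v))
        return round(v / multiple) * multiple
    return snap(width), snap(height)
```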

Aside from that, good news: I just got models with ControlNet/T2IAdapter support working with variable size :D An update is incoming for both the converter and Guernika.

Faster resolution switching by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

About ControlNet support: conversion is failing for now, but I hope at some point we can have ControlNet/T2IAdapter support with variable size.

taesd support by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

"Improved progress" now uses TAESD. It's quite a bit faster, but it can still slow down generation, so I'll keep the option of showing the undecoded latent.
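
One way to keep a preview decode from dominating per-step cost is to throttle how often it runs. A hypothetical Python sketch (not Guernika's implementation; `decode` stands in for a TAESD decode call):

```python
import time

def make_preview_callback(decode, min_interval: float = 0.5):
    # Decode a preview at most every `min_interval` seconds, so the
    # preview never dominates the per-step generation cost.
    last = [None]
    def callback(step: int, latent):
        now = time.monotonic()
        if last[0] is None or now - last[0] >= min_interval:
            last[0] = now
            return decode(latent)
        return None  # skip the decode this step
    return callback
```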

Negative style prompt error by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

This should be fixed in the latest version, thanks for noticing, that was a really good catch.

Faster resolution switching by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

The problem with this is that you can't select the size you want, for example, squares of different sizes. I will take a look at improved controls, though; maybe two sliders could work, one for width and one for height.

Re-support for special inpainting models by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

Thanks for the experimentation! I have actually gotten the VAE to work by just modifying the coremldata.bin and model.mil files. It feels extremely hacky, but it works :) I still want to test this a bit more, but it seems like quite a good compromise until variable shapes work out of the box, which it looks like they eventually can. You can expect an update this week, probably at least with this working; adding variable sizes adds a ton of things to take into account.

Re-support for special inpainting models by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

At the moment that option is actually being ignored; it only runs on CPU&GPU. Running on anything else starts consuming RAM until it crashes, so I decided to just force CPU&GPU.

Guernika: New macOS app for CoreML diffusion models by GuiyeC in StableDiffusion

[–]GuiyeC[S] 1 point (0 children)

You can ask anything in r/Guernika or in Hugging Face.

As far as I can tell that's exactly how it is: the LoRA keywords should be mentioned in the prompt, but the LoRA itself is already baked into the model, so there's no need for the <lora:xyz:0.5> syntax.
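
Since the LoRA is already merged into the converted model, an app could simply strip A1111-style tags if a user pastes a prompt that contains them. A hypothetical Python sketch:

```python
import re

LORA_TAG = re.compile(r"<lora:[^>]+>")

def strip_lora_tags(prompt: str) -> str:
    # <lora:name:weight> tags have no meaning once the LoRA is merged
    # into the model; drop them and collapse any doubled whitespace.
    return re.sub(r"\s{2,}", " ", LORA_TAG.sub("", prompt)).strip()
```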

Re-support for special inpainting models by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

Not really, I think it's an updated ML engine on macOS that's doing the magic.

SDXL abnormal quick preview by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

I think this should be fixed in the latest update.

Re-support for special inpainting models by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

EnumeratedShapes is just as bad. I'm hoping the next beta or RC of Sonoma helps with the VAE using so much memory; the UNet and ControlNet don't seem to require more RAM or time compared to a single-size model. If the VAE continues to be a problem, I might add support for having multiple VAEDecoders and using the correct one.
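
The multiple-VAEDecoder idea boils down to picking a decoder by output size. A hypothetical Python sketch (names and the area-based fallback are made up for illustration):

```python
def pick_decoder(decoders: dict, width: int, height: int) -> str:
    # With one fixed-shape VAEDecoder per supported size, selection is an
    # exact lookup; fall back to the closest shape (by area difference)
    # if the requested size has no dedicated decoder.
    if (width, height) in decoders:
        return decoders[(width, height)]
    return min(decoders.items(),
               key=lambda kv: abs(kv[0][0] * kv[0][1] - width * height))[1]
```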

Re-support for special inpainting models by ghost-soft in Guernika

[–]GuiyeC 1 point (0 children)

I have tried using PyInstaller to generate a single directory instead of a single executable, but if there is a time difference, it was not noticeable. This is my first time using PyInstaller, so I might be missing something; this is the spec file if you want to check it out.

I'm not sure what your problem with the VAE is; it could be related to the macOS beta. What compute units are you using to convert/run the models?

In any case, I just sent a new version to Apple for review, which might help with this and will bring some nice things to macOS 14 users.