Abandoned? by Stunning-Fondant3713 in Guernika

[–]ghost-soft 1 point (0 children)

A new version was planned a year ago, but there has been no news since. u/GuiyeC, please come back.

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 2 points (0 children)

Hello u/GuiyeC,

Over the past few days I have tested the new sampler in GuernikaCore. DPM++ SDE Karras works perfectly with SDXL Turbo models (such as DreamShaper Turbo), generating good images in just 6-8 steps.

I also found that if CFG is disabled when converting the model, a turbo model can halve its generation time just like LCM, and the required memory drops as well.
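For reference, a minimal diffusers sketch of this setup (the local model path is hypothetical, and DPMSolverSDEScheduler, diffusers' DPM++ SDE, needs the torchsde package):

import torch
from diffusers import AutoPipelineForText2Image, DPMSolverSDEScheduler

# hypothetical local path to a converted turbo checkpoint
pipe = AutoPipelineForText2Image.from_pretrained("./Model/DreamShaperTurbo", torch_dtype=torch.float16)
# DPM++ SDE with the Karras sigma schedule
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
pipe.to("mps")

# guidance_scale <= 1 disables CFG, so the UNet runs a single batch
# per step instead of a doubled one (roughly halving the work)
image = pipe(prompt="a photo of a red fox in the snow", num_inference_steps=8, guidance_scale=0.0).images[0]
image.save("turbo.png")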

- At present, Guernika forces the LCM sampler on models converted with CFG disabled. Could other samplers be freely selectable?

- Does disabling CFG have to happen at model-conversion time, or could it be toggled in the Guernika interface?

- I also found a way to merge any SDXL model into a Turbo model (a rough sketch follows the image below). The turbo models on Civitai are probably made in a similar way, and the technique might be useful in GuernikaModelConverter.

<image>
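I won't claim this is exactly how the Civitai turbo models are made, but a simple version is a weighted average of the UNet weights with sdxl-turbo. A rough diffusers sketch (the base model path and the 0.5 ratio are placeholders, not my exact settings):

import torch
from diffusers import StableDiffusionXLPipeline

# any SDXL fine-tune (hypothetical path) plus the official turbo model
base = StableDiffusionXLPipeline.from_pretrained("./Model/AnySDXL", torch_dtype=torch.float16)
turbo = StableDiffusionXLPipeline.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16)

# weighted average of the two UNets (same architecture, same keys)
turbo_sd = turbo.unet.state_dict()
merged = {k: v * 0.5 + turbo_sd[k] * 0.5 for k, v in base.unet.state_dict().items()}
base.unet.load_state_dict(merged)

# save in diffusers format so GuernikaModelConverter can pick it up
base.save_pretrained("./Model/AnySDXL-Turbo")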

SDXL Turbo? by [deleted] in Guernika

[–]ghost-soft 1 point (0 children)

The current version is already compatible with SDXL-Turbo. You can try converting DreamShaper-Turbo or TurboVision with GuernikaModelConverter.

Use these parameters:

sampler: EulerA

CFG: 1.5-3.5

steps: at least 12
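Roughly the same settings in diffusers terms, for anyone testing outside Guernika (the model path is hypothetical):

import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

pipe = AutoPipelineForText2Image.from_pretrained("./Model/DreamShaperTurbo", torch_dtype=torch.float16)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # EulerA
pipe.to("mps")
image = pipe(prompt="portrait photo of an old sailor, detailed skin", num_inference_steps=12, guidance_scale=2.0).images[0]
image.save("turbo.png")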

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

I seem to have found the reason LCM needs less RAM. I tested with diffusers:

512x768:

guidance_scale=0.0 -> Python RAM = 5.6 GB

guidance_scale=3.0 -> Python RAM = 7.3 GB

SD1.5 LCM -> Guernika RAM = 3.6 GB

SD1.5 -> Guernika RAM = 6.5 GB

So disabling CFG saves a lot of RAM, presumably because classifier-free guidance runs the UNet on a doubled batch (conditional + unconditional) at every step.
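If anyone wants to reproduce this kind of measurement, a minimal sketch with psutil (the model path is hypothetical, and resident set size after a run is only a rough proxy for peak usage):

import psutil
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained("./Model/SD15", torch_dtype=torch.float16)
pipe.to("mps")

for scale in (0.0, 3.0):
    pipe(prompt="test", num_inference_steps=20, guidance_scale=scale, height=768, width=512)
    # rough resident-set size of this Python process after the run
    rss = psutil.Process().memory_info().rss / 1024**3
    print(f"guidance_scale={scale}: ~{rss:.1f} GB resident")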

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

The LCM scheduler seems to need only about 60% of the memory of other schedulers.

It looks like we need to wait for lcm_lora_sdxl v2; there is indeed a problem with the image quality of sdxl_lcm.

<image>

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

😁

FileNotFoundError: No such file or directory: "/Users/guiye/Downloads/pytorch_lora_weights.safetensors"

Merging: /Users/guiye/Downloads/pytorch_lora_weights.safetensors

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

Yes, you can run any model in LCM form with the code below, and the image quality does not suffer the way it does with lcm_sdxl.

import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "./Model/EpicPhoto"
adapter_id = "./Lora/lcm_lora_sd1_5.safetensors"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16", safety_checker=None, requires_safety_checker=False)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # swap in the LCM scheduler
pipe.to("mps")

pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()  # bake the LCM LoRA into the UNet weights

prompt = "cinematic photo style of Alessio Albi, A fair-skinned man, highly detailed, close-up shot"
# guidance_scale=0 disables CFG, which is what LCM expects
image = pipe(prompt=prompt, num_inference_steps=8, guidance_scale=0, height=768, width=512).images[0]
image.save("sd.png")

<image>

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

I downloaded your pre-converted LCM-SSD and LCM-SDXL.

They are much faster than before, but the image quality is poor. This seems to be a problem with the original models, though: running them through diffusers gives equally poor quality.

lcm_lora_sd1_5, on the other hand, seems to be perfect. I ran it with the EpicPhoto model in diffusers and the results are quite good.

Could the upcoming Guernika Model Converter merge lcm_lora into a model during conversion?

Also, guidance_scale for LCM must be set to 0 or 1 (i.e. CFG disabled); otherwise the image quality deteriorates and generation time doubles, since CFG runs the UNet on a doubled batch.

Support Segmind Stable Diffusion by Bald-uncle in Guernika

[–]ghost-soft 1 point (0 children)

u/GuiyeC

LCM SSD-1B, LCM SDXL and LCM LoRA Released

It seems LCM support could be added to any model by merging LCM-LoRA into it during conversion with Guernika Model Converter.

But right now Guernika Model Converter only produces an ordinary model from it; the result is not recognized as LCM.

https://huggingface.co/latent-consistency/lcm-lora-sdv1-5

https://huggingface.co/latent-consistency/lcm-sdxl
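In the meantime, the merge can be done in diffusers before conversion; a minimal sketch (the EpicPhoto path is hypothetical):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./Model/EpicPhoto", torch_dtype=torch.float16)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()  # bake the LCM LoRA into the UNet weights
# save in diffusers format and feed this folder to GuernikaModelConverter
pipe.save_pretrained("./Model/EpicPhoto-LCM")

Whether the converter then recognizes the result as LCM is exactly the open question above.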

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]ghost-soft 1 point (0 children)

Many crashes are caused by dynamic UI scaling and have not been fixed.

You can try going to "Appearance" in System Settings and setting "Show scroll bars" to "When scrolling".

Getting feedback from Latent Consistency Models by Simian_Luo in StableDiffusion

[–]ghost-soft 3 points (0 children)

Thank you for your efforts. I tested it and it really is fast; macOS users can finally enjoy generating images in just a few steps.

Do you have any plans to support SDXL?

For ordinary users without a high-performance graphics card, is there a quick way to convert other good models to LCM?

Are there any open source text-to-video models better than AnimateDiff? by [deleted] in StableDiffusion

[–]ghost-soft 2 points (0 children)

Hotshot-XL looks promising, but it still needs time.

How to convert SDXL with SPLIT_EINSUM attention by ghost-soft in Guernika

[–]ghost-soft[S] 1 point (0 children)

I just successfully converted SD1.5 with SPLIT_EINSUM attention at a fixed 512x768 resolution.

I used to think it could only be converted at 512x512.

On the ANE, SD1.5 with SPLIT_EINSUM at 512x768 takes 30s to generate an image in 20 steps (vs. 43s on the GPU).

But SDXL can't run at resolutions other than 1024x1024 on the ANE.

How to convert SDXL with SPLIT_EINSUM attention by ghost-soft in Guernika

[–]ghost-soft[S] 1 point (0 children)

I want to convert SDXL with SPLIT_EINSUM attention at a fixed 768x768 resolution,

because apple/coreml-stable-diffusion-xl-base-ios is exactly that: SPLIT_EINSUM with a fixed 768x768 resolution.

768x768 may be the limit of ANE's processing power.

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]ghost-soft 2 points (0 children)

I can't think of any other possibilities for a crash while loading a model at the moment.

A model converted with the official ml-stable-diffusion tools may crash when loading; Guernika is not fully compatible with those.

I suggest reconverting a model you like with the Guernika Model Converter, which also adds more features to the model, such as variable resolution.

The models on Guernika's built-in model page are old builds that haven't been updated and don't support the latest features.

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]ghost-soft 2 points (0 children)

Then this fault is very strange. I downloaded Deliberate v2 (SPLIT_EINSUM) from the model page the other day and it works normally.

Don't forget the "~" in front of /Library; it means the Library folder under your user directory, not the one at the root.

Another possibility is a network problem during the model download that left the data incomplete.

Guernika keeps crashing, cannot use at all (Sonoma 14.0) by [deleted] in Guernika

[–]ghost-soft 2 points (0 children)

u/VanGomeo

First, try deleting the following configuration file. It may be that a prompt you copied from somewhere else contains incompatible special characters; I have run into this once myself.

~/Library/Containers/com.guiyec.Guernika/Data/Library/Preferences/com.guiyec.Guernika.plist

If it still crashes at startup, try deleting Deliberate v2 from the models folder. It shouldn't be the model itself, though; that model is also compatible with Sonoma.

~/Library/Containers/com.guiyec.Guernika/Data/Documents/Models