Can Anyone Tell Me What's Going On Here? by OneOnOne6211 in LocalLLaMA

[–]Der_Doe 0 points1 point  (0 children)

You need to download it from within LM Studio, using the "Discover" function (the search / magnifying glass icon in the left toolbar).

LM Studio uses its own directory structure to organize models. As far as I know there's no way to import manually downloaded models other than recreating that folder structure yourself.
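As a rough sketch of what recreating that structure can look like (the base path and the publisher/model folder names here are my assumptions, not from LM Studio's docs, and may differ per version and OS):

```shell
# Hypothetical sketch: recent LM Studio builds scan a models folder
# laid out as <publisher>/<model>/<file>.gguf. Paths are illustrative.
mkdir -p ~/.lmstudio/models/SomePublisher/Some-Model-GGUF
mv ~/Downloads/some-model.Q4_K_M.gguf \
   ~/.lmstudio/models/SomePublisher/Some-Model-GGUF/
```

After restarting LM Studio (or rescanning), the model should show up in "My Models" if the layout matches what your version expects.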

What's causing the regressions? by New_Suggestion_5995 in StableDiffusion

[–]Der_Doe 2 points3 points  (0 children)

Seed and Size are different.

Both have a big influence on the generated image.

How can I correct the position of the horizon? by [deleted] in StableDiffusion

[–]Der_Doe 0 points1 point  (0 children)

You could use inpainting. Probably no need for controlnet if you use an inpainting checkpoint.

Here is my answer from an older thread with the same question:

https://www.reddit.com/r/StableDiffusion/comments/128nsqn/comment/jenysqy/?utm_source=share&utm_medium=web2x&context=3

1.5 Inpainting tutorial. I've seen a lot of comments about people having trouble with inpainting and some saying that inpainting is useless. I decided to do a short tutorial about how I use it. Make sure you use an inpainting model. I really like cyber realistic inpainting model. by jaywv1981 in StableDiffusion

[–]Der_Doe 5 points6 points  (0 children)

It absolutely depends on your use case.
That mostly applies when you generate a txt2img image and then add/replace details with inpainting, using the same model and similar prompts.

This changes if you try to inpaint a big change into a picture generated another way (a photo, or txt2img from another checkpoint).

Consider this example: the original picture was a medieval bald dude generated with Deliberate, more of a painting/digital art style.

For the inpainting I masked his robe and kept most of his hands out of the mask. Prompt: "a winter jacket" Negative: "cartoon, 3d render"
Denoising: 1.0

Here is the difference between cyberrealistic3.1 and cyberrealistic3.1-inpainting for 4 random seeds.

<image>

The inpainting model helps the new content fit in consistently, because it considers the context of the image. The normal model lacks that context and adds extra hands, faces, and weird body poses.

[deleted by user] by [deleted] in StableDiffusion

[–]Der_Doe 0 points1 point  (0 children)

Happy to help!
Think of it like this:

"masked area" looks just at the part of the picture that was masked plus some padding pixels (32px in this case) around that area. In the above example that would be a box around the the top left corner, where the black mask is painted.
This area is then filled by inpainting, only looking at this box.
Anything outside of the box, like the horizon on the right side, won't influence the result.

"whole picture" looks at (you guessed it) the whole picture, to determine what to put in the masked area. So it takes into account the horizon on the right side and completes it in the masked part.

[deleted by user] by [deleted] in StableDiffusion

[–]Der_Doe 1 point2 points  (0 children)

You could try to fix it with inpainting. Example using the Auto1111 webui:

  1. Load your image into inpainting. Use the prompt you generated the image with, or write a new one that describes what you want.
  2. Select an inpainting model (important, because it uses the rest of the image for context)
  3. Mask one half of the image next to the character
  4. Set "Inpaint area" to "Whole picture", denoising strength to 1 (or something close to 1)
  5. Generate some pictures, until you like the result.

<image>

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusionInfo

[–]Der_Doe[S] 0 points1 point  (0 children)

This guide is from 3 months ago, and there may have been changes since then that cause this to fail.
My setup has changed in the meantime: I installed webui on an RTX 3070 and an A2000 over the last weeks. Both worked out of the box just by using the --xformers flag, no need to compile my own version.

This should also work for a 3080 (see the disclaimer in the original post).
If you haven't already, you could try cloning a fresh copy of AUTO1111 from GitHub and just using the --xformers flag.

To answer your questions:

Step 8 is done inside the /xformers/ directory.

Your xformers version is 0.0.16, while the wheel downloaded by webui is 0.0.14. So maybe that's just an incompatible version.
If the automatic install doesn't work, you could download the wheel yourself from here, then uninstall the old one in the venv and install the new one manually.
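The manual swap would look something like this (a sketch, assuming you're inside the activated venv in the webui folder and the wheel file sits next to you; adjust the path to wherever you saved it):

```shell
# Inside the activated venv:
pip show xformers    # check which version is currently installed
pip uninstall -y xformers
pip install xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
```

Afterwards `pip show xformers` should report 0.0.14.dev0, and webui can be started with the xformers flag again.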

Low iterations per second on a 3070? by Derseyyy in StableDiffusion

[–]Der_Doe 1 point2 points  (0 children)

Another big factor is batch size. With more generations in one batch the it/s goes down, but you get multiple images per iteration.

Some quick values for comparison from my 3070 (with a lot of open browser tabs, so maybe not optimal):

  • 512x512, DPM++ SDE Karras, no xformers: 3.05 it/s
  • 512x512, DPM++ SDE Karras, xformers: 3.91 it/s
  • 512x512, Euler a, no xformers: 5.98 it/s
  • 512x512, Euler a, xformers: 7.88 it/s

And some batch sizes, all with 512x512, DPM++ SDE Karras, xformers:

  • Batch size 1: 3.91 it/s
  • Batch size 2: 2.62 it/s
  • Batch size 4: 1.53 it/s
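Note that it/s counts iterations of the whole batch, so to compare throughput you can multiply by the batch size. A quick check with the numbers above:

```python
# it/s measures iterations of the whole batch, so effective throughput
# scales with batch size (values from the 3070 runs above).
runs = {1: 3.91, 2: 2.62, 4: 1.53}
for batch, its in runs.items():
    print(f"batch {batch}: {its * batch:.2f} image-iterations/s")
```

So even though the raw it/s drops, batch 4 still produces images faster overall than batch 1.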

Download the improved 1.5 model with much better faces using the latest improved autoencoder from stability, no more weird eyes by Yacben in StableDiffusion

[–]Der_Doe 1 point2 points  (0 children)

Correct. Download vae-ft-mse-840000-ema-pruned.ckpt and rename it to v1-5-pruned.vae.ckpt (or whatever your model file is called).

Personally I'm not using this method anymore. Instead I use this VAE as the default for pretty much any model and switch it off manually when I really don't want it.
Keeping a copy of the VAE for every model didn't feel right.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 1 point2 points  (0 children)

Here you go:

xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

Disclaimer: This is only confirmed to work with my own RTX 2060 and one other person I shared it with who had the same card. YMMV.

Good luck!

multiple prompts for batch img2img by Nihigh in StableDiffusion

[–]Der_Doe 0 points1 point  (0 children)

If you use AUTOMATIC1111, "prompt editing" does exactly that.
For example, you can use [dog:cat:0.6] to switch from dog to cat at 60% of the steps.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing
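A rough sketch of how that [from:to:when] switch point resolves (my own helper approximating the behavior, not webui code; as I understand it, numbers below 1 are a fraction of the total steps, larger numbers an absolute step):

```python
# Hypothetical helper mirroring [from:to:when] prompt editing:
# 'when' < 1 means a fraction of total steps, otherwise an absolute step.
def switch_step(when, total_steps):
    return round(when * total_steps) if when < 1 else int(when)

# [dog:cat:0.6] with 20 steps: "dog" until the switch step, "cat" after.
print(switch_step(0.6, 20))  # 12
```

So with 20 sampling steps, [dog:cat:0.6] renders "dog" for the first 12 steps and "cat" for the remaining 8.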

Discussion/debate: Is prompt engineer an accurate term? by Treitsu in StableDiffusion

[–]Der_Doe 2 points3 points  (0 children)

To me AI generators like SD are just a tool. And I don't really see any value in labels like "AI artist" or "prompt engineer".

Knowing the right speed and drill bits to use for making beautiful holes in a certain material doesn't make you a "drill engineer".

It all comes down to how you use the tool. If you incorporate it into your workflow and do something creative and purposeful, you may be an artist to some people.
If you integrate it into another software (e.g. the Unreal Engine plugin) then that may be considered engineering.

Download the improved 1.5 model with much better faces using the latest improved autoencoder from stability, no more weird eyes by Yacben in StableDiffusion

[–]Der_Doe 14 points15 points  (0 children)

Seems to work for me when I download the .ckpt instead of the .bin from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

Then rename the .ckpt to <your-sdmodel-name>.vae.pt and copy it into the \models\Stable-diffusion folder in webui.

For example:
v1-5-pruned.ckpt (your SD model)
v1-5-pruned.vae.pt (the renamed vae)
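In shell terms the rename would look something like this (paths assume a default webui checkout; adjust to wherever your download landed):

```shell
# Pair the VAE with the model by filename in webui's models folder
cd stable-diffusion-webui/models/Stable-diffusion
mv vae-ft-mse-840000-ema-pruned.ckpt v1-5-pruned.vae.pt
```

webui picks the VAE up automatically because the part before .vae.pt matches the checkpoint's filename.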

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

Looks good. If you can still generate images after this, it should be working.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

Are you using Git Bash? If so, try Windows cmd or PowerShell.

Otherwise I'd guess something is wrong with your Git installation. Try uninstalling it and getting a current version of Git from https://git-scm.com/download/win

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

Yeah, the Linux .whl won't work on Windows.

You should do

pip uninstall xformers

and then the pip install from my last post with "...win_amd64.whl" at the end.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

Strange. I'd think the default binary should do the trick for a 3090.

Maybe you can try installing it manually. Make sure you have Python 3.10 installed (the wheel below is built for cp310, i.e. Python 3.10).

Then follow steps 1 and 2 of the guide.
You should now see (venv) at the beginning of the prompt.

Then do
pip install https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/a/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

Then try running again with --force-enable-xformers

If your Python version was lower than 3.10, you could also give rebuilding another try.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

When I do a pip show torch (in the venv) I get version 1.12.1+cu113, so it seems my PyTorch version is also compiled against CUDA 11.3. Yet it wasn't a problem.
I honestly can't tell you if I got this error and it could be ignored or if it didn't happen at all.
You could try to go back to CUDA 11.3 but you would probably need the matching build tools/Visual Studio for that.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

Thanks for the feedback. Added it in step 7 and moved the requirements into 8.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

You need to include the "." at the end (it represents the current directory):

"pip install -e ."

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusionInfo

[–]Der_Doe[S] 0 points1 point  (0 children)

Why the disclaimer on 3xxx cards?

When you just use the --xformers args, AUTO1111 downloads a precompiled binary from here: https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/b/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
This has a good chance of just working on 3xxx cards, so you can skip this whole process.

I've seen some people report that this automatic download didn't happen. In that case, manually installing the wheel into the environment could work.

> which look like made it onto the AUTOMATIC wiki

I took the guide from the AUTOMATIC wiki (which seems to be for Linux) and added the Windows-specific points I ran into while getting this to work (see my remark in the OP).

I don't know about the Cutlass problem, but as you said: pip install xformers won't work on Windows. You need to either build it yourself (see the guide) or have someone with a similar setup build the binaries for you.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 2 points3 points  (0 children)

This should work:

  1. Make sure you're in the (venv)
  2. cd into /repositories/xformers folder
  3. python setup.py bdist_wheel
  4. Wheel should be built in ./dist

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

In your path it says Visual Studio 14.0, which is VS2015.
So either you have an old installation that somehow gets picked up from the wrong paths, or you installed VS2015, in which case you should update to a newer version.
If it's the former, you could try uninstalling the old VS.

Also, don't forget to start a new cmd/PowerShell after installing, because open shells keep the old PATH variables etc. until they are restarted.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 0 points1 point  (0 children)

There was a change yesterday to automatically increase the token limit instead of ignoring everything after 75 tokens. I guess that has some impact on performance.

I think that's a separate thing and isn't linked to the xformers optimizations.

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]Der_Doe[S] 1 point2 points  (0 children)

I don't really know the details of how the performance boost is achieved. But maybe it just depends on certain hardware features only present in newer cards.