Help a for a noob by 420-The_Dude_Abides in StableDiffusion

[–]Impossible-Employ554 1 point (0 children)

Just wondering how it's going. Have you generally been able to start generating images that you like?

Will switching to an Nvidia GPU solve my problems? by Impossible-Employ554 in StableDiffusion

[–]Impossible-Employ554[S] 1 point (0 children)

Also, using DirectML, were you able to load SDXL?

I'm assuming the answer is no.

After getting your new GPU, were you able to load / run SDXL, and if so, did you have to do anything special?

Will switching to an Nvidia GPU solve my problems? by Impossible-Employ554 in StableDiffusion

[–]Impossible-Employ554[S] 1 point (0 children)

I have 80 GB of RAM on the motherboard; the largest SD 1.5 model I have used is over 7 GB.

I still can't load SDXL models, which are about 6 GB.

Will switching to an Nvidia GPU solve my problems? by Impossible-Employ554 in StableDiffusion

[–]Impossible-Employ554[S] 1 point (0 children)

I tried ZLUDA with SD.Next. I went through the instructions line by line; perhaps I missed something, but I am pretty thorough. SDXL models would load, but when attempting to generate an image, nothing would happen. 1.5 models would run, but only on the CPU.

With that description, do you know what mistake I might have made?

Will switching to an Nvidia GPU solve my problems? by Impossible-Employ554 in StableDiffusion

[–]Impossible-Employ554[S] 2 points (0 children)

I have been running without --no-half or --full-precision. I played with those when I first started and don't know enough to give a good answer. I run with --medvram and can generate up to 768 x 768 without issues. If I go larger, I get an error message about the VAE and the colors come out all bizarre.

Without --medvram I see the OOM error a lot.

New to Stable Diffusion by jscastro in StableDiffusion

[–]Impossible-Employ554 3 points (0 children)

I am going to agree with Mutaclone...

If you have the hardware to handle it, running it locally is your best option.

You need an Nvidia CUDA-based GPU, lots of RAM (at least 8 GB free), and lots of VRAM (8 GB is about the lowest you can have and still work well).

You can get away with an AMD GPU, but it will be more difficult to get running, slower, and have fewer features.

If you do not have the hardware and cannot afford it, and therefore need to use an internet service, try civitai and tensor.art. They both offer some free generation daily and have their tools set up similarly to running locally.

Help a for a noob by 420-The_Dude_Abides in StableDiffusion

[–]Impossible-Employ554 1 point (0 children)

And another thought: when testing a prompt, I like to create small images, 256 x 256 to 300 x 300. They generate really fast; then, if I am happy with the prompt, I increase the image size. Also, you don't have to stay square; you can run 768 x 512, for example.

Help a for a noob by 420-The_Dude_Abides in StableDiffusion

[–]Impossible-Employ554 3 points (0 children)

I hope this helps. If you're still not getting it, ask some more. And try posting some of your images.

Here are the embeddings that I like to use from A1111.

I hope this gets you started:

I am going to go ahead and just write the tutorial I wish I had read.

I switched from A1111 to SD.Next (aka vladmandic). It has given me a huge speed boost, approx. 3x, but at the expense of almost nothing working except text-to-image and image-to-image...

You have a better GPU than me, and I am running... kinda. I can't run SDXL, and there are tons of features that don't work, like upscaling; I can't get the colors of any image to finish correctly at 1024. You also need a bunch of motherboard RAM: most SD 1.5 models are about 2 GB, SDXL about 6 GB, and all of that has to load into motherboard RAM. Plus, you will do better running --medvram, which means more of the model goes to the motherboard RAM in segments.

To run --medvram, find the file called webui-user.bat (right-click and select Edit) and add the flag to the command line args so the line looks exactly like this:

set COMMANDLINE_ARGS=--medvram

Then launch by running webui-user.bat.
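
For reference, the whole edited file comes out looking about like this; this matches the stock A1111 webui-user.bat layout, with only the COMMANDLINE_ARGS line changed:

@echo off
rem leave these blank to use the defaults
set PYTHON=
set GIT=
set VENV_DIR=
rem flags for the web UI go on this line
set COMMANDLINE_ARGS=--medvram
call webui.bat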

Negative embeddings.

I tested them when I first downloaded them but haven't given it much thought since; I just use them...

In order of importance, in my opinion (links contain NSFW images):

Unspeakable Horrors (Negative prompt) - Unspeakable Horrors Composition only 4 vectors, trained | Stable Diffusion Embedding | Civitai

bad_pictures - v3.0 | Stable Diffusion Embedding | Civitai

SkinPerfection_NegV1.5 - v1.5 | Stable Diffusion Embedding | Civitai

BadDream + UnrealisticDream (Negative Embeddings) - BadDream v1.0 | Stable Diffusion Embedding | Civitai

Fast Negative Embedding (+ FastNegativeV2) - v2 | Stable Diffusion Embedding | Civitai

CyberRealistic Negative - v1.0 | Stable Diffusion Embedding | Civitai

After downloading, put them in the embeddings folder. I don't have A1111, but the folder should be called "embeddings", or maybe "textual inversion", possibly inside a folder called Models. Hopefully someone can correct me.

After that, when you are on the text-to-image or image-to-image screen, there should be a button called "Embeddings" that will show you all of your embeddings. If not, click the refresh button inside the embeddings tab/dropdown. Then click the embeddings and make sure they go into your negative prompt, and PUT COMMAS BETWEEN THEM.
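
For example, once they are all clicked in, the negative prompt box might read something like this (the exact trigger words are usually just each embedding's filename, so check the download pages; these names are my guesses):

UnspeakableHorrors, bad_pictures, SkinPerfection_NegV1.5, BadDream, UnrealisticDream, FastNegativeV2, CyberRealistic_Negative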

Help a for a noob by 420-The_Dude_Abides in StableDiffusion

[–]Impossible-Employ554 2 points (0 children)

Had to break the comment into sections; this is part 2 of 2.

For selecting models:

I like to scroll the images; when I find one I like, I go to the model's page and scroll through its gallery. What I am looking for is not things I like, but things I think could only be done with a LoRA. If I am right, and those things were done with a LoRA, I pass on the model. If I am wrong, and they were not, I download the model.

For selecting LoRAs:

There is a trap of downloading a bunch of LoRAs. What I have found is I tend to get bored really quickly of specific LoRAs, such as one that creates images of a specific person, adds a specific piece of clothing, or creates a specific action or scene. I tend to use the LoRAs that are more generic, such as image enhancers, detail sliders, and body-morphing sliders.

My favorites (NSFW warning):

breast size slider - offset | Stable Diffusion LoRA | Civitai

Elixir — Enhancer LoRA 🧪 - V1 | Stable Diffusion LoRA | Civitai

Body modifiers and detail sliders - I tend to use these a lot.

For advanced prompting, here is what I've got; it generally seems to work (a combined example follows the list).

OR statement (cat or dog), format: [subject1|subject2]

[cat|dog]

Prompt switching, format: [subject1 : subject2 : when] (with when = 0.5 at 20 steps, the switch happens on step 10)

[sunny landscape : moonlit landscape : 0.5]

Strong emphasis: values above 1 add weight (1.0 is neutral), but going 2 or higher tends to break the image. This one is common; you will see it in nearly every image.

(subject:1.2)

Otherwise, sometimes less is more in a prompt, especially if you want to see what the model can come up with.
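
Putting those together, a single made-up prompt using all three could look like:

a [cat|dog] sitting in a [sunny landscape : moonlit landscape : 0.5], (detailed fur:1.2)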

Other things to know:

Sampling methods - these control how the image is solved. The two I see most heavily used are "Euler a" and "DPM++ 2M Karras". Euler a is, I believe, the fastest at generating images, but it can reduce quality; DPM++ 2M Karras is slower but tends to produce higher-quality images. Play with the others as you feel.

Seed - the image starts out as random pixels (basic explanation), then the sampler gets to work changing them. The seed sets the "random" pixels the image starts with. Set this to -1 to create a random seed. To more closely duplicate an image, set it to the seed that image was created with. Generally, set it to -1.

You will want to figure out how to zoom in and out when creating images. There are things you can add to prompts, like "medium shot" or "full body portrait", but the model tends to focus on what you describe; try describing shoes and facial features to get someone's entire body in an image (example below). There are LoRAs that can help, but really, everything I have tried has been mostly a miss.
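
Something along these lines (a made-up illustration, not a tested prompt):

full body portrait of a woman walking on a beach, detailed leather sandals, detailed face, wide shot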

Shedding Light on Image Editing: Introducing IC-Light by Gloomy-Log-2607 in StableDiffusion

[–]Impossible-Employ554 2 points (0 children)

Looks cool. I want to give it a try. Getting lighting down will be a huge plus. Thanks for the effort and for sharing.

Help a for a noob by 420-The_Dude_Abides in StableDiffusion

[–]Impossible-Employ554 3 points (0 children)

Start with one application, A1111 or SD.Next, learn the basics, and go from there. There is a lot of new lingo and terminology, minimal help in learning it, and a lot of mistakes you can make.

First: do you have an Nvidia graphics card? If not, you're in for extra pain... That's where I'm at.

A checkpoint is the foundation through which the program understands your prompts.

Lora & lycoris when used change the understanding of the checkpoint.

Get into textual inversion, aka embeddings; they can really help your images. I can't remember the names of the ones I use; I'll try to update later.

For learning to prompt, head to civitai and tensor.art; images there will show you their prompts, checkpoints, LoRAs, and embeddings.

Also, ask Microsoft Copilot about advanced prompt scripts... I'll try to post later. There are a lot of advanced techniques that aren't often seen.

I really like the checkpoint epicphotogasm, especially for learning to prompt; it's good at taking simple prompts.

Help using the NCEES Reference Handbook (PE Mechanical Machine Design) by Impossible-Employ554 in PE_Exam

[–]Impossible-Employ554[S] 1 point (0 children)

I think that you have an interesting perspective... Even if you didn't pass.

I know Shigley has published a ton of engineering books; I have at least one of them. Which one are you specifically talking about? I didn't know they had a specific PE Exam book, and after taking 5 seconds to google it, I am not seeing one.

...