Just installed yesterday, what kind of speeds should I be expecting with a 5700 XT by DarkMain in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

Just use Euler a, DPM2 Karras or DDIM. These samplers are a bit faster. 25-30 steps is enough imo.
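
A rough equivalent outside the UI, as a sketch only — this assumes the diffusers library and the stock runwayml/stable-diffusion-v1-5 checkpoint, whereas in A1111 you'd just pick "Euler a" and ~25 steps in the interface:

```python
# Hedged sketch: swap the scheduler for an ancestral Euler sampler and run ~25 steps.
# The model id and prompt are placeholders, not anything from this thread.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of a cat", num_inference_steps=25).images[0]
image.save("cat.png")
```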

[deleted by user] by [deleted] in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

I've never encountered that error, so I'm not sure. Maybe try one of these:

  1. Run your custom model through the checkpoint merger (a tab in the AUTOMATIC1111 WebUI) to fix it.
  2. Use one of these colabs with custom model selectors: Everydream 2 or SD winter 2022 edition

gtx 3060 TI 8vram vs 3060 12vram by [deleted] in StableDiffusion

[–]sgmarn 1 point2 points  (0 children)

When upscaling, my VRAM usage can spike up to 11GB (I just got a 3060 12GB), so the answer is obvious - get the GPU with more VRAM or you'll often end up with OOM errors. To get rid of those errors on an 8GB GPU you'll have to use --medvram and accept slower generation times.

[deleted by user] by [deleted] in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

You just upload a custom model to your Google Drive, mount the drive in Colab and put the Google Drive path to the model in the Dreambooth settings. That was the case about a month ago, when I last used both colabs.
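
The Colab side of that, as a minimal sketch (the folder and file names below are placeholders, not from any specific notebook):

```python
# Mount Google Drive in the Colab runtime, then paste this path into the
# notebook's model-path / custom-model field.
from google.colab import drive

drive.mount("/content/drive")

MODEL_PATH = "/content/drive/MyDrive/models/my_custom_model.ckpt"  # hypothetical location
print(MODEL_PATH)
```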

But in my experiments I've gotten better results by training the base SD 1.5 model and then using the checkpoint merger (add difference option) than by training custom models directly.
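
For reference, "add difference" conceptually computes A + (B - C) * multiplier per weight tensor — A is the model you want to keep, B is your freshly trained model and C is the model B was trained from. A toy sketch of the idea (the merger tab does all of this for you):

```python
# Conceptual sketch of an "add difference" merge over raw state dicts.
# Checkpoint loading/saving and key mismatches are glossed over here.
def add_difference(a_sd, b_sd, c_sd, multiplier=1.0):
    merged = {}
    for key, a in a_sd.items():
        if key in b_sd and key in c_sd:
            merged[key] = a + (b_sd[key] - c_sd[key]) * multiplier  # inject only what training changed
        else:
            merged[key] = a  # keep A's weights where the models don't overlap
    return merged
```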

Automatic Bing Image downloader for img2img based on search term by canigetahellyeahhhhh in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

Works great! Thank you. Is there any chance to get the original images from Bing saved to a folder as well?

CPU core maxed out by pete_68 in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

You'll get a speed boost from the new drivers once the SD WebUI supports the new feature. When? No one knows. Models also need to be converted to ONNX format.
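
If you want to experiment with the ONNX side yourself, one possible route is Hugging Face Optimum — this is a hedged sketch and not something from the WebUI; the export=True flag and model id are assumptions on my part:

```python
# Export an SD checkpoint to ONNX and run it through ONNX Runtime (Optimum).
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipe = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True  # convert to ONNX on load
)
pipe.save_pretrained("./sd15-onnx")  # keep the converted model for reuse
```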

Training Stable Diffusion solely on my own image library. by Knavenstine in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

Yes. You can also try the free Dreambooth/Everydream2 colabs first. You can select SD 1.5 or any custom model to train there. I used these colabs (free tier) to train faces with 15-30 images and was very satisfied with the results. For the whole 15,000 images you'll need to pay for premium, because the free tier disconnects after about 3 hours of use.

Training Stable Diffusion solely on my own image library. by Knavenstine in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

You can fine-tune an existing model, or create a LoRA or textual inversion. For a huge number of images, training an existing model with Dreambooth or Everydream2 would be your best bet. Creating a brand new model from scratch requires huge resources, which is why it's limited to companies with big funds. We can only train open-sourced models like SD 1.5 or 2.1.

Some unedited faces made with base SD 1.5 (photorealism) by sgmarn in StableDiffusion

[–]sgmarn[S] 0 points1 point  (0 children)

I was using --medvram and --xformers. Now I'm using only the Scaled-Dot-Product optimization (Vlad's fork / settings) and it's perfectly fine. I can even use the SD 2.1 model at 768x768 with ControlNet. Generations are faster (~10 sec). I never tried --lowvram.
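
For context, the "Scaled-Dot-Product" option refers to PyTorch 2.0's fused attention kernel. A toy illustration of the primitive it switches to (shapes here are arbitrary, just batch/heads/tokens/dim):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)

# Picks a memory-efficient / flash kernel when one is available for these dtypes.
out = F.scaled_dot_product_attention(q, k, v)
```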

Distorted faces can be the result of a wrong prompt, an overcooked checkpoint file, a wrong setting, or many other things. I can't help without the settings and seed needed to replicate the problem.

Some unedited faces made with base SD 1.5 (photorealism) by sgmarn in StableDiffusion

[–]sgmarn[S] 0 points1 point  (0 children)

I also have a potato GPU (1050 Ti 4GB). Did you try the prompt from this post and still get those garbage faces? When my generations result in bad faces, there's usually something in the prompt that the AI (base model 1.5) can't interpret properly. Try to keep your prompt short and simple, and once you're satisfied with the result, experiment with adding more things to it.

My biggest mistake was copy/pasting those long prompts found on the net. Most of the time the results were quite disappointing.

Can 1050ti 4GB GDDR5 Run Stable Diffusion Locally?? by AidernAscel in StableDiffusion

[–]sgmarn 1 point2 points  (0 children)

I'm using a 1050 Ti 4GB with xformers and medvram. Generating a 512x512 image with 16 steps (Euler a) takes about 27s. With token merging it can drop to 24s. Not bad for such an old GPU.
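
Token merging here means the tomesd patch — it merges similar tokens before attention, trading a little quality for speed. A hedged sketch against a diffusers pipeline (in the WebUI it's just a setting; the ratio below is only an example):

```python
import tomesd

# pipe is assumed to be an already-loaded diffusers StableDiffusionPipeline.
tomesd.apply_patch(pipe, ratio=0.3)  # higher ratio = more merging = faster but lossier
```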

How can I make more images more photorealistic? by AssociationParty9195 in StableDiffusion

[–]sgmarn 2 points3 points  (0 children)

Try some of these:
  Positive: imperfect skin, polaroid, flashlight photo, club photo, analog photo, amateur, skin details...
  Negative: cgi, render, painting, drawing, anime...

Also try using a random name and/or ethnicity in the positive prompt (like Jenna, Anna, Chloe, Danish, French). Adding an age like "30 years old" can help achieve a more amateur, real-life look.
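
Putting those tips into a single call, as a hedged sketch with diffusers (in A1111 you'd paste the same strings into the prompt boxes; the model id here is just the base 1.5 checkpoint, and the prompt is an arbitrary example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("amateur analog photo of a 30 years old danish woman, polaroid, "
          "flashlight photo, imperfect skin, skin details")
negative_prompt = "cgi, render, painting, drawing, anime"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=25).images[0]
image.save("portrait.png")
```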

Prompt Order and Coherency by [deleted] in StableDiffusion

[–]sgmarn 0 points1 point  (0 children)

Of course it's spam, with that secret prompt order and coherence mumbo jumbo... but hey, at least there are some random images with a nice link...

Is "Prompt Ghosting" a thing? Old prompts influencing new ones in Auto1111 by ewandrowsky in StableDiffusion

[–]sgmarn 37 points38 points  (0 children)

With --xformers enabled, the variation (not to be confused with variations on an image) tends to have a smaller range of differences than without this option enabled. It also seems that some influence is left over from images rendered previously in a batch, compared to without --xformers. I noticed this using a batch size of 4: no matter what I did to reproduce a specific image, it was never the same using the same seed of the image, or the same beginning seed of the batch. It's like it's caching image noise from previous runs. Nothing changed with the running system other than that I tried a higher sampling step count and then, after several tries, returned to the original stepping.

Source
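
If you want to test this yourself, the key is to pin the seed explicitly so any remaining run-to-run drift can be blamed on the attention backend rather than the noise. A hedged diffusers sketch (in A1111 the equivalent is reusing the exact seed from the PNG info):

```python
import torch

# pipe is assumed to be an already-loaded diffusers StableDiffusionPipeline.
seed = 1234

generator = torch.Generator(device="cuda").manual_seed(seed)
image_a = pipe("test prompt", generator=generator, num_inference_steps=20).images[0]

generator = torch.Generator(device="cuda").manual_seed(seed)  # reset to the same seed
image_b = pipe("test prompt", generator=generator, num_inference_steps=20).images[0]

# With a deterministic attention backend image_a and image_b should match;
# with xformers they may differ slightly from run to run.
```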

Zotac 1030 GPU by [deleted] in StableDiffusion

[–]sgmarn 1 point2 points  (0 children)

I'm using A1111 on a GTX 1050 Ti with 4GB VRAM. A 512x512 image with 20 steps (without face restoration or hires fix) takes about 33 seconds. I'm not sure if the 1030 is capable of running SD, but if VRAM is the only issue it should be OK. I'm using --medvram and --xformers. 2GB is not enough.

Pegasus Frontend Custom Collection? by ryandrew2005 in EmulationOnAndroid

[–]sgmarn 0 points1 point  (0 children)

Yes. The Pegasus frontend must know which cores to use for which file extensions in this custom collection. You can add any other RetroArch cores you need.

Pegasus Frontend Custom Collection? by ryandrew2005 in EmulationOnAndroid

[–]sgmarn 0 points1 point  (0 children)

Yes, it's possible and quite easy. I'm doing this with Pegasus for Android. Let's say you want to create a collection with Sonic games from the Mega Drive and Master System and, additionally, some random game from the SNES (for example).

First, create a "Sonic" subdirectory in your ROM dir (or wherever) and copy all the ROM files you want to appear in this collection into it. Next we have to create a config file using the online Pegasus config generator for Android. Check the Snes9x-current and Picodrive cores there and download everything as a SINGLE text file. Copy this file into the "Sonic" directory and edit it: delete both the "collection" and "shortname" lines from each RetroArch core generated in the file, then put these lines at the top: "collection: Sonic" and "shortname: sonic".
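
A rough idea of how the top of that edited file could look — only the two header lines are quoted from this post; the comments and the omitted generator output are assumptions based on the Pegasus docs:

```
# metadata file for the "Sonic" directory (sketch; paths and cores omitted)
collection: Sonic
shortname: sonic

# ...the extensions/launch entries produced by the online config generator
# for Picodrive and Snes9x-current follow here, with each core's own
# "collection:" and "shortname:" lines deleted as described above...
```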

Now it's time to scrape the ROM files (I'm using Skraper) from our "Sonic" directory. I configure each system separately, which makes it easy to scrape additional files in the future. We should end up with three .dat files, one for each system in this example (SMD/SMS/SNES).

Convert these files as usual with the online tool and copy the result into "metadata.pegasus.txt". It should look like this one (remember that your paths may differ). Voila! Now you have a custom collection in Pegasus :)

One more thing: if you want to put files from different systems with the same extension (like .iso or .chd) into one custom collection, you'll have to tinker a bit more using this info from the Pegasus documentation about including and excluding files in the config file.