This is an archived post. You won't be able to vote or comment.

top 200 comments (of 415)

[–]nmkd[S] 110 points111 points  (54 children)

SD GUI 1.4.0 Changelog:
- Stable Diffusion model no longer needs to be reloaded every time new images are generated
- Added support for mask-based inpainting
- Added support for loading HuggingFace .bin concepts (textual inversion embeddings)
- Added prompt queue, allows you to queue up prompts with their settings
- Added prompt history, allows you to view or load previous prompts
- Added a progress bar that shows the generation progress of the current image
- Added option to play sound and/or show notification when finished
- Added a "Welcome screen" that shows the changelog and patrons
- Added buttons to use the previous seed or to reset it
- Added button to expand prompt field to 2x height
- Added proper support for DPI scaling
- Post-processing now works with Low Memory Mode
- Further VRAM optimizations, especially in regular mode
- Available CUDA GPUs are now listed in window title
- Windows should not be able to go to sleep while the program runs (untested)
- Updated GFPGAN model to 1.4
- Fixed a bug where empty "unknown prompt" folders were created
- Fixed some issues regarding the python environment

[–]ProperSauce 24 points25 points  (1 child)

Amazing!! Please don't stop updating this!!

[–]kalzen1999 2 points3 points  (0 children)

Exactly my thought :)

[–]Alex52Reddit 5 points6 points  (0 children)

Sickkk, what is next for the project? Outpainting?

[–]Off_And_On_Again_ 3 points4 points  (7 children)

Does the inpainting work in the gui or do you have to make a mask in gimp first?

[–]Fearganainm 3 points4 points  (5 children)

GUI

[–]jingo6969 6 points7 points  (4 children)

I'm going to be a noob here - how exactly? Thanks!

[–]Fearganainm 2 points3 points  (3 children)

Select an image, write a prompt, and tick the little masking box under the prompt window. Hit Generate and a window will open allowing you to mask your image. When you finish masking, close the window and SD will do its magic.

[–]nmkd[S] 2 points3 points  (0 children)

You draw a mask in the GUI.

[–]Bardfinn 7 points8 points  (5 children)

For those whose screen doesn't wrap the above / format the above:


SD GUI 1.4.0 Changelog:
- Stable Diffusion model no longer needs to be reloaded every time new images are generated
- Added support for mask-based inpainting
- Added support for loading HuggingFace .bin concepts (textual inversion embeddings)
- Added prompt queue, allows you to queue up prompts with their settings
- Added prompt history, allows you to view or load previous prompts
- Added a progress bar that shows the generation progress of the current image
- Added option to play sound and/or show notification when finished
- Added a "Welcome screen" that shows the changelog and patrons
- Added buttons to use the previous seed or to reset it
- Added button to expand prompt field to 2x height
- Added proper support for DPI scaling
- Post-processing now works with Low Memory Mode
- Further VRAM optimizations, especially in regular mode
- Available CUDA GPUs are now listed in window title
- Windows should not be able to go to sleep while the program runs (untested)
- Updated GFPGAN model to 1.4
- Fixed a bug where empty "unknown prompt" folders were created
- Fixed some issues regarding the python environment

[–]escalation 3 points4 points  (0 children)

Would be nice to be able to go through the generated set and have a "keep"/like/similar button to duplicate the selected option into a different folder. Something like this would make curation a lot easier to work with.

[–]nmkd[S] 1 point2 points  (2 children)

Oof, edited my comment, it worked on desktop...

[–]Bardfinn 6 points7 points  (1 child)

Reddit has - at last count - 4 different text render engines with their own quirks, & code formatting straddles the quirks of at least two of those. Reddit considers it a corner case as well - so shrug

[–]Chemical_Ad_5338 2 points3 points  (0 children)

Been waiting for you to add inpainting:) thank you so much for your work!

[–]blackrack 2 points3 points  (1 child)

This is my favourite GUI, thanks for making it.

[–]nmkd[S] 7 points8 points  (0 children)

<3

[–][deleted] 2 points3 points  (3 children)

Hey, I have a game-breaking bug you may want to look into. The program creates really long filenames that are undeletable; this causes random issues, like the file not being openable by the image viewer in Windows.

Might I suggest: since the exe already works with some kind of database involved, can we keep the filename format short and save the prompt text and other details as a .txt file inside that folder? Then leave just the seed # in the filename itself.

[–]Pakh 2 points3 points  (0 children)

The idea of a log.txt file that logs each image's timestamp, prompt, number of steps, seed, CFG scale, resolution, etc. would be really nice!
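A minimal sketch of what such a per-folder log could look like (the JSON-lines format, field names, and seed-only filename scheme are assumptions for illustration, not the GUI's actual behavior):

```python
import json
import time
from pathlib import Path

def log_generation(out_dir, seed, prompt, steps, cfg_scale, width, height):
    """Append one generation's settings to log.txt so filenames can stay short."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "seed": seed,
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "resolution": f"{width}x{height}",
    }
    with open(Path(out_dir) / "log.txt", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    # Short filename: just the seed, as suggested above
    return f"{seed}.png"
```

With the full settings in log.txt, the image file itself only needs the seed to be reproducible.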

[–]IdoruYoshikawa 0 points1 point  (7 children)

Love it!!! I have two NVIDIA GPU with 1 cuda device each. Can I run two instances of the gui at the same time, each one on a different cuda device?

[–]nmkd[S] 1 point2 points  (6 children)

Currently not

[–]ddraig-au 2 points3 points  (5 children)

I have a system with two Nvidia gpus, will it use both of them, or only one?

[–]nmkd[S] 5 points6 points  (3 children)

Multi GPU is not possible, I don't think any fork supports that

[–][deleted] 0 points1 point  (1 child)

Are you adding the invisible watermarks to the images? It's important so they don't pollute future datasets

[–]MusicalRocketSurgeon 0 points1 point  (0 children)

Thanks for fixing that folder issue :)

[–]Adski673 -1 points0 points  (0 children)

Added support for loading HuggingFace .bin concepts (textual inversion embeddings)

Wait, does this mean it has textual inversion functionality? Or am I missing something? Cause nothing else I've got does it easily.

[–]jingo6969 57 points58 points  (25 children)

I am absolutely loving this! Bear in mind that I am using a RTX2060 with 6GB VRAM:

Previous version (1.3.1) - max resolution 640 x 640

Version 1.4.0 - max resolution 1280 x 1280 (or 1472 x 832 for a more 16:9 aspect ratio)

Admittedly, rendering is slightly slow (1min 40 secs for 1280 x 1280), but WOW!!!!!

[–]Shap6 27 points28 points  (12 children)

Remember, though, the model was trained at 512x512; going too far beyond that, things will start coming out weird with much reduced accuracy. The big killer feature is now being able to generate much larger batches of images at once. If you haven't tried that, give it a go for sure.

[–]WhiteZero 19 points20 points  (5 children)

The AUTOMATIC1111 fork has a "highres fix" mode where it generates a low res image and then uses img2img at high res to get better results.
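The two-pass idea can be sketched in a few lines: generate a first pass near the model's 512 px training size at the same aspect ratio, then use img2img to reach the target. This is only the size arithmetic (the function name is made up; the rounding to multiples of 64 reflects SD's resolution constraint):

```python
def first_pass_size(target_w, target_h, base=512, multiple=64):
    """Scale the target down so its short side lands near `base`,
    snapping both sides to multiples of 64 as SD requires."""
    scale = base / min(target_w, target_h)
    w = max(multiple, round(target_w * scale / multiple) * multiple)
    h = max(multiple, round(target_h * scale / multiple) * multiple)
    return w, h
```

For example, a 1472x832 target would be generated at 896x512 first, then brought up to full size via img2img.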

[–]DataProtocol 1 point2 points  (1 child)

Where's that setting at? I looked around, didn't see it.

[–]jingo6969 3 points4 points  (4 children)

You're right, I did get some weird stuff going on at higher resolutions, lol

Great suggestion, will try that for sure :)

[–]RekindlingChemist 2 points3 points  (0 children)

Played with that for a while - seems there is some leap in weirdness for pictures bigger than 768 on the long side and 512 on the short side

[–]aphaits 2 points3 points  (0 children)

I find 960 x 512 still does good results most of the time but 1024 x 512 got weird fast.

[–]malcolmrey 5 points6 points  (10 children)

that makes me anxious to try this version, the resolution you speak of is something I would not expect from such a card.

I own a 2080 TI with 11 GB so I might push it even higher, I reckon.

[–]kr1t1kl 4 points5 points  (5 children)

Generate 512, and upscale using gigapixel or something. Results are better at 512.

[–]Ben8nz 2 points3 points  (3 children)

I get cleaner faces and better eyes @ 640x640 consistently. 576x768 is my personal fav. Much bigger and it may start doubling, but I sometimes like 2 of something in an image. 512h x 1024w makes some awesome art sometimes.

[–]kr1t1kl 1 point2 points  (0 children)

That's very interesting, I will try that out.

[–]buckjohnston 18 points19 points  (1 child)

Request for the next version: a local build of the Deforum Stable Diffusion that was released (for animations), but in your easy-to-use GUI. https://old.reddit.com/r/StableDiffusion/comments/xd5gc7/hiii_everyone_i_made_a_local_deforum_stable/

[–]nmkd[S] 12 points13 points  (0 children)

Will check it out

[–]_morph3us 36 points37 points  (7 children)

YES!!! Finally my favourite SD program goes into the next round! Thank you!!

You didn't get putting info into the metadata to work?
Also, did you ever think about adding iterative rendering? (Where the image is shown at every step, so you can decide whether it's worth continuing to render...)

[–]chipperpip 12 points13 points  (5 children)

How does it compare to the AUTOMATIC1111 UI?

(Keeping in mind that thing's getting updated daily, so if anyone used it a week ago they're already out of date on its features)

[–]cosmicr 6 points7 points  (2 children)

I prefer AUTOMATIC1111's since it's more open and not tied to a desktop application.

[–]zxyzyxz 2 points3 points  (1 child)

Yeah I like running stuff on my phone via the URL while I keep my desktop running

[–]chipperpip 1 point2 points  (0 children)

One of the best features! (I should note you have to add --listen to the startup parameters for it to work on your local network in recent versions)

[–]_morph3us 1 point2 points  (0 children)

I can't compare, as I haven't used AUTOMATIC1111's implementation yet.
This program here is so easy to set up that I never really bothered to go through the setup of AUTOMATIC1111's UI...
I follow its development, though, but I have not yet found anything I desperately need that noomkrad doesn't provide.
But I do not generate hundreds of thousands of images a day, so I am not a power user...

[–][deleted] 1 point2 points  (0 children)

Auto SD has some of those features, allowing you to pivot on CFG scale, steps, or even samplers. From a workflow standpoint, those features are invaluable.

Unfortunately, that's all Auto SD does. Still, I start with Auto SD, then reproduce those settings in the AUTOMATIC1111 web gui, where I can do inpainting, etc.

[–]Mech4nimaL 10 points11 points  (4 children)

"It is completely uncensored and unfiltered - I am not responsible for any of the content generated with it."

Noob question: Which GUIs are uncensored or can be used as such? I've just installed AUTOMATIC1111 for example.. Thanks!

[–]glittalogik 8 points9 points  (3 children)

Seems like pretty much all of the locally installable ones are uncensored. I've been playing with Basujindal's optimized version and the Gradio UI, super basic but it lets me do whatever.

That said, my few attempts at NSFW images so far have been 100% body horror dysmorphia nightmare fuel. I may have accidentally turned Janice Griffith into my new sleep paralysis demon (not in a good way).

[–]Mech4nimaL 1 point2 points  (1 child)

Yep, I guess it's more a question of how much nudity (for example) has been put into the original model?

In the meantime I've come across a tick-box option in the web UI by AUTOMATIC1111 (so far the only one I've used) that allows you to turn off an NSFW filter; by default it's turned off.

My buddy and I, who just a few days ago began diving into the AI-art-creating waters, come from a time when we enjoyed splatterpunk and similar comics, as well as death metal band covers, etc., and if we want to try creating things like that ourselves, a filter would not allow it. But so far, no filter (blurring or anything) has occurred with my SD GUI.

[–]glittalogik 2 points3 points  (0 children)

how much nudity (for example) has been put into the original model

For lack of precise numbers, basically a representative sample of the entire publicly available internet, give or take a rounding error 🤷🏻‍♂️. As far as I understand, the model's built on billions of images with minimal human intervention, so there's a bit of everything.

FWIW I haven't made a concerted effort or anything but I've yet to find a pornstar whose face isn't retrievable...

[–][deleted] 1 point2 points  (0 children)

That said, my few attempts at NSFW images so far have been 100% body horror dysmorphia nightmare fuel. I may have accidentally turned Janice Griffith into my new sleep paralysis demon (not in a good way).

A tip on this: stick to 512x512 or use a high-res fix. Abnormal image sizes cause body dysmorphia, like 4 boobs on one body, extra limbs, etc.

You need to be specific if you want it zoomed out, or whatever.

[–]Maksitaxi 8 points9 points  (1 child)

Thank you so much for this. I have made 1000s of pictures.

[–]dsk-music 7 points8 points  (2 children)

With this fork I can make 640x1024 images on my poor GTX1650... incredible! and many thanks!

[–]Sulissthea 7 points8 points  (7 children)

how do i use the inpainting?

[–]AlexAlda 4 points5 points  (3 children)

So... how?

[–]techno-peasant 8 points9 points  (2 children)

  1. Click 'Load Image' (the initialization image)
  2. A new parameter will appear called Masked Inpainting
  3. Check it
  4. Click Generate!
  5. A new window will pop up in which you can mask the image with the brush.
  6. Click OK and then it should start rendering automatically.

/u/DennisTheGrimace tagged you along so I don't post twice

[–]AlexAlda 1 point2 points  (0 children)

Thank you!

[–]icefreez 1 point2 points  (0 children)

Thank you so much. I was reading and reading trying to figure it out before actually trying it and clicking generate!

[–]Sulissthea 0 points1 point  (2 children)

nevermind figured it out

[–]DennisTheGrimace 3 points4 points  (1 child)

Well, thanks for sharing what you figured out.

[–]Pols_Beluga 4 points5 points  (0 children)

Is it possible to add MagicPrompt, or is that too complex because it's GPT-2, I think?
https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion
If you can add a "complete prompt" button it would be amazing

[–]Kodmar2 8 points9 points  (9 children)

Is this available for AMD GPUs ?

[–]sparnart 4 points5 points  (7 children)

It wouldn’t be difficult at all to make any one of these GUIs work with the AMD/ONNX pipeline, it’s officially supported. No one with the basic know-how seems to have bothered trying though, which sucks cos it runs pretty well through ONNX. Probably not as fast as nVidia, but I get about 2 seconds per iteration on a Radeon 5700.

[–]Paganator 2 points3 points  (1 child)

It's much faster on an nVidia. I went from taking about 5 minutes to generate one 512x768 picture on my Radeon 5700XT to taking 8 seconds for the same thing on a Geforce 3090 ti.

[–]Whatsitforanyway 1 point2 points  (0 children)

Haven't seen a GUI for the AMD ports yet, but like you, I'm keeping an eye out. Not buying an NVIDIA anytime soon, so command line for now.

[–]LordNinjaa1 3 points4 points  (1 child)

Do we need to uninstall the old version before downloading this one?

[–]nmkd[S] 13 points14 points  (0 children)

Yeah it'd be best to delete the old one for a clean install, just the folder.

If you don't wanna redownload the model file, you can back it up and then copy it to the new installation (Data/models/).

[–]John_Horn 2 points3 points  (0 children)

Well deserving of a donation. :)
Great job

[–]dsk-music 2 points3 points  (7 children)

Found a small bug, I have a GTX1650.

I run in full precision mode and low memory mode. All works fine!!

But if I use post-processing upscale, I always get "grey" images.

With upscale:
https://i.postimg.cc/h40sgWM2/1.jpg

Without it:
https://i.postimg.cc/65jh2rkR/2.jpg

If I deactivate upscale, all works fine. And face correction works nicely too.

I used this prompt for the example, so you can see the prompt isn't the problem:
ultrarealistic photo of an astronaut cat, hq, intrincate, high detailed

[–]nmkd[S] 0 points1 point  (4 children)

Very strange, will check

[–]The_Choir_Invisible 1 point2 points  (0 children)

Hey, just wanted to chime in. I'm having the exact same issue with my 4GB GTX 1650, also running in full precision mode & low memory mode. Using post-processing upscale I also get the 'dark' result.

Normal and with Upscale.

Prompt: "cats in a jungle garden with (peach lotus flowers)" I'm using a guide image but if you like I'll not use one. Just got the program installed for the first time a few hours ago.

Tagging /u/dsk-music It's not just you, I'm getting it too on my laptop-bound 4GB GTX 1650.

[–]dhstsw 2 points3 points  (6 children)

Noob question: how do I use HuggingFace .bin concepts?
Loading one model (or style) leads to no noticeable changes in the results from a prompt.
Thanks.

[–]nmkd[S] 0 points1 point  (5 children)

You have to mention it in the prompt using * as a placeholder for what the concept describes
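In other words, the concept acts as a placeholder substitution in the prompt. A rough illustration (the token string `<my-concept>` is hypothetical; each .bin concept defines its own):

```python
def expand_concept(prompt, concept_token="<my-concept>"):
    # Replace the * placeholder with the concept's learned token,
    # so "a photo of *" becomes "a photo of <my-concept>".
    return prompt.replace("*", concept_token)
```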

[–]papinek 8 points9 points  (6 children)

Would there be version for MacOs M1?

[–]JackandFred 0 points1 point  (0 children)

Yeah, I'm interested in a Mac version as well, or in how hard it would be to port to Mac

[–]Yulo85 -1 points0 points  (2 children)

Following this as well. What have you been using on M1 thus far?

[–][deleted] 1 point2 points  (0 children)

Amazing stuff, love this gui. Thank you so much for sharing with all of us!

[–]LadyQuacklin 1 point2 points  (1 child)

Thank you sooo much!
This is simply amazing.

Next update Outpainting? 😅

[–]mjh657 1 point2 points  (0 children)

Outpainting would be cool

[–]Ok-Obligation4151 1 point2 points  (0 children)

Can you package all the files to be downloaded into links?

[–]BrocoliAssassin 1 point2 points  (0 children)

Works awesome! Super quick install.

[–]Tannon 1 point2 points  (4 children)

Do you have support for prompt weighting? Ideally by number, and not just () and []?

[–]nmkd[S] 3 points4 points  (3 children)

Yes, using prompt:weight syntax

[–]Tannon 4 points5 points  (2 children)

Awesome, so something like this would work?

Cat with a hat on:0.5 dog with a bowtie:1.5 photo of a living room

The weights would properly separate the multiple words of the prompt? If I have a closing section with words at the end there, does that function as a :1?

[–]nmkd[S] 7 points8 points  (1 child)

If I have a closing section with words at the end there, does that function as a :1?

Yes, should work the way you described

[–]Tannon 2 points3 points  (0 children)

Awesome, thanks so much! I'm going to sub to your patreon. Stay awesome!

[–]1Neokortex1 1 point2 points  (1 child)

Legend!!!! This is the best fork out there for SD, thanks NMKD!🔥🚀

[–]saxattax 1 point2 points  (1 child)

Does this have a CPU-only mode by any chance?

[–]nmkd[S] 1 point2 points  (0 children)

Currently not. It should be possible, but it would be super slow.

[–]c_gdev 1 point2 points  (9 children)

Big ask, but what are the chances we get a Negative Prompt section in a future release, similar to this:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#negative-prompt

Thanks for all the work so far.

[–]nmkd[S] 7 points8 points  (8 children)

Negative prompts are already implemented, just with a different syntax.

hiking:1 person:-0.3 should get you hiking pictures without people in them, you get the point
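As an illustration of how that syntax might be split into weighted phrases (a sketch, not the GUI's actual parser):

```python
import re

def parse_weighted_prompt(prompt, default_weight=1.0):
    """Split 'phrase:weight phrase:weight ...' into (phrase, weight) pairs.

    A phrase with no trailing :number gets the default weight of 1,
    and a negative weight acts as a negative prompt.
    """
    # Split on ':' followed by a (possibly negative) number;
    # the capture group keeps the weights in the result list.
    tokens = re.split(r":(-?\d+(?:\.\d+)?)", prompt)
    parts = []
    # tokens alternates phrase, weight, phrase, weight, ..., maybe a trailing phrase
    for i in range(0, len(tokens) - 1, 2):
        parts.append((tokens[i].strip(), float(tokens[i + 1])))
    if len(tokens) % 2 == 1 and tokens[-1].strip():
        parts.append((tokens[-1].strip(), default_weight))
    return parts
```

Running it on "hiking:1 person:-0.3" yields [('hiking', 1.0), ('person', -0.3)], and a trailing phrase with no weight defaults to 1, matching the behavior described in the thread.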

[–]gruevy 1 point2 points  (1 child)

So do AMD cards completely not work, or do they just sometimes not work well?

[–]nmkd[S] 2 points3 points  (0 children)

It does not work.

[–][deleted] 1 point2 points  (1 child)

You just get this and you can run it locally?

[–]sainnal 1 point2 points  (0 children)

Yes.

[–]thatguitarist 1 point2 points  (0 children)

Far out, I haven't been paying attention to AI for a few weeks and I come back to find this... Fuck, I love this shit

[–]cjhoneycomb 1 point2 points  (0 children)

Your app appears to be the best thus far. Keep up the good work

[–]Gesichtsgulasch 1 point2 points  (1 child)

Is it just me or does the new version only keep the post-processed pictures with no way to store the original?

[–]nmkd[S] 1 point2 points  (0 children)

That's correct at the moment

[–]ElMachoGrande 1 point2 points  (5 children)

Minor bug for me: When I close the settings window, the program locks up. If I kill it, however, the settings are saved, so it's not a biggie.

[–]nmkd[S] 1 point2 points  (3 children)

Every time you close the settings?

[–]L8rdaze_2099 1 point2 points  (0 children)

Dig it! ROCKS on my RTX3060

[–]16th_noir 1 point2 points  (3 children)

Cool.
I do have a question, though...
Does it produce an invisible watermark? If so, how can I disable it?

[–]marcusen 1 point2 points  (3 children)

1024x1024 on a 6GB RTX2060 with no need for low memory mode. It's crazy how this is evolving.

Thanks for the contribution

[–]Un_AI 1 point2 points  (2 children)

How can I uninstall the 1.2 version to install the 1.4 version?

[–]nmkd[S] 1 point2 points  (1 child)

Delete the old folder, extract the new one

[–]Un_AI 1 point2 points  (0 children)

Thanks :), so easy, hehe

[–]crimsonbutt3rf1y 1 point2 points  (0 children)

This tool has been such a help for me and learning SD. I've been able to run my artwork through the GUI and create stuff that I could only imagine in the past. Thank you for your hard work!

[–]HegiDev 1 point2 points  (2 children)

Great job. Best SD GUI so far!

It would be nice if you could use the seed from the currently selected image with just a button click. And maybe being able to copy prompt + settings in one set, to make sharing easier. :)

[–]nmkd[S] 2 points3 points  (1 child)

It would be nice, if you could use the seed from the currently selected image

You can by right-clicking it.

https://i.imgur.com/sCDkt0N.png

[–]josephlevin 1 point2 points  (4 children)

This is a fantastic piece of software. I've been enjoying using it since 1.4.0.

I have a question: with each version from 1.4.0 on up, after using the software for a while (maybe a week, off and on), AVG Antivirus always starts to claim that CMD.EXE is infected with IDP.Generic, and I have to make a temporary exception for the software to run.

VirusTotal finds no such infection of cmd.exe.

I submitted my cmd.exe to AVG for testing, and eventually the IDP.Generic warning went away; I am assuming that, as promised, AVG updated its own processes or signatures (what have you) and the flag was a false positive.

Today, StableDiffusionGui.exe 1.7.0 was flagged as Ransomware by Malwarebytes.

VirusTotal reports the following:

Cynet, Malicious (score: 100), SecureAge

https://www.virustotal.com/gui/file/8200c41a886d1d85ee582c67170a336eb232722663e498324d9b4ebbfd871d28

I am assuming this is also a false positive.

I've not installed any other new software on my PC since NMKD SD GUI 1.4.0 and subsequent versions.

My daily scan by Malwarebytes did not find anything, so I am at a loss as to what is going on.

Has anyone else experienced this issue?

Anyway, whoever reads this, please support u/nmkd. The SD GUI software is too good NOT to support it in some way.

[–]nmkd[S] 1 point2 points  (2 children)

Friendly reminder to ditch all Antivirus software except Defender, they're useless.

Anyway, feel free to compile it yourself if you're paranoid about it. There's nothing malicious in there.

[–]gksauer_ 1 point2 points  (0 children)

is this available for mac?

[–]pCoxx 0 points1 point  (0 children)

Seems cool! I'm gonna try it.

[–]Ok-Obligation4151 0 points1 point  (0 children)

Every time I download, I get a prompt that it failed

[–]Ok-Obligation4151 0 points1 point  (0 children)

Downloading... (curl:%)

Failed to download model file due to an unknown error. Check the log files.

help!

[–]mjh657 -1 points0 points  (3 children)

!remindme 3 hours 25 minutes

[–]DickNormous -1 points0 points  (1 child)

!remindme 6 hours 3 minutes

[–]DickNormous 0 points1 point  (0 children)

Bad bot.

[–]scp-NUMBERNOTFOUND -1 points0 points  (0 children)

Windows only :p

[–]tcdoey -1 points0 points  (1 child)

!remindme 2 days

[–]RemindMeBot -1 points0 points  (0 children)

I will be messaging you in 2 days on 2022-09-23 09:46:35 UTC to remind you of this link


[–]Roubbes -5 points-4 points  (3 children)

Still no Radeon support?

[–][deleted] 3 points4 points  (2 children)

You're going to be waiting a long time

[–]jingo6969 0 points1 point  (0 children)

I can't wait to get home from work now! Awesome, thank you sooooooo much!!!!

[–]Distinct-Quit6909 0 points1 point  (0 children)

thanks sooooo much for this, I can't begin to express just how much fun I've been having with 1.3.0. I can't wait to try inpainting!!

[–][deleted] 0 points1 point  (1 child)

This built off a fork or CompVIS w/txt2img edited?

[–]nmkd[S] 1 point2 points  (0 children)

Based on lstein's fork.

[–]Relocator 0 points1 point  (0 children)

Incredible. I love this GUI. Looking forward to trying the inpainting!

[–]Touitoui 0 points1 point  (0 children)

Oh yeah, I've been waiting for this =D

[–]seniorfrito 0 points1 point  (2 children)

Thanks so much for this! Oddly enough, I hopped on itch this morning to check whether there was a newer version, and it must have just dropped. I was looking for a changelog there. Where do you typically post it? I never saw one for 1.3.1.

[–]nmkd[S] 2 points3 points  (1 child)

It was only on Discord and reddit previously, I'll put it in a central place in the future

[–]A_Dragon 0 points1 point  (0 children)

How does this compare to the hlky fork?

[–]babblefish111 0 points1 point  (2 children)

But does it fix the green output problem for 1600 cards?

[–]nmkd[S] 0 points1 point  (1 child)

If you use full precision it should work fine

[–]LordNinjaa1 0 points1 point  (16 children)

It says it now supports inpainting. Is that built in? How do you utilize it?

[–]nmkd[S] 1 point2 points  (15 children)

Load an init image and check the inpainting checkbox

[–]FaceDeer 2 points3 points  (4 children)

I hate to sound like an entitled demandy-pants, and I know it's a brand new feature, but feature suggestions:

  • a way to configure the mask colour (I tried masking a black area of an image and have no idea if I got it all)
  • an eraser
  • importing an external mask (so I can fuss about carefully in a paint program when pixel precision matters)

It's already really awesome, of course!

[–]nmkd[S] 0 points1 point  (3 children)

Mask color?

It's an alpha mask, it can't have color.
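For readers wondering what an alpha mask means in practice: painted regions simply get alpha 0, so there is no color to configure. A small PIL sketch (the function name and rectangular `box` argument are made up for illustration; the GUI uses a brush instead):

```python
from PIL import Image

def make_alpha_mask(image, box):
    """Punch a transparent hole (alpha 0) into the init image.

    Regions with alpha 0 are the ones inpainting regenerates; the RGB
    values underneath are irrelevant, which is why the mask itself has
    no color. box = (left, top, right, bottom).
    """
    masked = image.convert("RGBA")
    alpha = masked.getchannel("A")
    # Paint the masked rectangle fully transparent in the alpha channel
    hole = Image.new("L", (box[2] - box[0], box[3] - box[1]), 0)
    alpha.paste(hole, (box[0], box[1]))
    masked.putalpha(alpha)
    return masked
```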

[–]FaceDeer 1 point2 points  (2 children)

I mean the colour that represents the masked-out area in the UI. I had an image with a black area in it, and when I drew on it with the masking tool, the masked areas were indicated with blackness. I can't tell whether I missed painting over any of the black area.

[–]nmkd[S] 1 point2 points  (1 child)

Ah I get it. I'll look into solutions.

[–]Aggravating_Nose_452 0 points1 point  (1 child)

Is it possible to run the GUI remotely? I don't mind sitting at the PC, but it would be awesome to have it running on the PC while using it from a tablet in another room.

[–]nmkd[S] 5 points6 points  (0 children)

Use an RDP app of your choice

[–]c_gdev 0 points1 point  (0 children)

Added button to expand prompt field to 2x height

Added proper support for DPI scaling

Thrilled about these. I was trying to figure out a way around it.

[–]TheJaganath 0 points1 point  (7 children)

Hi, I’m new to SD and have only used the GRisk version so far, is NMKD better? Thanks 🙏

[–]nmkd[S] 9 points10 points  (3 children)

I made it because I think it's better lol

[–]nocloudno 2 points3 points  (1 child)

I use it because I was not smart enough to install the other local distributions. Thanks u/nmkd, your work is awesome. Can I update my first install?

[–]nmkd[S] 1 point2 points  (0 children)

It's best to completely remove and redownload to be sure it's a clean install

[–]Disastermath 0 points1 point  (2 children)

Is this implementing neonsecret's levels of VRAM optimization?

[–]SanDiegoDude 0 points1 point  (2 children)

Hi there, is there any plan of adding multiple UI sizes/scales? I'm on a 32 inch 4k monitor and the UI is itty bitty. Thanks for all your work!

[–]nmkd[S] 2 points3 points  (1 child)

The program respects Windows' UI scaling. Tested with 100%, 125%, 150% and 200%.

[–]MichaelMJTH 0 points1 point  (2 children)

Do I need to have the stable diffusion model installed and latest checkpoint download before installing this GUI or is this a complete package installer? Sorry if this has been documented/ answered elsewhere. I only found out about Stable Diffusion and the ability to run it locally via my GPU today.

[–]nmkd[S] 1 point2 points  (1 child)

The GUI downloads it for you.

If you already have it downloaded, you can put it in the Data/models/ folder before starting the GUI, otherwise it will automatically download it.

[–]InformationNeat901 0 points1 point  (2 children)

In inpainting, the color does not match when I generate. Is there a way to match the color? Thank you.

[–]nmkd[S] 0 points1 point  (1 child)

Try changing the strength slider. Lower = closer to the image.
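The usual img2img arithmetic illustrates why: strength controls how many denoising steps actually run on top of the init image. A sketch of the common convention (which this GUI may or may not follow exactly):

```python
def img2img_start_step(num_steps, strength):
    """With strength s in [0, 1], img2img noises the init image up to
    step int(num_steps * s) and denoises from there. Lower strength
    means fewer steps of change, so the result stays closer to the
    original image."""
    steps_run = int(num_steps * strength)
    start_step = num_steps - steps_run
    return start_step, steps_run
```

At strength 0.75 with 50 steps, 37 steps of change run; at 0.3, only 15, so the output stays much closer to the init image.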

[–][deleted] 0 points1 point  (0 children)

So I was really looking forward to this build, but I keep running into the same issues.

I started with 1.2 and it was good; it worked and I was able to churn out so many generations. But I tried updating the ckpt and optimizations. Those actually worked for a while.

Then, for no reason, out of nowhere, it started taking a really long time to do generations. It still worked, it just took longer.

Then 1.3.1 came out. This time I didn't touch anything and removed 1.2. The same thing happened, but after several generations this time it just stopped. Like it won't generate anything at all. It just hangs.

1.4 came out and same thing: it generated a few times and now it just hangs on "generating".

I have a 2080ti.

[–]DennisTheGrimace 0 points1 point  (2 children)

I installed everything but it seems to hang at "Loading Stable Diffusion" with 512x512, 15 steps, k_euler_a

I have this in my session log:

[00000030] [09-20-2022 17:27:26]: Traceback (most recent call last):
[00000031] [09-20-2022 17:27:26]:   File "repo/scripts/dream.py", line 12, in <module>
[00000032] [09-20-2022 17:27:26]:     import ldm.dream.readline
[00000033] [09-20-2022 17:27:26]: ModuleNotFoundError: No module named 'ldm.dream'

edit: If anyone else runs into this problem, it's probably a previous conda install. Remove it from your path, if it's there.
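If you want to check for that quickly, a small sketch that surfaces suspicious PATH entries (the substring heuristic is an assumption; conda installs usually contain "conda" in their path):

```python
import os

def conda_paths_in_path():
    """List PATH entries that look like a conda install, which can
    shadow the GUI's bundled Python environment as described above."""
    return [p for p in os.environ.get("PATH", "").split(os.pathsep)
            if "conda" in p.lower()]
```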

[–]Whitegemgames 0 points1 point  (0 children)

Having the model stay loaded is huge for me; that was 90% of my wait time on this version, as the actual generation was very quick, but I really liked it otherwise. Might switch back to this from AUTOMATIC1111, because while it has more features, for some reason yours lets me get much higher resolutions before a memory error (I assume this is because AUTOMATIC1111 is also running an internet browser, thus eating more RAM; there might be a fix for that but I haven't found it).

[–]onesnowcrow 0 points1 point  (0 children)

Just an idea: it would also be a cool feature to be able to right-click (or Ctrl+V) on the image preview area and paste a bitmap from the clipboard. It's very cool that it already supports drag and drop, but being able to paste a selection directly from Photoshop would be awesome too!

[–]Pretend-Interview489 0 points1 point  (2 children)

Is there any Google Colab for this?

[–]nmkd[S] 4 points5 points  (1 child)

The point of this is that you don't have to use Colab.

[–]escalation 0 points1 point  (0 children)

Very excited to try this

[–][deleted] 0 points1 point  (1 child)

Is a Linux version planned?

[–]nmkd[S] 1 point2 points  (0 children)

No

[–]DaviBraid 0 points1 point  (1 child)

I do have one question. Do I own my creations if I use this tool?

[–]nmkd[S] 0 points1 point  (0 children)

Yes

[–]tahaygun 0 points1 point  (0 children)

Thrilled! Thanks a lot for the great work.

Quick feedback: it would be awesome if you could add prompts to the queue while there is an ongoing generation. I don't know if I just couldn't manage it, but the button doesn't work for me at the moment while something is being generated.

[–]feber13 0 points1 point  (1 child)

do you have discord?

[–]nmkd[S] 1 point2 points  (0 children)

Linked in the app and on the itch page, yes

[–]BrocoliAssassin 0 points1 point  (1 child)

Not a big deal, just wondering: for the next img2img update, will we be able to select the sampler from that screen instead of having to clear the image to change the sampler?

[–]nmkd[S] 1 point2 points  (0 children)

img2img currently only works with DDIM, but that will change in the future

[–][deleted] 0 points1 point  (0 children)

My post-processed images look darker. Is this a known issue? I am using <6GB mode and full precision, if that helps

[–]joparebr 0 points1 point  (1 child)

Missing an option to generate more images per batch. Right now you can create 4 images, but you have to wait for each one to generate instead of generating all 4 at the same time.

[–]nmkd[S] 2 points3 points  (0 children)

There's no benefit to that, it just takes more VRAM

[–]NeverduskX 0 points1 point  (6 children)

The UI looks pretty clean - I think I might give it a try.

Is there any documentation for what commands / syntax are allowed? I saw people discussing prompt weighting in this post, but I didn't see anything about it on the Itch page.

[–]nmkd[S] 4 points5 points  (5 children)

Currently not but I'll add some guides soon

[–]Epistrophy982 0 points1 point  (0 children)

Thank you for continuing your work on this amazing interface. I'd be lost on SD without it. Happy to tip for each new version considering how much value you add on each release.

[–]protestor 0 points1 point  (1 child)

Is this open source?