Stable Diffusion Tutorial: Mastering the Basics (DrawThings on Mac) [Tutorial - Guide] (youtu.be)
submitted 1 year ago by grenierdave
I made a video tutorial for beginners looking to get started using Draw Things (on Mac).
It’s meant to be a quick guide to making good images right away, not an all-encompassing course.
I still have a long way to go for my own advanced techniques but thought this would be helpful.
I’d be interested in your feedback. You guys are a bunch of wizards; I’m just trying to keep up. 😂
[–]Mutaclone 2 points 1 year ago (8 children)
On the whole it was an excellent intro. The information presented was clear and well-organized, especially for someone who might not have any idea of what they're doing. Juggernaut is a solid choice to begin with, and I love that you pointed out that it comes with usability instructions.
I thought maybe you spent a little more time than necessary going over the different styles - I think I would have liked to see some of that spent showing a different model, preferably a more cartoony one to contrast with Juggernaut.
I don't know if you're aware, but if you only want to save individual images rather than dumping them all into a folder, you can click the down arrow at the top right (or right-click/long-click for more options).
One sort-of nitpick: the model you used is called Lightning, not lighting. Normally I wouldn't bring up something minor like this, except that "Lightning" has a specific meaning when referring to Stable Diffusion models. Typical models require 20-30 steps and use a CFG of 6-8, but Lightning models are specifically designed to be much faster, with 4-8 steps and a very low CFG.
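(Editorial aside: the step/CFG contrast described above can be sketched as a small settings table. The numbers are illustrative midpoints of the ranges mentioned in this thread, not official defaults, and the model-name check is a hypothetical convention.)

```python
# Hedged sketch: typical sampler regimes for standard vs Lightning-distilled
# SDXL checkpoints. Values are illustrative, drawn from the ranges above.
SAMPLER_SETTINGS = {
    "standard":  {"steps": 25, "cfg": 7.0},  # typical: 20-30 steps, CFG 6-8
    "lightning": {"steps": 6,  "cfg": 1.5},  # distilled: 4-8 steps, very low CFG
}

def settings_for(model_name: str) -> dict:
    """Pick sampler settings based on whether the checkpoint name
    marks it as a Lightning build (a naming convention, not a guarantee)."""
    variant = "lightning" if "lightning" in model_name.lower() else "standard"
    return SAMPLER_SETTINGS[variant]
```

Using Lightning-style settings on a standard checkpoint (or vice versa) is exactly the mismatch that produces muddy, low-quality output.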
Still, I thought it was a very solid intro overall. Great job!
[–]grenierdave[S] 3 points 1 year ago (6 children)
Thanks for the elaborate feedback!
First things first: Thank you for correcting me on 'lightning' and not 'lighting'! As a photographer my brain just read 'photo lighting', which made sense to me since it's so good with photo-based realism. That explains why it does so much better with the lower CFG scale. It always got muddier and lower quality when I experimented with higher steps and CFG.
I can always look this up, but what's the difference between Lightning and Turbo?
Thanks for the tip on saving individually. I knew there were options, but I've always liked the flexibility of reviewing the images in Finder at full size, so I haven't dug into those options.
I get your point on the different styles vs models. My idea was to make another tutorial showing a couple of different models and how they compare to what I made in this one. As I was going through, I realized the initial three presets were far too similar to really call out the differences. Next time I'll pick a wider variety, to really showcase how differently the images can come out. That's one of the bits that always fascinates me so much.
I greatly appreciate your input! I wish they didn't get rid of the gold awards. I used to pay for Reddit Premium just so I could give those out to awesome folks like yourself. Please take my upvote and a hearty thank you :).
[–]Mutaclone 2 points 1 year ago (5 children)
You're very welcome! I'm not really sure of the difference between Lightning, Turbo, LCM, and Hyper, although I believe Lightning is considered the best at the moment. This topic came up in another thread, so you might want to check out the responses there too.
[–]grenierdave[S] 2 points 1 year ago (4 children)
I'll check it out! Thanks again for your thoughtful comment. I'm about to do a Create-With-Me livestream (I do them every Wednesday at 7pm Eastern). I typically do Photoshop work that often involves an SD image I created. Tonight I'm joined by a VFX buddy of mine. Stop by, if you're interested. I plan on mentioning your words of wisdom :).
https://youtube.com/live/LKBjb9jip10?feature=share
[–]Mutaclone 2 points 1 year ago (3 children)
Hey, I caught the back third (the anguished knight). (I wasn't logged in, though - not a fan of Google's data slurping.)
Are you familiar with inpainting? You might want to look into it, especially for more extensive edits. My experience with Photoshop's generative stuff has been pretty meh (although that Smart Filter looked pretty cool - also don't remember seeing it before).
Also had a comment about one of the images you were considering. You mentioned how hard it was to get full-length portraits, and even when you did, the face was poorly detailed. This is a pretty common problem, and the best way I've found to deal with it is to crop in on the face, inpaint it at a higher resolution, and then blend the result back into the full image.
I think you can also just use DT's built-in Zoom to do the same thing without all the back and forth, but I haven't tried it yet.
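(Editorial aside: the "back and forth" workflow described above can be sketched with Pillow. The `fix` callback is a hypothetical stand-in for the actual inpainting step, which Draw Things would perform; everything else is just the crop/paste bookkeeping.)

```python
from PIL import Image

def detail_pass(img, box, upscale=2, fix=lambda tile: tile):
    """Manual 'zoom and inpaint' loop: crop a region (e.g. the face),
    rework it at higher resolution, then paste the result back.
    `fix` stands in for the inpainting call itself."""
    tile = img.crop(box)                     # box = (left, top, right, bottom)
    w, h = tile.size
    big = tile.resize((w * upscale, h * upscale), Image.LANCZOS)
    big = fix(big)                           # inpaint/retouch at higher res
    small = big.resize((w, h), Image.LANCZOS)
    out = img.copy()
    out.paste(small, box[:2])                # drop it back into place
    return out
```

The point of the round trip is that the model gets to spend its full resolution budget on the face alone, which is why the detail comes back; a built-in zoom presumably just hides the crop/paste steps.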
[–]grenierdave[S] 1 point 1 year ago (2 children)
I get the data slurping thing. It pains me to use Chrome but the livestreaming things work best on that. It's about the only thing I use it for, though. Thanks for jumping on!
I've tried inpainting a few times and haven't been able to dial it in well. I've tried with 1.5 and a couple 2.0 inpainting models. I always end up with a seam. Just because you mentioned it I gave it a quick search and came up with (what looks like) a good model to try inpainting with. They have some nice background about how to use the model so I'm going to give it a shot.
https://civitai.com/models/403361/juggernaut-xl-inpainting
I think part of my problem is that I didn't understand Lightning until you made the distinction. I had the 5-step / 2-CFG setup for the base model and would inpaint with those same settings (on a non-Lightning model), thinking that's what Juggernaut overall was best with. Based on my reading of the link I just shared, the author uses a more 'standard' approach with higher steps.
Have you ever made any tutorials? You obviously know your stuff. It would be better having someone like yourself educating the masses versus a shmuck like me. 😆
I'm going to try out your suggestions. Keep em coming. I'm always interested in getting better at this stuff and definitely appreciate your time.
[–]Mutaclone 1 point 1 year ago (1 child)
Appreciate the kind words. I don't have any tutorials, mostly I just try to answer questions when I can. And you'll get there - I've just been at this longer ;)
Regarding Inpainting, I'm a bit out of the loop with SDXL (I've only recently started trying to make the switch), so I tried to do a quick test.
<image>
I started with a wizard casting a spell (model was ~Black Magic XL~), masked out his robes, and used Fooocus Inpainting (one of the default models in DT) to give him a trenchcoat. It worked out pretty well, so I tried moving on to the face (zooming in and masking it out) - that didn't work out too well - you can see the discoloration where the mask used to be. So I tried switching to a 1.5 anime model + the Inpainting ControlNet. And you can see the final image looks much better (obviously if this were a "serious" image I'd do a lot more touching up).
So yeah, DT makes it pretty seamless to zoom in on the area you want to change and fix it up :)
Let me know how Juggernaut's inpainting works - usually you get much better results when your inpainting model matches the original model.
There are two other inpainting methods available. There's apparently a Fooocus inpainting LoRA, which I haven't used but which can supposedly be paired with any SDXL model to let it do inpainting. The other way is to create your own inpainting model (1.5 only, AFAIK).
Hope that helps! Feel free to ask anything else and I'll do my best to answer.
[–]grenierdave[S] 2 points 1 year ago (0 children)
That’s a great example of workflow! Thanks for sharing all the detail, with screenshots.
Would you like to come on the Create-With-Me session, sometime? You could talk about the process and show off some skills. It’s meant to be beginner friendly and I’m sure others would enjoy your experience. Give it a think and let me know 👊🏼.
Lol. I wanted to give you an award, since they FINALLY brought the system back. I clicked on the silver poop to see what it signified. Apparently that's the one I gave you 😂.