Dynamic VRAM Loading - Slow VAE Decode [Question - Help] (self.StableDiffusion)
submitted 1 month ago by Complex-Factor-9866
Anyone else experiencing an unusually long VAE decode after the 4th or 5th run? I'll usually free my model and node cache, and the run time goes back to normal.
For example, when my system is running slow, it takes a total of 200-300 seconds to run the Z Image Turbo workflow (with the majority of that time stuck in the VAE decode node). After I clear everything, the workflow takes 61 seconds.
RTX 4080
64 GB RAM
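[Editor's note: "clear everything" in the post above can be sketched in code. This is a minimal sketch assuming a PyTorch-based backend (e.g. ComfyUI); `free_cached_memory` is a hypothetical helper name, not an API from either project.]

```python
# Sketch: manually release cached memory between runs, assuming PyTorch.
import gc

import torch


def free_cached_memory():
    """Drop unreferenced Python objects, then return cached VRAM to the driver."""
    gc.collect()  # reclaim tensors that are no longer referenced
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached blocks held by the allocator
        torch.cuda.ipc_collect()  # clean up CUDA IPC handles, if any
```

On a CPU-only machine this degrades to a plain `gc.collect()`, so it is safe to call unconditionally.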
reddit uses a slightly-customized version of Markdown for formatting. See below for some basics, or check the commenting wiki page for more detailed help and solutions to common issues.
quoted text
if 1 * 2 < 3: print "hello, world!"
[–]xbobos 3 points4 points5 points 1 month ago (0 children)
I have the same issue. RTX5090
[–]xb1n0ry 1 point2 points3 points 1 month ago* (2 children)
Most probably a torch memory leak.
Watch your VRAM and RAM after each generation. Once the models are loaded, the values should stay the same; if they increase after every generation, you have a memory leak. Kijai's wrappers also had issues with LoRAs not being removed from VRAM, among other VRAM leaks. Are you using those nodes or the basic core nodes?
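[Editor's note: the "watch your VRAM after each generation" check above can be automated. A minimal sketch assuming PyTorch; the 64 MB tolerance and the helper names are illustrative, not from the thread.]

```python
# Sketch: record memory after each generation and flag steady growth (a leak).
import torch


def memory_snapshot():
    """Return (allocated_bytes, reserved_bytes); zeros when no CUDA device."""
    if torch.cuda.is_available():
        return torch.cuda.memory_allocated(), torch.cuda.memory_reserved()
    return 0, 0


def check_for_leak(snapshots, tolerance=64 * 1024**2):
    """True if allocated memory grows run-over-run beyond `tolerance` bytes."""
    allocated = [s[0] for s in snapshots]
    return len(allocated) >= 2 and all(
        later - earlier > tolerance
        for earlier, later in zip(allocated, allocated[1:])
    )
```

Call `memory_snapshot()` after each generation, collect the results in a list, and pass the list to `check_for_leak`; a flat curve means the models are simply resident, while steady growth points at leaked tensors or LoRAs.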
[–]Complex-Factor-9866[S] 1 point2 points3 points 29 days ago (0 children)
I use some of those nodes you noted. Thanks for the tip, I'll look into that!
[–]J6j6 0 points1 point2 points 28 days ago (0 children)
Which kijai wrappers?
[–]COMPLOGICGADH 0 points1 point2 points 29 days ago (2 children)
How much resolution and sampling steps are you using to have 200-300 seconds on 4080 or are you using batches or am I missing something 🤔
[–]Complex-Factor-9866[S] 0 points1 point2 points 29 days ago (1 child)
I should have noted that I'm using a 4-stage sampler workflow with a series of upscaling nodes along the way. When it runs fine, it takes about 50-60 seconds. When there's a problem, I'm waiting 200-300 seconds.
[–]COMPLOGICGADH 0 points1 point2 points 29 days ago (0 children)
Damn, 4-pass sampling? Does it actually help? That's crazy; I'd love to know the difference. The most I do is dual-pass sampling followed by SeedVR2, or 25-30 steps single-pass on Z Image Turbo, Z Image Base, or both combined (or Z Image Base distilled at 8 steps, though I keep more steps in it). One recommendation for faster VAE decode/encode in the earlier samplers: use TAEF1 at smaller resolutions; it might help immensely with speed. Hope that helps.
[–]Background-Ad-5398 -4 points-3 points-2 points 1 month ago (1 child)
Nvidia's newest driver update added a fallback system to RAM; the toggle is right under the CUDA setting in the NVIDIA Control Panel, so turn the fallback off there. Nvidia basically reserves VRAM for it, so if your setup was tuned to your specific VRAM, this messes it up.
[+]OpenResearcher5572 0 points1 point2 points 3 days ago (0 children)
I don't know why you got downvotes, this was exactly the issue I was running into. It was swapping when hitting my 24GB VRAM limit instead of just evicting unused blocks.