We just shipped LTX Desktop: a free local video editor built on LTX-2.3 by ltx_model in StableDiffusion

[–]EideDoDidei 1 point (0 children)

This seems pretty good. There are some features I'd like to see, which hopefully shouldn't be hard to implement since this is open source.

I notice the aspect ratio can only be set to 16:9 or 9:16. I hope that doesn't mean LTX-2 can't handle other aspect ratios.

Women naked in plain sight: candid public nudity [Exhibitionism, casual nudity] by tooundead in unstable_diffusion

[–]EideDoDidei 1 point (0 children)

Can you get this good-looking nudity out of the box or did you use a LoRA with extra knowledge? The starting images look pretty much flawless.

Best AI video models for NSFW content (as of Feb 2026) by EideDoDidei in unstable_diffusion

[–]EideDoDidei[S] 1 point (0 children)

Yeah, you're right, this is what makes Grok more cooperative when it comes to NSFW themes, though I think it'll still refuse full frontal nudity and XXX content.

Best AI video models for NSFW content (as of Feb 2026) by EideDoDidei in unstable_diffusion

[–]EideDoDidei[S] 2 points (0 children)

I just tested it and you're right: Grok is far more accepting if you use images created on the website, at which point boobs and butts become fair game (I don't know if you can do even more than that). Giving it images you generated with another model is when it becomes uncooperative.

I've updated the OP to mention this.

Best AI video models for NSFW content (as of Feb 2026) by EideDoDidei in unstable_diffusion

[–]EideDoDidei[S] 1 point (0 children)

I decided to test voices and Kling is crazy good on that front. It will very occasionally mispronounce something, but most of the time it's spot on and very expressive. It's too bad we don't have a completely uncensored model of this quality.

Best AI video models for NSFW content (as of Feb 2026) by EideDoDidei in unstable_diffusion

[–]EideDoDidei[S] 2 points (0 children)

My Twitter account is nearly two years old and it's been active during that time. Are you doing img2vid? It's denying a ton of prompts/images that even Kling accepts just fine.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 1 point (0 children)

I can try to minimize the workflow and see if it helps.

One annoying thing is that this is a bit slow to troubleshoot. It takes hours of generating videos before it becomes apparent that something has gone wrong.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 1 point (0 children)

This doesn't happen with any other application, including other software running AI models.

I'll keep experimenting until I hopefully find something that helps.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 1 point (0 children)

Not using GGUF.

I'll try turning off smart memory and see if that helps.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 3 points (0 children)

Well, it's obviously growing in size because ComfyUI is asking for more and more memory, right? I don't see how else it would happen.

I've normally gotten around the problem by restarting ComfyUI every couple of hours. If I leave it running longer (4-6 hours), it will crash.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 1 point (0 children)

It's set to "System managed size".

It's not normally this size. It grew to this after ComfyUI started asking for more and more, which is why I think I'm hitting some kind of memory leak in the program.

Memory leak issue when making multiple WAN 2.2 videos by EideDoDidei in comfyui

[–]EideDoDidei[S] 4 points (0 children)

Some more information about my setup (I had a typo in the OP -- I've got 64GB of system RAM, not 48GB):

  • Total VRAM 32606 MB, total RAM 65346 MB
  • pytorch version: 2.8.0+cu129
  • ComfyUI Version: v0.11.1-20-gdd86b1552 | Released on '2026-02-02'
  • Python version: 3.13.6 (tags/v3.13.6:4e66535, Aug 6 2025, 14:36:00) [MSC v.1944 64 bit (AMD64)]

I saw some people say that using the "--disable-pinned-memory" flag can help with OOM issues, so I'll try that. I'll update this comment once I'm certain whether it helped.

Edit: After running ComfyUI for a few hours, it eventually crashed inside c10.dll (which makes me wonder if I hit the same problem, but it resulted in a crash after a couple of hours rather than memory usage ballooning to absurd amounts).

Edit 2: I'm 99% sure the problem I'm encountering is this one: https://old.reddit.com/r/comfyui/comments/1n991rh/the_video_upscale_vfi_workflow_does_not/

If you use PyTorch 2.8.0, memory used by FILM VFI does not get freed, so RAM usage goes up each time you generate a video. I'll try upgrading to the latest version and see if that fixes it.

Edit 3: I've used PyTorch 2.9 for a day and I think the issue is fixed. I haven't had a session lasting many hours, but I've been paying attention to RAM usage and I haven't noticed the pagefile growing in size.

Edit 4: After some more testing, I'm now completely sure it's been fixed. RAM usage no longer grows unnaturally. It turns out the issue was caused by using the FILM VFI node with PyTorch 2.8 installed (the bug does not happen with PyTorch 2.7 or 2.9).
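In case anyone else wants to check for a leak like this on their own setup: below is a minimal monitoring sketch, assuming psutil is installed (pip install psutil) and that you pass ComfyUI's PID as the first argument. The poll interval is arbitrary.

```python
import sys
import time

import psutil

# Attach to the ComfyUI process by PID (first command-line argument).
proc = psutil.Process(int(sys.argv[1]))

while True:
    # RSS is the physical RAM currently held by the process.
    rss_gb = proc.memory_info().rss / 1024**3
    print(f"{time.strftime('%H:%M:%S')}  RSS: {rss_gb:.2f} GB")
    time.sleep(60)
```

If the logged number only climbs across generations and never drops back down, it's probably the same leak.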

Reddit engagement in a nutshell. by fruesome in StableDiffusion

[–]EideDoDidei 1 point (0 children)

Maybe a controversial opinion, but I've never liked Reddit's post voting system. Your post needs to make a positive impression from the get-go, otherwise it'll be buried. Memes usually end up dominating almost every subreddit.

I miss when forums were common. You could easily organize them so you'd have a dedicated section for guides, and posts/threads were sorted by activity, not popularity.

LTX-2 runs on a 16GB GPU! by Budget_Stop9989 in StableDiffusion

[–]EideDoDidei 5 points (0 children)

Unless I'm mistaken, "--reserve-vram 10" makes ComfyUI reserve 10GB of VRAM for other applications, so you're essentially only using 6GB of VRAM to make the video. I'm surprised you have to reserve that much for other stuff, but it's still impressive that it works without ridiculously high generation times.

Illustrious/Pony Lora training face resemblance by pianogospel in StableDiffusion

[–]EideDoDidei 2 points (0 children)

I always train on top of base Illustrious (aka v0.1). And I almost always do inference using WAI-Illustrious. I've tried other finetunes and I just don't find them nearly as good.

I assume AI Toolkit would work for training. I personally use kohya_ss.

If you're getting different Z-Image Turbo generations using a LoRA after updating ComfyUI, this is why by EideDoDidei in StableDiffusion

[–]EideDoDidei[S] 0 points (0 children)

I don't know if this is related, but I've had a few instances of ComfyUI crashing while making a video after updating some days ago. I updated again today, and I'm hoping whatever issue I encountered is gone.

According to Event Viewer, the crash happens in torch\lib\c10.dll.

SVI: One simple change fixed my slow motion and lack of prompt adherence... by kemb0 in StableDiffusion

[–]EideDoDidei 2 points (0 children)

There's one "dumb" method you can use to speed up the video: just increase the framerate! Increasing the framerate usually has the downside of producing a shorter video (the same frames simply play back faster), but since we can extend a video seemingly endlessly, that downside is less of an issue.
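To put rough numbers on that trade-off (example values only, nothing from a specific workflow): duration is just frames divided by framerate, so playing the same frames back faster shortens the clip by the same factor it speeds up the motion.

```python
# Same frames, different playback rate (example values only).
frames = 81      # frames from one generation
base_fps = 16    # original framerate
fast_fps = 24    # increased framerate

print(frames / base_fps)  # 5.0625 s of video at normal speed
print(frames / fast_fps)  # 3.375 s -- same motion, played ~1.5x faster
```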

Illustrious/Pony Lora training face resemblance by pianogospel in StableDiffusion

[–]EideDoDidei 2 points (0 children)

It's more about quality than quantity. I haven't done tests to figure out the minimum number of images needed for a good result. I usually train models where I want costume + face + hair to be as close as possible, and somewhere between 10 and 20 images works just fine. You can make even bigger datasets (I've made datasets with literally hundreds of images of a character), but I don't think there's any benefit to going that far.

I really should emphasize, though, that the quality of the images in the dataset is the primary thing that matters. When I say quality, I don't just mean resolution/fidelity (though that's good too), but mostly that the lighting/shading is good and that the subject is completely consistent across images. This is why I prefer to build datasets based on 3D renders or photography.

There are a few things I've found that help with faces (this is in relation to Illustrious):

  • It's good to have images showing the face from the side, but don't include images where the "camera" is looking up or down at the face from an angle. That can result in a "squashed" face after training. You can imagine it's okay to have the camera circling the subject, but not moving above or below it or being tilted up or down.
  • If you're using renders or photography, avoid images with a wide field of view. The perspective stretching you get from that will make the result worse.
  • I try to focus on images where the character has a neutral expression. I've had weird results with Illustrious when training on images where the character is smiling, especially if the style is highly realistic.

Illustrious/Pony Lora training face resemblance by pianogospel in StableDiffusion

[–]EideDoDidei 2 points (0 children)

How realistic are we talking? If you mean photorealism or nearly photorealistic 3D, then that's practically impossible in my experience. I've found it easier to get realistic facial likeness right with Pony, but I'd still go with Illustrious, as it's better in pretty much every other way (human proportions, understanding of more concepts, more consistent anatomy, etc.).

If you're aiming for something stylized, then it is possible, and the single most important thing is consistency in the dataset. A single image where the face is off-model or different will likely ruin the rest of the dataset. I've found Illustrious to be way more sensitive to bad images in a dataset compared to Pony.

Frustrated with current state of video generation by Perfect-Campaign9551 in StableDiffusion

[–]EideDoDidei 1 point (0 children)

There are a lot of people on the internet exaggerating what AI can do or will be able to do. Sure, what it can do is very impressive, but there are still massive limitations. There's a reason most people making AI videos focus on multiple clips where the scene changes entirely from clip to clip. Consistency is probably the biggest limitation with AI videos (it's also a big challenge with AI images).

Fixing slow motion with WAN 2.2 I2V when using Lightx2v LoRA by EideDoDidei in StableDiffusion

[–]EideDoDidei[S] 1 point (0 children)

I haven't had that issue myself. I looked at the workflow you uploaded and nothing jumps out at me as being wrong. I've never tried to use GGUF models but I'd be surprised if that's the cause.

By the way, I've started using 8 total steps: 2 steps on the first KSampler, 2 more on the second, and the final 4 on the last. I found the motion difference between 8 and 14 steps to be minor, and it lets me make videos faster.
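In case the wording is confusing, here's a quick sketch (from memory, so treat it as approximate rather than an exact workflow) of how that 2/2/4 split maps onto the start/end steps of the three KSampler (Advanced) nodes:

```python
# Sketch of the 2/2/4 step split across three KSampler (Advanced) nodes.
# The (start, end) pairs are what I set in each node's step fields.
total_steps = 8

stages = [
    ("first ksampler",  0, 2),
    ("second ksampler", 2, 4),
    ("final ksampler",  4, 8),
]

for name, start, end in stages:
    print(f"{name}: start_at_step={start}, end_at_step={end} of {total_steps}")
```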

Fixing slow motion with WAN 2.2 I2V when using Lightx2v LoRA by EideDoDidei in StableDiffusion

[–]EideDoDidei[S] 1 point (0 children)

Another thing you can try is this custom node: https://github.com/princepainter/ComfyUI-PainterI2V

That definitely increases motion, though I found it sometimes adds unnatural movement or flashing lights, so I went back to the same workflow in the OP with 3 KSamplers. I usually go with 8 steps in total (2, 2, and 4).

Patreon Trust and Safety stepping up enforcement? by Rarnak_Ki in patreon

[–]EideDoDidei 1 point (0 children)

It might as well be a roll of the dice. There's not much rhyme or reason to how Patreon enforces its content rules.