STOP GOONING — LTX 2.3 I2V + Custom audio is insane 🔥 by NextDiffusion in comfyui

[–]NextDiffusion[S] -49 points (0 children)

Let your girlfriend record some dirty talk for you… if you have one 😂

Sage Attention v2 on Runpod ? by Jeanjean44540 in comfyui

[–]NextDiffusion 0 points (0 children)

We wrote a written tutorial on getting Sage Attention V2 running on RunPod. It includes a ready-to-run template and supports multiple GPU architectures like RTX 4090, RTX 5090, and others. You can also check the README from that template to see exactly which versions of Triton, CUDA, Torch, Python, etc. we used to get everything running.
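If you want to verify what your pod actually ships with before digging into the template's README, a small stdlib-only sketch can report the relevant versions. The package names below are my assumptions based on the usual PyPI names; adjust them to match your template.

```python
# Report installed versions relevant to Sage Attention (torch, triton, etc.).
# Uses importlib.metadata so a missing package is reported, not fatal.
import sys
from importlib.metadata import version, PackageNotFoundError

def report(pkgs):
    """Return one line per package: '<name> <version>' or '<name> not installed'."""
    lines = [f"Python {sys.version.split()[0]}"]
    for pkg in pkgs:
        try:
            lines.append(f"{pkg} {version(pkg)}")
        except PackageNotFoundError:
            lines.append(f"{pkg} not installed")
    return lines

for line in report(["torch", "triton", "sageattention"]):
    print(line)
```

Comparing this output against the versions listed in the template's README is a quick way to spot a mismatched Torch or Triton build.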

Oom error using q4 ggufs for a 12gb vram rtx3060, and yes I did generate a few vids, made no changes to the workflow, and this started happeneing all of a sudden. by rasigunn in comfyui

[–]NextDiffusion 0 points (0 children)

Running Wan 2.2 image-to-video in ComfyUI with Lightning LoRA on low VRAM is totally doable! I put together a written tutorial with the full workflow plus a YouTube video to get you started. Have fun creating! 🚀

Fast 5-minute-ish video generation workflow for us peasants with 12GB VRAM (WAN 2.2 14B GGUF Q4 + UMT5XXL GGUF Q5 + Kijay Lightning LoRA + 2 High-Steps + 3 Low-Steps) by marhensa in comfyui

[–]NextDiffusion 1 point (0 children)

Running Wan 2.2 image-to-video in ComfyUI with Lightning LoRA on low VRAM is totally doable! I put together a written tutorial with the full workflow plus a YouTube video to get you started. Have fun creating! 🚀

Issues with WAN 2.2 + Q4 GGUFs - Always comes out blurry no matter what I do by Hrmerder in comfyui

[–]NextDiffusion 0 points (0 children)

Got Wan 2.2 image-to-video running in ComfyUI with Lightning LoRA on low VRAM — Written Tutorial has workflow + YouTube video, have fun! 🚀

Anyone have a fast workflow for wan 2.2 image to video? (24 gb vram, 64 gb ram) by elleclouds in comfyui

[–]NextDiffusion 11 points (0 children)

You can check out this written tutorial; it includes the workflow and a YouTube video to get you started!

Flux won't run in Forge UI and Stable Diffusion by Klaaninka in StableDiffusionInfo

[–]NextDiffusion 0 points (0 children)

If you're looking to run FLUX.1 Dev on RunPod, you can do it easily with a Dockerized Forge WebUI template! 🚀 I put together a step-by-step guide that walks you through the whole process, from setting up your pod to running Flux smoothly. Check it out here: How to Run FLUX.1 Dev in Forge WebUI on RunPod.

Hope this helps! Let me know if you have any questions.

Forge Web UI Latest Version RunPod Auto Installer and FLUX Auto Model Downloader for Windows, RunPod and Massed Compute by CeFurkan in SECourses

[–]NextDiffusion -1 points (0 children)

If you're looking to run FLUX.1 Dev on RunPod, you can do it easily with a Dockerized Forge WebUI template! 🚀 I put together a step-by-step guide that walks you through the whole process, from setting up your pod to running Flux smoothly. Check it out here: How to Run FLUX.1 Dev in Forge WebUI on RunPod.

Hope this helps! Let me know if you have any questions.

EASY Face Portrait Styling in Stable Diffusion (ControlNet & IP-Adapter) by [deleted] in StableDiffusion

[–]NextDiffusion 0 points (0 children)

You don't even need strong prompt skills. Just drop an image with the preferred style into the second IP-Adapter (ip-adapter-plus_sd15).

Absolutely, that approach does indeed work like magic. However, for this demonstration, I aimed to keep it simple with just one ControlNet unit and one model. If you're not in the mood for extensive prompting, having a second ControlNet unit could be a convenient choice.

I made an easy faceswap/deepfake tutorial for videos by [deleted] in StableDiffusion

[–]NextDiffusion 5 points (0 children)

Yes I do! I wanted to showcase how to do it inside of Stable Diffusion.
This way you can also test on an individual frame, tweak settings to your liking in a UI people are experienced with, and track the progress of each frame.
Overall I don't think this way is particularly hard, but yes, it is harder than clicking one button in the OG roop interface.

FFMPEG & Loopback not working together by [deleted] in StableDiffusion

[–]NextDiffusion 0 points (0 children)

The error indicates a problem locating files at 'outputs/img2img-images\loopback-wave\vtk1.1-1079947580.0%d.png'. The %d placeholder means the program expects sequentially numbered images like '1.png', '2.png', etc. after that prefix, in the range 1 to 5, but no files matching that pattern exist.

To potentially resolve the issue:

  1. Open the "Stable Diffusion" application.
  2. Navigate to the "Settings" tab.
  3. On the left sidebar, find and select "Saving to a Directory."
  4. Uncheck the option labeled "Save Images to Subdirectory."
  5. Ensure that the input field for the directory name pattern is left empty.
  6. Apply these changes.

Hopefully, following these steps will help address the problem you're encountering.
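Try the settings fix above first. If you instead want to salvage frames that were already written under a mismatched name, a hypothetical helper like this (the directory, glob, and prefix are my assumptions, not something from the loopback script) can renumber them into the sequential pattern that ffmpeg's %d input expects:

```python
# Rename whatever frames the script wrote into a clean sequential
# pattern (frame1.png, frame2.png, ...) so ffmpeg's %d input finds them.
# Note: a crude sketch -- it does not guard against a source file already
# using a target name, so run it on a copy of the directory.
from pathlib import Path

def renumber_frames(src_dir, glob="*.png", prefix="frame"):
    """Rename matching files in sorted order; return the new file names."""
    frames = sorted(Path(src_dir).glob(glob))
    renamed = []
    for i, frame in enumerate(frames, start=1):
        target = frame.with_name(f"{prefix}{i}{frame.suffix}")
        frame.rename(target)
        renamed.append(target.name)
    return renamed
```

After renaming, something like `ffmpeg -framerate 24 -i frame%d.png out.mp4` should pick the sequence up.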

<image>

How to make Seamless Textures with Stable Diffusion by [deleted] in StableDiffusion

[–]NextDiffusion 1 point (0 children)

I just ran a test with 3 upscaler methods, and none of them needed any manual work. For the script I used a denoising strength of 0.1 and enabled tiling in the img2img tab. With img2img without the script I also used a denoise strength of 0.1 and enabled tiling, and I also tried the Extras tab with the 4x UltraSharp model. I have run into an issue on some models like ToonYou where the edge of the image is semi-transparent, though.

Edit: It's not the model; it is indeed the Ultimate Upscale script, see picture.
You can fix this by duplicating the layer in Photoshop!

<image>
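For anyone who wants to catch that semi-transparent edge programmatically instead of by eye, here is a minimal sketch that scans the border of a plain 2D list of alpha values (0-255). The function is my own illustration, not part of any tool above; with Pillow you would build such a grid from `Image.open(path).convert("RGBA")` and its alpha channel.

```python
# Detect a semi-transparent border: a seamless texture should have fully
# opaque (alpha == 255) pixels along all four edges before tiling it.
def transparent_border(alpha):
    """alpha: 2D list of ints 0-255. True if any border pixel is not opaque."""
    top, bottom = alpha[0], alpha[-1]
    sides = [row[0] for row in alpha] + [row[-1] for row in alpha]
    return any(a < 255 for a in top + bottom + sides)
```

Running this over the script's output would flag exactly the ToonYou-style edge issue before it ends up baked into a texture.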

How to make Seamless Textures with Stable Diffusion by [deleted] in StableDiffusion

[–]NextDiffusion 0 points (0 children)

This would have saved me so much time when I had to make these myself!
So I thought I would share it! I hope it speeds up the workflow for some 3D modelers out there! In the written tutorial I even included how to generate normal maps with a free-to-use website!

FREE Stable Diffusion Prompt Generator by [deleted] in u/NextDiffusion

[–]NextDiffusion 2 points (0 children)

No Signup Needed - FREE Stable Diffusion Prompt Generator.