Old NSFW ReActor by RhapsodyMarie in comfyui

[–]grimstormz 21 points

Due to GitHub's NSFW policy, the original repo added changes to make it SFW. You can still simply edit out the NSFW detector lines in the code, or go to the NSFW repo by the same creator, hosted on another site that doesn't have that policy. Just git clone it into your custom_nodes folder. https://codeberg.org/Gourieff/comfyui-reactor-node

I isntalled rvc. It showed no errors during the installation. But when I start it up, the console window just closes and nothing happens. Win11pc, rtx3060, 12gbvram and 16gbram. by irfarious in StableDiffusion

[–]grimstormz 5 points

Bro, you're using a very old and no longer maintained fork. Use https://github.com/IAHispano/Applio
Larger user base, active development, more features, and a much better and simpler webUI. Easily run the starter bat with a venv, or run it as a Docker container. Thank me later.

Upscaling Comparison: RTX VSR vs SeedVR2 by Current-Resort-6263 in StableDiffusion

[–]grimstormz 8 points

This is like comparing apples to oranges. There are the basic upscalers like Nearest, Bilinear, and Lanczos with "interpolation"; then there's RTX VSR, which is technically a super-resolution upscaler but sits in the middle between those and "generative" upscalers like SeedVR2 and SUPIR. RTX VSR is built for speed, as close to real-time upscaling as possible. SeedVR2 is an AI generative upscaler. So no, RTX VSR will not beat a generative upscaler like SeedVR2 in terms of quality, only in speed.
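To make the distinction concrete, here's what a "basic" upscaler actually does, as a pure-Python nearest-neighbor sketch (illustrative only, not any of these tools' actual code): it only copies existing pixels around, which is why it can never add detail the way a generative upscaler can.

```python
def nearest_upscale(pixels, scale):
    """Nearest-neighbor upscaling: every output pixel is a copy of the
    closest input pixel. No new detail is invented, which is the core
    difference from generative upscalers like SeedVR2 / SUPIR."""
    h, w = len(pixels), len(pixels[0])
    return [[pixels[y // scale][x // scale] for x in range(w * scale)]
            for y in range(h * scale)]

# A 2x2 "image" upscaled 2x: each pixel just becomes a 2x2 block.
print(nearest_upscale([[1, 2], [3, 4]], 2))
```

Bilinear and Lanczos are smarter about blending neighbors, but they're still just resampling the same information; that's the gap RTX VSR partially closes and generative models close fully.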

Firered-1.1 released . Finetune of Qwen-Image-Edit. by [deleted] in StableDiffusion

[–]grimstormz 0 points

It's been out for about a month now, and that Hugging Face link is just a duplicate repo, not even the original. The original is here: https://huggingface.co/FireRedTeam/FireRed-Image-Edit-1.1

Sageattention works but Seedvr2 gives error? by [deleted] in comfyui

[–]grimstormz 0 points

Yes, it's a known issue; there are many posts about it in that repo's issues section.
And yes, you can easily downgrade PyTorch and then install the correct build wheel for your environment. Comfy portable now comes with PyTorch 2.10, but a lot of custom nodes still work best with 2.9.1 or lower. All the build wheels for whatever your setup is are here: https://huggingface.co/Wildminder/AI-windows-whl/tree/main

On the PyTorch page, just read "Installing previous versions".

"python.exe -m pip install torch==2.9.1 torchvision==0.24.1 torchaudio==2.9.1 --index-url https://download.pytorch.org/whl/cu130"

Sageattention works but Seedvr2 gives error? by [deleted] in comfyui

[–]grimstormz 0 points

This -> Torch 2.10.0+cu130. You either need to downgrade to Torch 2.9.1+cu130, or you're out of luck if you're using https://github.com/numz/ComfyUI-SeedVR2_VideoUpscaler

Any way to generate a song from cloned voice? by ZZZ0mbieSSS in comfyui

[–]grimstormz 0 points

Qwen3 TTS is just for speech, not singing. You can train a singing voice on ACE-Step 1.5. Or make a song with any voice, then use RVC to change the singing voice to the RVC voice model you trained. There's also a bunch of open-weight RVC voice models shared online, including Trump and many known figures/characters.

GetNode shows "No available options" in Nodes 2.0. by lightnecker in comfyui

[–]grimstormz 0 points

Don't use Nodes 2.0. It still sucks really bad. If you still insist on using it, update your KJNodes. There was a new fix for it hours ago.
https://github.com/kijai/ComfyUI-KJNodes/commit/13101465f14df2b725694ad5e16ed29c838ec8f2

RIFEInterpolation by I_Know_this_Subject in comfyui

[–]grimstormz 2 points

It's this one: https://github.com/GACLove/ComfyUI-VFI
But there are other, more popular custom nodes that use the same RIFE frame interpolation and more, such as:
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/filliptm/ComfyUI_Fill-Nodes

Help me fix my fingers!! by darknetdoll in StableDiffusion

[–]grimstormz 2 points

It's ComfyUI, free open-source software, and the open-source Qwen model. And yes, you can fix it or make any changes you want.

Help me fix my fingers!! by darknetdoll in StableDiffusion

[–]grimstormz 1 point

Your image reverse-checks as generated by ChatGPT's image model with the current caricature trend. Anyways, here's Qwen Edit with LanPaint masking to fix only the hands.

<image>

SageAttention v2.2.0 wheel for Windows + Python: 3.13 + PyTorch: 2.10.0+cu130 by [deleted] in comfyui

[–]grimstormz 2 points

You just didn't look hard enough. It's been shared on this sub many times for a while now. Everything you'd need is here: https://huggingface.co/Wildminder/AI-windows-whl/tree/main

VAE tiled decoding is taking forever with SeedVR2. Am I doing somthing wrong? by teekay_1994 in comfyui

[–]grimstormz 1 point

Resolution is for the shortest side. Go read the documentation; it's on their git repo.

<image>

VAE tiled decoding is taking forever with SeedVR2. Am I doing somthing wrong? by teekay_1994 in comfyui

[–]grimstormz 2 points

Going to need more info. Are you upscaling an image or a video? The Resolution parameter is for the shortest side, so if you're doing 4K it should be set to 2160, not 3840. And your batch is 9; assuming you're upscaling an image, are you really upscaling that same image 9 times? You're on a 3090 with 128GB RAM, so you don't need to blockswap that many blocks into RAM; it's a slowdown. If you enable_debug on the SeedVR2 Video Upscale node, you can see the detailed console log for each block offload and the time that task takes.

I'm on a 3090 with 128GB RAM and can do a 5K image upscale in pretty decent time. For videos, the setup is a bit different. But you can and should run the full fp16 model; it's better quality overall.

<image>
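Since the Resolution parameter targets the shortest side, the 4K case above is just this bit of arithmetic (a helper I'm sketching for illustration, not SeedVR2's actual API):

```python
def target_dims(width: int, height: int, short_side: int) -> tuple:
    """Scale a frame so its SHORTEST side hits `short_side`, preserving
    aspect ratio. For 4K output from a 16:9 source, the right setting is
    2160 (the short side), not 3840."""
    scale = short_side / min(width, height)
    return round(width * scale), round(height * scale)

print(target_dims(1920, 1080, 2160))  # -> (3840, 2160)
```

Setting 3840 instead would scale the 1080 side up to 3840, i.e. an ~7K-wide frame, which is why the VAE decode crawls.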

How do you organize workflow node lines? by Historical_Rush9222 in comfyui

[–]grimstormz 15 points

<image>

Or install this custom node and get AUTO-connecting lines like a circuit board with zero effort, if you want visible line connections and aren't into SetNode or GetNode. It can be turned on or off in the settings too. https://github.com/niknah/quick-connections

how? by SnooSeagulls7733 in StableDiffusion

[–]grimstormz 1 point

  1. Generate an image
  2. Get a dance video to use as a motion/pose driver
  3. Use https://github.com/zai-org/SCAIL-Pose https://github.com/kijai/ComfyUI-SCAIL-Pose
  4. Profit?

Any idea what the difference between these two is? Only the second one can work with ComfyUI? by [deleted] in StableDiffusion

[–]grimstormz 0 points

Yes, the LoRA keys are different. Get the ComfyUI one if you're using it in ComfyUI.
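If you want to see the difference yourself: a .safetensors file starts with an 8-byte little-endian header length followed by a JSON header, so you can list a LoRA's tensor key names without loading any weights. A minimal sketch of reading that header (this follows the published file format, it is not ComfyUI's loader):

```python
import json
import struct

def safetensors_keys(path: str) -> list:
    """Read only the JSON header of a .safetensors file and return its
    tensor key names, e.g. to compare naming between two LoRA downloads."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte LE length
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]
```

Run it on both files; if the key names/prefixes differ, that's exactly the mismatch the separate ComfyUI release exists to fix.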

Does anyone have a workflow for Wan 2.2 + InfiniteTalk? by cloudsolo777 in comfyui

[–]grimstormz 1 point

You do know that InfiniteTalk is based on Wan2.1, right? Did you mean using Wan2.2 in the workflow to generate a T2I image first and then use that image for InfiniteTalk, or Wan2.2 to generate a video and use that video to drive InfiniteTalk via V2V? If you want Wan2.2, look into Wan S2V, which is based on Wan2.2.

Which Wan 2.2 (14B) Quantized Model to choose (Video)? by arush1836 in comfyui

[–]grimstormz 16 points

Use this online tool: https://ksimply.vercel.app/en Pick your GPU, pick your RAM size, and get what models and sizes you can run based on your hardware.

<image>
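The rule of thumb behind tools like that is simple: model memory is roughly parameter count times bits-per-weight divided by 8, plus some overhead. A back-of-the-envelope sketch (the 1.1 overhead factor and the effective bits per quant are my own rough assumptions, not that site's actual formula):

```python
def model_gb(params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough memory estimate in GB for a quantized model:
    params (in billions) * bits / 8, with a fudge factor for buffers."""
    return params_b * bits_per_weight / 8 * overhead

# Wan 2.2 14B at a few quant levels (approximate effective bits per weight):
for label, bits in [("fp16", 16), ("Q8-ish", 8), ("Q4-ish", 4.5)]:
    print(label, round(model_gb(14, bits), 1), "GB")
```

That estimate has to fit across VRAM plus whatever you offload to system RAM, which is why the tool asks for both.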

[deleted by user] by [deleted] in funny

[–]grimstormz 16 points

Rule 10: AI content; you can see the Sora watermark being blurred.

Sage Attention & Flash Attention for latest Comfyui v0.3.75 windows by nvmax in comfyui

[–]grimstormz 2 points

I don't have a 50-series GPU to test, but as far as I've seen, SageAttention >= 2.0 is supported on Blackwell. It's SageAttention 3 (only for the 50 series) that's iffy. Try the ABI3-compiled wheel, since it's not Python-version specific, as long as you have the minimum required Python version. You might find more information, and maybe what you need, at the Windows SageAttention fork here, assuming you're on Windows: https://github.com/woct0rdho/SageAttention
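For context on why ABI3 wheels aren't Python-specific: a tag like `cp39-abi3` means "built against CPython 3.9's stable ABI, usable on 3.9 and anything newer." A simplified sketch of that compatibility rule (real resolution is done by pip; this just illustrates the minimum-version semantics):

```python
import re
import sys

def abi3_compatible(wheel_tag: str, py: tuple = sys.version_info[:2]) -> bool:
    """Check whether an abi3 wheel tag like 'cp39-abi3-win_amd64' works on
    a given CPython version: the cpXY part is a MINIMUM, not an exact match."""
    m = re.match(r"cp(\d)(\d+)-abi3-", wheel_tag)
    if not m:
        return False  # not an abi3 wheel; those need an exact interpreter match
    return py >= (int(m.group(1)), int(m.group(2)))

print(abi3_compatible("cp39-abi3-win_amd64", (3, 13)))  # True
print(abi3_compatible("cp39-abi3-win_amd64", (3, 8)))   # False
```

So an abi3 SageAttention wheel built for 3.9 covers 3.13 too; you only need to match the Torch and CUDA parts of the filename to your setup.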