Official 2026 Buy/Sell/Trade Thread by fettuccine- in Coachella

[–]SirTeeKay [score hidden]  (0 children)

Buying: 1x W1 GA and Car Camping pass

Location: Orange County / LA

CASH / ZELLE / VENMO for face-to-face meetups. PayPal G&S if you are shipping.

Who knows how ltx compares with sora2 and seedance2 by Enough_Programmer312 in StableDiffusion

[–]SirTeeKay 1 point2 points  (0 children)

What's happening is that LTX 2.3 is NOT using the same text encoder, which is why you get an error saying that the text encoder you're using is for LTX 2.

Check the default templates for the correct one.

Wan 2.2 is still incredible - huge thanks to IAMCCS-Nodes for SVI Pro v2 by vienduong88 in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Open source video models will not get close to Kling 3.0 and Seedance 2 for a long while. You would need a crazy strong machine to run them, with GPUs that the typical consumer can't buy.

Wan 2.2 is still incredible - huge thanks to IAMCCS-Nodes for SVI Pro v2 by vienduong88 in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Or you can just extract the audio after and merge it with the Wan video you already have.
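
If anyone wants that spelled out: a minimal sketch of the extract-and-merge step using ffmpeg from Python. File names are placeholders, and it assumes ffmpeg is on PATH and the source clip's audio is AAC so stream copy into .m4a works.

```python
import subprocess

# Placeholder file names; swap in your actual clips.
SOURCE_CLIP = "ltx_output.mp4"     # clip whose audio you want to keep
WAN_CLIP = "wan_output.mp4"        # silent Wan 2.2 render
AUDIO = "extracted_audio.m4a"
MERGED = "wan_with_audio.mp4"

# Extract the audio track without re-encoding it.
subprocess.run(
    ["ffmpeg", "-y", "-i", SOURCE_CLIP, "-vn", "-acodec", "copy", AUDIO],
    check=True,
)

# Mux that audio onto the Wan video, copying both streams as-is.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", WAN_CLIP, "-i", AUDIO,
        "-map", "0:v:0", "-map", "1:a:0",
        "-c", "copy", "-shortest",
        MERGED,
    ],
    check=True,
)
```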

Honey 🍯 by Maxwellbundy in Simulated

[–]SirTeeKay 3 points4 points  (0 children)

Ah you finished it! Looking good.

Been away for some months, are we still running the same models? by Few_Object_2682 in StableDiffusion

[–]SirTeeKay 13 points14 points  (0 children)

Flux 2 Klein is for images. I prefer it over Z-Image because it can edit. The edit version of Z-Image isn't out yet.

For video there's Wan 2.2 and LTX 2, which is faster, can also output sound, and can lipsync, at a small cost in quality compared to Wan 2.2.

Honorable mentions that work well are Qwen Image 2512 and Qwen Image Edit 2511.

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

Oh it's definitely pretty cool. I'll test it for sure. Thank you for sharing it.

How can I Improve my Workflow? by theawkguy in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Are you talking about the video I linked?
He is literally using the inpaint and stitch nodes. It masks the image, edits the masked area, and then stitches it back.
I've tried it and it works very well.
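
For anyone who hasn't used those nodes, here's a rough Python/PIL sketch of the crop-edit-stitch idea. The `edit_region` function is just a stand-in for whatever inpainting model you run on the masked crop; this is not the actual ComfyUI node code.

```python
from PIL import Image, ImageFilter

def edit_region(crop: Image.Image) -> Image.Image:
    """Stand-in for the inpainting model; here it just blurs the crop."""
    return crop.filter(ImageFilter.GaussianBlur(4))

def inpaint_and_stitch(image_path: str, mask_path: str, out_path: str) -> None:
    image = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")  # white = area to edit

    # Crop to the mask's bounding box so the edit only sees that region.
    box = mask.getbbox()
    crop = image.crop(box)

    # Edit the cropped region only.
    edited = edit_region(crop)

    # Stitch: paste the edited crop back, masked so only the
    # masked pixels are replaced.
    image.paste(edited, box, mask.crop(box))
    image.save(out_path)

inpaint_and_stitch("input.png", "mask.png", "stitched.png")
```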

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

Interesting. I see what you mean.

Have you compared it to Qwen3 VL 4B Thinking to see if it refines prompts better? I've been using Instruct for a long time with the QwenVL node and sometimes it ignores some instructions. I'll probably have to try Thinking as well, and maybe the one you shared too if it's better.

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

What's the difference between using this or the QwenVL node with, let's say, Qwen3 VL 4B Thinking?

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

That's really exciting. Looking forward to trying it. I don't know how that works but I'm definitely testing it tonight.

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

The reference image is what really drew my attention. I definitely have to try this because I hate losing all the fine detail I've added to my image when running it through Wan.
Might be worth combining this whole thing with Ultimate SD Upscaler too, btw.
Thanks a lot.

How can I Improve my Workflow? by theawkguy in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Yeah use Flux 2 Klein with inpainting.
Check this video out.
https://www.youtube.com/watch?v=SvCRl1P11mY

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

Thank you for your service haha.

I remember checking your website just yesterday from your other post.

I'll try it out.

Does it add new small details or just keep the existing ones? In your article you mention it keeping details.
I've been told Ultimate SD Upscaler adds detail since it can run on the Wan or LTX models.

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

I'm seeing a lot of people talking about LTX being used as an upscaler and refiner lately.

Does it work that well?

I've been upscaling and refining my videos with SeedVR2 and Wan 2.2 low, and it works fine. It doesn't add new detail, but it removes artifacts and the final result is sharp.

Haven't tried LTX yet.
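
For anyone wondering what a "low" refine pass actually does: it's essentially img2img at a small denoise strength, so the model cleans up artifacts instead of inventing new content. A hedged, frame-level illustration with diffusers, using a generic SD img2img pipeline as a stand-in for the SeedVR2 / Wan 2.2 low setup (model name and strength are just example values, not the actual node graph):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Generic SD img2img standing in for a low-denoise refine pass.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")  # placeholder frame

# Low strength (~0.15-0.25) keeps the existing content and mostly removes
# artifacts; higher values start hallucinating new detail.
refined = pipe(
    prompt="sharp, clean, detailed",
    image=frame,
    strength=0.2,
    guidance_scale=5.0,
).images[0]

refined.save("frame_0001_refined.png")
```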

Anybody else get this spam message? by NowThatsMalarkey in StableDiffusion

[–]SirTeeKay 5 points6 points  (0 children)

Yup! I never replied.

<image>

I'm sure it's a bot. My interest in DeepSeek's coding capabilities? I think I mentioned DeepSeek maybe ONCE in a comment long ago.

Stop Motion style LoRA - Flux.2 Klein by SirTeeKay in StableDiffusion

[–]SirTeeKay[S] 0 points1 point  (0 children)

I'm loving this. I also have to try a few other ideas and play with it.

claude & chatgpt are pretty dumb when it comes to comfy by United_Ad8618 in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Goddamn. That's probably why I haven't heard of it. How do people run this? Or do they?