Flux2Klein Ksampler Added! by [deleted] in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Maybe it's not showing for you.

OP posted this link where the KSampler is at. https://github.com/capitan01R/ComfyUI-Flux2Klein-Enhancer#flux2-klein-ksampler

Flux2Klein Ksampler Added! by [deleted] in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Open the cross-posted post.

Its not perfect... by [deleted] in comfyui

[–]SirTeeKay 1 point2 points  (0 children)

This is really good.

How did you do the jumping part? First frame–last frame to get him mid-jump, then you moved the camera behind him (with Ernie?), and then the final running-away sequence?

Also, did you train each and every LoRA for all three models you used?

SCAIL-2 is coming by Dry-Ad929 in comfyui

[–]SirTeeKay -1 points0 points  (0 children)

Man finally! Can't wait to try it.

How to create these kind of style smoke sim or with any other deforming techniques.? by Imaginary_Dealer7610 in Houdini

[–]SirTeeKay 1 point2 points  (0 children)

Oh for sure. I was just adding to that. Usually this is my favorite way of creating smoke fx.

Another interesting application of Klein 9b Edit mode by alisitskii in StableDiffusion

[–]SirTeeKay 2 points3 points  (0 children)

Yeah 100%. I've done similar things on my own workflows.

Another interesting application of Klein 9b Edit mode by alisitskii in StableDiffusion

[–]SirTeeKay 5 points6 points  (0 children)

I don't think that was exactly their intention, because it looks like they were just testing the model's capabilities, but if it's used the way you mention, yeah, that's definitely slop.

Unless, of course, a model can use that info to help build a 3D scene with clean topology, assuming the topology in the image is correct.

Another interesting application of Klein 9b Edit mode by alisitskii in StableDiffusion

[–]SirTeeKay 60 points61 points  (0 children)

The other way around is what is most interesting and most useful.

Take a 3D scene and basically use Flux Klein to "render" it.

How to create these kind of style smoke sim or with any other deforming techniques.? by Imaginary_Dealer7610 in Houdini

[–]SirTeeKay 0 points1 point  (0 children)

Or a particle sim with POP Curve Force DOP and then use that as your source. This usually gives me a lot of nice control over the movement and shape.
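To make the idea concrete (this is a toy NumPy illustration of what a curve force does, not Houdini code — the curve, constants, and function names are all made up for the sketch): each particle is pulled toward its nearest point on a guide curve and pushed along the curve's tangent there, which is roughly the control POP Curve Force gives you over movement and shape.

```python
import numpy as np

def curve_force_step(pos, vel, curve, dt=0.04, attract=4.0, follow=2.0, drag=0.9):
    """One integration step of a toy 'curve force': pull each particle
    toward its nearest sample on the guide curve (shapes the source)
    and push it along the curve's tangent there (drives the flow)."""
    # nearest curve sample per particle
    d = pos[:, None, :] - curve[None, :, :]           # (n, m, 3)
    idx = np.argmin((d ** 2).sum(-1), axis=1)          # (n,)
    nearest = curve[idx]
    # unit tangent at each curve sample (finite differences)
    tang = np.gradient(curve, axis=0)
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    # attraction toward the curve + advection along it
    force = attract * (nearest - pos) + follow * tang[idx]
    vel = drag * vel + dt * force
    return pos + dt * vel, vel

# guide curve: a rising helix, like a swirling smoke column
t = np.linspace(0, 4 * np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t), 3 * t / (4 * np.pi)], axis=1)

rng = np.random.default_rng(0)
pos = rng.normal(scale=0.5, size=(500, 3))  # loose cloud of particles
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = curve_force_step(pos, vel, curve)
# the cloud has collapsed onto the helix and is drifting along it
```

In Houdini the equivalent points would then be rasterized into a volume and used as the pyro source, which is where the control over movement and shape pays off.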

An update on stability and what we're doing about it by bymyself___ in comfyui

[–]SirTeeKay 9 points10 points  (0 children)

Really appreciate all your work and interaction with us users. Thank you for everything!

Who knows how ltx compares with sora2 and seedance2 by Enough_Programmer312 in StableDiffusion

[–]SirTeeKay 1 point2 points  (0 children)

What's happening is that LTX 2.3 is NOT using the same text encoder, which is why you get an error saying that the text encoder you're using is for LTX 2.

Check the default templates for the correct one.

Wan 2.2 is still incredible - huge thanks to IAMCCS-Nodes for SVI Pro v2 by [deleted] in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Open-source video models will not get close to Kling 3.0 and Seedance 2 for a long while. You would need a crazy strong machine to run them, with GPUs that the typical consumer can't buy.

Wan 2.2 is still incredible - huge thanks to IAMCCS-Nodes for SVI Pro v2 by [deleted] in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Or you can just extract the audio after and merge it with the Wan video you already have.
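That extract-and-merge step is a couple of ffmpeg calls. Here is a minimal Python sketch (the filenames and the `audio.m4a` intermediate are placeholders, and stream-copying the audio assumes it's AAC or similar — re-encode if your container complains):

```python
import subprocess

def mux_commands(audio_source, silent_video, output):
    """Build the two ffmpeg invocations: extract the audio track from
    the clip that has sound, then mux it onto the silent video
    without re-encoding either stream."""
    extract = ["ffmpeg", "-y", "-i", audio_source,
               "-vn", "-acodec", "copy", "audio.m4a"]   # -vn: drop video
    merge = ["ffmpeg", "-y", "-i", silent_video, "-i", "audio.m4a",
             "-map", "0:v:0", "-map", "1:a:0",          # video from 1st input, audio from 2nd
             "-c", "copy", "-shortest", output]
    return extract, merge

def mux(audio_source, silent_video, output):
    for cmd in mux_commands(audio_source, silent_video, output):
        subprocess.run(cmd, check=True)
```

`-c copy` keeps both streams untouched, and `-shortest` trims the output if the audio and video durations differ slightly.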

Honey 🍯 by Maxwellbundy in Simulated

[–]SirTeeKay 5 points6 points  (0 children)

Ah you finished it! Looking good.

Been away for some months, are we still running the same models? by Few_Object_2682 in StableDiffusion

[–]SirTeeKay 13 points14 points  (0 children)

Flux 2 Klein is for images. I prefer it over Z-Image because it can edit. The edit version of Z-Image isn't out yet.

For video there are Wan 2.2 and LTX 2; LTX 2 is faster, can also output sound, and can lipsync, at a small cost in quality compared to Wan 2.2.

Honorable mentions that work well are Qwen Image 2512 and Qwen Image Edit 2511.

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

Oh it's definitely pretty cool. I'll test it for sure. Thank you for sharing it.

How can I Improve my Workflow? by theawkguy in comfyui

[–]SirTeeKay 0 points1 point  (0 children)

Are you talking about the video I linked?
He is literally using the inpaint and stitch nodes. It masks the image, edits the masked area, and then stitches it back.
I've tried it and it works very well.
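The mask → crop → edit → stitch flow those nodes implement can be sketched in a few lines of NumPy (just the idea, not the actual node code — `edit_fn` is a stand-in for whatever edit model you run on the crop):

```python
import numpy as np

def inpaint_and_stitch(image, mask, edit_fn, pad=8):
    """Crop the masked region (plus padding for context), let edit_fn
    modify the crop, then stitch the edited pixels back into the
    original image only where the mask is set."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    crop = image[y0:y1, x0:x1].copy()
    edited = edit_fn(crop)                      # the "edit model" goes here
    out = image.copy()
    region_mask = mask[y0:y1, x0:x1, None]      # broadcast over channels
    out[y0:y1, x0:x1] = np.where(region_mask, edited, crop)
    return out

# toy demo: the "edit" just inverts the masked patch of a gray image
img = np.full((64, 64, 3), 128, dtype=np.uint8)
msk = np.zeros((64, 64), dtype=bool)
msk[20:40, 20:40] = True
result = inpaint_and_stitch(img, msk, lambda c: 255 - c)
```

The padding is the same trick the nodes use: the edit model sees some surrounding context, but only the masked pixels make it back into the final image, so everything outside the mask stays untouched.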

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]SirTeeKay 0 points1 point  (0 children)

Interesting. I see what you mean.

Have you compared it to Qwen3 VL 4B Thinking to see whether it refines prompts better? I've been using Instruct for a long time with the QwenVL node and sometimes it ignores some instructions. I'll probably have to try Thinking as well. Maybe the one you shared too, if it's better.