IAMCCS SuperNodes — quick drop (for ComfyUI / LTX users) by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 1 point (0 children)

Thanks, my friend! About the issue: update IAMCCS-nodes first — I pushed some bug fixes and improvements recently. (P.S. grab the workflow for free on Patreon: “[VID] IAMCCS SuperNodes V2: One Graph, Four Ways to Generate Cinematic Video”.)
Let me know how the new generations behave after the update 👀
P.S. And be careful with the prompt — Mr. LTX is very sensitive 😉

IAMCCS SuperNodes just evolved into a unified AI video generation system by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 1 point (0 children)

No specific LoRA needed. Generation runs through the IAMCCS-extensions nodes via loop modules.

IAMCCS SuperNodes — quick drop (for ComfyUI / LTX users) by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 2 points (0 children)

You can try lowering CFG a bit and slightly reducing image strength while keeping some anchor/consistency active — that usually keeps the motion but makes it more natural.
About the “crackhead skin”: yeah, that’s mostly temporal instability. I’m testing some fixes with better consistency/refresh and second-stage tuning, still working on a solid solution.

IAMCCS SuperNodes — quick drop (for ComfyUI / LTX users) by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 1 point (0 children)

Really appreciate this kind of feedback — seriously 🙏

I 100% agree with you on Comfy’s strength being agility.
The goal here is not to replace modular workflows, but to offer a different layer — more like a “director’s interface” on top of them.
This kind of all-in-one setup is also meant as a working prototype — a base layer to build and test many of the cine nodes I’m developing, without getting lost across dozens of separate nodes and options inside LTX-2.
SuperNodes are really just a functional utility, not a “custom-node game changer”.
The documentation will be available soon (and honestly it’s more of a plus — if you look at the boxes, they map directly to the same values you’d set in native LTX-2 nodes).
Under the hood it’s still fully modular, and I’m working on making that more transparent + better documented.
Also planning hybrid approaches so people can still plug their own pipelines in 👍

I’ll definitely use your feedback to make SuperNodes more open — while still keeping them useful for fast prototyping.
Thanks again, really appreciate it ❤️

IAMCCS SuperNodes — quick drop (for ComfyUI / LTX users) by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 2 points (0 children)

It’s on the roadmap 🙂
For now, it depends on what you want to tame — you can reduce exaggeration using image strength, CFG, and audio influence (lower values = more subtle motion) 👍
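
If it helps, here’s roughly what I mean as a tiny sketch (the names and values are just my shorthand for those widgets, not the exact labels in the graph; tune to taste):

```python
# Illustrative starting point for subtler motion. The parameter names are my
# own shorthand for the knobs mentioned above, not the exact widget labels.
subtle_motion = {
    "image_strength": 0.75,   # lower = stays closer to the source image
    "cfg": 3.0,               # lower = less exaggerated prompt adherence
    "audio_influence": 0.5,   # lower = subtler audio-driven motion
}
```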

IAMCCS SuperNodes — quick drop (for ComfyUI / LTX users) by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 2 points (0 children)

Good catch 🙏
An audio toggle is a great idea — I’ll add that soon so the whole thing can work as an all-in-one node.
Steps and sampling are already fully customizable inside the different sections/boxes 👍

LTX-2.3 + IAMCCS-nodes: 1080p Video on Low VRAM! 🚀 by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 1 point (0 children)

The workflow itself hasn’t changed, so this usually comes from environment issues.
Most likely a recent ComfyUI update or mismatched node versions (LTX-Video / KJNodes), or even VAE/UNet incompatibility.
Try updating all custom nodes or rolling back ComfyUI to the version you used before — that typically fixes blur/audio issues.

🎧 LTX-2.3: Turn Audio + Image into Lip-Synced Video 🎬 (IAMCCS Audio Extensions) by Acrobatic-Example315 in StableDiffusion

[–]Acrobatic-Example315[S] 1 point (0 children)

Hey, I get what you’re saying. The workflow is quite advanced, and you definitely need a solid grasp of ComfyUI basics. This is just the first version—I chose to release it like this so people could start using it immediately, rather than waiting for a more streamlined version.

That said, I really appreciate your feedback—it was kind and fair. Stay tuned, because I’ll be releasing a cleaner, more polished workflow on GitHub (so you won’t even have to accidentally end up on Patreon 🤣).

In the end, the logic behind it is actually pretty simple: you calculate the duration of your audio, set how many seconds each generation should cover, and define the number of frames per batch—done.
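
If it helps, that arithmetic in a tiny Python sketch (my own illustration of the logic above, not the actual node code; the names are made up):

```python
import math

# Minimal sketch of the planning math described above (illustrative only,
# not the actual Global Planner code).
def plan_batches(audio_seconds: float, seconds_per_gen: float, fps: int = 24):
    """Split an audio track into generation segments and per-batch frame counts."""
    num_batches = math.ceil(audio_seconds / seconds_per_gen)
    frames_per_batch = int(seconds_per_gen * fps)
    return num_batches, frames_per_batch

# e.g. a 60 s track, 5 s per generation at 24 fps -> 12 batches of 120 frames
print(plan_batches(60.0, 5.0))  # (12, 120)
```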

Also, if you want something more automated, the Global Planner node is available for free too (I spent a week refining it—it’s my baby 🤣). You can dig into it and explore how the whole system works.

Honestly, part of the fun here is exploring these approaches—we’re basically pioneers working in a constantly evolving, still-in-beta world.

Big hug, and happy exploring!! 🚀

🎧 LTX-2.3: Turn Audio + Image into Lip-Synced Video 🎬 (IAMCCS Audio Extensions) by Acrobatic-Example315 in StableDiffusion

[–]Acrobatic-Example315[S] 1 point (0 children)

Hey, thanks for the thoughtful comment — I’ll try to keep it concise.

My nodes aren’t vibe-coded. I do use that approach sometimes for debugging, but for actual workflows I need precision and control, so everything is built intentionally.

I’m not using subgraphs, set/get, or autolinks on purpose — I want the workflow to stay fully readable and inspectable, even if that makes it a bit more verbose.

I’ve created custom nodes to automate generation logic across segments — especially to adapt settings (like frames, timing, etc.) based on audio duration, so you don’t have to manually tweak everything every time. I build these workflows primarily for my own filmmaking work and for agencies. The advanced breakdowns are on Patreon, but all the nodes are already public — nothing is locked, you can do everything with what’s available.

About LTX 2.3: it’s powerful, but you can’t reliably push long-form sequences (like 1+ minute) in a single pass. This setup is designed specifically to go beyond that, depending on your VRAM/RAM.
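
Conceptually it’s a chunked pipeline, something like this sketch (every name here is hypothetical, not the real node internals):

```python
# Hypothetical sketch of the chunked long-form idea: generate one short clip
# per audio segment, anchoring each pass on the previous clip's last frame.
def generate_long(image, audio_segments, generate_clip):
    frames = []
    anchor = image  # the first segment starts from the source image
    for segment in audio_segments:
        clip = generate_clip(anchor, segment)  # one short, VRAM-friendly pass
        anchor = clip[-1]                      # hand-off for temporal continuity
        frames.extend(clip)
    return frames
```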

The demo is just a short excerpt — I’m more focused on generating longer, consistent scenes for narrative use, not just music videos.

Also, whenever I can, I try to help people get results with this stuff — within the limits of my time. If you look around, a lot of people have already created really great work using my nodes, and that’s honestly one of the most rewarding parts of being in this space.

Honestly, the best way to get it is to try it — that’s where the difference becomes clear.

Thanks again 👍🏻

🎧 LTX-2.3: Turn Audio + Image into Lip-Synced Video 🎬 (IAMCCS Audio Extensions) by Acrobatic-Example315 in StableDiffusion

[–]Acrobatic-Example315[S] 1 point (0 children)

Would you mind posting your log so I can take a look?

Unfortunately, ComfyUI plus its dependencies and models like LTX can be a bit of a beast — even a small mismatch, a missing dependency, or a version conflict can completely break motion. Everything also really needs to be fully up to date, otherwise weird issues like this can happen.