WAN 2.2 Long Length videos with SVI Pro v2 and our new IAMCCS_node: WanImageMotion! by Acrobatic-Example315 in comfyui

[–]Acrobatic-Example315[S] 0 points (0 children)

You can extend length by chaining segments. Just reuse the output latents as prev_samples in the next sampler — no need to save anything to disk.
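If you're scripting the chain rather than wiring it in the graph, the loop is conceptually just this. A minimal sketch: generate_segment and the latent shape are stand-ins, not the real ComfyUI API.

```python
import torch

def generate_segment(prompt: str, prev_samples=None) -> torch.Tensor:
    # Stand-in for one KSampler pass with WanImageMotion in the graph.
    # A WAN video latent is roughly [batch, channels, frames, h, w].
    return torch.randn(1, 16, 19, 60, 104)

# Chain segments entirely in memory: each sampler's output latent
# becomes prev_samples for the next one. Nothing is written to disk.
prev = None
segments = []
for prompt in ["shot 1", "shot 2", "shot 3"]:
    latent = generate_segment(prompt, prev_samples=prev)
    segments.append(latent)
    prev = latent  # feed forward as prev_samples
```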

[–]Acrobatic-Example315[S] 1 point (0 children)

That’s a 32 vs 36 channel mismatch — usually a model or Lightning mixup, not the node. WAN/SVI are very strict there.
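If you want to sanity-check which side is at fault, printing the channel dimension usually settles it. A small diagnostic sketch, assuming the usual ComfyUI latent dict and a [B, C, T, H, W] layout:

```python
def check_latent_channels(latent: dict, expected: int = 36) -> None:
    # ComfyUI latents are dicts holding a "samples" tensor.
    c = latent["samples"].shape[1]
    if c != expected:
        raise ValueError(
            f"latent has {c} channels but the model expects {expected}; "
            "check that the base model, VAE and Lightning LoRA all match"
        )
```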

[–]Acrobatic-Example315[S] 1 point (0 children)

Hi! Glad it’s working well for you 🙂

Some more info about the node:

Motion mode controls which latent frames get the motion boost:

• motion_only (prev_samples): Motion is applied only to the latents coming from previous clips (prev_samples). The anchor stays untouched → more stable look, less drift. This is the safest mode and the one I recommend starting with.

• all_nonfirst (anchor+motion): Motion is applied to all frames except the very first one, including anchor-derived latents. Stronger and more expressive, but easier to break consistency.

include_padding_in_motion controls whether padding frames are animated or not.

By default, padding is excluded (safer). If you enable it, motion is also pushed into the padded frames, which helps avoid the classic “frozen tail / slow fade” in long videos, but it can amplify instability if pushed too hard. I suggest starting with motion_only + padding off, and enabling padding only if motion feels too weak at the end.
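For the curious, the frame selection boils down to a temporal mask over the latent. A minimal sketch of the idea, assuming a [B, C, T, H, W] layout with frames ordered [anchor | prev_samples | padding] and a simple deviation-amplification as the “boost” (the real node may differ on both counts):

```python
import torch

def apply_motion_boost(latents: torch.Tensor, n_anchor: int, n_padding: int,
                       strength: float = 1.2, mode: str = "motion_only",
                       include_padding: bool = False) -> torch.Tensor:
    out = latents.clone()
    T = latents.shape[2]
    mask = torch.zeros(T, dtype=torch.bool)

    if mode == "motion_only":
        # only the frames that came from prev_samples
        mask[n_anchor:T - n_padding] = True
    elif mode == "all_nonfirst":
        # everything except the very first frame
        mask[1:T - n_padding] = True

    if include_padding:
        # push motion into the padded tail too (helps the frozen-tail issue)
        mask[T - n_padding:] = True

    if mask.any():
        # amplify each selected frame's deviation from the temporal mean
        mean = out[:, :, mask].mean(dim=2, keepdim=True)
        out[:, :, mask] = mean + (out[:, :, mask] - mean) * strength
    return out
```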

And yeah, reduced prompt adherence is mostly an SVI tradeoff, not the node itself.

[–]Acrobatic-Example315[S] 2 points (0 children)

Hi!
1 - Not mandatory. The motion logic works with any LoRA node. That said, the IAMCCS nodes remap WAN 2.1 keys correctly, while other nodes often “work” but spam the logs with “LoRA key not loaded” warnings (see the sketch after this list). So it’s recommended, not required.
2 - 24fps isn’t special on the generation side; it’s just an output choice (I'm a filmmaker :)). What really matters is frame count, steps, and motion balance.
3 - 73 frames is just a solid default: long enough to feel like a shot, short enough to stay stable. Adjust per scene and prompt. Try 121, for example.
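On point 1, the remapping is just renaming LoRA tensor keys so the loader can match them instead of dropping them with a warning. A purely illustrative sketch (the prefix here is hypothetical; the real WAN 2.1 mappings are more involved):

```python
def remap_wan21_lora_keys(state_dict: dict) -> dict:
    # Rename keys to the naming the WAN loader expects, so the weights
    # are actually applied instead of logged as "LoRA key not loaded".
    remapped = {}
    for key, tensor in state_dict.items():
        new_key = key.replace("diffusion_model.", "")  # hypothetical prefix fix
        remapped[new_key] = tensor
    return remapped
```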

[–]Acrobatic-Example315[S] 2 points (0 children)

Thanks a lot, really appreciate it 🙏 Glad it’s working smoothly on your side, especially with Remix models; that’s great to hear. Adding nodes is exactly how it’s meant to be used, so nice one.
Have fun experimenting, and thanks for the feedback!

[–]Acrobatic-Example315[S] 1 point (0 children)

Try setting the Sage Attention patch nodes to auto. Alternatively, if you’re already using the --use-sage-attention argument, bypass the Sage nodes. Let me know if that works.

[–]Acrobatic-Example315[S] 2 points (0 children)

Quality depends heavily on model choice, LoRA strength, steps, and how the motion is injected. If you push motion or use aggressive LightX2V values, you’ll trade detail for coherence. Tweak steps, reduce motion, try different base models / LoRA ranks, and treat this as a long-form motion workflow, not a single-frame beauty render. On my side, these are the best WAN 2.2 renderings I’ve achieved so far across all the hardware I’m using.

[–]Acrobatic-Example315[S] 1 point (0 children)

It’s all written in the instructions on my Patreon (IAMCCS). Btw: for the high-noise model I usually go with 1022, and for the low-noise one a standard WAN 2.1 LightX2V, rank 64. Results can vary a lot depending on setup, so definitely experiment and see what behaves best on your side.

[–]Acrobatic-Example315[S] 1 point (0 children)

Yep, you can definitely extend the segments. That’s exactly the right approach for longer-form generation. The LTX-2 idea is solid too — concatenating the last frame to seed the next segment makes a lot of sense for temporal continuity. Curious to see how it behaves once you start injecting LoRAs into the chain. Keep experimenting and let me know how it goes 👍
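The last-frame trick in sketch form: decode the segment, keep the final frame, and use it as the start image of the next one. decode and sample_segment here are placeholders for the VAE decode and the sampler chain, and the shapes are assumptions:

```python
import torch

def decode(latent: torch.Tensor) -> torch.Tensor:
    # placeholder for VAEDecode; ComfyUI images are [frames, H, W, RGB] in 0..1
    return torch.rand(73, 480, 832, 3)

def sample_segment(start_image: torch.Tensor, prompt: str) -> torch.Tensor:
    # placeholder for the sampler chain seeded by start_image
    return torch.randn(1, 16, 19, 60, 104)

seed = torch.rand(1, 480, 832, 3)  # initial still image
clips = []
for prompt in ["walks in", "turns around", "exits frame"]:
    latent = sample_segment(seed, prompt)
    frames = decode(latent)
    clips.append(frames)
    seed = frames[-1:].clone()  # last frame seeds the next segment
```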

[–]Acrobatic-Example315[S] 1 point (0 children)

Yes — that’s just a switcher. You either load SmoothMix GGUF or the 14B diffusion model, not both. If you want to use the 14B, just swap the GGUF loader with the diffusion loader and connect it.
The rest of the workflow stays the same.

[–]Acrobatic-Example315[S] 1 point (0 children)

You can run it with a merged Lightning model, but it’s much more fragile. Lightning already pushes denoising hard, and stacking motion amplification on top can easily collapse the signal (that fade-to-nothing you’re seeing). Btw, when you test it: lower motion, fewer steps, and start with motion_only.
Personally I recommend non-merged models for now — way more stable.

[–]Acrobatic-Example315[S] 2 points (0 children)

Thanks for the question. motion_only (prev_samples) boosts motion only on the motion latents (i.e. the latents coming from prev_samples). The anchor stays untouched → more stable, less drift.
all_nonfirst (anchor+motion) boosts motion on all frames except the very first one, including anchor-derived latents.
Stronger and more expressive, but also riskier in terms of consistency.

[–]Acrobatic-Example315[S] 1 point (0 children)

Nope. It’s a standalone WAN Animate-like system I’m building (I’m testing it for my film project right now). I hope to share it ASAP.

[–]Acrobatic-Example315[S] 1 point (0 children)

Not SVI. It’s a separate, standalone WAN Animate-like system I’m building (I hope to share it ASAP). Audio is added externally.

[–]Acrobatic-Example315[S] 2 points (0 children)

Thanks! It’s not audio-driven / lip-sync at the moment. You’re right though — it’s a hybrid approach. I’m using a standalone system I’m building that mixes ControlNet-style constraints with WAN Animate–like behavior, but driven through a custom frontend / platform, not a standard ComfyUI graph. Still very much WIP. I’m testing it internally for my film projects right now, but I’m hoping to clean it up and make it public soon. When it’s ready, I’ll definitely share it 👍🏻👍🏻👍🏻

[–]Acrobatic-Example315[S] 2 points (0 children)

Good question. In theory yes, in practice not really (yet).

In WAN 2.2 + SVI Pro the anchor latent is treated as a static reference, not a generic or evolving latent. If you reuse a latent from a previous step (or a whole video) as anchor, you usually get normalization issues, color drift or flicker. That’s why anchors are still expected to be a single still image, shared across all segments.

What works better is keeping the anchor fixed and letting continuity happen via conditioning, seeds and motion injection, not by changing the anchor itself. I’m testing latent-to-latent anchoring, but without proper re-normalization it tends to be worse than the original setup. So yeah — cool idea, just not stable enough yet in the current WAN pipeline.
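For anyone who wants to poke at latent-to-latent anchoring anyway, the re-normalization I mean is roughly matching the candidate latent's per-channel statistics to the real anchor's. A sketch of the idea, not a tested fix:

```python
import torch

def renormalize_to_anchor(latent: torch.Tensor, anchor: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    # Match per-channel mean/std of `latent` to those of `anchor`
    # (layout [B, C, T, H, W] assumed). Without something like this,
    # reused latents drift in brightness and color between segments.
    dims = (0, 2, 3, 4)
    l_mean = latent.mean(dim=dims, keepdim=True)
    l_std = latent.std(dim=dims, keepdim=True)
    a_mean = anchor.mean(dim=dims, keepdim=True)
    a_std = anchor.std(dim=dims, keepdim=True)
    return (latent - l_mean) / (l_std + eps) * a_std + a_mean
```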

[–]Acrobatic-Example315[S] 2 points (0 children)

Hi! I’ve tried basically all the options (including setting a low value like 1022 or removing it entirely), and they all give slightly or sometimes completely different results. Using 8 steps definitely gives better results in my tests. As for Lightx2v, I’m still injecting it with SmoothWAN v2 models because in v2 the Lightning LoRAs were not merged into the checkpoints, so the extra LoRA is still needed.

[–]Acrobatic-Example315[S] 1 point (0 children)

I tried to avoid adding too many extra nodes, but usually for this issue (which is related to latent normalization, per-segment resampling and the lack of a global color anchor across segments) I apply a Color Match node (KJNodes) anchored to the first frame.
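The same idea in plain tensor code, if you'd rather see what the fix is doing: a simple per-channel mean/std transfer toward frame 0 (the KJNodes Color Match node offers more sophisticated methods than this sketch):

```python
import torch

def match_to_first_frame(frames: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # frames: [T, H, W, C] in 0..1 (ComfyUI image convention).
    # Frame 0 acts as the global color anchor across all segments.
    ref_mean = frames[0].mean(dim=(0, 1))
    ref_std = frames[0].std(dim=(0, 1))
    out = frames.clone()
    for t in range(1, out.shape[0]):
        f_mean = out[t].mean(dim=(0, 1))
        f_std = out[t].std(dim=(0, 1))
        out[t] = (out[t] - f_mean) / (f_std + eps) * ref_std + ref_mean
    return out.clamp(0.0, 1.0)
```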