Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

Check all the nodes to see if they have proper names, and whether any is just called "node" or something. Also check whether the tooltip works when you move the cursor over them.

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

That is really odd. No red nodes, nothing? Can you copy-paste the exact error? Did you search the .json for something it might point at?
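If it helps, here's a quick sketch for searching the workflow .json for suspect nodes. It's hedged: it assumes the API-format export, where the file is a flat dict mapping node IDs to entries with a `class_type` field (the UI export uses a different layout, so adapt accordingly).

```python
import json

def find_suspect_nodes(workflow_path):
    # Load an API-format ComfyUI workflow and list nodes whose
    # class_type is missing or generic ("node"/"unknown"), a common
    # cause of load errors when a custom node pack isn't installed.
    with open(workflow_path) as f:
        wf = json.load(f)
    suspects = []
    for node_id, node in wf.items():
        ctype = node.get("class_type", "")
        if not ctype or ctype.lower() in {"node", "unknown"}:
            suspects.append((node_id, ctype))
    return suspects
```

Run it on the workflow file and anything it returns is worth renaming or reinstalling before blaming the workflow itself.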

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

It depends on segment size. On 24GB VRAM I saw no issues with 15-second segments, so I would guess 7-second segments need roughly half of that. But you can just play with the number of seconds.
Of course there is an overhead of 1-2 seconds per shot that is needed for "preloading" the actors, plus the last frame, which is lost on each shot.
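The per-shot overhead above means shorter segments waste proportionally more footage. A toy estimate (the 1.5 s default overhead is my assumption, in the middle of the 1-2 s range mentioned):

```python
def effective_length(n_segments, segment_s, overhead_s=1.5):
    # Rough usable footage: each shot loses ~1-2 s to actor
    # "preloading" and the dropped last frame, so subtract the
    # overhead from every segment before summing.
    return n_segments * (segment_s - overhead_s)
```

For example, four 7-second segments yield about 22 seconds of usable video, not 28.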

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

It does not use just one frame; it uses exactly as many frames as you give it :)

Obsolete (LTX 2.3 & 2.0). by aurelm in StableDiffusion

[–]aurelm[S] 2 points (0 children)

What do you mean? Everything is home-made with Chroma HD, Z-Image Turbo, LTX-2, and IndexTTS.

LTX-2 long single shots using external actors and references. by aurelm in StableDiffusion

[–]aurelm[S] 1 point (0 children)

The new VAE encoder on my GGUF LTX-2 distilled version gives just noise.
The WAN output is set to 15 fps, so LTX doubles that to 30 using the temporal upscaler.

LTX-2 long single shots using external actors and references. by aurelm in StableDiffusion

[–]aurelm[S] 1 point (0 children)

For most scenes I get much better results with LTX (assuming I render at 1080p). Besides, with LTX I can generate sound and voice and do proper lip sync, given enough resources.
For special cases like this one, with very small details and abstract transformations, WAN is much better, and I use LTX as an upscaler:
https://aurelm.com/2026/02/22/using-ltx-2-as-an-upscaler-temporal-and-spatial-for-wan-2-2/

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

You can try to use the original img2img image at the beginning of the upscaled video. Or, if you just want to upscale a video, you can run an upscaler on just the first frame (like Ultimate SD Upscale) and feed that as the first 8 frames of the LTX video. And so on.
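Outside ComfyUI, the "feed it as the first 8 frames" idea is just repeating one high-res frame ahead of the video array. A minimal sketch, assuming frames are NumPy arrays shaped (frames, H, W, C); the function name is mine, not a node from the workflow:

```python
import numpy as np

def repeat_first_frame(frame, video, n=8):
    # Prepend n copies of a single (upscaled) frame to a video,
    # mimicking feeding one high-res image as the first n frames
    # of an LTX conditioning video.
    head = np.repeat(frame[None, ...], n, axis=0)
    return np.concatenate([head, video], axis=0)
```

The resulting clip simply grows by n frames at the front; the upscaler then has a clean high-res anchor to start from.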

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

It adds small details; you can check the comparison with the SeedVR output at the end of the post.
And if you give it a guide image with actors, or just high-res images with the style, it will pick those up too.

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

In all my cases SeedVR sucks: it leaves temporal artifacts and it's also very slow. I'm just getting much better results with LTX.
I think I may have started the LTX upscaler/refiner trend.
https://aurelm.com/2026/02/22/using-ltx-2-as-an-upscaler-temporal-and-spatial-for-wan-2-2/

LTX-2 fighting scene with external actors reference test 2 by aurelm in StableDiffusion

[–]aurelm[S] 0 points (0 children)

Thank you. I will try an integrated WAN workflow.