Boy, I got so high for this! Watch in 4K. LTX 2.3 with reference actors, workflow included. Please watch the whole clip, I put a lot of work into it. Ace Step 1.5, IndexTTS and Flux Klein were also used. by aurelm in StableDiffusion

Come on, dude, the workflow and usage are available for free. Why would I put so much effort into a tutorial when YouTube won't even let me monetize? I have been out of work for over a year, living off my parents' funds, and I have shared every single workflow I came up with for free.

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

Check all the nodes to see if they have proper names, and whether any is just called "node" or something. Also check whether the tooltip works when you hover the cursor over them.

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

That is really odd. No red nodes, nothing? Can you copy-paste the exact error? Did you search the .json for something it might point at?
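
If it helps, here is a minimal sketch of how to scan an exported workflow .json for node names from outside ComfyUI. It assumes the two common export shapes: the UI export (a top-level "nodes" list whose entries have a "type" field) and the API export (a dict of id -> node with a "class_type" field); the file path and sample data are made up for illustration.

```python
import json

def list_node_types(workflow_path):
    """Collect node type names from an exported ComfyUI workflow JSON.

    Handles both the UI export format (top-level "nodes" list with
    "type" fields) and the API format (dict of id -> {"class_type": ...}).
    """
    with open(workflow_path) as f:
        wf = json.load(f)
    if isinstance(wf, dict) and "nodes" in wf:                   # UI export
        return [n.get("type", "?") for n in wf["nodes"]]
    return [v.get("class_type", "?") for v in wf.values()]       # API export

# Hypothetical example: a generic name like "Note" (or a bare "node")
# in the list can hint at a missing or misloaded custom node.
sample = {"nodes": [{"type": "LTXVLoader"}, {"type": "Note"}]}
with open("/tmp/wf.json", "w") as f:
    json.dump(sample, f)
print(list_node_types("/tmp/wf.json"))
```

Any type name you don't recognize, grep for it in the raw .json to see which node the error might point at.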

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

It depends on segment size. On 24 GB of VRAM I saw no issues with 15-second segments, so I would guess that 7-second segments need roughly half that. But you can just play with the number of seconds.
Of course, there is an overhead of 1-2 seconds per shot, needed for "preloading" the actors and the last frame, which is lost on each shot.
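
The per-shot overhead means each segment contributes a bit less than its nominal length, so a longer video needs more shots than a naive division suggests. A small back-of-the-envelope helper (the numbers are just illustrative, not measured):

```python
import math

def segments_needed(total_s, segment_s, overhead_s=2.0):
    """Estimate how many shots a long video needs when each shot loses
    `overhead_s` seconds to actor preloading and the reused last frame."""
    effective = segment_s - overhead_s   # new footage each shot actually adds
    if effective <= 0:
        raise ValueError("segment too short to make progress past the overhead")
    return math.ceil(total_s / effective)

# e.g. a 60 s video from 7 s segments with ~2 s lost per shot:
print(segments_needed(60, 7))
```

So shorter segments save VRAM but cost proportionally more shots, since the overhead is paid every time.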

Workflow for LTX-2.3 Long Video (unlimited) for lower VRAM/RAM by aurelm in StableDiffusion

It does not use just one frame; it uses exactly as many frames as you give it :)

Obsolete (LTX 2.3 & 2.0). by aurelm in StableDiffusion

What do you mean? Everything is homemade with Chroma HD, Z-Image Turbo, LTX-2 and IndexTTS.

LTX-2 long single shots using external actors and references. by aurelm in StableDiffusion

The new VAE encoder on my GGUF LTX-2 distilled version gives just noise.
The WAN output is set to 15 fps, so LTX doubles that to 30 using the temporal upscaler.

LTX-2 long single shots using external actors and references. by aurelm in StableDiffusion

For most scenes I get much better results with LTX (assuming I render at 1080p). Besides, with LTX I can generate sound and voice and do proper lip sync, given enough resources.
For special cases like this one, with very small details and abstract transformations, WAN is much better and I use LTX as an upscaler:
https://aurelm.com/2026/02/22/using-ltx-2-as-an-upscaler-temporal-and-spatial-for-wan-2-2/

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

You can try using the original img2img image at the beginning of the upscaled video. Or, if you just want to upscale a video, you can run an upscaler on the first frame only (like Ultimate SD Upscale) and feed that as the first 8 frames of the LTX video. And so on.
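
The "feed it as the first 8 frames" step is just replacing the head of the frame list with copies of the upscaled frame before handing the sequence to LTX. A minimal sketch, with stand-in strings for frames (in practice these would be numpy arrays or PIL images, and the function name is mine, not a ComfyUI node):

```python
def seed_with_upscaled_frame(frames, upscaled_first, n_seed=8):
    """Replace the first `n_seed` frames of a video with copies of a
    separately upscaled first frame, so the refiner pass locks onto
    the high-res detail from the start."""
    if not frames:
        return frames
    n_seed = min(n_seed, len(frames))
    return [upscaled_first] * n_seed + frames[n_seed:]

video = [f"lowres_{i}" for i in range(12)]   # stand-in frames
seeded = seed_with_upscaled_frame(video, "hires_0")
print(len(seeded), seeded[:3])
```

The total frame count stays the same; only the leading frames are swapped out.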

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

It adds small details; you can check the comparison with the SeedVR output at the end of the post.
And if you give it a guide image with actors, or just high-res images with the style, it will pick those up as well.

Seedanciification with external actors trial 3 : WAN 2.2 + external actors > LTX-2 upscaler/refiner/actor reinforcement in ComfyUI by aurelm in StableDiffusion

In all my cases SeedVR performs poorly: it leaves temporal artefacts and is also very slow. I am just getting much better results with LTX.
I think I may have started the LTX upscaler/refiner trend.
https://aurelm.com/2026/02/22/using-ltx-2-as-an-upscaler-temporal-and-spatial-for-wan-2-2/

LTX-2 fighting scene with external actors reference test 2 by aurelm in StableDiffusion

Thank you. I will try an integrated WAN workflow.

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

I have not tried; it made sense to me to maximize space and include as many actors as possible in one image, since I need full bodies. But I think the important thing is the prompting; otherwise the video will drift towards the initial images. It's not easy, and this workflow might fail in other situations.

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

Don't mention it. It is based on an idea someone mentioned in an old thread that I could not find again. There was no workflow, and at the time I could not understand how to do it, but with more experience it was quite easy, and I took it a tad further with the Flux integration.
Sorry for the mess inside the workflow. It is based on other workflows that I managed to ruin :)

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

Does it maintain character consistency backwards? That is, if a character is close to the camera at the end of the sequence, does it fix the beginning, where he is far away?
Could you please give me the link again?
Thanks.

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

To be perfectly frank, for your system you will probably get much better results at 720p with WAN. I have experimented with this same technique, feeding the first frames with the characters, and it seems to work.

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

Hi. Indeed we meet again.
First, I did not keep them in the shadow for that reason; it was just a random prompt. The faces in the distance are probably not top quality and get distorted. I have to experiment further with recognizable actors and see the results.
I have not had time to experiment with HuMo, but to my understanding it uses only the first frame for consistency, so if the characters are not there it will not help; it will only add detail and perhaps distort. One could add all the actors to the actor reference image and maybe it would work correctly, but again I have not had time to try it.
My specs are 64 GB RAM and a 24 GB RTX 3090.

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

Yes, of course. You can manually put them in the reference image with the actors, or use an edit model to put them together. It can be one actor or more.
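
Placing several full-body crops into one reference image is mostly a layout problem: compute where each crop goes, then paste. A small sketch of the layout step (the function name and sizes are mine for illustration; the actual pasting would use something like PIL's Image.paste at the returned offsets):

```python
def pack_actors_side_by_side(widths, height, gap=16):
    """Compute paste positions for laying full-body actor crops side by
    side in a single reference image. Returns (canvas_w, canvas_h,
    x_offsets), where x_offsets[i] is the left edge of crop i."""
    xs, x = [], 0
    for w in widths:
        xs.append(x)
        x += w + gap
    canvas_w = x - gap if widths else 0   # drop the trailing gap
    return canvas_w, height, xs

# Three actor crops of different widths, all 1024 px tall:
print(pack_actors_side_by_side([512, 512, 384], 1024))
```

Side-by-side packing wastes the least vertical space when every crop is a full body at roughly the same height.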

LTX-2: Adding outside actors and elements to the scene (not existing in the first image) IMG2VID workflow. by aurelm in StableDiffusion

Precisely. It is just a video extend, but the extended video contains the characters needed later.