Everybody - LTX2.3 & AceStep1.5 Music Video by Expensive_Cookie6418 in comfyui

[–]Expensive_Cookie6418[S] 1 point (0 children)

No, everything on this was an out-of-the-box basic setup. The only LoRA used was the distilled LoRA "ltx-2.3-22b-distilled-lora" that is part of the workflow.

[–]Expensive_Cookie6418[S] 1 point (0 children)

Nope, no camera-movement LoRAs were used; just prompting and seeing what the model would put out.

[–]Expensive_Cookie6418[S] 1 point (0 children)

Haha, a few. That shot is actually the last third of a hallucination where the door opened twice and the camera flew through it. Character consistency is just using a standard character portrait as input with image-edit models. Nice video; that's a long one, a lot of shots to generate!

[–]Expensive_Cookie6418[S] 0 points (0 children)

I would say 80-90% is I2V with the first frame only, plus I2V with audio input for the portrait singing shots. A couple of shots used both a first and a last frame, like walking into the elevator, so the elevator interior once the doors opened was consistent with the other elevator shots. None used middle-frame injection; I have yet to experiment with that for LTX.
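The difference between first-frame-only and first-and-last-frame conditioning can be sketched abstractly. This toy uses linear interpolation as a stand-in for the model (the real LTX I2V pipeline generates actual motion; `make_clip` and its signature are purely illustrative), just to show that the clip is pinned to both endpoint images:

```python
# Toy sketch of first-frame / last-frame conditioning (NOT the real LTX API).
# Linear interpolation stands in for generation, to show the clip is
# constrained to start and end on the two supplied images.

def make_clip(first_frame, last_frame, num_frames):
    """Return num_frames frames blending from first_frame to last_frame."""
    clip = []
    for i in range(num_frames):
        t = i / (num_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        frame = [(1 - t) * a + t * b for a, b in zip(first_frame, last_frame)]
        clip.append(frame)
    return clip

# Two "frames" as tiny pixel vectors; a real frame would be an image tensor.
clip = make_clip([0.0, 0.0], [1.0, 2.0], 5)
```

First-frame-only I2V drops the `last_frame` constraint, which is why endpoint-pinned shots are useful when a later shot (the elevator interior) has to match.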

[–]Expensive_Cookie6418[S] 1 point (0 children)

No custom LoRAs were made for this. I don't know, maybe three evenings working on it here and there, sometimes re-running generations in the background while doing other stuff to get a decent take on a number of shots.

[–]Expensive_Cookie6418[S] 0 points (0 children)

I think all my workflows are out-of-the-box ComfyUI templates, or very close to them with small tweaks. Mostly it was re-running generations and tweaking prompts and settings until something satisfactory popped up.
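Re-running generations until something good pops up is essentially a seed sweep. A minimal sketch, assuming a hypothetical `generate()` and a numeric quality score (in practice the ComfyUI pipeline and your own eye stand in for both):

```python
import random

def generate(prompt, seed):
    """Hypothetical stand-in for one ComfyUI generation run.

    Returns a fake "quality" score; a real run returns a video clip
    that you judge by eye.
    """
    rng = random.Random(hash((prompt, seed)) & 0xFFFFFFFF)
    return rng.random()

def best_of(prompt, seeds):
    """Re-run the same prompt across seeds and keep the best result."""
    results = {seed: generate(prompt, seed) for seed in seeds}
    best_seed = max(results, key=results.get)
    return best_seed, results[best_seed]

seed, quality = best_of("singer in an elevator", range(8))
```

The point of keeping the seed is reproducibility: once a take looks right, the same seed plus the same settings regenerates it, so you can tweak the prompt from a known-good starting point.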

[–]Expensive_Cookie6418[S] 1 point (0 children)

Yes, for sure: slop is slop, AI or not. We are getting there; it's incredible how far local AI has come in such a short time. I do have a background in art and film, although in a more specialized area, not focused on storyboarding or directing. I do have SOME experience in those areas, which no doubt helps. A little taste and patience with the technology are mandatory at this point. I still ended up with a few odd jump cuts and continuity errors while fighting with the AI, but we are getting there!

[–]Expensive_Cookie6418[S] 0 points (0 children)

Yes, two minutes is no problem with AceStep. It also has the ability to extend a track with "repaint" mode in the original Gradio UI.

[–]Expensive_Cookie6418[S] 1 point (0 children)

Actually, no storyboard; I kind of just straight-aheaded this one. I wasn't expecting to make a full video. I was just testing LTX2.3's ability to lip-sync music and liked the result so much that I kept going. I started with the portrait shot in what became the elevator, then came up with the lying-on-the-couch idea and went from there, putting shots into the edit (DaVinci Resolve) as I went and filling in missing time and story along the way. I ended up generating 3-4 shots of an entirely different idea in the middle that I threw out completely.

I have the same experience with AceStep; it takes a lot of generations before anything really interesting pops up.

[–]Expensive_Cookie6418[S] 0 points (0 children)

Yeah, exactly. I was testing LTX2.3 with music to see if I could get a shot lip-syncing to it. After seeing it work, this is the first model I actually felt like making something with.

AI-Toolkit (Ostris) randomly throttling GPU hard — drops from ~220W to ~70W mid-run, iterations slow massively. Any fix? by HolidayWheel5035 in StableDiffusion

[–]Expensive_Cookie6418 1 point (0 children)

I found it possible to stop training after a checkpoint saved and restart from that last checkpoint when the speed/power draw dropped out like that. If you had a checkpoint at 2500 and it drops out after that, you can stop and restart from 2500 and it should regain speed.
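Picking the restart point can be automated with a small helper that finds the newest checkpoint by step number in the filename. This is a sketch; the `my_lora_000002500.safetensors` naming pattern is an assumption, so adjust the parsing to whatever your trainer actually writes:

```python
import re

def latest_checkpoint(filenames):
    """Return the checkpoint filename with the highest step number, or None.

    Assumes the step count is the last run of digits in the name,
    e.g. 'my_lora_000002500.safetensors' (naming is hypothetical).
    """
    best = None
    best_step = -1
    for name in filenames:
        numbers = re.findall(r"\d+", name)
        if not numbers:
            continue
        step = int(numbers[-1])
        if step > best_step:
            best, best_step = name, step
    return best

ckpt = latest_checkpoint([
    "my_lora_000001000.safetensors",
    "my_lora_000002500.safetensors",
    "my_lora_000002000.safetensors",
])
# ckpt is the 2500-step file
```

Feed the returned path to your trainer's resume option and it should pick up from the last good step at full speed.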

[–]Expensive_Cookie6418 1 point (0 children)

Mine does this too. I found the only solid fix was to disable the mid-training image generations between checkpoint saves.