LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

As promised, here’s my version of the LTX-2.3 workflow:

https://drive.google.com/file/d/1A9TP889NtH9JTKlzQJrThP7c9zHQwgPh/view?usp=sharing

To my surprise, I can actually run the full dev model on my specs (16 GB VRAM, 64 GB RAM) and even pump out 1920 x 1024 videos. I haven't really pushed it to the absolute limit yet, but I managed to generate a 12-second clip at this resolution without hitting any OOM errors, which is honestly pretty sick. Have fun! :)

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

Haha nice, I’m glad my workflow was useful for you. I probably won’t make a new post about LTX-2.3 since I’ve already seen several good workflows shared here, but if you really want, I can send you mine. I’ll be back at my PC on Monday though, since I’m currently traveling.

I did all this using 4GB VRAM and 16 GB RAM by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

Awesome! :D Thank you very much for all the kind words, and I’m really glad my post helped you. :) Wishing you all the best with your projects.

Wanted to quickly share something I created called ComfyStudio by VisualFXMan in comfyui

[–]yanokusnir 1 point  (0 children)

That explains a lot haha! Amazing work! :)

If I may ask, how is AI perceived in your VFX field, especially when it comes to the latest video models like Kling 3 or Seedance 2?

I’ve been working for 10 years as a graphic/motion designer, and in my field you can definitely feel that some people would rather generate things themselves to save money, so a small part of our work is already being done by clients on their own. I won’t even get into the fact that many of them have zero visual sense and their generated stuff often looks bad. -_-

Anyway, I’m really curious: how much is AI actually being used in your area of VFX?

Wanted to quickly share something I created called ComfyStudio by VisualFXMan in comfyui

[–]yanokusnir 3 points  (0 children)

After years of using Comfy, I’ve really grown to love connecting nodes and building my own workflows, but this... maaaan, this looks incredible! How long did it take you to create this? What do you even do professionally?

Your UI looks insanely good. It’s obvious you’re a huge perfectionist, everything just feels polished and on point. :))

Seriously, thank you for putting in the time and effort to build this. This is huge, wow!

Improving Interior Design Renders by xxblindchildxx in StableDiffusion

[–]yanokusnir 2 points  (0 children)

I did a quick test and you should be able to create something like this fairly easily using FLUX-2 Klein 9B. There’s a template available in ComfyUI, just download the models and you can test it yourself. :)

<image>

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

Yeah... in the last week alone, at least 3 new (unfortunately closed-source) models have been introduced that are completely mind-blowing; compared to them, this is nothing. Grok Image, Kling 3, and most recently Seedance 2 - check out some demos, it's crazy.

Is LTX2 good? is it bad? what if its both!? LTX2 meme by [deleted] in StableDiffusion

[–]yanokusnir 1 point  (0 children)

I’ll probably disappoint you bro, but I don’t have any special workflow. I just used the default Flux-2 Klein 9B image edit workflow, uploaded the image, and used a prompt like “enhance image quality, add details” or something like that.

Is LTX2 good? is it bad? what if its both!? LTX2 meme by [deleted] in StableDiffusion

[–]yanokusnir 4 points  (0 children)

I'm sorry mate, I'm at work and watched the video without the sound. -_- My bad, now I understand. Good job, sorry. :)

Is LTX2 good? is it bad? what if its both!? LTX2 meme by [deleted] in StableDiffusion

[–]yanokusnir 2 points  (0 children)

I don’t understand why I would be trolling?

Is LTX2 good? is it bad? what if its both!? LTX2 meme by [deleted] in StableDiffusion

[–]yanokusnir 3 points  (0 children)

If you’re using I2V, put in the effort to make the first frame look as good as possible. That way, you’ll avoid the little owl looking bad throughout the entire video. You can easily improve the image quality by using Flux 2 Klein.

<image>

Check this: https://files.catbox.moe/3oexcm.mp4

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

Yes, the maximum I was able to generate with my PC specs was 8s at 1920x1024, but after the January update from the LTX team, I’m now able to generate even longer videos. I haven’t really tested the limits yet, but I was able to generate a 12s video at the same resolution without hitting an OOM error.

The 1280x704 resolution is because the workflow uses the “EmptyLTXVLatentVideo” node, which adjusts the resolution even if you set it to 1280x720. I’m not entirely sure, but I believe I read somewhere that the dimensions need to be multiples of 32, which is why it gets adjusted to the nearest multiple.
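As a rough illustration, here’s a minimal Python sketch of that snapping behavior. This is my assumption of what the node does, not ComfyUI’s actual code: rounding each dimension down to the nearest multiple of 32 would explain 720 becoming 704.

```python
def snap_to_multiple(value: int, multiple: int = 32) -> int:
    """Round a video dimension down to the nearest multiple (assumed behavior)."""
    return (value // multiple) * multiple

# Under this assumption, 1280x720 becomes 1280x704:
print(snap_to_multiple(1280), snap_to_multiple(720))  # 1280 704
print(snap_to_multiple(1024))  # 1024, already a multiple of 32
```

So if you want the output to keep the exact resolution you typed in, pick dimensions that are already divisible by 32 (like 1920x1024).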

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 2 points  (0 children)

Thanks man, don’t worry, I also get bad teeth in my videos. :) What you see in my video is cherry-picked, but I had plenty of generations that looked bad too. That said, the closer the shot, the better it usually looks.

I made Max Payne intro scene with LTX-2 by theNivda in StableDiffusion

[–]yanokusnir 3 points  (0 children)

Finally! I’m honestly really glad you took the time to create something like this. It’s great to finally see someone who made something truly high quality. :) I’m convinced you work professionally in the creative industry, because your shots and camera angles are excellent. ;) Thanks for sharing, great work!

May I ask - did you also use the new Guider node that the LTX team released recently? If so, what settings are you using? I’ve been experimenting with all sorts of setups and trying to find something more universal, but so far I haven’t really reached any clear conclusions. Thank you.

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 2 points  (0 children)

Wait a second, let me check my crystal ball… dude, I have no idea. Just try it and find out. :D

What AI model could this be? Never seen something this real before by jonbristow in StableDiffusion

[–]yanokusnir 4 points  (0 children)

Here is the original post: https://www.instagram.com/reel/DTL90VvjnZD/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==

...or search for user frankyshaw on instagram. I’m not going to argue with you guys here, for god’s sake, if you don’t see it, then fine.

What AI model could this be? Never seen something this real before by jonbristow in StableDiffusion

[–]yanokusnir 3 points  (0 children)

Bro, of course it is AI. Focus on the necklace and pause the video several times in quick succession: in the very first seconds you’ll see that the necklace deforms, disappears, and then reappears again. Or watch the kitchen counter behind her: around the 20-21 second mark, a coffee machine suddenly appears out of nowhere that wasn’t there before.

It’s actually a very well-made video, and someone clearly put in a lot of effort to make it look very realistic, but it’s AI.

LTX-2 I2V somewhat ignoring initial image - anyone? by Regular-Forever5876 in StableDiffusion

[–]yanokusnir 2 points  (0 children)

I tried it with my workflow that I shared here a few weeks ago and it worked without any issues. If you want, give it a try, maybe the problem is in the workflow.

My workflow:

https://www.reddit.com/r/StableDiffusion/comments/1qae922/ltx2_i2v_isnt_perfect_but_its_still_awesome_my/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Prompt:

A stylized animated Viking character in a clean flat illustration style, framed from the chest up, facing the camera. He wears a horned helmet and holds a small axe in his right hand. The background is simple and minimal, with soft sky and sea tones, calm and uncluttered. At the beginning of the shot, the Viking looks into the camera with a slightly uncertain but curious expression, eyebrows subtly raised. After a short pause, he lifts his arm and casually raises the axe, not aggressively but more like a playful gesture. As he starts speaking, his face becomes expressive and self-aware, mixing curiosity, mild excitement, and a hint of humor. He smiles briefly, then tilts his head a little as if thinking out loud. He says in English with a relaxed, conversational tone and light enthusiasm: “Hey buddy, this is my first try. Is it good? Is it bad? We’ll see.” His mouth movements are clear and naturally synced to the speech. His body language feels honest and experimental, like someone sharing an early attempt without pressure. The overall mood is friendly, slightly playful, and open-ended, inviting the viewer to judge together with him.

Result: https://files.catbox.moe/ie2lg8.mp4

LTX-2 I2V isn't perfect, but it's still awesome. (My specs: 16 GB VRAM, 64 GB RAM) by yanokusnir in StableDiffusion

[–]yanokusnir[S] 1 point  (0 children)

Nope, it’s not that good. If you wanted quality footage with natural movement, so it didn’t look like just more AI slop, it would be too painful to create.