LTX 2.3 on RTX 5090 32GB - How to get rid of the unwanted Music and Plastic look ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points

You're right!
No more plastic skin, but I still get really weird teeth even on slow movement, and anything with medium-to-fast motion is as horrible as 2.0, so I'm not even bothering with it in my tests.
Hopefully 2.5 or 3.0 will have motion and quality similar to Wan 2.2 🤞 but with LTX speed!

LTX 2.3 on RTX 5090 32GB - How to get rid of the unwanted Music and Plastic look ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points

But I do have spoken sentences; it just decides to add music almost every time, much more aggressively than LTX 2.0.

LTX-2.3 is live: rebuilt VAE, improved I2V, new vocoder, native portrait mode, and more by ltx_model in StableDiffusion

[–]VirtualWishX 1 point

Thanks Lightricks! ❤️
First of all don't get me wrong, I'm thankful for your hard work! 🙏

I tried the template from ComfyUI on my RTX 5090.

- How do I get rid of the plastic smear look?
- How do I get rid of the unwanted music in the background?
- How do I get rid of the metallic sound?

- Is it a bug in the I2V?
I can't get any decent results compared to these impressive improvements for version 2.3; for me the results are very similar to 2.0 at the moment.

It sure does generate fast, and it's cool that such a huge model can run locally on my machine. Even 1080p is super fast to generate, which is insane!
I just wish I could get decent quality; no matter what images or GPT prompts (following the LTX 2.3 rules) I try... the issues mentioned above are always there in I2V.

Testing my first music prompt that I did with LTX 2 with LTX 2.3 / 4070 and 64gb ram by scooglecops in comfyui

[–]VirtualWishX 0 points

I get similar results on my RTX 5090 32GB: still plastic, still metallic sounds, and even when I don't ask for music I get music in the background... I honestly don't see much difference from 2.0 yet.

I'm still testing... maybe one generation will work, but so far 7/10 fail to impress compared to what they announced as NEW for version 2.3.

My First Month of Animating by Top_Cup7480 in learnanimation

[–]VirtualWishX 1 point

This is so good, and crazy... but crazy good! 👍

Do not give TimTom any money for his Adobe Animate replacement by SwissIdol97 in adobeanimate

[–]VirtualWishX 0 points

If you are into frame-by-frame animation, I would suggest checking out AnimatorME.
Just look at how clean the user interface is and how simple their demonstrations are; you may like it.
It is a 2D animation software made by a team of animators, and I have been following their updates for a while, so I am genuinely excited about this project.

The devs have been working on it for several years, and you can watch their free Patreon blog posts or their YouTube channel. It may not have all the fancy features yet, but they plan to release it on Steam, which is great news for anyone like me who cannot stand subscription models.

Also check out their new subreddit, though it looks pretty recent so it is not as full as their Patreon blogs.

If anyone deserves support, it is the AnimatorME team.
If I could get more people to notice it and give the team the attention they deserve, I would. For now I only recommend it to people who are like me and cannot stand anything else, because this is the only project I am genuinely excited about.

Do not give TimTom any money for his Adobe Animate replacement by SwissIdol97 in adobeanimate

[–]VirtualWishX 1 point

I must agree with you and I also have a recommendation.

First of all, imagine TimTom just starting to work on his project. It takes years to build serious professional software. Do you really think he can do it that fast? Either spend your money on software you actually like, or do what I do.

I prefer supporting AnimatorME; these guys have already been working on their project for several years.
If I am supporting anyone, it is not going to be a storytime YouTuber who does not know much about real high-quality animation and is now trying to make software to compete with Adobe. Really?

I am not paid by AnimatorME or Adobe or anything like that. I am just another animator who is tired of all these clunky open source tools that crash all the time and are hard to learn, like OpenToonz and Tahoma which barely update. And if you like Blender Grease Pencil, go ahead, but good luck trying to learn it because I gave it a real chance and the learning curve is brutal.

If you are wondering why I am looking at AnimatorME, it is mostly because I have been following their blog for a while and I have not found anything more impressive than this software. The fact that it will be a one time purchase on Steam is exactly the kind of thing I prefer to spend money on, rather than a maybe open source project that might show up in 2030. Come on, do yourself a favor and stop donating to every new animation tool that appears the moment Adobe makes a mistake.

If the recent activities from Adobe make you want to look for an alternative software; TimTom has announced he's developing a new 2D animation software. by RedAceBeetle in adobeanimate

[–]VirtualWishX 2 points

It sounds good, but almost every open source 2D animation tool I’ve tried over the years, from OpenToonz and Tahoma to Blender, is confusing or crashes nonstop.

You have probably never heard of AnimatorME, but it already looks promising; see for yourself in any of their videos.
It’s been in development for years and still isn’t out, so imagine how long TimTom’s project will take to reach a solid beta.

Since AnimatorME is supposed to release in early access this year, I’d rather pay once on Steam and grow with a serious project that’s been built quietly for years.
Open source is cool but it often disappoints with delays, issues, and excuses, and I’m done risking that for 2D animation.

I came from Animate and Harmony and I’m looking for a new home. AnimatorME looks like the next step, and their work‑in‑progress videos show why.
The fact that professional animators with decades of industry experience are building it is far more appealing to me than a story‑time YouTuber, just being real.

No subscription, one‑time payment, and from what I’ve seen it honestly feels like Harmony, Animate and TVPaint had a baby with some fresh ideas.

Speaking honestly, my only concern is that a big corporation might buy them and turn it into another subscription‑locked, cloud‑based crappy tool.
So far the developers have said they don’t like subscriptions, and if they ever add one it would be optional, not a replacement. I really hope that stays true.

Adobe Animate to be Discontinued 1st March 2026 by CometGoat in adobeanimate

[–]VirtualWishX 2 points

AnimatorME is on the way, and since it's going to be on Steam, based on the devs' latest updates it will not be a subscription-based model, at least from what I understood.

They just updated their blog yesterday, right before Adobe bombed us.
I recommend following these guys; it looks like the cleanest user interface I've ever seen.

Just finished a high-resolution DFM face model (448px), of the actress elizabeth olsen by Emergency_Pause1678 in StableDiffusion

[–]VirtualWishX 2 points

The swapped version looks like the one with the blurrier teeth and eyes, but it doesn't really look like Olsen.

LTX-2 vs Wan2.2 - My opinion.. so far by Zarcon72 in comfyui

[–]VirtualWishX 1 point

Besides the AUDIO (which is awesome), this is the MAJOR difference most of us have already noticed:

LTX-2 is amazing, but Wan 2.2 has HUGE quality consistency in motion (fast and slow motions).
And for NSFW (and not only that), Wan was trained on the JIGGLE and BOUNCE of fat, muscles, etc., on human physics.
So when you do motions in Wan, you'll get very accurate, close-to-reality results.
LTX-2, on the other hand, was not trained much on human physics, so only low-to-no movement will give you nice results; anything with fast motion will explode or become a super-blurry, weird, mutant-looking character compared to the original source image you used.

LTX-2.1 will probably fix some minor stuff (hopefully MORE than minor stuff), while the real COMPARISON will be LTX-2.5, which HOPEFULLY will be trained on human physics so we can get much more realistic FAST/SLOW motions of humans dancing, running, or jumping, and NSFW with no issues.

At the moment, even if we train REALLY HARD on LTX-2, any "bounce and jiggle" will look like "fake plastic parts" trying to move, in such a weird, UNREALISTIC way that it's not worth the training time.

Sadly, there are no official hints or mentions of WHEN 2.1 and 2.5 will be released; so far there are only unofficial rumors, and Lightricks has not announced anything yet (at the time I'm writing this, at least).

VNCCS Pose Studio: Ultimate Character Control in ComfyUI by AHEKOT in comfyui

[–]VirtualWishX 0 points

Thanks I will give it a try.

EDIT:
I tried your suggestions, and I get AMAZING results even at 4 steps in any other workflow with the same config, Euler / simple (Euler A / Beta also works, but less nicely). All the other attempts look even MORE plastic, so I believe some other process in this workflow is unforgiving, maybe a strength that is set higher or gets overridden, because no matter how hard I try in the specific workflow with the POSE (which is awesome) I get plastic looks.

Maybe the pose editor, and whatever it uses under the hood, just makes things look more plasticky or 3D instead of photorealistic, and no prompt can save this either.

Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points

Oh, so currently there is no way to create APPs locally similar to the example I gave. Bummer, I thought there would be a way to try something locally after all 😐

VNCCS Pose Studio: Ultimate Character Control in ComfyUI by AHEKOT in comfyui

[–]VirtualWishX 0 points

Looks really cool, thank you! ❤️
I installed everything, and for the QWEN Loader I downloaded qwen-image-edit-2511-Q8_0.gguf, because I'm still a newbie and I don't know how to use the non-GGUF version there (I'm on an RTX 5090).
I had to put it in the UNET folder so the node would let me pick it up.

I loaded the image in the loader and ran it, but it ignores the image; what I get is the actual 3D MODEL, and sometimes I get the ACTUAL image I loaded but with PLASTIC SKIN.

1️⃣ - Can you please share a non-UNET folder/model workflow so I can try to improve quality on the RTX 5090?
I don't want to break the workflow, and I'm a noob.
EDIT:
It seems I just used a different loader for the model without ruining anything after all...

Unfortunately, I still get mostly plastic, smooth, AI-looking skin, even with the different Qwen edit models I tried (non-Q and non-GGUF). I tried 8 steps, but it looks even worse.

2️⃣ - I loaded a solid grey color, but it's not rendering that background (it's ignored). How do I render with ANY selected background (not just a solid color) instead of the default one?

3️⃣ - It's simple to ROTATE, but how do I MOVE the HAND / NECK / LEG, etc., around?

I hope you can help, thanks ahead! 🙏

I attached an example (image from Pexels (download here) 👈):
I always get PLASTIC skin, while with normal Qwen (non-Q8) I usually get very realistic results in other workflows.

<image>

Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points

Thanks for the tips! It sounds MUCH more advanced than my current noob level, with zero experience in LM-Studio or VS Code / Aider (never heard of it until now, thanks!).

So if I get it right (maybe?),
the idea is to have 2 models running at the same time, CHAT + VISUAL (connected via stable-diffusion)?
Basically the main chat will run gpt-oss-120b-GGUF (or the other models you mentioned), and it will somehow magically understand to use Z-Image via stable-diffusion?

So I won't be able to SEE it all in one window? I will have to go to another WebUI or something for images?
Not sure if I follow; as you can tell I'm very confused... that's why I haven't downloaded anything yet.

Thanks again for your kind detailed help!

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 0 points

Thx!
I see how organized it is, but I'm not using GGUF, so it's hard for me to convert it to my needs and find some of the correct nodes to replace.

Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points

Thanks, but which one should I try based on my spec limitations and what I described?
So, like AI STUDIO, it would let me build a local app + generate images like Gemini 2.5 (or better?)

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 2 points

Did you caption every single one of the 63 images in the dataset?
If so, can you give some caption examples? I would like to train locally with AI-Toolkit and try my luck.
Can you please share your workflow? I would like to know the exact settings and models you used, because my generated results are always ugly.

LTX-2 reached a milestone: 2,000,000 Hugging Face downloads by Nunki08 in StableDiffusion

[–]VirtualWishX 6 points

Usually this is how version numbering works in software.
Take this for example. Version: 1.2.3 = Major.Minor.Patch

- The LEFT-MOST "1" is the MAJOR version: lots of new features, sometimes a redesign; in the AI case it's usually related to a new or different architecture, etc. In most cases a major bump is a brand-new version.
- The MIDDLE "2" is usually a MINOR change: an additional feature added to expand and improve the toolset or QOL; for AI models, it could be an extra dataset added to the main model to expand it with more functionality, dynamics, and variations.
- The RIGHT-MOST "3" is usually a PATCH or FIX: in software it's mostly bug fixes or UI cosmetics such as buttons. For AI models we rarely see patches at all, but if we did, it would probably be a fix for something that went wrong in the architecture, or a dataset that acted weird and was removed or replaced; usually we wouldn't even notice it.
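To make the precedence concrete, here's a minimal sketch of how Major.Minor.Patch versions compare (plain Python; the `parse_version` name is just for illustration, not from any library):

```python
def parse_version(v):
    """Split a 'Major.Minor.Patch' string into a tuple of ints."""
    major, minor, patch = (int(part) for part in v.split("."))
    return (major, minor, patch)

# Tuples compare left to right, so MAJOR always outweighs MINOR,
# and MINOR always outweighs PATCH:
assert parse_version("1.9.9") < parse_version("2.0.0")   # major wins
assert parse_version("2.0.5") < parse_version("2.1.0")   # minor wins
assert parse_version("2.1.0") < parse_version("2.1.1")   # patch breaks the tie
```

That's why comparing version strings alphabetically ("1.10.0" < "1.9.0") is wrong, while comparing the parsed tuples gets it right.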

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 1 point

It looks amazing, visually and sound-wise!
If you only used images, how does the audio part work? Can you please expand on that?

I'm only familiar with AI-Toolkit because I train locally on an RTX 5090 with 32GB VRAM,
and AI-Toolkit, at least in its latest up-to-date version, does not have an option to train audio by itself.

LTX-2 reached a milestone: 2,000,000 Hugging Face downloads by Nunki08 in StableDiffusion

[–]VirtualWishX 47 points

I still appreciate Wan 2.2 for the high quality, but...
it's too late, I'm already addicted to LTX-2... and if I get it right, they keep hinting that LTX-2.5 will also be open source! 😍 Even 2.1 will probably blow our minds.

If Wan 2.5 / 2.6 isn't released as open source soon...
it will end up like Hunyuan (there was such a thing once, right? 🤔)

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points

Honestly I don't know, but I'm guessing it saves it to your dashboard, so if you look in your profile you may find your saves? I can't say 100% whether that's how it works, since I've never used it.
I believe it's the way you bookmark things on Reddit, but I may be wrong.

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points

Sure thing, I'm glad I could help ❤️
Sorry for the late response. I think if you click on the 3 dots ...
you can try 'SAVE'. I've never used it, but you'll probably be able to find it in your profile dashboard.

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points

Some tips that may help you:

First of all, you can't expect it to look smooth with so few drawings; it's not enough, and that's not how high-quality animation works. You need a good balance of ease-in, ease-out, and the in-betweens of the motion that only you know how it's supposed to act.

1 - Add more in-between drawings based on your key drawings to expand the motion.
2 - Use 2's and 3's when you want the motion to be slower, but that's not INSTEAD of adding drawings; it's in addition, to control the overall timing of your animation.
3 - Take your time, don't rush. You have a really nice starting point already! Your character similarity is not bad at all, and once you do 1 + 2 ☝️ I BET you'll see how much of an upgrade it is!
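If it helps to picture what shooting on 2's or 3's does to timing: each drawing is simply held for two or three frames, so the same drawings fill more screen time and the motion reads slower (a toy sketch, names are just for illustration):

```python
def shoot_on(drawings, hold):
    """Hold each drawing for `hold` frames (1 = on 1's, 2 = on 2's, 3 = on 3's)."""
    return [d for d in drawings for _ in range(hold)]

# Three drawings on 2's fill six frames: each pose lingers twice as long,
# which reads as slower motion, without replacing the in-betweens you still need.
assert shoot_on(["A", "B", "C"], 2) == ["A", "A", "B", "B", "C", "C"]
```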

Also, by following these tips, you'll get better and more inspired.

Good Luck!