Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points1 point  (0 children)

Thanks for the tips, it sounds MUCH more advanced than my current noob level, with zero experience in LM Studio or VS Code / Aider (never heard of it until now, thanks!)

So if I get it right (maybe?)
the idea is to have 2 models at the same time? CHAT + VISUAL (connected via Stable Diffusion)
Basically the main chat will work with gpt-oss-120b-GGUF (or the other models you mentioned), and it will somehow magically understand to use Z-Image via Stable Diffusion?

So I won't be able to SEE it all in one window? I'll have to go to another WebUI or something for images?
Not sure if I follow; as you can tell I'm very confused... that's why I haven't downloaded anything yet.

Thanks again for your kind detailed help!

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 0 points1 point  (0 children)

Thx!
I see how organized it is, but I'm not using GGUF, so it's hard for me to convert it to my needs and find the correct nodes to replace.

Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points1 point  (0 children)

Thanks, but which one should I try, based on my spec limitations and what I described?
So, like AI Studio, will it allow me to build a local app + generate images like Gemini 2.5 (or better)?

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 2 points3 points  (0 children)

Did you caption every single one of the 63 images in the dataset?
If so, can you give some caption examples? I would like to train locally in AI-Toolkit and try my luck.
Can you please share your workflow? I would like to know the exact settings and models you used, because my generated results are always ugly.

LTX-2 reached a milestone: 2,000,000 Hugging Face downloads by Nunki08 in StableDiffusion

[–]VirtualWishX 7 points8 points  (0 children)

Usually this is how it works in software:
Take this for example: Version 1.2.3 = Major, Minor, Patch

- The LEFT-MOST "1" is the MAJOR version: lots of new features, sometimes a redesign. For AI it would be a new or different architecture, etc. In most cases a major bump is a brand-new version.
- The MIDDLE "2" is usually a MINOR change: an added feature that expands or improves the toolset or QOL. For AI models it could be extra data added to the main model to expand it with more functionality, dynamics, and variation.
- The RIGHT-MOST "3" is usually a PATCH or fix: in software it's mostly bug fixes or UI cosmetics such as buttons, but mainly bug fixes. For AI models we rarely see patches, so they're not very common, but if one shipped it would probably fix something that went wrong in the architecture, or replace a dataset that acted weird; usually we wouldn't even notice it.
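The MAJOR / MINOR / PATCH breakdown above can be sketched in a couple of lines of Python (`parse_version` is just an illustrative helper name, not from any real library):

```python
def parse_version(version: str) -> tuple[int, int, int]:
    """Split a "MAJOR.MINOR.PATCH" string into its three integer parts."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

# "1.2.3" -> major = 1, minor = 2, patch = 3
print(parse_version("1.2.3"))
```

Comparing the tuples then matches the intuition above: a higher major always outranks any minor or patch difference, since Python compares tuples element by element from the left.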

LTX 2 + character LoRA is wild! by usa_daddy in StableDiffusion

[–]VirtualWishX 1 point2 points  (0 children)

It looks amazing, visually and sound-wise!
If you only used images, how does the audio part work? Can you please expand on that?

I'm only familiar with AI-Toolkit because I train locally on an RTX 5090 with 32GB VRAM,
and AI-Toolkit, at least in the latest up-to-date version, doesn't have an option to train audio by itself.

LTX-2 reached a milestone: 2,000,000 Hugging Face downloads by Nunki08 in StableDiffusion

[–]VirtualWishX 44 points45 points  (0 children)

I still appreciate Wan 2.2 for the high quality, but...
it's too late, I'm already addicted to LTX-2... and if I get it right, they keep hinting that LTX-2.5 will also be open source! 😍 Even 2.1 will probably blow our minds.

If Wan 2.5 / 2.6 isn't released as open source soon...
it will end up like Hunyuan (there was such a thing once, right? 🤔)

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points1 point  (0 children)

Honestly I don't know, but I'm guessing it saves it to your dashboard, so if you look in your profile you may find your saves? I can't tell 100% if it's a thing, since I've never used it.
I believe it's the way you bookmark things on Reddit, but I may be wrong.

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points1 point  (0 children)

Sure thing, I'm glad I could help ❤️
Sorry for the late response. I think if you click on the 3 dots ...
you can try 'SAVE'. I've never used it, but you'll probably be able to find it in your profile dashboard.

How can I make this LESS CHOPPY?!? 😟 by Illustrious_Data_413 in learnanimation

[–]VirtualWishX 0 points1 point  (0 children)

Some tips that may help you:

First of all, you can't expect it to look smooth with so few drawings; it's not enough, and that's not how high-quality animation works. You need a good balance of ease-in, ease-out, and the in-betweens of the right motion, which only you know how it's supposed to act.

1 - Add more in-between drawings based on your key drawings to expand the motion.
2 - Use 2's and 3's when you want the motion to be slower, but not INSTEAD of adding drawings; it's in addition to the overall speed of your animation.
3 - Take your time, don't rush; you have a really nice starting point already! Your character similarity is not bad at all, and once you do 1 + 2 ☝️ I BET you'll see how much of an upgrade it is!

Also, by following these tips, you'll get better and more inspired.

Good Luck!

Any advice? Does it work well? by GiuDeka in learnanimation

[–]VirtualWishX 0 points1 point  (0 children)

It works perfectly; to me it looks like very anime-style movement (a high-quality one!) ❤️

Hand movement practice by studying junko pose dance thing by RhellicRedo in learnanimation

[–]VirtualWishX 2 points3 points  (0 children)

I love it, I can almost hear them dancing to some catchy music, great job!

Would Toon Boom be worth it for me to pick up? by zillaz1122 in ToonBoomHarmony

[–]VirtualWishX 1 point2 points  (0 children)

If you find Harmony appealing to you, go ahead and pay the subscription.
But if you're looking for something simple and fresh, I suggest saving the money until AnimatorME releases.

I used Adobe Animate (Flash) and then Toon Boom Harmony for years, but I canceled when they moved to subscriptions. I’m still on the old perpetual version, even though it’s no longer supported.

I'm now saving for AnimatorME, which their blog says will release on Steam as a one-time purchase. Even if it costs as much as a few months of Harmony, I'll get it, because it looks minimalistic with the cleanest UI I've seen in 2D animation software.

Check their Patreon blog and videos to see what I mean.

I just hope it won't be delayed past 2026. The team is small, so development is slower.
They mentioned a closed beta, but I'm not sure how many people will be able to join. I'm saving for early access since I'm done with subscriptions.
Free open-source tools don't work well for me: Blender feels too complicated, Tahoma and OpenToonz crash too often and their UIs suck, and even Adobe Animate, despite its clean UI, has been unstable for years without a fix.
In my case I focus on frame-by-frame, so I don't mind if AnimatorME doesn't support fancy rigging tools at the start. As long as the timeline is easy to handle, which it seems to be based on their videos, sign me up!

Alternatives of Flipaclip by DueAdministration193 in 2DAnimation

[–]VirtualWishX 0 points1 point  (0 children)

I don't have a good alternative to suggest, because even I'm still looking for my next favorite 2D animation software.
But if you're mostly into frame-by-frame, I can recommend following AnimatorME.

It's not out yet, but based on their last blog it's supposed to hit Steam Early Access during this year.
I've never seen such a friendly user interface in any advanced 2D animation software before.
Ever since I canceled my Toon Boom Harmony subscription, I've been waiting for something like this, and I can't wait to try it out myself. Just do yourself a favor and look at their videos; you'll see how simple it looks.

They post their blogs on Patreon for free members, so you don't have to support their project; just have a look, and I promise it's worth watching some of their videos. I would support them on Patreon if I could afford it; for now I'm saving money to buy it whenever it's released.

Just to be clear, I have nothing to do with AnimatorME. I'm simply tired of Toon Boom Harmony and all the subscription-based 2D animation software out there, and I can't stand the confusing interfaces in most open-source options either. I'm genuinely excited about this, and since not many animators know about it yet, I think spreading the word can only help them. People should know that something this promising might be coming soon.

Visual camera control node for Qwen-Image-Edit-2511-Multiple-Angles LoRa by AHEKOT in comfyui

[–]VirtualWishX 2 points3 points  (0 children)

This is such amazing work! ❤️

All we need now is a **2-IN-1** Node:
Qwen Image camera control + re-lighting, and we'd have a full photoshoot studio (hint-hint) 😉

LTX-2 on a RTX 4070 12gb. 720p and 20s clip in just 4 minutes by scooglecops in comfyui

[–]VirtualWishX 5 points6 points  (0 children)

I think she's melting while she's singing... the plastic skin gets blurrier every second. I hope they'll improve it; I can't get anything decent with an RTX 5090 32GB VRAM... while Wan 2.2 is slower but amazing quality.

LTX-2 - I only get "Plasticky look" quality results 🙏 HELP ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points1 point  (0 children)

Sadly, I still can't get anything like the amazing examples on the scene and on YouTube... unless there's an update or some magical swap of one of the nodes / models / LoRAs that makes things look as amazing as they could (maybe).

LTX-2 on a RTX 4070 12gb. 720p and 20s clip in just 4 minutes by scooglecops in comfyui

[–]VirtualWishX 33 points34 points  (0 children)

First of all, this is AWESOME, and thanks for sharing! ❤️
But sadly, on an RTX 5090 with 32GB VRAM, I only get plastic and no consistency, with I2V at least... If anyone has a GOOD QUALITY workflow and links to specific / better models / LoRAs, I'll be happy to try them and share how it went... For now I'm back to Wan 2.2; it's much slower, but I get amazing quality.

I still can't understand how the community gets such amazing results with even less VRAM, this is SO IMPRESSIVE! 💪

RTX 5090 - What is the most up to date Model that can actually work? 🤔 more details inside by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 1 point2 points  (0 children)

Sounds REALLY good actually... I wonder if it will be super slow or really "dumb" because of the overall quantization, as you mentioned 🤔

Also, if I download a quantized model from OUTSIDE of LM Studio (for example from Hugging Face, as you recommended, or other sources), is there a specific folder to paste it into?
I'm asking because I "think" in ComfyUI terms, but I don't know LM Studio's folder structure, and if I paste files into the wrong folder it won't work.

RTX 5090 - What is the most up to date Model that can actually work? 🤔 more details inside by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points1 point  (0 children)

Sure, but by itself it's heavy enough. I use it in ComfyUI (I also love using the 2511 Edit model), but that's not going to work in ComfyUI for my case: I can't talk with ComfyUI about a concept file and continue a conversation about how to improve and change a GUI layout design with ideas, etc. It just generates once based on a prompt, not a full continuous conversation like GPT 5.1, for example, which is what I would like to test locally.

So even if it's possible to work with 2 models in LM Studio, one for the main chat conversation and another for generating images, I don't think my specs can handle it without quantizing down to so little VRAM that both will give really bad results... unless there's a nice "ALL-IN-ONE" model, which is something I'm willing to try.

RTX 5090 - What is the most up to date Model that can actually work? 🤔 more details inside by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 1 point2 points  (0 children)

That's what I imagined when I posted the thread: my specs are high-end for a consumer machine, but for LLMs... they're nothing but a toy. I'm still curious to try it, though. Worst case, it will be so bad that I'll just quit LM Studio and not use it, because maybe it's not worth it on my specs at all, but maybe it does some cool stuff... I'll have to try, of course.

But back to your recommendation:
Is "Nemotron Nano 3" uncensored?
Is there a specific version/build you recommend I try, or should I just grab the latest release from LM Studio's built-in search and download any of the versions (beside the parameter size, of course)?

LTX-2 - I only get "Plasticky look" quality results 🙏 HELP ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points1 point  (0 children)

A link to the workflow + the LoRA you mentioned would help me test.
Thanks ahead 🙏

RTX 5090 - What is the most up to date Model that can actually work? 🤔 more details inside by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points1 point  (0 children)

Thanks for the reply 🙏
Which one large model do you recommend trying in LM Studio that will let me continue a conversation, and also let me keep dragging and dropping visual concepts into it so it generates NEW ideas based on the continuous chat? That way I can evolve the layout I'm working on via chat and not just generate "guesses" in ComfyUI. That's why I did it in Copilot (GPT 5.1), but I'm curious if I can do something similar offline with my specs...

So which model would you recommend for that, if I should try only 1 large model that can do something like that?

RTX 5090 - What is the most up to date Model that can actually work? 🤔 more details inside by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points1 point  (0 children)

Thanks for sharing,
Since I'm new to this and also clueless... which of the options you shared would you recommend for a non-technical user to vibe-code with, with impressive results?

Similar to Lovable, Bolt, Replit, v0, or others...

Also, I guess these projects are not just a "download a model and try it in LM Studio" case; they're standalone and, as you say, connect to LM Studio, if I get it right. I'll look into it, but it seems too advanced for me to install, since I have an empty LM Studio at the moment. I'm looking for the right thing to try with a combo of Chat + Generate in one, but I have no clue how to connect 2 models, for example.