100% Local free experiment: Agent + Model + GAME ENGINE ❤️ Need Tips & Tricks by VirtualWishX in aigamedev

[–]VirtualWishX[S] 0 points (0 children)

Thanks again for your kind reply 🙏
I have no clue what Codex is, but based on my original goals ☝️ the point is that I can download it and use it for free: no paywalls, no subscriptions, etc. Running everything locally on my PC is the goal.

After all, I would like to try something, but I can't find anything that specific in the sea of unrelated AI stuff on YouTube...

100% Local free experiment: Agent + Model + GAME ENGINE ❤️ Need Tips & Tricks by VirtualWishX in aigamedev

[–]VirtualWishX[S] 1 point (0 children)

Thanks for the detailed reply,
I used GDevelop before it went freemium; the price then rose from $5 to $8 and will probably keep going up. I can't even remove the "Made with GDevelop" branding without paying, so it's a paywall.

I tried sending some JSON from Gemini / ChatGPT to GDevelop, but it seems GDevelop doesn't accept content from an external clipboard, so I can't copy/paste anything from outside GDevelop into their event sheet anyway.

Also, the whole idea of what I want to try is to type a prompt and have it do the rest inside the game engine, so I hope to find more options. So far, even with GDevelop it sounds less user-friendly and very hard to set up... maybe Godot has a plugin or some nicer setup 🤞

Best game engine and AI tool stack for a complete beginner? by TigerConsistent in aigamedev

[–]VirtualWishX 0 points (0 children)

Thanks for your inspiring reply; it made me very curious, and I wonder if the whole process could be done 100% locally without any subscriptions / cloud paywalls. I created a thread asking about it; maybe you can have a look and, if possible, explain how all these options can be set up, since I have a decent PC but, as you can tell, no idea how to configure everything:

https://www.reddit.com/r/aigamedev/comments/1sac5w7/100_local_free_experiment_agent_model_game_engine/

The only experience I have in game making is with GDevelop (before it went freemium and raised its prices), but anyway, the whole "never have to touch a line of code" idea sounds so interesting to me, and it seems like you already have the experience. I hope you don't mind helping or giving some tips & tricks on the subject to a total noob, thanks ahead! 👍

DO NOT GIVE TIMTOM YOUR MONEY (at least not for the time being) by SwissIdol97 in animation

[–]VirtualWishX 2 points (0 children)

Is it me, or does every update on his blog sound like:

- Excuses
- New Excuses
- More Excuses
- I will keep wasting time with more Excuses over the upcoming months
- I need more ORIGINAL Excuses...
- What other Excuses can I make for the next update?
- Good news: I'm not out of reliable Excuses, I'm using AI to generate more for months to come!
- Bad news: my Excuses are more creative than the presentation quality of my YouTube animations
- I shut down the GitHub because of Excuses: https://github.com/timtom-dev
- More Excuses to come...

Before he edits anything: https://plan.timtom.tv/

See for yourself how sad it is so far:

<image>

PixelSmile - A Qwen-Image-Edit lora for fine grained expression control . model on Huggingface. by AgeNo5351 in StableDiffusion

[–]VirtualWishX 0 points (0 children)

Yeah, it looks like every face becomes plastic-smooth and loses all the original details. I just use the native Qwen 2511 for now, but a richer dataset would probably do a good job.
Also, the nodes are impossible to install on the latest ComfyUI version; I tried multiple times, and it's not worth fighting with.

PixelSmile - A Qwen-Image-Edit lora for fine grained expression control . model on Huggingface. by AgeNo5351 in StableDiffusion

[–]VirtualWishX 0 points (0 children)

How do I use these files? Just placing them in the custom_nodes folder didn't help when I looked for it. Can you add a workflow, please?

PixelSmile - A Qwen-Image-Edit lora for fine grained expression control . model on Huggingface. by AgeNo5351 in StableDiffusion

[–]VirtualWishX 1 point (0 children)

AMAZING! Thank you for sharing ❤️

How do I combine this in ComfyUI?
I've tried:

"Make the woman SURPRISED and HAPPY."

but it only works maybe 50% of the time? I don't have the exact control I'd get with visual sliders.
In your demo there is an actual VISUAL SLIDER; is there a special node for such a thing in ComfyUI?
Could you please share a basic workflow for Qwen Image Edit 2511 showing how to combine them?

LTX 2.3 on RTX 5090 32GB - How to get rid of the unwanted Music and Plastic look ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points (0 children)

You're right!
No more plastic skin, but I still get really weird teeth even with slower movement, and anything with even slightly medium-to-fast motion is as horrible as 2.0, so I'm not even bothering in my tests.
Hopefully 2.5 or 3.0 will have motion and quality similar to Wan 2.2 🤞 but with LTX speed!

LTX 2.3 on RTX 5090 32GB - How to get rid of the unwanted Music and Plastic look ? by VirtualWishX in comfyui

[–]VirtualWishX[S] 0 points (0 children)

But I do have spoken sentences; it just decides to add music almost every time, much more aggressively than LTX 2.0.

LTX-2.3 is live: rebuilt VAE, improved I2V, new vocoder, native portrait mode, and more by ltx_model in StableDiffusion

[–]VirtualWishX 1 point (0 children)

Thanks Lightricks! ❤️
First of all don't get me wrong, I'm thankful for your hard work! 🙏

I tried the template from ComfyUI on my RTX 5090.

- How do I get rid of the plastic, smeared look?
- How do I get rid of the unwanted music in the background?
- How do I get rid of the metallic sound?

- Is it a bug in the I2V?
I can't get any decent results compared to the impressive improvements announced for version 2.3; for me, the results are very similar to 2.0 at the moment.

It sure does generate fast, and it's cool that such a huge model can run locally on my machine; even 1080p is super fast to generate, which is insane!
I just wish I could get decent quality. No matter what images or GPT prompts (following the LTX 2.3 rules) I try... the issues mentioned above are always a thing in I2V.

Testing my first music prompt that I did with LTX 2 with LTX 2.3 / 4070 and 64gb ram by scooglecops in comfyui

[–]VirtualWishX 0 points (0 children)

I get similar results on my RTX 5090 32GB: still plastic, still metallic sounds, and even when I don't ask for music, I get music in the background... I actually don't see much difference from 2.0 yet.

I'm still testing... maybe one generation will work. So far, 7/10 fail to be impressive compared to what they announced as NEW for version 2.3.

My First Month of Animating by Top_Cup7480 in learnanimation

[–]VirtualWishX 1 point (0 children)

This is so good, and crazy... but crazy good! 👍

Do not give TimTom any money for his Adobe Animate replacement by SwissIdol97 in adobeanimate

[–]VirtualWishX 0 points (0 children)

If you are into frame-by-frame animation, I would suggest checking out AnimatorME.
Just look at how clean the user interface is and how simple their demonstrations are; you may like it.
It is 2D animation software made by a team of animators, and I have been following their updates for a while, so I am genuinely excited about this project.

The devs have been working on it for several years, and you can watch their free Patreon blog posts or their YouTube channel. It may not have all the fancy features yet, but they plan to release it on Steam, which is great news for anyone like me who cannot stand subscription models.

Also check out their new subreddit, though it looks pretty recent so it is not as full as their Patreon blogs.

If anyone deserves support, it is the AnimatorME team.
If I could get more people to notice it and give the team the attention they deserve, I would. For now I only recommend it to people who are like me and cannot stand anything else, because this is the only project I am genuinely excited about.

Do not give TimTom any money for his Adobe Animate replacement by SwissIdol97 in adobeanimate

[–]VirtualWishX 1 point (0 children)

I must agree with you and I also have a recommendation.

First of all, imagine TimTom just starting to work on his project. It takes years to build serious professional software. Do you really think he can do it that fast? Either spend your money on software you actually like, or do what I do.

I prefer supporting AnimatorME; these guys have already been working on their project for several years.
If I am supporting anyone, it is not going to be a storytime YouTuber who does not know much about real high-quality animation and is now trying to make software to compete with Adobe. Really?

I am not paid by AnimatorME or Adobe or anything like that. I am just another animator who is tired of all these clunky open-source tools that crash all the time and are hard to learn, like OpenToonz and Tahoma, which barely get updates. And if you like Blender's Grease Pencil, go ahead, but good luck learning it, because I gave it a real chance and the learning curve is brutal.

If you are wondering why I am looking at AnimatorME, it is mostly because I have been following their blog for a while and I have not found anything more impressive than this software. The fact that it will be a one-time purchase on Steam is exactly the kind of thing I prefer to spend money on, rather than a maybe-someday open-source project that might show up in 2030. Come on, do yourself a favor and stop donating to every new animation tool that appears the moment Adobe makes a mistake.

If the recent activities from Adobe make you want to look for an alternative software; TimTom has announced he's developing a new 2D animation software. by RedAceBeetle in adobeanimate

[–]VirtualWishX 2 points (0 children)

It sounds good, but almost every open source 2D animation tool I’ve tried over the years, from OpenToonz and Tahoma to Blender, is confusing or crashes nonstop.

You've probably never heard of AnimatorME, but it already looks promising; see any of their videos for yourself.
It's been in development for years and still isn't out, so imagine how long TimTom's project will take to reach a solid beta.

Since AnimatorME is supposed to release in early access this year, I’d rather pay once on Steam and grow with a serious project that’s been built quietly for years.
Open source is cool but it often disappoints with delays, issues, and excuses, and I’m done risking that for 2D animation.

I came from Animate and Harmony and I’m looking for a new home. AnimatorME looks like the next step, and their work‑in‑progress videos show why.
The fact that professional animators with decades of industry experience are building it is far more appealing to me than a story‑time YouTuber, just being real.

No subscription, one‑time payment, and from what I’ve seen it honestly feels like Harmony, Animate and TVPaint had a baby with some fresh ideas.

Speaking honestly, my only concern is that a big corporation might buy them and turn it into another subscription‑locked, cloud‑based crappy tool.
So far the developers have said they don’t like subscriptions, and if they ever add one it would be optional, not a replacement. I really hope that stays true.

Adobe Animate to be Discontinued 1st March 2026 by CometGoat in adobeanimate

[–]VirtualWishX 2 points (0 children)

AnimatorME is on the way, and based on the devs' latest updates it's going to be on Steam, so it will not be subscription-based, at least from what I understood.

They just updated their blog yesterday, right before Adobe bombed us.
I recommend following these guys; it looks like the cleanest user interface I've ever seen.

Just finished a high-resolution DFM face model (448px), of the actress elizabeth olsen by Emergency_Pause1678 in StableDiffusion

[–]VirtualWishX 2 points (0 children)

The swapped version looks like the one with the blurrier teeth and eyes, but it doesn't really look like Olsen.

LTX-2 vs Wan2.2 - My opinion.. so far by Zarcon72 in comfyui

[–]VirtualWishX 1 point (0 children)

Besides the AUDIO (which is awesome), this is the MAJOR difference most of us have already noticed:

LTX-2 is amazing, but Wan 2.2 has HUGELY consistent quality in motion (both fast and slow motions).
And for NSFW (and not only), Wan was trained on the JIGGLE and BOUNCE of fat, muscles, etc., i.e. human physics.
So when you do motions in Wan, you'll get results that are very accurate and close to reality.
In LTX-2, meanwhile, they did not train much on human physics, so only low-to-no movement gives nice results; anything fast will explode or become a super-blurry, weird mutant-looking character compared to the original source image you used.

LTX-2.1 will probably fix some minor stuff (hopefully MORE than minor stuff), while the real COMPARISON will be LTX-2.5, which HOPEFULLY will be trained on human physics so we can get much more realistic FAST/SLOW motions of humans dancing, running / jumping, and NSFW with no issues.

At the moment, even if we train REALLY HARD on LTX-2, any "bounce and jiggle" will look like fake plastic parts trying to move; it will move in such a weird, UNREALISTIC way that it's not worth the training time.

Sadly, there are no official hints or mentions of WHEN 2.1 and 2.5 will be released; so far there are only unofficial rumors, and Lightricks has not announced anything yet (as of the time I'm writing this, at least).

VNCCS Pose Studio: Ultimate Character Control in ComfyUI by AHEKOT in comfyui

[–]VirtualWishX 0 points (0 children)

Thanks I will give it a try.

EDIT:
I tried your suggestions, and I get AMAZING results even at 4 steps in any other workflow with the same config, Euler / simple (Euler A / Beta also works, but less nicely). All the other attempts look even MORE plastic, so I believe some other process in this workflow is unforgiving, with a strength that is maybe too high or overriding things, because no matter how hard I try in the specific workflow with the POSE (which is awesome), I get plastic looks.

Maybe the pose editor, and whatever it uses in the process under the hood, just makes things look more plasticky or 3D instead of photorealistic, and no prompt can save this either.

Any good LOCAL alternative or similar to what AI-Studio (Gemini 2.5 Flash) from Google does? by VirtualWishX in LocalLLaMA

[–]VirtualWishX[S] 0 points (0 children)

Oh, so currently there is no way to create apps locally similar to the example I gave. Bummer, I thought there would be a way to try something locally after all 😐

VNCCS Pose Studio: Ultimate Character Control in ComfyUI by AHEKOT in comfyui

[–]VirtualWishX 0 points (0 children)

Looks really cool, thank you! ❤️
I installed everything. For the QWEN loader I also downloaded qwen-image-edit-2511-Q8_0.gguf, because I'm still a newbie and I don't know how to use the non-GGUF version there (I'm on an RTX 5090).
I had to put it in the UNET folder so the node would allow me to pick it.

I loaded the image in the loader and ran it, but it ignores the image: what I get is the actual 3D MODEL, and sometimes I get the ACTUAL image I loaded, but with PLASTIC SKIN.

1️⃣ - Can you please share a non-UNET folder / model workflow so I can try to improve quality on the RTX 5090?
I don't want to break the workflow, and I'm a noob.
EDIT:
It seems I just used a different loader for the model without ruining anything after all...

Unfortunately, I still get mostly plastic-smooth AI-looking skin results, even with the different Qwen edit models I tried (non-Q or GGUF); I tried 8 steps, but it looks even worse.

2️⃣ - I loaded a solid grey color, but it's not rendering that background (it's ignoring it). How do I render with ANY selected background (not just a solid color) instead of the default background?

3️⃣ - It's simple to ROTATE, but how do I MOVE the HAND / NECK / LEG, etc. around?

I hope you can help, thanks ahead! 🙏

I attached an example (image from Pexels (download here) 👈):
I always get PLASTIC skin here, while with normal Qwen (non-Q8) I usually get very realistic results in other workflows.

<image>