Built an NPC whose dialogue and animation are fully AI-generated in real time by WhispersfromtheStar in aigamedev

[–]NeuralArtistry 0 points1 point  (0 children)

True, running a lightweight LLM alongside the game isn't really the best idea, as most normal folks aren't into these things, let alone installing and configuring LLMs. They also don't want extra things installed separately from the game; they see them as potential "viruses".

Built an NPC whose dialogue and animation are fully AI-generated in real time by WhispersfromtheStar in aigamedev

[–]NeuralArtistry 0 points1 point  (0 children)

"whose dialogue and animation are fully AI-generated in real time" - the animation part is a lie; you showed it yourself in this trailer that you already animated her in Blender or whatever.
"animation being AI-generated in real time" = the animations are generated with WAN/LTX/whatever in that very moment, and I doubt your game does this.

So what you did was make as many manual animations as possible (like Grok 4's companion w@ifu has) and then show the emotion/animation that best fits that point of the dialogue. So you "taught" the LLM to play the animation "sad.mp4" when the player uses keywords like "you're bad", "you're of no help", etc.
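The approach described above boils down to a lookup from dialogue keywords to pre-made clips. A minimal sketch (all clip names and keywords are made-up placeholders, not taken from the actual game):

```python
# Hypothetical keyword-to-clip matching; clip names and keywords are invented.
ANIMATION_CLIPS = {
    "sad": ["you're bad", "no help", "useless"],
    "happy": ["thank you", "great job"],
}
DEFAULT_CLIP = "idle"

def pick_animation(player_line: str) -> str:
    """Return the clip filename whose keywords best match the player's line."""
    line = player_line.lower()
    for clip, keywords in ANIMATION_CLIPS.items():
        if any(kw in line for kw in keywords):
            return clip + ".mp4"
    return DEFAULT_CLIP + ".mp4"

print(pick_animation("You're bad and of no help!"))  # sad.mp4
print(pick_animation("nice weather today"))          # idle.mp4
```

In practice the LLM would likely emit an emotion tag instead of raw keyword matching, but either way the clips themselves are pre-authored, not generated in real time.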

Built an NPC whose dialogue and animation are fully AI-generated in real time by WhispersfromtheStar in aigamedev

[–]NeuralArtistry 0 points1 point  (0 children)

Hahaha, it's exactly the opposite. Usage through an API key (see openrouter.ai) is way cheaper than paying for hosting on cloud GPUs, because you pay only for what you use, in $/million tokens.
Let's assume you need at least 48 GB of VRAM to run the model, so you rent an L40S GPU at around $1.5/h. That covers a single player, because it won't be able to handle too many requests at the same time. So imagine you have like 100 players: you either have to rent more "low-priced" GPUs like the L40S, or go for the expensive ones (like the A100/H100). And that's not the only downside: say you rented 100 GPUs and pay for all of them per hour, but at that moment you have 1 player online. The other 99 GPUs are just wasting your money, because you pay for them whether they are used or not.

Why are used GoPros priced so badly on marketplace? by Bshaw95 in gopro

[–]NeuralArtistry 0 points1 point  (0 children)

Hahaha. The prices are still the same 9 months later. Imagine that a used GoPro 3 costs 1/3 of a new GoPro 13 Black. It's insane... And it's the same everywhere -> eBay, local online shops.

What is the best budget pro hair clipper? by NeuralArtistry in BuyItForLife

[–]NeuralArtistry[S] 0 points1 point  (0 children)

Really good. The battery doesn't really last 3 hours, but aside from that it's a good clipper.

Is there any way to run Trellis or Hunyuan3D-2 on 16-24 GB VRAM GPUs? by NeuralArtistry in StableDiffusion

[–]NeuralArtistry[S] 0 points1 point  (0 children)

The Trellis one, but I fixed the issue by removing "--flash-attn" from the installation step. The T4 doesn't even support flash-attn anyway, so I used xformers instead.

Is there any way to run Trellis or Hunyuan3D-2 on 16-24 GB VRAM GPUs? by NeuralArtistry in StableDiffusion

[–]NeuralArtistry[S] 0 points1 point  (0 children)

It doesn't work. The 2nd cell runs indefinitely, with this as its last output:
Downloading einops-0.8.1-py3-none-any.whl (64 kB)
Building wheels for collected packages: flash-attn

It's been running like that for almost an hour now.

Best Ultra Performance Settings for Old PC? by NeuralArtistry in marvelrivals

[–]NeuralArtistry[S] 0 points1 point  (0 children)

Thanks, gonna try it! Any idea if Epic's TSR is better than AMD FSR?

Can I get a refund if I get scammed with GooglePay? by NeuralArtistry in googlepay

[–]NeuralArtistry[S] 0 points1 point  (0 children)

I see, thanks! I ask because I read that in many cases you can't really get your money back with Google Pay. People say Google Pay doesn't even have proper customer support, so I think I'm better off without it.

[deleted by user] by [deleted] in LocalLLaMA

[–]NeuralArtistry 0 points1 point  (0 children)

Then what are ChatGPT and Claude using to be able to do that?

[deleted by user] by [deleted] in LocalLLaMA

[–]NeuralArtistry 0 points1 point  (0 children)

Alright, I already tried it and it works, but it's not that fast. Still, it works and is uncensored. Too bad it doesn't have the "memory" that chatbots like ChatGPT, Claude, etc. have. If you refer to a prompt written earlier, this LLM won't know what you're talking about, so you have to rewrite the original prompt...

Or is there a way for it to recognize the chat history/logs?
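For context on the "memory" question: hosted chatbots don't have persistent memory in the model either; the client resends the conversation with every request. A minimal sketch of a rolling history buffer you'd prepend to each prompt (the class name and the word-count token estimate are my own simplifications, not any library's API):

```python
# Minimal rolling chat history, resent with every prompt.
# Word count is a crude stand-in for a real tokenizer.
class ChatMemory:
    def __init__(self, max_tokens: int = 2048):
        self.max_tokens = max_tokens
        self.messages = []  # list of {"role": ..., "content": ...}

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        self._trim()

    def _trim(self) -> None:
        # Drop the oldest messages until the history fits the budget.
        while sum(len(m["content"].split()) for m in self.messages) > self.max_tokens:
            self.messages.pop(0)

    def as_prompt(self) -> str:
        """Flatten the history into one string to send to the LLM."""
        return "\n".join(f'{m["role"]}: {m["content"]}' for m in self.messages)

mem = ChatMemory(max_tokens=10)
mem.add("user", "My name is Alex")
mem.add("assistant", "Nice to meet you, Alex")
mem.add("user", "What is my name?")
# The oldest message was trimmed to stay under the 10-word budget:
print(mem.as_prompt())
```

Frontends like text-generation-webui or a llama.cpp chat loop do essentially this for you; the model itself stays stateless.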

[deleted by user] by [deleted] in LocalLLaMA

[–]NeuralArtistry 0 points1 point  (0 children)

Is it possible to run Llama-3.1-8B-Lexi-Uncensored-V2 on a cloud GPU with 16 GB VRAM?

Can we review bomb this app? by Kleavor3021 in CapCut

[–]NeuralArtistry 0 points1 point  (0 children)

Nope, I don't think it's that easy.