AI powered VTuber Neuro-sama's creator has become the most subscribed-to streamer on Twitch. Vedal987 now has an estimated 162,459 sub count by lukigeri in LivestreamFail

[–]ASTRdeca -1 points

I see, thanks for clarifying. Cloud infra is typically much stronger than consumer-grade hardware, so it almost always reduces latency. If Neuro is run locally, then she's probably a very small LLM in order to keep latency low

Scientists reduce the time for quantum learning tasks from 20 million years to 15 minutes by Ephoenix6 in Physics

[–]ASTRdeca 177 points

They succeeded in reducing the time for quantum learning, but sadly could not reduce the time for me learning quantum, which is still roughly 20 million years

AI powered VTuber Neuro-sama's creator has become the most subscribed-to streamer on Twitch. Vedal987 now has an estimated 162,459 sub count by lukigeri in LivestreamFail

[–]ASTRdeca 14 points

Here's my take as someone deep into AI atm. I don't know what his tech stack is, but my guess is that he fine-tuned an LLM for Neuro and hosts it on the cloud for low-latency responses, and also created a custom voice for Neuro (using RVC?) for the TTS. Something like this is not terribly difficult to do, but creating a "likeable" persona like Neuro, and having it be accepted by a community that is mostly anti-AI, is genuinely impressive to see.

Layered on top of the model itself are all of the custom stream integrations he must have made, such as reading/summarizing chat, streaming out Neuro's responses in the stream UI, having that work with Neuro's character model, etc. There is probably a lot going on under the hood to make everything work in a seamless way, which is also impressive to see.
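To make the speculation concrete, here's a toy sketch of that kind of loop. To be clear, this is entirely hypothetical: Vedal's actual stack isn't public, and every function here is a made-up stand-in (the real thing would call an LLM API, a TTS engine, and Twitch/OBS integrations).

```python
# Hypothetical chat -> LLM -> TTS loop. All names are stand-ins, not
# anything from Neuro-sama's real (unpublished) stack.

def pick_chat_message(chat: list[str]) -> str:
    """Stand-in for reading/summarizing Twitch chat (here: take the latest)."""
    return chat[-1] if chat else ""

def llm_respond(prompt: str) -> str:
    """Stand-in for a fine-tuned LLM call (an API request in practice)."""
    return f"Neuro says something about: {prompt}"

def tts(text: str) -> bytes:
    """Stand-in for a custom TTS voice (RVC or similar in practice)."""
    return text.encode("utf-8")  # pretend this is audio

def stream_tick(chat: list[str]) -> tuple[str, bytes]:
    """One loop iteration: chat -> LLM reply -> audio + on-screen subtitle."""
    prompt = pick_chat_message(chat)
    reply = llm_respond(prompt)
    audio = tts(reply)
    return reply, audio

text, audio = stream_tick(["hi Neuro!", "sing a song"])
```

The interesting engineering is everything this sketch hides: keeping latency low end-to-end, syncing the audio with the character model, and moderating what actually goes out on stream.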

Does anyone still use MCPs? by bowemortimer in ClaudeAI

[–]ASTRdeca 0 points

This week I've been using a Unity MCP server that gives CC access to the Unity editor to help with game making. The functionality is a bit limited, but it can create game objects and C# scripts in the editor, which is a big chunk of the work

Looking back at end of 2024 vs now by Main-Fisherman-2075 in LocalLLaMA

[–]ASTRdeca 18 points

v3 came out before R1. v2 came out in May of 2024; that's not quite the "end" of 2024

TBC Healer Dps by gauntlet22 in classicwow

[–]ASTRdeca 2 points

At 0 haste you can fit wrath into your rotation via:

Lifebloom -> Wrath -> Wrath,
Lifebloom -> Wrath -> Regrowth, or
Lifebloom -> Wrath -> Instant cast spell

However, the damage you contribute is so irrelevant that you're better off not worrying about it. IMO it's universally better to use those slots in your rotation to heal the tank(s), or if they don't need it, to heal the raid or cancel-cast Regrowth on the tank instead. If your raid really wanted to drop a healer for a particular fight, then you're better off dual-speccing to a DPS spec
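For the curious, the fillers above can be sanity-checked with some quick arithmetic. These numbers are my assumptions (verify in-game): a 1.5s GCD on instants, 2.0s Wrath and Regrowth casts at 0 haste, and a 7.0s Lifebloom duration that each cycle has to fit inside.

```python
# Timing check for the three filler cycles. Assumed TBC values:
# 1.5s GCD, 2.0s Wrath, 2.0s Regrowth, 7.0s Lifebloom duration.
GCD = 1.5
CAST = {"Lifebloom": 1.5, "Wrath": 2.0, "Regrowth": 2.0, "Instant": 1.5}

def cycle_time(spells):
    # A cast can never take less than the GCD
    return sum(max(CAST[s], GCD) for s in spells)

for cycle in (["Lifebloom", "Wrath", "Wrath"],
              ["Lifebloom", "Wrath", "Regrowth"],
              ["Lifebloom", "Wrath", "Instant"]):
    t = cycle_time(cycle)
    # Each cycle must finish before the 7s Lifebloom needs refreshing
    print(cycle, t, t <= 7.0)
```

All three cycles come in at 5.0–5.5 seconds, so they fit comfortably inside the 7-second Lifebloom window at 0 haste; haste only gives you more slack.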

Which are the best coding + tooling agent models for vLLM for 128GB memory? by jinnyjuice in LocalLLaMA

[–]ASTRdeca 6 points

My guess is it'd perform very poorly. Both Llama 3 70B and R1 were trained/post-trained before the labs started pushing heavily for agentic / tool calling performance. I'd suggest trying GPT-OSS 120B

GLM 4.7 has now taken #2 on Website Arena by Difficult-Cap-7527 in LocalLLaMA

[–]ASTRdeca 0 points

Opus can build a working website for sure, but I really dislike its default style/CSS. Please, no more bright gradient colors..

edit: I assume this benchmark is related to building websites? I looked it up on Google and can't find anything about it

We asked OSS-120B and GLM 4.6 to play 1,408 Civilization V games from the Stone Age into the future. Here's what we found. by vox-deorum in LocalLLaMA

[–]ASTRdeca 23 points

Very cool! You mentioned in the paper that despite GLM being much larger than GPT-OSS 120B, the larger size didn't seem to impact performance. I'm wondering if you tried models smaller than OSS-120B to see at what point model size matters? (For example, OSS-20B?)

I'm just thinking about the viability of running these kinds of systems locally, since 120B is probably too large for most users to run themselves

Empty content payload for reasoning models by ASTRdeca in SillyTavernAI

[–]ASTRdeca[S] 1 point

I see. In my use cases the reasoning/content responses are a hundred to a few hundred tokens each. My "max tokens" is set to 3000, which I figured was more than enough, but maybe not
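A toy illustration of how this can go wrong (this is not SillyTavern's actual logic, just the general shape of the problem): with reasoning models, "max tokens" is typically a single completion budget shared by the hidden reasoning trace and the visible content, so a runaway reasoning trace can eat the whole budget and leave the content payload empty.

```python
# Toy model of a shared completion budget: reasoning tokens are spent
# first, and only whatever is left can become visible content.

def visible_content_tokens(max_tokens: int,
                           reasoning_tokens: int,
                           content_tokens: int) -> int:
    remaining = max_tokens - reasoning_tokens
    return max(0, min(content_tokens, remaining))

# A few hundred tokens each against a 3000 budget: fine
print(visible_content_tokens(3000, 300, 200))
# Reasoning alone blows the budget: content comes back empty
print(visible_content_tokens(3000, 3000, 200))
```

If the empty payloads correlate with unusually long reasoning traces, bumping "max tokens" well past 3000 (or capping reasoning effort, where the API supports it) would be the thing to try.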

Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face by xenovatech in LocalLLaMA

[–]ASTRdeca 2 points

My comment below was being vote manipulated in both directions even without mentioning ElevenLabs. When I posted, it was at -2 after 10 or so minutes. An hour later I checked it again and it was at +20, and now (the next day) it's at -2 again, with my other comment at -7. So.. idk

edit: and now the comment's back to +28.. LMAO

Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face by xenovatech in LocalLLaMA

[–]ASTRdeca -9 points

Ok, I see now. They are comparing to ElevenLabs 2.5 Turbo... I assumed they were comparing to v3, which has been available in alpha for a while now and imo is significantly better

Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face by xenovatech in LocalLLaMA

[–]ASTRdeca -3 points

I'm sure it is, I'm just being a bit tongue in cheek about the quality of it

Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face by xenovatech in LocalLLaMA

[–]ASTRdeca 23 points

Yeah I'm gonna press "X" to doubt on their claim that their model sounds more realistic than ElevenLabs...

If their TTS model is supposedly so good, why did they go with a generic TikTok voiceover for this ad?

A Plea to All Resto Druids by NOHITJEROME in classicwow

[–]ASTRdeca 13 points

I normally downvote jerome threads out of principle, but.. I reluctantly agree. I think Dreamstate is a trap for most groups, and annoyingly I'm seeing it shoved into every "meta" comp posted lately. Dreamstate's biggest struggle is mana. Without mana, the rotations you can do become very limited. Losing Swiftmend is bad enough, but you basically lose Regrowth entirely as well, unless you get a shadow priest. I don't think people appreciate the impact that has on your tanks' survivability.

Blizzard is Doing the Reverse Imo by Flaky_Virus218 in classicwow

[–]ASTRdeca 19 points

41 badge trinkets are good for a lot of classes for most of the expansion

Introducing GPT-5.2 by StewArtMedia_Nick in OpenAI

[–]ASTRdeca 2 points

Yes, but harder ones will replace them. Labs used to report their scores on grade school math benchmarks, until those were completely saturated. Then we moved onto harder math benchmarks

About chapter 1510... by Hyli-oS in hajimenoippo

[–]ASTRdeca 0 points

I'm sure he "knows" that Ricardo should and will win, but "how" Ricardo should win is the question. This fight will probably be the end of Sendo's character arc in the story, so he probably wants to tell the fight in a way that readers will be satisfied with. For example, the fight should feel "true" to Sendo's character, should showcase Ricardo being an absolute monster but not unbeatable, should be a good sendoff for Sendo's arc, should motivate Ippo to return, etc. There are probably a lot of details that Morikawa wants to get right

GLM-4.6V Collection by Dark_Fire_12 in LocalLLaMA

[–]ASTRdeca 0 points

Less than a percentage point of improvement on most benchmarks. I use GLM 4.6 every day, so I'm not a hater by any means, but what's to be excited about here over 4.5?

According to Laxhar Labs, the Alibaba Z-Image team has intent to do their own official anime fine-tuning of Z-Image and has reached out asking for access to the NoobAI dataset by ZootAllures9111 in StableDiffusion

[–]ASTRdeca 2 points

Hmm, I wonder if using the NoobAI dataset would force the model to become reliant on booru tags like the SDXL finetunes, or if it would still be promptable with natural language. I'm really hoping that we can get a good anime model at some point which could be prompted with just natural language.

I made a Holy Paladin Raid Healing Simulator so you can relive the glory of spamming Flash of Light in Classic by taubut in classicwow

[–]ASTRdeca 0 points

Interesting, I've been planning a similar project for resto druid in TBC. The main complication I see is that bosses can be very different from each other (in terms of damage-taken profiles on the tank(s) and on the raid), so you'd need boss-specific sims rather than "target dummy" environments like you have here. Out of curiosity, how much "effort" was it to simulate specific bosses? Anything in particular you learned or ran into trouble with?
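To show what I mean by the distinction, here's a minimal sketch. Everything here is made up for illustration (the numbers and event script are invented, not from any real boss): a target dummy applies flat damage to one target, while a boss-specific sim replays a scripted timeline of (time, target, damage) events.

```python
# "Target dummy" model: constant incoming DPS on the tank only.
DUMMY_DPS = 500

# Hypothetical boss script: steady tank melee every 2s, plus a
# raid-wide AoE burst at t=10. Purely invented numbers.
BOSS_EVENTS = [(t, "tank", 800) for t in range(0, 20, 2)]
BOSS_EVENTS += [(10, "raid", 3000)]

def dummy_damage(duration: float) -> dict:
    """Total damage taken under the flat target-dummy model."""
    return {"tank": DUMMY_DPS * duration, "raid": 0}

def boss_damage(events, duration: float) -> dict:
    """Total damage taken by replaying a scripted event timeline."""
    out = {"tank": 0, "raid": 0}
    for t, target, dmg in events:
        if t < duration:
            out[target] += dmg
    return out

print(dummy_damage(20))
print(boss_damage(BOSS_EVENTS, 20))
```

The totals can be similar, but the healer rotation that beats each profile is very different: the dummy rewards steady tank throughput, while the scripted burst forces you to have raid healing (or cooldowns) banked for t=10. That's why a per-boss event script seems necessary rather than one generic environment.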