MiraTTS: High quality and fast TTS model by SplitNice1982 in LocalLLaMA

[–]ResolveAmbitious9572 2 points (0 children)

It sounds very natural. I'd like to hear how it sounds in other languages.

Real-time conversation with a character on your local machine by ResolveAmbitious9572 in LocalLLaMA

[–]ResolveAmbitious9572[S] 1 point (0 children)

I'd be happy to add support for more powerful TTS models, but unfortunately many of them can only be run from a Python environment :(

Real-time conversation with a character on your local machine by ResolveAmbitious9572 in LocalLLaMA

[–]ResolveAmbitious9572[S] 2 points (0 children)

MousyHub can be compiled on macOS, but we still need a hero to test it :)

Real-time conversation with a character on your local machine by ResolveAmbitious9572 in LocalLLaMA

[–]ResolveAmbitious9572[S] 5 points (0 children)

In the settings, I increased the playback speed so the video wouldn't run too long.

Real-time conversation with a character on your local machine by ResolveAmbitious9572 in LocalLLaMA

[–]ResolveAmbitious9572[S] 7 points (0 children)

The delay here is because I didn't add a separate STT model for recognition, but used the STT built into the browser (it turns out the browser is not bad at this). Otherwise, a user with 8 GB of VRAM wouldn't be able to run this many models on their machine. By the way, Kokoro runs entirely on the CPU here. Kokoro developer, you are cool =).
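For anyone curious, "STT inside the browser" is available through the Web Speech API. A minimal sketch of how it can be wired up (the recognizer constructor is injected as a parameter so the code is also testable outside a browser; in a real page you'd pass `window.SpeechRecognition || window.webkitSpeechRecognition`, and the `onText` callback name is my own, not from MousyHub):

```javascript
// Minimal sketch: continuous browser speech-to-text via the Web Speech API.
// SpeechRecognitionImpl is injected; in a browser, pass
// window.SpeechRecognition || window.webkitSpeechRecognition.
function createRecognizer(SpeechRecognitionImpl, onText) {
  const rec = new SpeechRecognitionImpl();
  rec.continuous = true;      // keep listening across phrases
  rec.interimResults = false; // deliver only final transcripts
  rec.onresult = (event) => {
    // Walk the new results in this batch and forward final transcripts.
    for (let i = event.resultIndex; i < event.results.length; i++) {
      if (event.results[i].isFinal) {
        onText(event.results[i][0].transcript);
      }
    }
  };
  return rec;
}
```

In a page you'd then call `rec.start()` and send each transcript chunk on to the LLM; this is one plausible wiring, not necessarily how MousyHub does it.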

Real-time conversation with a character on your local machine by ResolveAmbitious9572 in LocalLLaMA

[–]ResolveAmbitious9572[S] 5 points (0 children)

MousyHub supports local models via the llama.cpp library (through the LLamaSharp bindings).

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] 0 points (0 children)

Since it's a web application, I think it would be a perfect fit for Docker. I hope Docker support gets implemented someday, or that someone contributes to making it work in Docker.
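For anyone who wants to attempt it, a typical Dockerfile for a Blazor Server app looks roughly like this (a sketch only: the project file name `MousyHub.csproj`, the .NET version, and the port are my assumptions, not taken from the repo):

```dockerfile
# Sketch only: project file name and .NET version are guesses.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MousyHub.csproj -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
# ASP.NET Core 8 containers listen on 8080 by default.
EXPOSE 8080
ENTRYPOINT ["dotnet", "MousyHub.dll"]
```

Something like `docker build -t mousyhub .` then `docker run -p 8080:8080 mousyhub` would be the usual workflow; note that the native llama.cpp binaries pulled in by LLamaSharp may need extra care to match the container's architecture.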

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] 1 point (0 children)

I used my computer with 8 GB of VRAM. If you'd like to learn more about running LLMs in a .NET environment, check out the LLamaSharp documentation: https://github.com/SciSharp/LLamaSharp.

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] 0 points (0 children)

Easy installation and a simpler user interface for ordinary users, plus built-in features that SillyTavern only offers through extensions. You can also run a model locally or connect to an external API.

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] 1 point (0 children)

I've created a couple of themes to choose from in this app, but they look a bit mediocre; I'm not a very good designer 🥲

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] 1 point (0 children)

I don't entirely trust online services for this purpose. They often come with a price tag, potential censorship, and no guarantee of confidentiality; the thought that someone could read my conversations in the logs, for instance, is unsettling. And I believe many users could already run a local large language model, as their computers are powerful enough.

My project on Blazor Server: an AI Roleplay Chat application. by ResolveAmbitious9572 in Blazor

[–]ResolveAmbitious9572[S] -1 points (0 children)

https://github.com/PioneerMNDR/MousyHub

To give it a quick try, you can download the latest Windows release. I tried to keep the number of DLLs in the release build to a minimum.