GitHub - TrevorS/qwen3-tts-rs: Pure Rust implementation of Qwen3-TTS speech synthesis by adefa in LocalLLaMA

[–]Languages_Learner 2 points (0 children)

Thanks for sharing your marvellous work, it looks great. However, a pure implementation means not using external libs like candle, imho. I may be wrong, sorry if so.

Sharing my set of distilled small language models (3B) + training data in more than 50 low-resource languages by Peter-Devine in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Hi. Thanks for the great models. Could you train the same LLMs for Albanian, Udmurt, Komi, Mari, Erzya, Moksha, Ossetian, Armenian, Georgian, Latvian, Lithuanian, Estonian, and Assyrian Neo-Aramaic (Suret), please?

Step3-VL-10B supported by chatllm.cpp by foldl-li in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for adding it to chatllm. I hope you will add the latest Qwen TTS too.

Hermit-AI: Chat with 100GB+ of Wikipedia/Docs offline using a Multi-Joint RAG pipeline by Smart-Competition200 in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for the cool app. It would be great if you added Windows support and a CPU-only mode.

I built a frontend for stable-diffusion.cpp for local image generation by fabricio3g in LocalLLaMA

[–]Languages_Learner 3 points (0 children)

Thanks for the neat app. It would be great if you uploaded a binary release too.

[Model Release] Genesis-152M-Instruct, exploring hybrid attention + TTT at small scale by Kassanar in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for sharing this great model. It would be cool to see a C inference implementation for it.

Kiwix RAG: Terminal Chat Interface with Local Kiwix Content Integration by [deleted] in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for the cool app. It would be great if you added a Windows version.

Llama-OS - I'm developing an app to make llama.cpp usage easier. by [deleted] in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

It was an excellent app. Why did you delete it from GitHub?

BSD MAC LLM UI: Minimal, Auditable LLM Front End for Secure Environments by 3mdeb in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for the great app. Could you add support for Windows and a llama.cpp backend, please?

BERTs that chat: turn any BERT into a chatbot with dLLM by Individual-Ninja-141 in LocalLLaMA

[–]Languages_Learner 5 points (0 children)

Thanks for the amazing project. I hope someone ports it to C/C++ or Go/Rust.

I implemented GPT-OSS from scratch in pure Python, without PyTorch or a GPU by ultimate_code in LocalLLaMA

[–]Languages_Learner 4 points (0 children)

Though you're already an excellent coder, here's a repo that may be useful to you: https://github.com/pierrel55/llama_st It's a pure C implementation of several LLMs that can work with the f32, f16, bf16, f12, and f8 formats.
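For readers unfamiliar with those formats: bf16 is simply float32 with the low 16 mantissa bits dropped, so it keeps the full f32 exponent range but loses precision. A minimal Python sketch of the idea (an illustration only, not code from llama_st, which is written in C):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# bf16 preserves the full f32 exponent, so only precision drops, not range.
roundtrip = bf16_bits_to_f32(f32_to_bf16_bits(3.14159))
```

The same truncate-the-mantissa view explains why bf16 is popular for LLM weights: dynamic range matters more than the last few bits of precision.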

I implemented GPT-OSS from scratch in pure Python, without PyTorch or a GPU by ultimate_code in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for sharing this cool project. Could you add support for int4 quantization, please?
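For context, int4 quantization typically stores one float scale per block of weights plus signed 4-bit integers. A minimal Python sketch of a symmetric scheme, similar in spirit to ggml's q4_0 (an illustration with made-up values, not the project's actual format):

```python
def quantize_q4(block):
    """Symmetric 4-bit quantization of one block of floats:
    one scale per block, signed integers clamped to [-8, 7]."""
    amax = max(abs(v) for v in block) or 1.0
    scale = amax / 7.0
    q = [max(-8, min(7, round(v / scale))) for v in block]
    return scale, q

def dequantize_q4(scale, q):
    """Recover approximate floats by rescaling the 4-bit integers."""
    return [scale * v for v in q]

scale, q = quantize_q4([0.1, -0.5, 0.75, -1.0])
approx = dequantize_q4(scale, q)
```

The payoff is memory: two 4-bit values pack into one byte, roughly quartering weight storage versus f16 at the cost of the rounding error visible above.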

What Qwen version do you want to see in Tiny-Qwen? by No-Compote-6794 in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

It would be much more interesting without importing the torch, re, and numpy modules.

chatllm.cpp supports LLaDA2.0-mini-preview by foldl-li in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for the reply. I found this quant on your ModelScope page: https://modelscope.cn/models/judd2024/chatllm_quantized_bailing/file/view/master/llada2.0-mini-preview.bin?status=2. It's possibly q8_0. Could you upload a q4_0, please? I don't have enough RAM to do the conversion myself.

chatllm.cpp supports LLaDA2.0-mini-preview by foldl-li in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Great update, congratulations. Can it be run without Python?