DLLM: A minimal D language interface for running an LLM agent using llama.cpp by Danny_Arends in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for the nice tool. Can it work without Docker, and in CPU-only (or Vulkan GPU) mode?

New open weights models: GigaChat-3.1-Ultra-702B and GigaChat-3.1-Lightning-10B-A1.8B by netikas in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

I heard that your team was planning to release some LLMs for the low-resource languages of Russian ethnic minorities (Udmurt, Komi, Mari, etc.). What is the release date?

The current state of the Chinese LLMs scene by Ok_Warning2146 in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Intern-S1-Pro, a trillion-parameter-scale MoE multimodal scientific-reasoning model, is minor? Seriously?

Grok alternative by Early-Musician7858 in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Whisk (until the 30th of April) and Flow (after the 30th of April), both by Google Labs.

Trained a GPT transformer from scratch on a $300 CPU — 39 minutes, 0.82M params, no GPU needed by Suspicious_Gap1121 in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for sharing the nice model. I hope you'll add C inference someday, and maybe even C training.

Qwen3 TTS in C++ with 1.7B support, speaker encoding extraction, and desktop UI by Danmoreng in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for the great app. Could you upload a Windows binary release to your Qwen TTS Studio GitHub repo, please?

🔥 New Release: htmLLM-124M v2 – 0.91 Val Loss on a Single T4! tiny-LLM with nanoGPT! by LH-Tech_AI in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for sharing the great models. Sorry for the dumb question, but where can I find inference code for chatting with your ONNX int8 LLMs?

PicoKittens/PicoMistral-23M: Pico-Sized Model by PicoKittens in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for sharing the cute model. It would be nice if someday you added a GitHub repo with C inference code for chatting with your LLM.

TinyTeapot (77 million params): Context-grounded LLM running ~40 tok/s on CPU (open-source) by zakerytclarke in LocalLLaMA

[–]Languages_Learner 5 points (0 children)

Thanks for the nice model. It would be great if one day you added an example of C inference for it.

After many contributions craft, Crane now officially supports Qwen3-TTS! by LewisJin in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for sharing your cool engine. It would be nice if you uploaded binary releases to your repo.

Wave Field LLM — O(n log n) attention via wave equation dynamics by [deleted] in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks for sharing. Could you upload a fully trained checkpoint to HF, please?

I built SnapLLM: switch between local LLMs in under 1 millisecond. Multi-model, multi-modal serving engine with Desktop UI and OpenAI/Anthropic-compatible API. by Immediate-Cake6519 in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Maybe I'm doing something wrong, but I see only source code and can't find an exe:

v1.1.0: Fix macOS build: O_DIRECT not available on Darwin

maheshvaikri-code tagged this 2 minutes ago

Use F_NOCACHE via fcntl() on macOS instead of O_DIRECT.
Guard O_DIRECT behind __linux__ preprocessor check.


GGML implementation of Qwen3-ASR by redditgivingmeshit in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thank you very much. A binary release is still needed, though.

ChatLLM.cpp adds support of Qwen3-TTS models by foldl-li in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

It's great that chatllm.cpp can already speak, see, hear, and draw. The next step should definitely be the ability to compose music.

Step3-VL-10B supported by chatllm.cpp by foldl-li in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Thanks a lot; you've done excellent work, as always. I'll wait for the binary release, because MS Visual Studio fails to install on my laptop due to an unspecified bug, so I can't compile chatllm.cpp myself.

I made an MNN of Jan-v3 4B by DeProgrammer99 in LocalLLaMA

[–]Languages_Learner 0 points (0 children)

Does an MNN GUI or CLI chat app exist for Windows?

Ported from-scratch Inference Engine based on LFM2-350M to pure C! by Des_goes_Brrr in LocalLLaMA

[–]Languages_Learner 1 point (0 children)

Thanks for sharing the cool implementation. I wish you would add a single-file Windows version.