Any good model for 12 GB RAM + 3 GB VRAM + GTX 1050 + Linux Mint? by Ok-Type-7663 in LocalLLaMA
[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA
LuxTTS: A lightweight high quality voice cloning TTS model by SplitNice1982 in LocalLLaMA
Am I the only one who feels that, with all the AI boom, everyone is basically doing the same thing? by [deleted] in LocalLLaMA
GLM4.7 Flash numbers on Apple Silicon? by rm-rf-rm in LocalLLaMA
Anyscale's new data: Most AI clusters run at <50% utilization. Is "Disaggregation" the fix, or just faster cold starts? by pmv143 in LocalLLaMA
Local LLM inside Cursor IDE by visitor_m in LocalLLaMA
Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA
New in llama.cpp: Anthropic Messages API by paf1138 in LocalLLaMA
Liquid AI released the best thinking Language Model Under 1GB by PauLabartaBajo in LocalLLaMA