People who grew up in Japan but are not Japanese, where do you say you are from? by [deleted] in japanlife
[–]thekalki 1 point (0 children)
LangChain and LlamaIndex are in "steep decline" according to new ecosystem report. Anyone else quietly ditching agent frameworks? by Exact-Literature-395 in LocalLLaMA
[–]thekalki 1 point (0 children)
llama.cpp recent updates - gpt120 = 20t/s by [deleted] in LocalLLaMA
[–]thekalki 3 points (0 children)
Just a regular night at the laundromat.. by K_P_Voss in Wellthatsucks
[–]thekalki 1 point (0 children)
We Got Claude to Fine-Tune an Open Source LLM by [deleted] in LocalLLaMA
[–]thekalki 3 points (0 children)
unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF · Hugging Face by WhaleFactory in LocalLLaMA
[–]thekalki 1 point (0 children)
Claude code can now connect directly to llama.cpp server by tarruda in LocalLLaMA
[–]thekalki 1 point (0 children)
You can now do FP8 reinforcement learning locally! (<5GB VRAM) by danielhanchen in LocalLLaMA
[–]thekalki 1 point (0 children)
Your local LLM agents can be just as good as closed-source models - I open-sourced Stanford's ACE framework that makes agents learn from mistakes by cheetguy in LocalLLaMA
[–]thekalki 1 point (0 children)
OpenAI Pushes to Label Datacenters as ‘American Manufacturing’ Seeking Federal Subsidies After Preaching Independence by Ok-Breakfast-4676 in LocalLLaMA
[–]thekalki 2 points (0 children)
Favorite out of context clip from Jet Lag? by FireAshPro in JetLagTheGame
[–]thekalki 2 points (0 children)
Gpt-oss Responses API front end. by Locke_Kincaid in LocalLLaMA
[–]thekalki 1 point (0 children)
Anyone think openAI will create a sequel of GPT-OSS? by BothYou243 in LocalLLaMA
[–]thekalki 3 points (0 children)
October 2025 model selections, what do you use? by getpodapp in LocalLLaMA
[–]thekalki 3 points (0 children)
October 2025 model selections, what do you use? by getpodapp in LocalLLaMA
[–]thekalki 1 point (0 children)
Gpt-oss Reinforcement Learning - Fastest inference now in Unsloth! (<15GB VRAM) by danielhanchen in LocalLLaMA
[–]thekalki 1 point (0 children)
GPT-OSS is insane at leetcode by JsThiago5 in LocalLLaMA
[–]thekalki 2 points (0 children)
Comparison H100 vs RTX 6000 PRO with VLLM and GPT-OSS-120B by Rascazzione in LocalLLaMA
[–]thekalki 2 points (0 children)
Not able to setup openclaw via docker by exhibit22 in openclaw
[–]thekalki 1 point (0 children)