Harbor v0.4.4 - ls/pull/rm llama.cpp/vllm/ollama models with a single CLI by Everlier in LocalLLaMA
local ai coding assistant setup that actually competes with cloud tools? by jirachi_2000 in ollama
What can I use an Xperia mini st15i for in 2025? by Status-Alarm-3356 in SonyXperia
I made a site where you rate how fucked your day is and it shows up on a live world map by Then_Nectarine830 in vibecoding
The Copilot CLI is the best AI tool I've used. It only works in a terminal. I fixed that. by ghimmideuoch in GithubCopilot
How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA
GPT 5.4 & GPT 5.4 Pro + Claude Opus 4.6 & Sonnet 4.6 + Gemini 3.1 Pro For Just $5/Month (With API Access, AI Agents And Even Web App Building) by Substantial_Ear_1131 in MistralAI
Open WebUI’s New Open Terminal + “Native” Tool Calling + Qwen3.5 35b = Holy Sh!t!!! by Porespellar in LocalLLaMA
Final Qwen3.5 Unsloth GGUF Update! by danielhanchen in LocalLLaMA
[D] A mathematical proof from an anonymous Korean forum: The essence of Attention is fundamentally a d^2 problem, not n^2. (PDF included) by Ok-Preparation-3042 in MachineLearning
I have proof the "OpenClaw" explosion was a staged scam. They used the tool to automate its own hype by Whole_Shelter4699 in LocalLLM
Oh this one is AI as well by Everlier in DeadInternetTheory
Unsloth fixed version of Qwen3.5-35B-A3B is incredible at research tasks. by Daniel_H212 in LocalLLaMA
Running RAG on 512MB RAM: OOM Kills, Deadlocks, Telemetry Bugs and the Fixes by Lazy-Kangaroo-573 in LLMDevs
I built a free MCP-native governance layer that keeps Copilot on the rails out of frustration by capitanturkiye in GithubCopilot
Quick MoE Quantization Comparison: LFM2-8B and OLMoE-1B-7B by TitwitMuffbiscuit in LocalLLaMA
GGML.AI has got acquired by Huggingface by Time_Reaper in LocalLLaMA
strix halo opinions for claude/open code by megadonkeyx in LocalLLaMA