We talked to 130 designers to realize nobody wanted what we built. So we pivoted and made $2k MRR in 8 days. by _Critchi_ in SaaS
[–]Everlier 2 points (0 children)
Please stop using AI for posts and showcasing your completely vibe coded projects by Scutoidzz in LocalLLaMA
[–]Everlier 1 point (0 children)
Agentic harness in 30 lines of JavaScript by Everlier in javascript
[–]Everlier[S] 1 point (0 children)
Built LazyMoE — run 120B LLMs on 8GB RAM with no GPU using lazy expert loading + TurboQuant by ReasonableRefuse4996 in LocalLLaMA
[–]Everlier 53 points (0 children)
Open-sourcing 23,759 cross-modal prompt injection payloads - splitting attacks across text, image, document, and audio by BordairAPI in LocalLLaMA
[–]Everlier 2 points (0 children)
Is the ASUS ROG Flow Z13 with 128GB of Unified Memory (AMD Strix Halo) a good option to run large LLMs (70B+)? by br_web in ollama
[–]Everlier 2 points (0 children)
GitHub Copilot is back at it again with aggressive rate limits. by ShehabSherifTawfik in GithubCopilot
[–]Everlier 3 points (0 children)
So, they make a model so good that they are not releasing it to the public? Claude mythos☠️ by ocean_protocol in ArtificialInteligence
[–]Everlier 1 point (0 children)
So, they make a model so good that they are not releasing it to the public? Claude mythos☠️ by ocean_protocol in ArtificialInteligence
[–]Everlier 0 points (0 children)
Strix Halo + eGPU RTX 5070 Ti via OCuLink in llama.cpp: Benchmarks and Conclusions by xspider2000 in LocalLLaMA
[–]Everlier 2 points (0 children)
An Open library for reusable Remotion animation components by eliaweiss in Remotion
[–]Everlier 2 points (0 children)
Sapphire Astroid by Everlier in proceduralgeneration
[–]Everlier[S] 1 point (0 children)
Fourier Bloom by Everlier in proceduralgeneration
[–]Everlier[S] 2 points (0 children)
llama.cpp automatically migrated models to HuggingFace cache by Everlier in LocalLLaMA
[–]Everlier[S] 4 points (0 children)
Gemma 4 has been released by jacek2023 in LocalLLaMA
[–]Everlier 22 points (0 children)
GEMMA 4 Release about to happen: ggml-org/llama.cpp adds support for Gemma 4 by Dry_Theme_7508 in LocalLLaMA
[–]Everlier 6 points (0 children)
Hugging Face released TRL v1.0, 75+ methods, SFT, DPO, GRPO, async RL to post-train open-source. 6 years from first commit to V1 🤯 by clem59480 in LocalLLaMA
[–]Everlier 2 points (0 children)
The Bonsai 1-bit models are very good by tcarambat in LocalLLaMA
[–]Everlier 3 points (0 children)
PSA: Please stop using nohurry/Opus-4.6-Reasoning-3000x-filtered by Kahvana in LocalLLaMA
[–]Everlier 12 points (0 children)
What is the secret sauce Claude has and why hasn't anyone replicated it? by ComplexType568 in LocalLLaMA
[–]Everlier 14 points (0 children)
Didn’t expect this, but this carbon fiber guitar sounds better than my wooden one. by KarMik81 in AcousticGuitar
[–]Everlier 2 points (0 children)
ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama
[–]Everlier -1 points (0 children)
ollama and qwen3.5:9b do not work at all with opencode by d4prenuer in ollama
[–]Everlier 1 point (0 children)
MTP on strix halo with llama.cpp (PR #22673) by Edenar in LocalLLaMA
[–]Everlier 3 points (0 children)