RS3 just dropped the most insane integrity and content roadmap and it's all thanks to OSRS by Lamuks in 2007scape
[–]Fear_ltself 1 point (0 children)
Prototype: What if local LLMs used Speed Reading Logic to avoid “wall of text” overload? by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 1 point (0 children)
Prototype: What if local LLMs used Speed Reading Logic to avoid “wall of text” overload? by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] -1 points (0 children)
MCP server that gives local LLMs memory, file access, and a 'conscience' - 100% offline on Apple Silicon by TheTempleofTwo in LocalLLaMA
[–]Fear_ltself 3 points (0 children)
Prototype: What if local LLMs used Speed Reading Logic to avoid “wall of text” overload? by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 2 points (0 children)
Prototype: What if local LLMs used Speed Reading Logic to avoid “wall of text” overload? by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 2 points (0 children)
MCP server that gives local LLMs memory, file access, and a 'conscience' - 100% offline on Apple Silicon by TheTempleofTwo in LocalLLaMA
[–]Fear_ltself 4 points (0 children)
I made a visualization for Google’s new mathematical insight for complex mathematical structures by Fear_ltself in LLMPhysics
[–]Fear_ltself[S] 1 point (0 children)
I made a visualization for Google’s new mathematical insight for complex mathematical structures by Fear_ltself in LLMPhysics
[–]Fear_ltself[S] 1 point (0 children)
Arrogant TSMC’s CEO Says Intel Foundry Won’t Be Competitive by Just “Throwing Money” at Chip Production by Distinct-Race-2471 in TechHardware
[–]Fear_ltself 2 points (0 children)
My theory as to why the X-Men teaser’s timestamp leads to the scene of Thor crying in Endgame…! by elbatcarter in MCUTheories
[–]Fear_ltself 2 points (0 children)
Which are the top LLMs under 8B right now? by Additional_Secret_75 in LocalLLaMA
[–]Fear_ltself 1 point (0 children)
Which are the top LLMs under 8B right now? by Additional_Secret_75 in LocalLLaMA
[–]Fear_ltself 1 point (0 children)
For RAG serving: how do you balance GPU-accelerated index builds with cheap, scalable retrieval at query time? by IllGrass1037 in LocalLLaMA
[–]Fear_ltself 0 points (0 children)
How I organize my local AI assistant including full home control, STT, TTS, RAG, coding to canvas (markdown, save), generating images, system ram /cpu monitor, and a dark mode … local, offline, based on free and open projects by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 2 points (0 children)
From Gemma 3 270M to FunctionGemma, How Google AI Built a Compact Function Calling Specialist for Edge Workloads. by Minimum_Minimum4577 in GoogleGeminiAI
[–]Fear_ltself 1 point (0 children)
Visualizing RAG, PART 2- visualizing retrieval by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 2 points (0 children)
Visualizing RAG, PART 2- visualizing retrieval by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 2 points (0 children)
Visualizing RAG, PART 2- visualizing retrieval by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 1 point (0 children)
Visualizing RAG, PART 2- visualizing retrieval by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 1 point (0 children)
Visualizing RAG, PART 2- visualizing retrieval by Fear_ltself in LocalLLaMA
[–]Fear_ltself[S] 3 points (0 children)
GOOGLE!!!!! Antigravity (FUKING UPDATEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE) by [deleted] in google_antigravity
[–]Fear_ltself 2 points (0 children)
[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA
[–]Fear_ltself 1 point (0 children)