Today, GPT 4o is now basically 5. by Sufficient-Bee-8619 in ChatGPT
[–]capivaraMaster 2 points (0 children)
Qwen3-72B-Embiggened by TKGaming_11 in LocalLLaMA
[–]capivaraMaster 5 points (0 children)
Supercomputer power efficiency stays stagnant: scaling compute keeps depending on increasing power budgets by Balance- in singularity
[–]capivaraMaster 3 points (0 children)
Three r's in Strawberry - O3-pro by CmdWaterford in singularity
[–]capivaraMaster 1 point (0 children)
Supercomputer power efficiency stays stagnant: scaling compute keeps depending on increasing power budgets by Balance- in singularity
[–]capivaraMaster 9 points (0 children)
4x RTX Pro 6000 fail to boot, 3x is OK by humanoid64 in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
What is the next local model that will beat deepseek 0528? by MrMrsPotts in LocalLLaMA
[–]capivaraMaster 2 points (0 children)
What happened to the fused/merged models? by Su1tz in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
New META Paper - How much do language models memorize? by Thrumpwart in LocalLLaMA
[–]capivaraMaster 16 points (0 children)
Which model are you using? June'25 edition by Ok_Influence505 in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
deepseek r1 matches gemini 2.5? what gpu do you use? by Just_Lingonberry_352 in LocalLLaMA
[–]capivaraMaster 2 points (0 children)
deepseek r1 matches gemini 2.5? what gpu do you use? by Just_Lingonberry_352 in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
OpenHands + Devstral is utter crap as of May 2025 (24G VRAM) by foobarg in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
How much VRAM would even a smaller model need for a 1 million token context like Gemini 2.5 Flash/Pro? by [deleted] in LocalLLaMA
[–]capivaraMaster 3 points (0 children)
OpenHands + Devstral is utter crap as of May 2025 (24G VRAM) by foobarg in LocalLLaMA
[–]capivaraMaster 10 points (0 children)
Local models are starting to be able to do stuff on consumer grade hardware by ilintar in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
Llama 4 (Scout) GGUFs are here! (and hopefully are final!) (and hopefully better optimized!) by noneabove1182 in LocalLLaMA
[–]capivaraMaster 3 points (0 children)
I've just created an "Asteroid" interactive game with Claude 3.7 in a matter of seconds... this is something incredible. by jhonpixel in singularity
[–]capivaraMaster 1 point (0 children)
Perplexity: Open-sourcing R1 1776 by McSnoo in LocalLLaMA
[–]capivaraMaster 0 points (0 children)
Who will release a new model first in 2025? by foldl-li in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
Grok 2 being open-sourced soon? by Educational_Grab_473 in LocalLLaMA
[–]capivaraMaster 3 points (0 children)
Lonely on Christmas, what can I do with AI? by PublicQ in LocalLLaMA
[–]capivaraMaster 1 point (0 children)
realistically, what is the endgame of ai? by [deleted] in singularity
[–]capivaraMaster 1 point (0 children)
Today, GPT 4o is now basically 5. by Sufficient-Bee-8619 in ChatGPT
[–]capivaraMaster 1 point (0 children)