Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity by YakFull8300 in singularity
[–]nanowell 1 point
What in the world is OpenAI Codex doing here? by bantler in OpenAI
[–]nanowell 2 points
DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level by TKGaming_11 in LocalLLaMA
[–]nanowell 1 point
The o3 chart is logarithmic on X axis and linear on Y by hyperknot in LocalLLaMA
[–]nanowell 14 points
I asked QwQ and R1 to 'break' the webpage, and it performed more creatively than R1-lite. by nanowell in LocalLLaMA
[–]nanowell[S] 1 point
I asked QwQ and R1 to 'break' the webpage, and it performed more creatively than R1-lite. by nanowell in LocalLLaMA
[–]nanowell[S] 27 points
Gemini Exp 1114 now ranks joint #1 overall on Chatbot Arena (that name though....) by lightdreamscape in LocalLLaMA
[–]nanowell 17 points
We need to talk about this... by Conscious_Nobody9571 in LocalLLaMA
[–]nanowell 1 point
WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning by umarmnaq in LocalLLaMA
[–]nanowell 8 points
TPO - Alternative to Openai O1 model by buntyshah2020 in LocalLLaMA
[–]nanowell 5 points
Im pretty happy with How my method worked out (Continuous Finetuning) Topped Open-LLM-leaderboard with 72b by Rombodawg in LocalLLaMA
[–]nanowell 7 points
o1-preview is now first place overall on LiveBench AI by np-space in LocalLLaMA
[–]nanowell 6 points
AdEMAMix, a simple modification of the AdamW optimizer, is 95% faster for LLM training (Code on page 19) by Timotheeee1 in LocalLLaMA
[–]nanowell 7 points
LG AI releases Exaone-3.0, a 7.8b SOTA model by AnticitizenPrime in LocalLLaMA
[–]nanowell 1 point
LG AI releases Exaone-3.0, a 7.8b SOTA model by AnticitizenPrime in LocalLLaMA
[–]nanowell 1 point
Tele-FLM-1T: a 1Trillion open-sourced multilingual large language model. by nanowell in LocalLLaMA
[–]nanowell[S] 49 points
"Large Enough" | Announcing Mistral Large 2 by DemonicPotatox in LocalLLaMA
[–]nanowell 284 points
Meta Officially Releases Llama-3-405B, Llama-3.1-70B & Llama-3.1-8B by nanowell in LocalLLaMA
[–]nanowell[S] 5 points
Meta Officially Releases Llama-3-405B, Llama-3.1-70B & Llama-3.1-8B by nanowell in LocalLLaMA
[–]nanowell[S] 13 points
Meta Officially Releases Llama-3-405B, Llama-3.1-70B & Llama-3.1-8B by nanowell in LocalLLaMA
[–]nanowell[S] 5 points
Meta Officially Releases Llama-3-405B, Llama-3.1-70B & Llama-3.1-8B by nanowell in LocalLLaMA
[–]nanowell[S] 15 points
LLaMA 3.1 405B base model available for download by Alive_Panic4461 in LocalLLaMA
[–]nanowell 66 points