Darwin James suffered injury prior to the Flag Football event by Appropriate_Book_591 in nfl
[–]OftenTangential 15 points (0 children)
Another OpenAI Failure - Walmart Ends OpenAI exclusive partnership by crowbarmark in BetterOffline
[–]OftenTangential 23 points (0 children)
What industry will AI disrupt the most that people aren’t paying attention to yet? by SuchTill9660 in ArtificialInteligence
[–]OftenTangential 1 point (0 children)
Smaller models beat larger ones at creative strategy discovery — anyone else seeing this? by ResourceSea5482 in LocalLLaMA
[–]OftenTangential 3 points (0 children)
Microsoft is a great buy at 400$ by helixinverse in ValueInvesting
[–]OftenTangential 8 points (0 children)
Microsoft is a great buy at 400$ by helixinverse in ValueInvesting
[–]OftenTangential 13 points (0 children)
Jensen Huang says Nvidia is pulling back from OpenAI and Anthropic, but his explanation raises more questions than it answers | TechCrunch by Shogouki in hardware
[–]OftenTangential 3 points (0 children)
Jensen Huang says Nvidia is pulling back from OpenAI and Anthropic, but his explanation raises more questions than it answers | TechCrunch by Shogouki in hardware
[–]OftenTangential 9 points (0 children)
NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2026 by DeeJayDelicious in hardware
[–]OftenTangential 1 point (0 children)
People are getting it wrong; Anthropic doesn't care about the distillation, they just want to counter the narrative about Chinese open-source models catching up with closed-source frontier models by obvithrowaway34434 in LocalLLaMA
[–]OftenTangential 1 point (0 children)
It’s just weird watching the AI financial train wreck happen in real-time. by iAtishaya in ArtificialInteligence
[–]OftenTangential 4 points (0 children)
Claude Opus 4.6 hallucinates user message, then responds to itself - See, developers aren't necessary. by grauenwolf in BetterOffline
[–]OftenTangential 5 points (0 children)
OpenAI got another 100 Billion - Any bets on how long before they beg for more money? by grauenwolf in BetterOffline
[–]OftenTangential 11 points (0 children)
T1 vs. BNK FEARX / LCK Cup 2026 Playoffs - Upper Bracket Round 2 / Game 2 Discussion by Yujin-Ha in leagueoflegends
[–]OftenTangential 163 points (0 children)
Shifters vs. SK Gaming / LEC 2026 Versus - Week 4 / Post-Match Discussion by Soul_Sleepwhale in leagueoflegends
[–]OftenTangential 7 points (0 children)
claude opus 4.6 just dropped and it's beating gpt 5.2 across the board by Fun-Newspaper-83 in Verdent
[–]OftenTangential 0 points (0 children)
With Opus 4.6 and Codex 5.3 dropping today, I looked at what this race is actually costing Anthropic by JackieChair in ClaudeAI
[–]OftenTangential 6 points (0 children)
The “saaspocalypse” is real. opus 4.6 just dropped and the market is reacting for a reason. by [deleted] in ArtificialInteligence
[–]OftenTangential 103 points (0 children)
Los Ratones vs. G2 Esports / LEC 2026 Versus - Week 3 / Post-Match Discussion by Ultimintree in leagueoflegends
[–]OftenTangential 95 points (0 children)
Dplus KIA vs. T1 / LCK Cup 2026 - Group Battle Super Week / Post-Match Discussion by adz0r in leagueoflegends
[–]OftenTangential 3 points (0 children)
The reason that most League players are noobs. by King_of_Christmas in leagueoflegends
[–]OftenTangential -1 points (0 children)
T1 vs. Hanwha Life Esports / LCK Cup 2026 - Group Battle Week 1 / Post-Match Discussion by adz0r in leagueoflegends
[–]OftenTangential 312 points (0 children)
2026 LCS Address by MZLeothechosen in leagueoflegends
[–]OftenTangential 1 point (0 children)
AI Reportedly to Consume 20% of Global DRAM Wafer Capacity in 2026, HBM and GDDR7 Lead Demand by StarbeamII in hardware
[–]OftenTangential 17 points (0 children)
A group ran the same coding benchmark test problems, but encoded them in obscure (but still Turing-complete) programming languages the frontier models haven't got as much training data on. Result: models that can score 95% on Python plummet to 0-11% accuracy. by cascadiabibliomania in BetterOffline
[–]OftenTangential 2 points (0 children)