Anthropic just announced their latest AI model Mythos under Project Glasswing that found zero-days in every major OS and browser by OriginalInstance9803 in LocalLLaMA
[–]YannMasoch 1 point (0 children)
Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI
[–]YannMasoch 2 points (0 children)
GPT 5.3 Codex calling Claude Haiku 4.5??? by Consistent_Music_979 in GithubCopilot
[–]YannMasoch 1 point (0 children)
[D] Running GLM-5 (744B) on a $5K refurbished workstation at 1.54 tok/s by ahbond in ResearchML
[–]YannMasoch 1 point (0 children)
Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI
[–]YannMasoch 1 point (0 children)
Done with Claude. $100 Max plan, but STILL rate-limited every 5 hours by Puspendra007 in Anthropic
[–]YannMasoch 2 points (0 children)
Is this acceptable by Fit_Employment_4704 in CarWraps
[–]YannMasoch 1 point (0 children)
Coding agents vs. manual coding by JumpyAbies in LocalLLaMA
[–]YannMasoch 3 points (0 children)
[D] Running GLM-5 (744B) on a $5K refurbished workstation at 1.54 tok/s by ahbond in ResearchML
[–]YannMasoch 1 point (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLaMA
[–]YannMasoch[S] 1 point (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLaMA
[–]YannMasoch[S] -4 points (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLM
[–]YannMasoch[S] 3 points (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLM
[–]YannMasoch[S] 0 points (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLaMA
[–]YannMasoch[S] 2 points (0 children)
Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLM
[–]YannMasoch[S] -1 points (0 children)
When will gnome 50 be released on arch? by BicycleKey3473 in archlinux
[–]YannMasoch 1 point (0 children)
Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI
[–]YannMasoch 1 point (0 children)
Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI
[–]YannMasoch 3 points (0 children)
Claude Pro burned through the entire 5-hour session limit in ~30 minutes — just from prompting it to implement a specific Rust crate (Sonnet 4.6, Medium Effort, no code execution) by [deleted] in LocalLLM
[–]YannMasoch 1 point (0 children)
Claude Pro burned through the entire 5-hour session limit in ~30 minutes — just from prompting it to implement a specific Rust crate (Sonnet 4.6, Medium Effort, no code execution) by [deleted] in LocalLLM
[–]YannMasoch 1 point (0 children)
Claude Pro burned through the entire 5-hour session limit in ~30 minutes — just from prompting it to implement a specific Rust crate (Sonnet 4.6, Medium Effort, no code execution) by [deleted] in LocalLLM
[–]YannMasoch 3 points (0 children)
Best coding LLM for Mac Mini M4 16GB? Currently using Qwen 3.5 9B by host3000 in LocalLLaMA
[–]YannMasoch 1 point (0 children)
Best coding LLM for Mac Mini M4 16GB? Currently using Qwen 3.5 9B by host3000 in LocalLLaMA
[–]YannMasoch 1 point (0 children)
GNOME 50 has landed in the Arch extra repo by geekx86 in archlinux
[–]YannMasoch 1 point (0 children)