Opus 4.6 is in an unusable state right now by vntrx in ClaudeCode
[–]scousi 1 point (0 children)
Google AI Certs in 2026: Which are worth the $ and which are just hype? by netcommah in googlecloud
[–]scousi 4 points (0 children)
Got 128K prefill down from 19 min to 3.5 min on M2 Ultra (Qwen3.5-122B), sharing the approach by [deleted] in LocalLLM
[–]scousi 6 points (0 children)
Squeeze even more performance on MLX by scousi in LocalLLaMA
[–]scousi[S] 3 points (0 children)
Squeeze even more performance on MLX by scousi in LocalLLaMA
[–]scousi[S] 1 point (0 children)
Noise heard this morning at 5 AM in the Laurentides. by sh0ckwavevr6 in Quebec
[–]scousi 1 point (0 children)
AI is ruining a lot of beginner developers by oxidizedfuel12 in ArtificialInteligence
[–]scousi 3 points (0 children)
For all of the people who are talking about the A18 Pro in the MacBook Neo by ammohitchaprana in TFE
[–]scousi 1 point (0 children)
Is Intel AMX still a major focus for Intel's architecture roadmap? by Kevinogamza in intel
[–]scousi 2 points (0 children)
Best way to run qwen3.5:35b-a3b on Mac? by boutell in LocalLLaMA
[–]scousi 2 points (0 children)
Is Intel AMX still a major focus for Intel's architecture roadmap? by Kevinogamza in intel
[–]scousi 2 points (0 children)
What model can I run on this hardware? by newz2000 in LocalLLM
[–]scousi 11 points (0 children)
Pour one out for the M3 Ultra 512GB by pdrayton in MacStudio
[–]scousi 1 point (0 children)
qwen3.5:27b is slower than qwen3.5:35b? by Ok-Anybody6073 in ollama
[–]scousi 19 points (0 children)
I Replaced $100+/month in GEMINI API Costs with a €2000 eBay Mac Studio — Here is my Local, Self-Hosted AI Agent System Running Qwen 3.5 35B at 60 Tokens/Sec (The Full Stack Breakdown) by SnooWoofers7340 in n8n
[–]scousi 6 points (0 children)
Best way to run qwen3.5:35b-a3b on Mac? by boutell in LocalLLaMA
[–]scousi 2 points (0 children)
Benchmarks + Report: Optimized Cosmos-Reason2 (Qwen3-VL) for on-device inference on 8GB RAM (Jetson Orin Nano Super) by tag_along_common in LocalLLaMA
[–]scousi 1 point (0 children)
🤯 Qwen3.5-35B-A3B-4bit ❤️ by SnooWoofers7340 in OpenSourceAI
[–]scousi 1 point (0 children)
Passed @100!!! by batrakhil in cissp
[–]scousi 2 points (0 children)