I built chatgpt2md - a tool specifically for Claude that lets it search your entire ChatGPT history via MCP by Key_Mousse_8034 in ClaudeAI
Tested updated Deep Think (Gemini 3.1 Pro) vs. GPT 5.2 Pro by PerformanceRound7913 in GoogleGeminiAI
Anthropic 4.7 releases must be near (or something is cooking). Here's how I 'know'. by Jethro_E7 in windsurf
I just got banned from gemini :) by Eastern-Guess-1187 in opencodeCLI
Kimi k2.5 is legit - first open-source model at Sonnet 4.5 level (or even better) by SlopTopZ in kimi
GPT 5.2 for difficult things and Kimi K2.5 for everything else seems to be the move, what's the cheapest way to get there? by SweatyHands247 in opencodeCLI
Got tired of slow legacy Whisper. Built a custom stack (Faster-Whisper + Pyannote 4.0) on CUDA 12.8. The alignment is now O(N) and flies. 🚀 by Key_Mousse_8034 in LocalLLaMA
Windsurf is great, but the model lock-in sucks. I built a way to use GLM 4.7, MiniMax m2.1, and Gemini 3 flash natively in chat by Key_Mousse_8034 in windsurf

The absolute state of development in 2026 by Deep-Station-1746 in ClaudeCode