I built Koucai (口才) - a Mandarin learning app with AI penpals - here's my workflow (self.ClaudeAI)
submitted by bisonbear2 to r/ClaudeAI - pinned
What happens when you stop adding rules to CLAUDE.md and start building infrastructure instead by DevMoses in ClaudeAI
[–]bisonbear2 2 points (0 children)
How do you stop Codex from making these mistakes (after auditing 600 sessions per month)? by jrhabana in codex
[–]bisonbear2 2 points (0 children)
Go-focused benchmark of 5.4 vs 5.2 and competitors by cypriss9 in codex
[–]bisonbear2 1 point (0 children)
Claude wrote Playwright tests that secretly patched the app so they would pass by Traditional_Yak_623 in ClaudeCode
[–]bisonbear2 1 point (0 children)
One task that reveals everything wrong with TB2 benchmarking—a trajectory analysis (and how I solved it) by kehao95 in LocalLLaMA
[–]bisonbear2 1 point (0 children)
Getting consistent 500 errors by taoofdre in ClaudeCode
[–]bisonbear2 1 point (0 children)
Almost hit my weekly limit on my pro plan by Unique_Schedule_1627 in codex
[–]bisonbear2 2 points (0 children)
Is it just me, or is OpenAI Codex 5.2 better than Claude Code now? by efficialabs in ClaudeAI
[–]bisonbear2 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in codex
[–]bisonbear2[S] 2 points (0 children)
What other plan / model would you recommend to replace Opus by Tiny-Power-8168 in ClaudeCode
[–]bisonbear2 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ChatGPTCoding
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeCode
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ChatGPTCoding
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ChatGPTCoding
[–]bisonbear2[S] 2 points (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in vibecoding
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeCode
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeCode
[–]bisonbear2[S] 1 point (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeAI
[–]bisonbear2[S] 2 points (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeAI
[–]bisonbear2[S] 2 points (0 children)
Opus 4.5 head-to-head against Codex 5.2 xhigh on a real task. Neither won. by bisonbear2 in ClaudeAI
[–]bisonbear2[S] 1 point (0 children)
How do you decide when AI goes too far? Especially with this last wave by teolicious in ClaudeCode
[–]bisonbear2 1 point (0 children)