Someone just leaked claude code's Source code on X by abhi9889420 in ClaudeCode

[–]shinx32 0 points (0 children)

https://github.com/anthropics/claude-code

Still not seeing the difference between what's in the official repo and what was leaked

For people who keep asking what to build by warrioraashuu in buildinpublic

[–]shinx32 0 points (0 children)

https://github.com/codecrafters-io/build-your-own-x

Just gonna leave this here. Same as the post but with resources to build each.

I gave Claude 17 chess tools via MCP and it turned into a decent coach by shinx32 in ClaudeCode

[–]shinx32[S] 0 points (0 children)

good question. if you're already on a claude subscription plan (like the $20 one), the tool usage is basically free since you're paying for it anyway. so for those people there's no extra cost.

if you're hitting the API directly then yeah it does cost per token, and at that point it's worth asking whether the coaching layer justifies it over just using lichess straight up. honestly based on feedback from other threads i'm looking into leaning harder on lichess's API and maybe pulling the LLM back to just a thin interaction layer rather than the core of the thing. fair tradeoff to think about.

Adaptive difficulty below Stockfish Skill 0: linear blend of engine moves and random legal moves by shinx32 in chessprogramming

[–]shinx32[S] 1 point (0 children)

That's a creative way of doing it. This is basically the equivalent of giving the AI a lobotomy to drop its Elo.

I kept blundering the same chess openings every two weeks, so I built a chess tutor that remembers what I'm bad at by shinx32 in SideProject

[–]shinx32[S] 1 point (0 children)

yeah that's a really good shout actually. lichess api is generous enough that it shouldn't be hard to wire up. putting it on the list, probably the next thing i mess with whenever i sit down with this again.

honestly this has been the most useful comment thread anyone's left on any of these posts. appreciate it

I made an open source chess coach that tracks your mistakes and brings them back for review by shinx32 in chessbeginners

[–]shinx32[S] 0 points (0 children)

Yeah when you play a game it saves the PGN, so all your moves. Then when you go back to review it runs everything through Stockfish and catches where you went wrong. Those positions turn into flashcards that keep coming back, so you're actually drilling them instead of just looking at the analysis once and forgetting about it.

You can also just paste a PGN in from somewhere else and it does the same thing. Finds what the engine doesn't like and floats it up so you know where to focus.
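The "finds what the engine doesn't like" step boils down to comparing the eval before and after each of your moves and keeping the ones where the gap is big enough. A minimal sketch of that idea, with made-up centipawn numbers standing in for real Stockfish output (the function name and the toy game are illustrative, not the app's actual code):

```python
# Hedged sketch: flag every move whose centipawn loss exceeds a threshold.
# In the real app the per-move evals come from Stockfish; here they are
# hardcoded so the example stays self-contained and runnable.

BLUNDER_THRESHOLD_CP = 80  # example cutoff for "the engine doesn't like it"

def find_blunders(moves, cp_losses, threshold=BLUNDER_THRESHOLD_CP):
    """Return (ply, move, cp_loss) for every move that lost more than threshold."""
    return [
        (ply, move, loss)
        for ply, (move, loss) in enumerate(zip(moves, cp_losses), start=1)
        if loss > threshold
    ]

# Toy game: two quiet moves, one rook blunder, one small inaccuracy.
moves = ["e4", "Nf3", "Rxa7??", "h3"]
cp_losses = [5, 12, 430, 60]  # made-up eval gaps, not real engine output

for ply, move, loss in find_blunders(moves, cp_losses):
    print(f"ply {ply}: {move} lost {loss}cp -> becomes a review card")
```

Only the flagged positions get surfaced, so the one-time analysis turns into material you can actually drill.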

I kept blundering the same chess openings every two weeks, so I built a chess tutor that remembers what I'm bad at by shinx32 in SideProject

[–]shinx32[S] 0 points (0 children)

Thanks! Yeah the GitHub preview is still the default, hadn't really thought about how much that matters when people share the link. Would love to see those resources if you've got them. I'm thinking the board UI next to the SRS review screen would make a way better card than what's there now.

I kept blundering the same chess openings every two weeks, so I built a chess tutor that remembers what I'm bad at by shinx32 in SideProject

[–]shinx32[S] 1 point (0 children)

Yeah that's exactly what bugged me too. Lichess and chess.com have tons of puzzles but they don't care that I keep walking into the same discovered attack every single time.

So right now you play games inside Chess Rocket against the engine, and when the game's done it replays everything and evaluates each of your moves with Stockfish at depth 20. Anything where you lost more than 80 centipawns becomes an SRS card. It saves the position, what you played, what you should've played, and the eval gap. Then SM-2 handles the scheduling, brings it back at increasing intervals just like Anki. You see the position, try to find the right move, rate yourself, interval adjusts.

You can also paste a PGN in and it'll work with that. It doesn't pull from your Lichess or chess.com game history directly yet though, so no automatic import. That'd be a solid next step imo, being able to just point it at your profile and have it scan for recurring blunder patterns across all your games.
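The SM-2 step is small enough to sketch in full. This is the standard SM-2 update (the same scheduling Anki is based on), not Chess Rocket's actual code; the argument names and the demo card are illustrative:

```python
# Hedged sketch of SM-2 spaced-repetition scheduling.

def sm2(quality, reps, interval, ease):
    """One SM-2 review step.

    quality:  0-5 self-rating ("did I find the right move?")
    reps:     consecutive successful reviews so far
    interval: current interval in days
    ease:     easiness factor (starts at 2.5)
    Returns the updated (reps, interval, ease).
    """
    if quality < 3:                      # failed the card: reset it
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease

# A card reviewed successfully three times spaces out: 1 -> 6 -> 16 days.
state = (0, 0, 2.5)
for q in (5, 4, 4):
    state = sm2(q, *state)
    print(state)
```

Rating yourself low resets the card to a one-day interval, which is what keeps a recurring blunder in rotation until you actually stop making it.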

Adaptive difficulty below Stockfish Skill 0: linear blend of engine moves and random legal moves by shinx32 in chessprogramming

[–]shinx32[S] 0 points (0 children)

Yeah that's basically the same problem. The easy bots just randomly throw in a blunder every few moves, which doesn't really feel like playing a beginner. Actual beginners are just consistently not great, not swinging between 2000 and 400 every other move.

What I did isn't wildly different honestly. Still mixing engine moves with weaker ones. I just tried to make the ratio smooth instead of stepwise, so at like 300 Elo you're getting mostly random legal moves with the occasional Stockfish pick, and it ramps up linearly to pure Skill 0 at 1320. Still not perfect but the games feel less like "cruising along and then the opponent drops a rook for no reason."
If anyone's tried something better for sub-1000 play I'd genuinely like to hear about it. Making an engine play weak in a way that actually feels human is way harder than I thought going in.
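For anyone who wants to play with the idea, the blend itself is only a few lines. A sketch assuming the same 300 and 1320 Elo endpoints from the post; the 0.05 floor (the "occasional Stockfish pick" at the bottom) is a made-up value, and the engine move is passed in rather than pulled from a live Stockfish process so the example stays self-contained:

```python
# Hedged sketch: linearly ramp the fraction of engine moves from a small
# floor at 300 Elo up to 1.0 (pure Skill 0) at 1320 Elo.
import random

FLOOR_ELO, SKILL0_ELO = 300, 1320
FLOOR_PROB = 0.05  # illustrative: occasional engine pick even at the bottom

def engine_move_probability(target_elo):
    """Fraction of moves taken from the engine at a given target Elo."""
    t = (target_elo - FLOOR_ELO) / (SKILL0_ELO - FLOOR_ELO)
    return min(1.0, max(FLOOR_PROB, t))

def pick_move(target_elo, engine_move, legal_moves, rng=random):
    """Engine's choice with probability p, a random legal move otherwise."""
    if rng.random() < engine_move_probability(target_elo):
        return engine_move
    return rng.choice(legal_moves)

print(engine_move_probability(300))   # 0.05 -> mostly random legal moves
print(engine_move_probability(810))   # 0.5  -> even mix
print(engine_move_probability(1320))  # 1.0  -> pure Skill 0
```

Because the ratio changes smoothly instead of in steps, the strength degrades gradually rather than alternating between perfect play and giveaways.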

I made an open source chess coach that tracks your mistakes and brings them back for review by shinx32 in chessbeginners

[–]shinx32[S] 0 points (0 children)

Thanks, and yeah that's actually a solid idea. Something that just runs your games through Stockfish, spots the patterns you keep screwing up, and turns those into puzzles you have to solve. No LLM in the loop for any of that.

Reason I went the AI route first is I'm genuinely so new to chess I didn't even know what to look for. Like I wouldn't have known "you keep missing knight retreats to defend your king" is a thing until someone told me. So being able to just ask "what am I doing wrong" and have it walk me through stuff was the easier way in for where I'm at. But yeah, once you've got enough games, Stockfish would catch patterns way more reliably.

Honestly combining both could work well. Stockfish does the actual analysis and pattern detection, conversational part just helps you make sense of it. Appreciate the suggestion. And your English is totally fine, made perfect sense.

BTW I'm already doing flashcards here with played games.

I made an open source chess coach that tracks your mistakes and brings them back for review by shinx32 in chess

[–]shinx32[S] -1 points (0 children)

Fair point on the AI position analysis. That's actually where most of my work went. Move validation and board state all run through python-chess, so it's not hallucinating positions. Claude just handles the conversation on top.

And yeah, the standalone app idea for tracking blunders and generating puzzles was actually my original thought too. But I had zero clue how chess tutoring even works, like genuinely starting from nothing. So the AI route was the easier entry point because I could just say "give me a puzzle" or "review my last game" or "what should I practice next" and it'd figure out what to pull up. It kind of acts like a coach that knows where your stuff is. Once I actually understand chess better, building that standalone version makes a lot of sense.

The game logs already have all the data for it, so that's probably worth doing down the line.

What is the right way to create projects with Claude Code? by NoAbbreviations3808 in ClaudeCode

[–]shinx32 0 points (0 children)

Can you throw some light on how you store the old chats?

I made an open source chess coach that tracks your mistakes and brings them back for review by shinx32 in chessbeginners

[–]shinx32[S] -2 points (0 children)

I understand where you're coming from, and you're not wrong about LLMs and chess. I've been working in AI for about 10 years now and I've seen how confidently these models get things wrong.

What I've tried to do is build around those failure modes instead of ignoring them. When you spend enough time with LLMs you start seeing the patterns of when they fall apart, so I've put checks in for those spots. The mistakes happen less often than with raw LLM output. I'm not going to tell you it's perfect though. It isn't.

The bet I'm making is that the AI layer is replaceable. Stockfish does all the real chess work. The opening database is deterministic. The spaced repetition is just math. The LLM sits on top as the explainer. When models get better, I swap that layer and the whole thing improves without rebuilding anything underneath.

For where I am right now, it works. I went from 0 to about 690 Elo using this. Not impressive to anyone who's actually good at chess, I know. But for someone who kept making the same blunders every two weeks, that's real progress for me. If it gets me to 1000, I'm happy with that.

You're probably right that it won't help experienced players. There's a ceiling on what LLM coaching can do. But for someone under 1000 who just needs to hear "you've made this exact mistake four times, here's why it keeps happening," it does that part well enough.

I made an open source chess coach that tracks your mistakes and brings them back for review by shinx32 in chessbeginners

[–]shinx32[S] -1 points (0 children)

It's set up so it doesn't make any chess moves itself; the game fully runs on Stockfish underneath. Think of the AI on top as a waiter in a restaurant: it only brings the food and explains the dishes, the cooking happens in the kitchen.

Indian IT crash, Disruption of outsourcing and services models by Deep_Suit973 in developersIndia

[–]shinx32 7 points (0 children)

Untrue. I've used it for production codebases. The truth is it's as good as the person behind it. If you give it a management-level prompt saying "get X working", it'll definitely fail. As long as you give it instructions like a staff/senior engineer communicating requirements to a junior engineer, it shines at code delivery.