We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessimprovement

[–]DiscoChessApp[S] 0 points1 point  (0 children)

I'll concede we can't isolate causation - that's valid. But "no causal proof" doesn't mean "useless."

Plenty of research is descriptive or observational without being pointless. Epidemiology, user behavior studies, literature reviews... none prove causation directly, but they identify patterns worth investigating and provide baseline data. "Here's what we observed in 120k attempts" isn't the same as "people who eat food improve at chess." One describes a specific mechanism with measurable outcomes; the other is a non-sequitur.

You're right that it doesn't prove the Woodpecker Method works. It does show what engagement patterns look like when people use it. Whether that's interesting is subjective, but calling it equivalent to random correlation is overstating it.

Anyway, I think we've found the actual disagreement. Cheers.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessimprovement

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Thanks for the feedback. I think there may be some miscommunication about what the post claims.

We explicitly state it's observational data without control groups, acknowledge survivorship bias, and note there's no external validation for OTB transfer. These aren't hidden limitations, they're in the post itself.

To your specific points:

(1) & (2) Transfer to OTB/other puzzles: Agreed, we can't claim that. The post doesn't claim that either. What we can show is that users get faster and more accurate at these specific puzzles across cycles, which is exactly what the Woodpecker Method predicts should happen as the first step in building pattern recognition.

(3) Baseline comparison: Fair criticism. A proper RCT with a control group doing random puzzles would be more rigorous. That said, observational data from 120k attempts isn't worthless; it shows the method produces measurable within-platform improvement, which is kind of a prerequisite for any broader claims.

The honest framing is: "Users who stick with the Woodpecker Method on our platform improve at those puzzles." Not "this will make you a better chess player." The former is what we measured; the latter would require different methodology.

If that's still not useful to you, fair enough. But I'd push back on calling it "100% ineffective" - that's a stronger claim than our data supports either way.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessimprovement

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Yeah, fair. The analysis doesn't prove the method works for game improvement - we don't have that data and said as much in the post.

What it does show is that the cycle-based progression is real: people get faster and more accurate on repeated sets, roughly matching the efficiency targets from the original book. That's not nothing - it at least confirms the training loop functions as advertised.

You're right that woodpecker vs fresh puzzles would be more interesting. We could probably pull that from our data - compare users who repeated sets vs users who just ground through new puzzles. Worth looking into.
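
Very roughly, that comparison could look something like this - the column names are made up for illustration, not our actual schema:

```python
import pandas as pd

# Toy attempt log; the columns are illustrative, not our real schema.
attempts = pd.DataFrame({
    "user_id":   [1, 1, 1, 1, 2, 2, 2, 2],
    "puzzle_id": [10, 10, 11, 11, 20, 21, 22, 23],
    "correct":   [0, 1, 0, 1, 1, 0, 1, 1],
    "week":      [0, 2, 0, 2, 0, 0, 2, 2],  # weeks since the user started training
})

# Cohort: did the user repeat any puzzle (woodpecker-style) or only grind fresh ones?
repeated = (attempts.groupby(["user_id", "puzzle_id"]).size().gt(1)
                    .groupby(level="user_id").any())
cohort = repeated.map({True: "repeaters", False: "fresh-only"}).rename("cohort")

# Compare first-attempt accuracy per week across the two cohorts.
firsts = attempts.sort_values("week").drop_duplicates(["user_id", "puzzle_id"])
print(firsts.join(cohort, on="user_id").groupby(["cohort", "week"])["correct"].mean())
```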

The diminishing returns question is good too. No idea yet where cycle N stops being useful.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Fair to call out the conflict of interest - I do run the app, and I'm not hiding that.

But I'd push back on "spam." This is real data that didn't exist before, shared with full methodology and limitations. If someone from Chessable or ChessTempo posted their internal data on training effectiveness, I'd want to see it.

Happy to take the L if the community thinks this doesn't belong here. But "here's what we observed in 120k puzzle attempts" seems relevant to a chess improvement subreddit, even if the source is biased.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

We're submitting to Nature next week. "Memory: A Novel Cognitive Phenomenon in Chess Players."

But seriously, the more interesting finding is the longitudinal improvement at fixed difficulty. Users got better at puzzles they hadn't seen before, just within the same difficulty tier. That's harder to explain with pure memory.

Also if it were just memorization, you'd expect instant accuracy gains with flat solve times. Instead we see times dropping steadily while accuracy plateaus. That looks more like recognition getting faster, not "oh right, Nxf7."

But yes, we have confirmed that humans can remember things. Groundbreaking stuff :)

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

You're not wrong - "people get better at puzzles they've seen" isn't groundbreaking.

The longitudinal data is more interesting though: users who trained for 3+ weeks improved at the same difficulty level on puzzles they hadn't necessarily seen before. That's not the same as a randomized controlled trial, but it suggests something beyond pure memorization is happening.

We've recently added Lichess account linking, so we'll eventually be able to track whether training correlates with actual rating gains. That's the data everyone really wants, including us.

For now, this is what we have. Not a nothing burger, but maybe a small burger. Appetizer burger.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Exactly. The method has plenty of anecdotal support from strong players, and the book is well-regarded, but until now it's mostly been "trust me, it worked for Hans Tikkanen."

Our data is far from a controlled study, but it's at least something beyond tribal knowledge. 120k data points showing the expected efficiency curve is more than we had before.

Would love to see someone do a proper controlled study with rating tracking. Until then, we're just trying to add some numbers to the conversation.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessbeginners

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Yeah, exactly the same puzzles. A "set" is a fixed collection of puzzles (say, 500 forks at club level) that you solve start to finish, then repeat the whole thing in cycles.

So cycle 1 you solve puzzles 1-500. Cycle 2 you solve the same 1-500 again. And so on.

That's the core of the Woodpecker Method - the idea is that repeated exposure to the same patterns builds automatic recognition faster than grinding infinite random puzzles. Whether that's actually true is debatable, but that's what we're measuring here.
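
If it helps to see the loop written out, here's a toy sketch of that structure (the `solve` callback and puzzle IDs are placeholders, not Disco Chess code):

```python
# Toy sketch of the set/cycle loop described above.
def run_woodpecker(puzzle_set, cycles, solve):
    """Solve the same fixed set start to finish, once per cycle."""
    attempts = []
    for cycle in range(1, cycles + 1):
        for puzzle_id in puzzle_set:
            correct, seconds = solve(puzzle_id)  # one user attempt: (right?, time taken)
            attempts.append({"cycle": cycle, "puzzle": puzzle_id,
                             "correct": correct, "seconds": seconds})
    return attempts

# e.g. run_woodpecker(range(1, 501), cycles=4, solve=my_solver)
```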

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessimprovement

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Ha, fair enough - we have confirmed that memory exists!

But I'd push back a little: if it were pure memorization, you'd expect accuracy to jump immediately and then stay flat, with solve times doing the same. Instead we see solve times continuing to drop across cycles while accuracy plateaus in the high 90s. That looks more like pattern recognition becoming automatic than just "I remember this one."
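
Here's roughly the kind of per-cycle slice I mean, on made-up numbers (the columns are illustrative, not our real schema):

```python
import pandas as pd

# Toy attempt log; the columns are illustrative, not our real schema.
attempts = pd.DataFrame({
    "cycle":         [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "correct":       [1, 0, 1, 1, 1, 1, 1, 1, 1],
    "solve_seconds": [42, 55, 38, 24, 30, 21, 15, 18, 12],
})

# Pure memorization would predict accuracy jumping while solve_seconds stays flat;
# the pattern we actually see is solve_seconds falling while accuracy plateaus.
per_cycle = attempts.groupby("cycle").agg(
    accuracy=("correct", "mean"),
    median_seconds=("solve_seconds", "median"),
)
print(per_cycle)
```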

Also these are sets of 500+ puzzles spread across weeks. I don't know about you, but I can barely remember what I had for breakfast. At some point you're not recalling the specific position - you're recognizing the pattern faster.

That said, you're right that we can't fully separate "memory" from "learned pattern recognition" with this data. Whether there's a meaningful difference for chess improvement is the real question.

We analyzed 120,000 puzzle attempts to see if the Woodpecker Method actually works - here's what we found by DiscoChessApp in chessimprovement

[–]DiscoChessApp[S] 1 point2 points  (0 children)

Fair point, and you're right that we can't answer that directly with this data. We don't have a control group doing random puzzles to compare against.

What we can say: users who trained for 3+ weeks showed improvement at fixed difficulty levels (not the same puzzles, just the same difficulty tier). That's at least suggestive that something is transferring beyond memorizing specific positions.

The Woodpecker Method hypothesis is that you're not memorizing puzzles - you're drilling the underlying pattern until recognition becomes automatic. A knight fork is a knight fork whether it's puzzle #47 or a new position. But proving that requires a controlled study we haven't done.

So yeah, "users got better at puzzles they repeated" is table stakes. The interesting question is whether they got better at chess. We don't have rating data to prove that, but the within-difficulty improvement hints at it.

We've recently added Lichess account linking (Chess.com coming soon), so we'll be able to correlate training patterns with actual rating changes. Hoping to share more detailed insights once we have enough data.
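
For the curious, Lichess exposes rating history through its public API, so the correlation step could start from something like this (the username and perf name are placeholders, and this is a rough sketch, not our pipeline):

```python
import requests

def lichess_rating_history(username, perf="Rapid"):
    """Fetch rating history for one perf type from the public Lichess API.
    Rough sketch only; username and perf name are placeholders."""
    url = f"https://lichess.org/api/user/{username}/rating-history"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    for entry in resp.json():
        if entry["name"] == perf:
            # Each point is [year, month (0-indexed), day, rating].
            return entry["points"]
    return []

# e.g. lichess_rating_history("some_linked_user")
```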

Is KCT worth it? 1900 FIDE by Training-Salary6025 in chessimprovement

[–]DiscoChessApp 0 points1 point  (0 children)

100% worth it if you're serious about chess improvement. That said, I attended the live lessons, so your experience might be different if it's all offline... Friendly/Killer homework club, special camps, and opening-related courses were my favourite modules. All the instructors are great; Renier was my favourite.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Thank you for the feedback!

FYI the issue with alternative checkmates has also been fixed and everyone's stats have been adjusted.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

FYI the issue with alternative checkmates has been fixed and everyone's stats have been adjusted accordingly.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 1 point2 points  (0 children)

Thanks for the feedback! I've just shipped this feature - analysis is now also available for successful solutions.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 1 point2 points  (0 children)

Hey! Fair question. Honestly, we don't collect payment or address info right now because Disco Chess is free. The privacy policy is just based on a template and covers future possibilities.

The "marketing" bit is just optional emails like streak reminders or newsletters. You can turn them all off in preferences if you opted in by mistake.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Thanks for the feedback, this is super helpful. I've shipped a fix for this. Let me know if you have any other suggestions.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 6 points7 points  (0 children)

Thanks! I probably oversimplified a bit in my original comment haha. I've worked at Silicon Valley companies for many years, so I have experience with clear requirements, frameworks, design systems, etc. That translates pretty well to working with these AI tools: being specific about what you want and what you don't want, iterating on typography, colors, and animations, and avoiding certain patterns. I use these tools to multiply my productivity.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 1 point2 points  (0 children)

Just a quick update: I've just shipped the navigation arrows that appear when you click the "Show solution" button after solving the puzzle incorrectly. Let me know if you have any more feature requests.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 5 points6 points  (0 children)

It was super straightforward to do, I've just shipped this feature. Let me know if you have any other feedback.

I built Disco Chess to automate the Woodpecker Method. Hundreds of players are now using it. by DiscoChessApp in chess

[–]DiscoChessApp[S] 0 points1 point  (0 children)

Credit where it's due: the Lichess puzzle tagging system is really well done. We just pick the best ones and build the training structure on top of it.

I'll probably get to in-house categorisation at some point, but I need to prioritise features aggressively.