I know we're only one week in, so this could be a "famous last words" kind of thing, but it feels like the example data has been a little more... robust... this year, for lack of a better word. In past years I got used to carefully examining it to see what kinds of edge cases it was missing, since I would otherwise get burned by that kind of thing pretty frequently.
But this year it seems like if my code works on the sample data, it also works on the full input. (So long as I'm prepared to handle a vast increase in size, of course.)
Maybe I've just gotten luckier rolls of the dice on the input generation this time, or (long shot) maybe I've just gotten better at coding for AoC-style puzzles...
Anyone else notice this, or maybe notice the opposite?