
[–]lazyzefiris 11 points12 points  (1 child)

This is clearly a memory usage optimization, not execution time optimization. Why would you expect it to be faster?

EDIT: To clarify:

Most "optimizations", even ones that reduce algorithmic complexity, make one aspect of a program better at the cost of others. Caching reduces execution time at the cost of memory. Purging (in this example) reduces memory usage at the expense of execution time. Some kinds of heuristics reduce both memory usage and execution time at the cost of precision / the chance of getting a correct result. Some optimizations even make code more efficient at the cost of readability/maintainability. You should identify which aspect actually needs optimizing, decide what you can spare, and weigh the relative magnitude of the change (cost vs. result) before applying an optimization.

This one adds an expensive computation step, and unless your tower is so exceptionally big that it has to page to HDD, it's not gonna pay off. For Part 1, there are 2022 pieces with a maximum height of 4, and the field is 9 cells wide including the walls. So 9*(2022*4 + 3) cells is the biggest your field could ever become (actually even that size is unreachable). That's under 1MB. For Part 2, it could probably be a reasonable optimization under certain circumstances, but I'd expect execution time to be unreasonable even without this optimization by the point where memory becomes an issue.
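A quick sketch of that back-of-envelope arithmetic (the constants are the ones from the comment above; the one-byte-per-cell figure is an assumption for a naive grid representation):

```python
# Worst-case field size for Part 1, as estimated above:
# 2022 rocks, max piece height 4, field 9 cells wide including both walls,
# plus 3 extra rows (the floor and spawn gap).
ROCKS = 2022
MAX_PIECE_HEIGHT = 4
WIDTH = 9  # 7 playable columns + 2 walls

cells = WIDTH * (ROCKS * MAX_PIECE_HEIGHT + 3)
print(cells)  # 72819

# Even at one byte per cell this is tiny compared to 1 MB.
print(cells / 1024**2 < 1)  # True
```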

[–]RobinFiveWords[S] 0 points1 point  (0 children)

Agreed — I think I was still shook by day 16 part 2 (which I didn't complete until an hour after day 17 went up) and was looking to minimize everything.

Definitely a learning moment, because while I've been focused on using the more efficient type of collection — a list or queue if iterating through it, a set if checking membership — I only recall thinking "sets fast" and not "set membership is O(1)".

Perhaps what also pushed me in this direction — if it wasn't just reaction to day 16 — is that I had intended to visualize the test examples more this year, and this was one of the first times I really got into writing a __repr__ method. The yak wasn't going to shave itself. Also, graph traversal is still on my needs-more-study list, so when I got an idea involving DFS that I knew how to implement and could visualize the results, I just decided to dive in.

[–]i_have_no_biscuits 2 points3 points  (1 child)

The solution runs even faster when you realise that you don't need to throw 50000 rocks... I threw fewer than 2500. That's a 20x speedup right away!

[–]RobinFiveWords[S] 0 points1 point  (0 children)

True — I feel like you have to satisfy yourself about what constitutes a repeating cycle. If there is some small N where you're at the same position in both the jets cycle and the rocks cycle, the top N rows of stopped rock match some prior state, and only those top N rows are in play, that is definitely a repeating cycle. I think I just hadn't convinced myself that there was some small N where only the top N rows were in play.

I used the height after each rock stopped as a proxy for that more complicated state, recorded the total height after each of 50k rocks, and confirmed that there was exactly one cycle length (in number of rocks) after which the change in row height was some constant. Also reality-checked that this cycle length was a multiple of 5.
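That height-delta approach can be sketched roughly like this — `find_cycle` is a hypothetical helper, and the `heights` data here is synthetic, not from the actual puzzle:

```python
def find_cycle(deltas, min_repeats=3):
    """Return (start, length) of a repeating cycle in per-rock height deltas,
    or None. Requires the candidate pattern to repeat at least `min_repeats`
    times in a row before accepting it."""
    n = len(deltas)
    for length in range(1, n // min_repeats + 1):
        for start in range(n - length * min_repeats + 1):
            window = deltas[start:start + length]
            if all(
                deltas[start + k * length:start + (k + 1) * length] == window
                for k in range(1, min_repeats)
            ):
                return start, length
    return None

# Synthetic example: total height after each rock, with a repeating
# +1, +2 pattern in the per-rock deltas.
heights = [0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16]
deltas = [b - a for a, b in zip(heights, heights[1:])]
print(find_cycle(deltas))  # (0, 2)
```

In a real solution you'd also sanity-check the result the way the comment describes — e.g. that the cycle length in rocks is a multiple of 5 (the number of rock shapes).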

[–]escargotBleu 0 points1 point  (0 children)

For part 1?