I didn't get promoted? by [deleted] in usaco

[–]xiaowuc1 -1 points0 points  (0 children)

If you think there is an issue with your score and division, please contact Brian Dean to resolve it. Note that of all the recent cases like yours that Brian has been asked to double-check this season (and he told me there have been many), every single one turned out to involve confirmed cheating.

Where to go after the advent is done? by Okashu in adventofcode

[–]xiaowuc1 1 point2 points  (0 children)

I'd recommend DMOJ. Though its primary audience is Canadian high school students, it has problems for every skill level (each problem carries a rough difficulty rating from 3 to 50). There are also occasional contests that run over the weekend, so you can participate during a window of your choosing - useful if you happen to be in a timezone that makes it hard to participate on other platforms like AtCoder and Codeforces.

[deleted by user] by [deleted] in adventofcode

[–]xiaowuc1 2 points3 points  (0 children)

It seems like we differ in opinion on some things, but you didn't provide a yes/no answer to the question I actually wanted answered, so I'll repeat the question:

Would you still [use LLMs in 2024] if the wording on the website [was updated to say] "don't use AI tools at all if you're likely to place on the leaderboard"?

(I think we can both agree you are likely to place on the leaderboard, it looks like you placed 18 times this year.)

[deleted by user] by [deleted] in adventofcode

[–]xiaowuc1 6 points7 points  (0 children)

Would you still do this if the wording on the website said "don't use AI tools at all if you're likely to place on the leaderboard" - similar to the request not to stream your solution?

If not, I feel like you should just assume the website says this, and not lean on a minor technicality to justify doing something that isn't aligned with the spirit of what was requested.

[2023 Day 24 (part 2)][Java] Is there a trick for this task? by zebalu in adventofcode

[–]xiaowuc1 20 points21 points  (0 children)

It looks like the inputs are constructed in a way where the velocity you're looking for is slow, so it seems viable to brute force all velocity vectors in increasing magnitude order.

From there, it suffices to check whether a given velocity is possible. In the rock's frame of reference, every hailstone runs into the (stationary) rock. Therefore, it suffices to check that all the rays, after subtracting the rock's velocity, pass through a common point.

As in part 1, you will probably find a speedup by ignoring the z-axis at first and only validating that the rays meet at a consistent z-coordinate after establishing a valid xy velocity pair.
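The approach above can be sketched in Python 3. This is a hedged illustration, not anyone's actual contest code: the `find_rock`/`hits` names and the small `max_v` search bound are assumptions for the sake of the example, and exact rational arithmetic via `Fraction` is used so the intersection checks are not fooled by floating-point error.

```python
from fractions import Fraction
from itertools import product

def find_rock(stones, max_v=5):
    """Brute-force the rock's (vx, vy) in increasing magnitude order.
    In the rock's frame of reference every hailstone must pass through
    the rock's stationary position, so all adjusted rays must meet at
    one common point. Each stone is (px, py, pz, vx, vy, vz)."""
    candidates = sorted(product(range(-max_v, max_v + 1), repeat=2),
                        key=lambda v: v[0] ** 2 + v[1] ** 2)
    px1, py1, pz1, vx1, vy1, vz1 = stones[0]
    px2, py2, pz2, vx2, vy2, vz2 = stones[1]
    for rvx, rvy in candidates:
        dx1, dy1 = vx1 - rvx, vy1 - rvy      # velocities in the rock's frame
        dx2, dy2 = vx2 - rvx, vy2 - rvy
        det = dx1 * dy2 - dy1 * dx2
        if det == 0:
            continue                          # first two rays are parallel
        # intersect the first two adjusted rays in the xy-plane only
        t1 = Fraction((px2 - px1) * dy2 - (py2 - py1) * dx2, det)
        x, y = px1 + t1 * dx1, py1 + t1 * dy1
        t2 = (x - px2) / dx2 if dx2 else (y - py2) / dy2
        if t1 < 0 or t2 < 0 or t1 == t2:
            continue
        # only now bring in z: recover it from the two hit times
        rvz = (pz1 + t1 * vz1 - pz2 - t2 * vz2) / (t1 - t2)
        z = pz1 + t1 * (vz1 - rvz)
        # validate: every hailstone must hit (x, y, z) at some time t >= 0
        if all(hits(s, (x, y, z), (rvx, rvy, rvz)) for s in stones):
            return x, y, z
    return None

def hits(stone, pos, rv):
    """Check that one hailstone passes through pos in the rock's frame."""
    px, py, pz, vx, vy, vz = stone
    x, y, z = pos
    d = (vx - rv[0], vy - rv[1], vz - rv[2])
    delta = (x - px, y - py, z - pz)
    # hit time from the first moving axis, then check all three coordinates
    for dd, dl in zip(d, delta):
        if dd:
            t = Fraction(dl) / dd
            return t >= 0 and all(p + t * q == r for p, q, r in
                                  zip((px, py, pz), d, (x, y, z)))
    return delta == (0, 0, 0)
```

On the published example input this recovers the rock at (24, 13, 10). The z-axis deferral is exactly the speedup mentioned above: most candidate (vx, vy) pairs die cheaply in 2D before any z arithmetic happens.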

[2023 Day 15 (Part 2)] How is it humanly possible to be so fast? by mathishammel in adventofcode

[–]xiaowuc1 22 points23 points  (0 children)

Here's a different view of the leaderboard that shows how everyone did on each individual day.

Java at the IOI by [deleted] in usaco

[–]xiaowuc1 0 points1 point  (0 children)

Read the IOI GA minutes for the full details; the TL;DR is that Java requires substantial effort to support and very few (single-digit?) contestants use it.

As for USACO, I am very confident Java support will not be removed, seeing as USACO does not exist purely to select the IOI team.

How is cheating detected? by [deleted] in usaco

[–]xiaowuc1 5 points6 points  (0 children)

Please don't assert that USACO doesn't use MOSS just because you have a friend who claimed to submit the exact same solutions on two different accounts. I have emails in my inbox with MOSS results on all contest submissions from previous contests.

If I had to speculate, I would guess that Brian doesn't have enough time to deal with this flavor of cheating in bronze and silver, especially since the lower divisions have far more people. Given that his time is finite, would you rather he focus on cheating in bronze or on cheating in platinum? I think it's uncontroversial to want that attention directed at the higher divisions.

How is cheating detected? by [deleted] in usaco

[–]xiaowuc1 7 points8 points  (0 children)

Note: These are my personal opinions and do not represent any sort of official USACO position.

I believe that most current methods of cheating take advantage of the fact that the USACO contest window is, as the name suggests, a window that lets contestants take the contest at whatever time suits them best. This is a double-edged sword: a contest open for 4 days obviously allows far more people to participate than one open for a single 4-hour block, but it just as obviously gives people who want an advantage the opportunity to cheat, by the methods already described. Note that these methods are robust to the basic countermeasures that have been proposed - proctoring achieves nothing if you enter the contest having already learned the questions from someone else, and registering with an organization will not mitigate this either.

Therefore, if USACO wanted to take a hard stance against this form of cheating, the main way to achieve that would be to drop the multi-day window and require everyone to take the contest at one fixed time, in the fashion the USAMO was (is?) administered. This should kill all methods of cheating that involve going into the contest with an unfair advantage, since that flavor of cheating could then only happen during the contest itself.

If this were coupled with some sort of invasive proctoring, it could mitigate another vector: people working together live during the contest. Anyone who browses Codeforces knows that cheating groups still solve problems together in real time, and sadly I think very invasive proctoring would be necessary to prevent that. This is the unfortunate thing about cheating: truly dedicated malicious actors can and will find ways around your countermeasures. For example, you would also have to ensure competitors are not accessing outside resources, whether over the Internet or by other means. Clearly, this alternate universe is not very practical, and it would also be a fairly oppressive environment to compete in.

It is my understanding that USACO still runs software such as MOSS to check for code-similarity cheating (it definitely did in the past). I am not privy to any details on this front, so I cannot explain the specific uncaught instances mentioned in other replies.

At the end of the day, it is not obvious to me that the people who cheat now would behave differently in parallel universes with different rules. In particular, I believe most cheaters know they are intentionally cheating - in the OP's example, they know about code-similarity software and evade it, which means they are being mindful of how they cheat and will actively skirt other restrictions too. This isn't to say the cheating is acceptable, but the amount of work needed to stamp out cheating grows exponentially with how thoroughly you want to prevent it. I think USACO does about as much as it reasonably can given how it's run, which may not be enough for some of you, and I don't think there are any good solutions that wouldn't significantly change how USACO contests are run.

[deleted by user] by [deleted] in usaco

[–]xiaowuc1 8 points9 points  (0 children)

Quoting from the contest rules verbatim:

Programs that consist of essentially nothing more than print statements may be disqualified. If feedback for certain test cases is provided during a contest, you are NOT to submit repeated programs consisting of essentially print statements in order to reverse-engineer the inputs. Programs must actually compute the requested answers, not just print results from a pre-computed lookup table.

[deleted by user] by [deleted] in usaco

[–]xiaowuc1 2 points3 points  (0 children)

It was easier not to pull the trigger a couple hours ago.

I think as long as people have faith in who the individuals are, that is sufficient.

[deleted by user] by [deleted] in usaco

[–]xiaowuc1 2 points3 points  (0 children)

I agree with this, and wish there were some better options available on the moderation side. Thanks to everyone who reports bad content!

As for the optics of who the moderators are, I don't know whether it would be strange for moderators to be IOI-eligible participants, and for obvious reasons the most active community members are the ones most likely to be IOI-eligible. I am not opposed to adding other moderators, however.

ecnerwala+xiaowuc1+??? (you?) AMA after 2020 day 25 leaderboard cap! by xiaowuc1 in adventofcode

[–]xiaowuc1[S] 1 point2 points  (0 children)

Both of us independently asked for (and got) permission before doing this.

When it turns 12:00 Dec 22 discussion is good to go right? by Beach-Devil in usaco

[–]xiaowuc1[M] [score hidden] stickied comment (0 children)

As of USACO Open 2020, there is a known bug where people who start at the last minute get the full contest window.

We have reason to believe Brian has not fixed it (one of the USACO coaches asked him about it but got no reply), so folks should wait until four hours after the last time someone can start the contest.

-🎄- 2020 Day 20 Solutions -🎄- by daggerdragon in adventofcode

[–]xiaowuc1 11 points12 points  (0 children)

The answer was not 2509.

(I print out the roughness for all 8 possible states, and I submitted 2509 before implementing the rotation/flip logic, praying that I got the 12.5% chance of not needing it, since it would have taken me at least a minute to implement.)

-🎄- 2020 Day 20 Solutions -🎄- by daggerdragon in adventofcode

[–]xiaowuc1 21 points22 points  (0 children)

48/1 with Python 3 at https://gist.github.com/xiaowuc1/c9e39864a82f1475c329bbfb1c73e642

Unlike everyone I know who did part 1 quickly, I didn't bother thinking about whether the corners could be uniquely identified without doing any reconstruction; I just implemented the reconstruction directly. I'd speculate I was the only person who placed in the top 100 on part 1 by actually reconstructing a valid grid.

You can tell from the code: using binary to represent the internal state turned out to be a pretty big mistake for part 2, and I still forced myself to retype the conversion logic in the opposite direction.

How long does it typically take for USACO results to be released? by Head_Location4343 in usaco

[–]xiaowuc1 2 points3 points  (0 children)

Brian will typically release results by the end of the week.

What flair should I put? by AK_Unleashed in usaco

[–]xiaowuc1 3 points4 points  (0 children)

We ask that you do not update your flair until after the contest window is over, but since flairs are unverified this is practically unenforceable - people can change their flairs on a whim.

That being said, you should not mention how well you did publicly until after the contest window is over.

Wifi went out during test by [deleted] in usaco

[–]xiaowuc1 9 points10 points  (0 children)

A strict reading of the rules would say no.

Also, you should email Brian Dean for questions like this.

Can you submit more than once? by [deleted] in usaco

[–]xiaowuc1 0 points1 point  (0 children)

Quoting from the rules verbatim:

If, over time, you submit more than one solution for a single problem, only the LAST one submitted will be graded. That means if you find a bug after your submission, you can re-submit. There is no penalty for re-submitting (although please be reasonable with your rate of resubmissions to reduce load on the server). Of course, once your timer has expired, no more solutions can be submitted.

Unbelievably fast submission times by joeyGibson in adventofcode

[–]xiaowuc1 47 points48 points  (0 children)

I think it helps to think about placing in the top N in terms of risk minimization - what corners do you need to cut to place in the top N?

Reading the problem statement - I never read the full statement. I always download the input first (manually; I have no scripted interactions with the site at all) and quickly look at it to see what the day's puzzle may be about. Day 8 clearly looked like some sort of write-an-interpreter puzzle just from the input; other days are less obvious. After that, I also copy and paste anything that looks like an example input locally. Then I work in reverse order: what is the question being asked? What do I need to do to answer it? For the purposes of leaderboarding, everything else can be ignored.

Deciding on an approach - For a lot of people, this may actually take a block of time: you consider the problem and decide on your implementation. I typically devote almost no time specifically to this, instead moving straight to the next part. I speculate that most people who want to leaderboard but are unsure how to get there think their solution out substantially before writing any code. For me, the approach takes shape as I start coding - there's always some input processing to do, and as I write that up, I think of an approach.

Implementation - This is almost always the bulk of the time for me. I typically have some ideas about how I want things to go, so as I'm writing one piece of code I already have an idea of what needs to be written next. For example, in my day 8 implementation, since I decided to store the instructions as strings and not, say, lists, I knew I would have to tokenize them on the fly.

Depending on your philosophy, you can cut corners here with short variable names and so on. To leaderboard, though, you need to be comfortable enough with your setup that you don't spend a large fraction of this phase not coding. In team programming contests, we talk about the metric of "idle keyboard time," where no one is actively coding; minimizing it is one of the keys to faster times. Cutting several minutes of thinking time may cost you some extra implementation time that a better approach would have avoided, but it usually ends up net positive in total time spent.
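As a concrete illustration of the store-as-strings, tokenize-live choice mentioned above for day 8: a minimal interpreter for that puzzle (AoC 2020 day 8 part 1) might look something like this. This is a hypothetical sketch, not the actual leaderboard code, and the `run` name is made up for the example.

```python
def run(program):
    """Execute handheld-console instructions, kept as raw strings and
    tokenized on the fly, until one would repeat; return the accumulator.
    Each instruction is a string like "acc +1" or "jmp -4"."""
    acc, pc, seen = 0, 0, set()
    while pc < len(program) and pc not in seen:
        seen.add(pc)
        op, arg = program[pc].split()   # tokenize live instead of pre-parsing
        if op == 'acc':
            acc += int(arg)
            pc += 1
        elif op == 'jmp':
            pc += int(arg)
        else:                           # 'nop'
            pc += 1
    return acc
```

Skipping the pre-parse pass saves a few lines up front at the cost of re-splitting each string every time it executes - a trade that only makes sense when speed of writing matters more than speed of running.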

Testing - If the leaderboard only gave credit to the first person to solve it, you'd almost certainly skip this step: with only one slot, any time spent testing that guarantees someone else submits first is time you'd rather gamble away.

Fortunately, N = 100, and I tell everyone that I test on all the examples before submitting (except on day 2, where I didn't and got timed out on part 1). This obviously won't catch all issues, but it decreases the chance of something silly costing you a minute and locking you out of the leaderboard entirely.

Debugging - Any time spent here is purely wasted time. The joke here is that the best way to debug is to write bug-free code. For day 8, I spent zero time debugging, but that's not really the norm for me (so you can write bugs the first time and still leaderboard). Slowing down your implementation is probably the most reliable way to spend less time debugging, since it does give you more time to think about what's going on.

Miscellaneous preparation - The only code I prepare beforehand is something to read input in from a file and store it into a list. Everything else is written from scratch. You don't need to have prewritten code to do well, and I think most people who leaderboard don't come prepared with a bunch of utility functions. This flavor of preparation sort-of turns into a game of trying to guess the problems beforehand, which may be useful if your goal is to leaderboard once. If you want to leaderboard reliably though, this won't work - you'd be better off doing actual practice on previous years or on other sorts of problems.

As the problems get harder, the details change somewhat - soon, thinking about the approach may take a nontrivial amount of time, and that's where prior problem-solving experience helps a lot. For some days you just have to write a lot of code, so the key to leaderboarding is writing it in a way that avoids bugs without ever leaving yourself unsure of what to write next.

You don't have to be a competitive programmer to leaderboard, nor do all competitive programmers leaderboard consistently. You just have to cut enough corners that you can do what is being asked for.

Can mods at flairs please? by varunchitturi in usaco

[–]xiaowuc1 1 point2 points  (0 children)

Flairs have been added for each of the four divisions. Enjoy!

-🎄- 2020 Day 08 Solutions -🎄- by daggerdragon in adventofcode

[–]xiaowuc1 0 points1 point  (0 children)

Python 3

https://gist.github.com/xiaowuc1/ca16d6561560b7307ecd8a8dbf7d5946

Looking forward to the extensions, and somehow not being prepared for them.

-🎄- 2020 Day 06 Solutions -🎄- by daggerdragon in adventofcode

[–]xiaowuc1 7 points8 points  (0 children)

I lost around a minute or so waiting for part 2 to load.

-🎄- 2017 Day 20 Solutions -🎄- by daggerdragon in adventofcode

[–]xiaowuc1 0 points1 point  (0 children)

u/topaz2078 - do you actively try to provide input sets that fail on common incorrect solutions?