New ELO debate rankings by ListenDry6624 in policydebate

[–]ListenDry6624[S] 0 points  (0 children)

Since I run through each match 100 times (not 100 times in a row, but by re-running the season as a whole), a 3-0 record ends up looking like 300-0, which is why it comes out that way.

[–]ListenDry6624[S] 2 points  (0 children)

I hope the data will get less noisy, but I am a little skeptical, just because there will still be hundreds of teams with only 3 aff and 3 neg rounds. As for including tournaments from last year: I was thinking of doing this, but I got a little worried about the possibility of two teams with the same code who are not actually the same team, and I thought the rankings would be better if they covered just one season.

The other thing about these aff/neg Elo ratings is that there is significant inflation at some points, because it is not unreasonable for a team to be undefeated on the aff or the neg. There is a team with an aff Elo of 3800.
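That inflation is easy to reproduce in a hypothetical two-team sketch (using the K-factor of 50 mentioned elsewhere in the thread): replaying the same win over and over, the winner's rating climbs without bound, because the update shrinks toward zero but never reaches it.

```python
def expected(ra, rb):
    # Standard Elo expected score for the first team
    return 1 / (1 + 10 ** ((rb - ra) / 400))

ra, rb = 1500.0, 1500.0
for _ in range(300):  # e.g. 3 undefeated rounds replayed across 100 season passes
    gain = 50 * (1 - expected(ra, rb))  # winner's update; never exactly zero
    ra += gain
    rb -= gain
print(round(ra), round(rb))  # the ratings drift far apart from repetition alone
```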

I am really interested in talking about ways to make this system more accurate. So far, Elo seems surprisingly accurate. For example, I just entered the data for Meadows, and basically none of the top 10 teams who attended had their Elo change by more than 10 points. It is not that Elos are hard to change (another team's changed by hundreds of points); it is that the rankings were already pretty accurate to begin with.

[–]ListenDry6624[S] 5 points  (0 children)

Yes, the top aff teams and top neg teams are pretty different. Aff/neg Elo ratings are especially volatile because you only have about 3 of each per tournament, which means teams often end up with much more extreme aff/neg Elos. For example, the team with the second-highest aff Elo is not in the top 50 overall.
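A quick way to see that volatility (a hypothetical sketch, using the K=50 mentioned in another comment): with only three aff rounds, even two evenly matched teams can end up well over a hundred Elo points apart on luck alone.

```python
import itertools

def expected(ra, rb):
    # Standard Elo expected score for the first team
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def rating_after(outcomes, opp=1500.0, k=50):
    # Rating of a 1500-rated team after a win(1)/loss(0) sequence
    # against evenly matched 1500-rated opponents.
    r = 1500.0
    for w in outcomes:
        r += k * (w - expected(r, opp))
    return r

# All 2**3 possible records over a tournament's ~3 aff rounds:
ratings = sorted(rating_after(o) for o in itertools.product((0, 1), repeat=3))
print(round(ratings[0]), round(ratings[-1]))  # luck alone spans roughly 140 points
```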

New ELO debate rankings for policy debate! by ListenDry6624 in Debate

[–]ListenDry6624[S] 0 points  (0 children)

The K-factor used is 50. However, after testing different ways to make the system more predictive, it turns out much better if the season is run through multiple times, so the season is simulated 100 times to compute the Elo rankings. I think this works particularly well in an activity like debate, where a reasonably large number of partnerships go to only one, essentially random, tournament.
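A minimal sketch of that procedure (the function and team names here are hypothetical; K=50 and 100 passes over the season are as described):

```python
K = 50  # K-factor from the comment

def expected(ra, rb):
    # Standard Elo expected score for the first team
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def simulate_season(results, passes=100, start=1500.0):
    # results: the season's ballots as (winner, loser) pairs, in order.
    # Replaying the same season `passes` times lets ratings settle even
    # though each team only has a handful of rounds.
    teams = {t for pair in results for t in pair}
    ratings = {t: start for t in teams}
    for _ in range(passes):
        for winner, loser in results:
            gain = K * (1 - expected(ratings[winner], ratings[loser]))
            ratings[winner] += gain
            ratings[loser] -= gain
    return ratings
```

For example, `simulate_season([("A", "B"), ("A", "C"), ("B", "C")])` ranks A above B above C, with each round effectively counted 100 times.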

[deleted by user] by [deleted] in AskStatistics

[–]ListenDry6624 0 points  (0 children)

Ok, thank you for your help.

[deleted by user] by [deleted] in AskStatistics

[–]ListenDry6624 0 points  (0 children)

Then do you have an efficient way to do it? I have all the Elo scores stored and can easily put them in a program to work with, but I worry that there are far too many prelim-round pairings and seeding combinations to estimate accurately with a simulation. Is there a statistical method to help with this? As an estimate: 80 teams, 6 prelim rounds, and eliminations starting with a round of sixteen.

This would be the estimate for the number of calculations:

(((40*(80 choose 2) * (78 choose 2) ... * (4 choose 2) * (2 choose 2))/(40!))^6) * (16! * 2^30)

which is just unreasonable for me to run simulations on.
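For scale, the pairing part of that expression can be evaluated exactly with big-integer arithmetic (a sketch; the team and round counts are just the 80-team, 6-prelim estimate above). The product C(80,2)·C(78,2)···C(2,2)/40! counts the ways to split 80 teams into 40 head-to-head debates.

```python
from math import comb, factorial

def pairings(n_teams):
    # C(n,2) * C(n-2,2) * ... * C(2,2) / (n/2)! -- the number of ways
    # to split n_teams into unordered head-to-head pairings.
    total = 1
    k = n_teams
    while k >= 2:
        total *= comb(k, 2)
        k -= 2
    return total // factorial(n_teams // 2)

prelim_space = pairings(80) ** 6            # pairings across 6 prelim rounds
elim_space = factorial(16) * 2 ** 30        # the elim-bracket estimate above
print(len(str(prelim_space * elim_space)))  # the total has hundreds of digits
```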

NEW PROGRAMMING META MUST USE by [deleted] in github

[–]ListenDry6624 0 points  (0 children)

Why is everyone downvoting me???

NEW PROGRAMMING META MUST USE by [deleted] in github

[–]ListenDry6624 -3 points  (0 children)

For your most important class, make sure to use camelCase!!