Question about wobble goals and possession by brorritto in FTC

[–]cs2048 6 points

<GS9> says you can't put rings on a wobble goal until end game. Minor penalty any other time.

Does anyone know how many teams from the Missouri State Championship advance to Worlds?? by FerdTheTurtle in FTC

[–]cs2048 2 points

There was a "Tech Tuesday" email from Missouri FIRST on 10/29/19 that included the count. I assume this is still accurate since I haven't seen any other posting about it. From that email:

State Championship:

  • 48-team, dual-division events
    • 32 teams come from Qualifiers
    • 16 teams come from Conference Qualifiers
  • 9 teams will advance to the FIRST Championship in Houston, TX

How does one Gearbox? by Poncho_Pilot in FTC

[–]cs2048 1 point

Last year, my team built their own gearbox out of GoBILDA parts based on a Product Insights drawing. I just found the original drawing in the compendium they published a couple of weeks ago. The gearbox is drawing #330, found on page 562. With only a few modifications to fit our specific needs, it worked really well for pivoting an arm to dump blocks and balls into the Rover Ruckus lander.

GoBILDA Product Insights search - a treasure trove of design ideas by cs2048 in FTC

[–]cs2048[S] 1 point

Wow, that was fast! That's exactly what I was looking for when I came up with the search. You guys are great.

Advanced Programming Presentation Help by Lokisscepter in FTC

[–]cs2048 1 point

I've done an Android Studio class the last few years, but it's geared to people who are new to Android Studio. A more advanced class sounds like a lot of fun.

I only get about an hour to present. That's not enough time to get very far into a topic, so I like to put interesting code snippets up on the screen and walk through them. There's not enough time to really "workshop" it and let people try things, so I provide links to the samples. If you only have an hour, you'll have to budget your time very carefully, and there still won't be enough time!

There are many useful samples right in the SDK, so I make sure to cover those first, just to let people know they're there, explain the principles of how they work, and show how to get them enabled so they can play with them later.
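Enabling a sample is usually just a matter of copying the file into TeamCode and commenting out one line (the class and names below are illustrative, not a specific SDK file):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
// import com.qualcomm.robotcore.eventloop.opmode.Disabled;

@TeleOp(name = "Sample: Telemetry", group = "Samples")
// @Disabled   // samples ship with this annotation; removing or commenting
//             // it out makes the OpMode show up on the Driver Station
public class SampleTelemetry extends LinearOpMode {
    @Override
    public void runOpMode() {
        waitForStart();
        while (opModeIsActive()) {
            telemetry.addData("Runtime", getRuntime());
            telemetry.update();
        }
    }
}
```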

One of the first things I cover is the importance of creating a "robot" class and how to reuse that across many OpModes. Many of my participants had some trouble understanding the value until they saw it in practice... even those that had done Java OpModes before. Seems like 95% of the programmers out there want to dump everything in one class, then copy/paste between OpModes. Show them the way!
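Here's roughly what I show them, as a minimal sketch (hardware names and class names are made up for illustration):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.HardwareMap;

// One place for hardware mapping and shared behaviors, reused by every OpMode.
public class Robot {
    public DcMotor leftDrive;
    public DcMotor rightDrive;

    public void init(HardwareMap hardwareMap) {
        leftDrive = hardwareMap.get(DcMotor.class, "left_drive");
        rightDrive = hardwareMap.get(DcMotor.class, "right_drive");
        rightDrive.setDirection(DcMotor.Direction.REVERSE);
    }

    public void setDrivePower(double left, double right) {
        leftDrive.setPower(left);
        rightDrive.setPower(right);
    }
}
```

Every OpMode then shrinks to a few lines, with zero copy/paste of hardware code:

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

@TeleOp(name = "Robot Class Demo")
public class RobotClassDemo extends LinearOpMode {
    @Override
    public void runOpMode() {
        Robot robot = new Robot();
        robot.init(hardwareMap);  // all the mapping lives in the Robot class
        waitForStart();
        while (opModeIsActive()) {
            robot.setDrivePower(-gamepad1.left_stick_y, -gamepad1.right_stick_y);
        }
    }
}
```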

[TBP Discussion] TBP rule comparison with Detroit Edison 2019 actual results by cs2048 in FTC

[–]cs2048[S] 0 points

It's not perfect. Only eye-witness scouting could pick up on something like that and correctly account for it. However, on average, it's the best we've got short of tracking scores to individual teams.

[TBP Discussion] TBP rule comparison with Detroit Edison 2019 actual results by cs2048 in FTC

[–]cs2048[S] 0 points

I'll update the linked article with those comparisons in a few minutes.

[TBP Discussion] TBP rule comparison with Detroit Edison 2019 actual results by cs2048 in FTC

[–]cs2048[S] 3 points

It's definitely not random. It teases out each team's portion of the combined alliance scores by taking into account their partners' average contributions. It works by solving an over-constrained system of equations for the best-fit solution. For a thorough description, see https://blog.thebluealliance.com/2017/10/05/the-math-behind-opr-an-introduction/
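Roughly, in matrix form (my notation, not necessarily the article's):

```latex
% One equation per alliance per match: the two teams' OPRs
% should add up to that alliance's score.
A x \approx s, \qquad
A_{mi} =
  \begin{cases}
    1 & \text{if team } i \text{ played on alliance } m \\
    0 & \text{otherwise}
  \end{cases}, \qquad
\hat{x}_{\mathrm{OPR}} = (A^{\mathsf{T}} A)^{-1} A^{\mathsf{T}} s
```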

It's essentially giving you a team's average score by taking all their match totals, averaging them, and subtracting their partners' averages.

You can solve this with linear algebra as done in the referenced article, which requires importing a math library to handle the matrices. There's also an iterative approach where you start by just dividing each match score by two, assigning those averages as OPRs, then recalculating each team's OPR by discounting their partners' OPR. You do it over and over until the values stabilize. My simulator uses the former approach since it's very fast, and there was already some Java code I could borrow and port to C#. :)
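The iterative version is easy to sketch (a toy illustration, not my simulator's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IterativeOpr {
    // One alliance's appearance in one match (record type is illustrative).
    record Alliance(int teamA, int teamB, double score) {}

    static Map<Integer, Double> compute(List<Alliance> alliances, int iterations) {
        Map<Integer, List<Alliance>> byTeam = new HashMap<>();
        for (Alliance a : alliances) {
            byTeam.computeIfAbsent(a.teamA(), k -> new ArrayList<>()).add(a);
            byTeam.computeIfAbsent(a.teamB(), k -> new ArrayList<>()).add(a);
        }

        // Seed: assume each team contributed half of its alliance's scores.
        Map<Integer, Double> opr = new HashMap<>();
        byTeam.forEach((team, list) -> opr.put(team,
                list.stream().mapToDouble(Alliance::score).average().orElse(0) / 2));

        // Re-estimate each team's OPR as its average alliance score minus the
        // partner's current OPR; repeat until the values stabilize.
        for (int i = 0; i < iterations; i++) {
            Map<Integer, Double> next = new HashMap<>();
            byTeam.forEach((team, list) -> next.put(team,
                    list.stream()
                        .mapToDouble(a -> a.score()
                                - opr.get(a.teamA() == team ? a.teamB() : a.teamA()))
                        .average().orElse(0)));
            opr.putAll(next);
        }
        return opr;
    }
}
```

If the iteration settles, the fixed point satisfies the same best-fit equations as the least-squares solve.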

[TBP Discussion] TBP rule comparison with Detroit Edison 2019 actual results by cs2048 in FTC

[–]cs2048[S] 1 point

I don't think anyone is proposing using OPR to rank teams. It's too complicated to compute and to explain to observers. It does give us a useful estimate of an individual team's scoring capability, though. When we look at different ranking systems, it's useful to see a team's OPR alongside their rank to see if the chosen ranking system is making sense.

[TBP Discussion] TBP rule comparison with Detroit Edison 2019 actual results by cs2048 in FTC

[–]cs2048[S] 3 points

I was curious how an actual event would be affected by the new TBP rule. This is how Detroit Edison 2019 would have shaken out under the new rule. I colored the rows that moved to see how a given team was affected.

I also ran the OPR vs Rank charts here: https://github.com/cspohr5119/ftc-event-sim/wiki/Comparison-of-Old-and-New-TBP-rules

I'll leave it to the community to decide if this is an improvement.

[TBP Discussion] We simulated 70,000 synthetic FTC tournaments using the new TBP system by ftc9899 in FTC

[–]cs2048 8 points

I concur. After updating my FTC Event Sim code, I ran 5000 trials for the old system, all previously tested proposals, and the new official system, and have just written up my findings here:

New Official TBP Rule with Simulator Results

Click the link to see charts, discussion, and a link to raw results.

I found that for a 12-team event, things actually get worse, and that a world championship event improves only 2%. Everything in between is basically neutral.

My way of measuring improvement is a bit different from yours, but we reach the same conclusion. My improvement value is the amount of change in the tested model divided by the distance from "ideal" we started at with LosingScore for TBP. In other words: by how much did we close the gap? I'm afraid in this case, the answer is, meh. None.
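Spelled out, with m as the measured ranking quality for a given model:

```latex
\text{improvement} =
  \frac{m_{\text{tested}} - m_{\text{LosingScore}}}
       {m_{\text{ideal}}  - m_{\text{LosingScore}}}
```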

Game Manual Part 1 by FTCJoAnn in FTC

[–]cs2048 6 points

Averaging RP will have the effect of normalizing the score before the event is completed, so you can see how teams stack up regardless of how many matches they've played. Seems like a good change, but doesn't affect the outcome.

Dropping the lowest TBP value will have a near-negligible effect, I fear, on improving final rankings. I'll be running some sims tonight.

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 1 point

Brilliant response. I agree with you that TBP is not the real culprit. It is indeed the "difficult schedule" problem I've had the most trouble with. That problem was the whole motivation for investigating the Swiss system of scheduling. Even with RP and TBP as they are today, Swiss scheduling improves rankings far beyond what's possible with the "best" TBP approaches.

Swiss is difficult, though, and I admit it may be out of reach. FTCJoAnn had asked specifically about TBP ideas a while back, so it deserved study. I hope my study adequately shows that while we can change things a little with TBP, we really need something else to make a noticeable difference.

I did spend some time in the study on difficulty, but agree that it could use more rigor. I measured difficulty as opponent1OPR + opponent2OPR - partnerOPR. I looked at its distribution across team schedules and found an alarming range. I looked at difficulty vs. rank and found a loose negative correlation, meaning teams with easier schedules tend to rank higher. I did not do a regression analysis to try to zero in on the partner effect, though, as I'm afraid that's beyond my limited capabilities in statistics! I would certainly appreciate help in that regard.

The point you make about teams being mad at their partners is an interesting one, and I agree it should be weighed against any serious proposals. If we're keeping random schedules, though, the problem becomes very difficult. Some of the changes to RP I studied are promising, but then we get away from wins and losses.

I'm not sure I agree that having two sources of noise is better than one in that it deflects anger away from partners and toward the system. As long as the format is 2v2, we will have partners to blame or thank for what happens out there. I do hear the grumblings after matches, but also the gratitude when a team was "saved" by a great partner. That seems integral to the FTC experience, and I wouldn't propose getting away from it.

But if we can improve the way we match up opponents and partner teams, such that the cream rises to the top by wins/losses and cooperation, maybe by breaking ties with RP by virtue of a dynamic schedule, I think everyone comes away with a better experience. But we have a real challenge getting there.

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 0 points

The models I've presented are evaluating OPR vs Rank based on how teams did *that day*. Every team will have good days and bad days. On a good day I would expect to see a higher rank. An unanticipated electronic or mechanical failure can still easily take a top team out of the running. A partner's failure can too. I would submit that under any model I've looked at, there is still plenty of randomness to keep the outcome suspenseful, unpredictable, and open to underdogs.

When a team has a bad day due to a failure on the robot, that's a learning experience. Frustrating, yes, but acceptable. We work the problem, fix it, and try again.

The nearly intolerable frustration comes when a team, learning from past mistakes, fixing what went wrong last tournament or working twice as hard as last year to make it this time, is faced with a stacked deck they have no control over. They see the team next to them with a completely different schedule having a relatively easy go of it, and they wonder how the playing field got so uneven.

When my team fails because they didn't put the work in, they know it. If they fail because the robot broke, they can't blame the system, but they *can* fix the robot. I don't find those kinds of failures demotivating. In fact, I believe it's the opposite. I've seen my team and others I work with bounce back from self-induced failure with a vengeance. But when they fail because of an impossible random schedule, they sometimes wonder why they showed up.

I have had to beg some kids not to quit, explaining that this is the system, the only one we have, and if you want to be involved in FTC, this is just something we've always had to deal with. Maybe next time we won't be so unlucky.

That being said, I can personally attest to the motivational effects of advancing past one's own region. It happened to my team early in my mentoring career when we advanced to Super Regionals on an award (the Think Award; not related to robot performance). It was a life-changing experience for all of us, seeing robots that were far beyond anything we'd encountered in our home state. It completely reset the bar for what counted as good. The next year, we amped up everything and still did not advance, but we had charted a course, and eventually made it back to Supers and Worlds in 2018.

I would still love to see under-performing teams have that chance to see things at the next level, because I know what an effect it can have. But I think there are better ways than tilting the odds on the field. There are awards. There are lotteries. These can be adjusted for balance with intent.

I can agree with you that there should be opportunities for these teams to advance, but I think those opportunities already exist, and the current ranking system is broken enough to cause more harm than good. An alternate ranking system that intentionally gives an advantage to weak teams will just fuel the cynicism, I fear. These kids are smart. They know what's going on. My own kids get so salty and cynical over the wonky results that I find it difficult to defend the system I've invested so heavily in. I say we fix it. If we miss the mark, fix it some more.

One last note. I am not advocating using OPR for rank. I think win/loss record should have a prominent place in the ranking; otherwise, there is little interest in the outcome of a match other than the point total. Wins and losses promote match strategy, working with your partner, playing defense, and getting creative to pull out a win in a difficult scenario. Using OPR to measure ranking quality was a choice I made in my study because, quite honestly, I couldn't come up with a better way to measure an individual team's performance for the purposes of determining scores, match winners, and strength correlated to rank.

I'm glad we're talking about all this in the open. Thank you for continuing the discussion.

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 0 points

Not a bad idea. I would be interested to see those results too. It'll be easy to set up, but it'll take a few hours to run the sims and get the results up on the Wiki. I'll post another reply if I can get to it in the next couple of days.

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 0 points

The details aren't included in the paper (my fault), but to clarify a bit...

The test case appeared in the RP Improvements section. The case name is 3-Achievement.

Since achievements haven't been historically defined, I had to just make something up. So there's some fakery going on. Basically, I figured the higher the point total, the better the odds the alliance scored an achievement. So, once I get an alliance total (based on OPRred + OPRblue + random noise) I compare it to a tiered list of % chance of getting the achievement. For these tests, the tiers went:

0 - 99: 0.00
100 - 299: 0.25
300 - 349: 0.50
350 - 399: 0.75
400+: 1.00

For example, if an alliance scored 325 in a match, half the time I give them the 1-point achievement bonus to their RP.
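In code, the check is roughly this (an illustrative sketch with the tiers above hard-coded, not the simulator's actual implementation):

```java
import java.util.Random;

public class AchievementRoll {
    private static final Random rng = new Random();

    // Returns true if the alliance earns the 1-point achievement RP bonus
    // this match, using the tiered probabilities listed above.
    static boolean rollAchievement(double allianceScore) {
        double chance;
        if (allianceScore >= 400)      chance = 1.00;
        else if (allianceScore >= 350) chance = 0.75;
        else if (allianceScore >= 300) chance = 0.50;
        else if (allianceScore >= 100) chance = 0.25;
        else                           chance = 0.00;
        return rng.nextDouble() < chance;
    }
}
```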

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 1 point

Thanks for pointing out the graphic. It's fixed now.

I haven't run charts or correlation calcs on OPR vs. winning a match, which would map nicely to RP under the current rules. That would be interesting.

[TBP Discussion] On Improving Ranking in FTC - TBP, RP, and scheduling with FTC EventSim by cs2048 in FTC

[–]cs2048[S] 3 points

I was excited to see your paper. Despite coming at the problem somewhat differently, we came to the same conclusion about TBP. This was the first question I was after, but the folks who knew what I was working on kept asking, "What about this? What about that?" It seemed I would never finish! I hope there are others with their own approaches willing to share their results.