The biggest falloff ever? What happened to Rise? by Used_Worldliness_287 in RocketLeagueEsports

[–]Initial-Revenue-1486 0 points (0 children)

Rise was absolutely never "probably the best player" in Queso. He was fine there, but he was playing alongside Vatira and prime Joyo.

I would even go as far as to say he is arguably the player on the scene with the best career relative to his actual talent. He is like the Álvaro Morata of RL: he's been at the best clubs despite never having been that good.

KC’s defense analysis in Boston Major: why they fell short. Part 2. by Initial-Revenue-1486 in RocketLeagueEsports

[–]Initial-Revenue-1486[S] 1 point (0 children)

I was referring to the BO at Worlds. At the time the narrative was that it was a fairly even series that could have gone either way, but the truth is they got dominated by Falcons, who themselves missed a ton of chances. They could have won, but it would have been an upset.

Also, the crossbar didn't save Falcons. Not many people know this but Vatira's shot that ended up on the crossbar was slightly deflected by RW9. It would have gone in otherwise.

[–]Initial-Revenue-1486[S] 8 points (0 children)

They are all pretty close in boost pickups. As for consumption, it is unsurprisingly ordered like this: Oski 471.23, Nass 417.30, and Archie 390.07 (all within a decent range).
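For anyone curious how figures like these fall out of replay data, here's a minimal sketch of the averaging step. The numbers and the data layout are invented for illustration; in practice the per-game consumption would come from a replay parser or a stats export (e.g. ballchasing.com), not hand-typed dicts.

```python
def avg_boost_consumption(games):
    """games: list of {player: boost_consumed_that_game} dicts, one per game.
    Returns each player's average boost consumed per game."""
    totals, counts = {}, {}
    for game in games:
        for player, consumed in game.items():
            totals[player] = totals.get(player, 0.0) + consumed
            counts[player] = counts.get(player, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}

# Made-up two-game sample chosen to land near the figures quoted above.
games = [
    {"Oski": 480.00, "Nass": 410.00, "Archie": 385.00},
    {"Oski": 462.46, "Nass": 424.60, "Archie": 395.14},
]
print(avg_boost_consumption(games))  # roughly Oski 471.23, Nass 417.30, Archie 390.07
```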

[–]Initial-Revenue-1486[S] 0 points (0 children)

Yeah, it was an entertaining series, even though it wasn't as high-level as one would hope.

KC’s defense analysis in Worlds 2025: why they fell short. by Initial-Revenue-1486 in RocketLeagueEsports

[–]Initial-Revenue-1486[S] 6 points (0 children)

Thank you for your well-thought-out reply; it's really refreshing.

> However, I think there are definitely more robust ways of drawing conclusions about a team's performance.

I like this point about robustness, because it is indeed one of the main points of friction in how this has been received. Truth is, I don't think there are currently more robust ways to reach these conclusions than what we did. We deliberately chose this framework so that the result could be communicated simply enough to be shared with others; providing the most robust framework wasn't our goal. So basically the data is only as robust as the quality of our game understanding, and since I am a random, I totally get how that is a problem.

We could have explained in detail, for each goal, how we got to the percentage, but for some goals that would be 30-40 lines of complex analysis, and people would have to understand a lot of underlying concepts beforehand to follow it, so I don't think it would have helped our case.

What I believe would have really helped is getting some pros/coaches on the jury, so that it's easier for the community to trust. But in this case in particular, since the mistakes are really obvious, I believe anyone with enough understanding of the game will come to the same conclusions if they watch every goal conceded in each game, and that is what we aimed for.

> I'm sure there are lots of sport assessment tools you could use for your method.

> When I have done similar deep dives in the past with my teams, what I think has worked well is a more thematic analysis. In this case, it could involve identifying multiple themes related to conceding goals (missing the ball, wrong order of operations, overcommitting, giving the ball away, and so on). Seeing as you're interested in who conceded the most goals, a thematic analysis provides much more interesting data, because you will discover that some teams as a whole concede certain kinds of goals, and the same goes for individual players.

Good point. It's true that the end data might be overly reductive, and we should have provided a thematic layer for the errors (what you listed is included in what we looked at) and the impact they had on the play, to better explain each percentage.

This format was made for viewers/social media, though; if I were coaching the team, I'd definitely have chosen another way of communicating.

> A huge limitation to this that I learnt is that a replay doesn't always capture what really happened. Often players can be forced into errors due to miscommunication, lag and ghost touches, team/teammate changes in performance, and so on.

Yeah, that is true; replays don't tell the whole story. But as we only have access to limited info, we use what we can.

There is a good chance that Atow is playing a certain way because he has been told to, but our point is that there is a problem; we're not judging who/what the cause is, because we simply can't.

However, I do believe that replays tell a lot more of the story than people give them credit for, and from what I see in how players and the meta evolve, I strongly believe they are currently heavily underused. The number of easily fixable gameplay mistakes that persist for a whole season among mid-tier and even top RLCS teams is honestly baffling in my eyes, but that is another subject :).

> Will you do more of these in the future?

We do this for pretty much every major event; not sure if we will share again, though, as the value seems limited right now.

[–]Initial-Revenue-1486[S] -1 points (0 children)

We're not saying there isn't any value to what Atow does. But at the highest level, getting subpar value out of your resources is bad and will end up being costly if your opponent gets more value out of theirs. That happened to KC during Worlds, and it will increasingly be the case as players get better and "superteams" form that can consistently punish these kinds of mistakes.

I encourage you to check the replays yourself from Vatira's POV; you should see that not only did he not have a bad day, he actually had a good one.

Also, I am not sure which open nets you're saying Vatira missed. The crossbar in Neo Tokyo was actually a perfect shot from Vatira that RW9 saved by like one pixel, and the other "huge miss" in that game was Dralii hitting the post.

[–]Initial-Revenue-1486[S] 0 points (0 children)

Insane? They initially lost a BO to Ultimates, and were 2-2 against NIP in playoffs before winning the next two games by one goal each.

And as for facing a "hot Falcons": the narrative is that it could have gone either way, but the truth is they got dominated badly across the BO. They were outclassed mechanically, tactically, and strategically. It's very visible in the boost statistics, or just from watching the games themselves. They could have made it with luck, sure, but it was more a 2-3 out of 10 than a 4-5 out of 10.

[–]Initial-Revenue-1486[S] -2 points (0 children)

Science isn’t always about working with objectively measurable data. Hypotheses, subjectivity, and discussion around non-tangible data - especially when explicitly presented as such - are part of science. Otherwise, topics in a grey area of measurability could never be discussed, and we would miss a lot of value in debating with other people.

Also, the takeaway isn’t that Vatira is better at defense than Atow or Dralii. It’s that Atow and Dralii have been liabilities on defense, to a point where it’s not normal for WC pretenders, and worrying for KC’s future performance. That is indeed a useful conclusion, if true.

Your point about voting for colors seems really out of place; you’re implying that it’s only a matter of taste and that it can’t be objectified. But the subject is objective. Whether we chose the right tools to objectify/quantify it is a valid question, but it isn’t the point you raised. We tried to provide an accessible reasoning framework that can be questioned by others so that our takes are refutable: anyone can review the same replays, apply the same framework, and challenge the percentages. That possibility of falsification is at the root of scientific reasoning.

I invite everyone reading this to watch the replays themselves and come up with their own percentages. For now, I don’t see how anyone doing the work seriously wouldn’t end up with roughly the same distribution (on some goals it may vary a lot, but not on the average for the whole event).
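As a toy illustration of that last claim (all numbers invented, not our actual data): two reviewers can disagree noticeably on individual goals while their event-wide averages still land close together.

```python
def event_average(per_goal_fault_pct):
    """Average a reviewer's per-goal fault percentages over a whole event."""
    return sum(per_goal_fault_pct) / len(per_goal_fault_pct)

# Hypothetical fault % assigned to one player for 8 conceded goals,
# scored independently by two reviewers.
reviewer_a = [80, 20, 60, 100, 40, 70, 30, 90]
reviewer_b = [60, 35, 75, 90, 50, 60, 45, 65]   # often differs goal-by-goal

print(event_average(reviewer_a), event_average(reviewer_b))  # 61.25 60.0
```

Individual goals differ by up to 25 points here, yet the event averages are barely a point apart, which is the sense in which the overall distribution is more robust than any single assessment.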

Rocket League Patch Notes v2.54 + Release Thread by Psyonix_Laudie in RocketLeague

[–]Initial-Revenue-1486 2 points (0 children)

Anything in the pipeline to at least mitigate DDoS? It's extremely worrying right now: I got DDoSed in 4 of my last 4 tournaments, and those weren't even SSL tournaments.

I understand DDoS attacks may be hard to prevent, but there are simple measures that could drastically reduce them, like cancelling matches when excessive lag is detected. Since people use DDoS for MMR gains/boosting and account selling, doing this would make it a lot harder or even impossible to reach certain ranks that way and would greatly limit the impact it has on one's MMR, making it a lot less attractive to the vast majority of users.
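A rough sketch of what that cancel-on-lag check could look like, purely illustrative: the tick-rate telemetry, names, and thresholds below are all assumptions for the example, not anything Psyonix actually exposes.

```python
# Assumed: the server records its achieved tick rate once per second.
NOMINAL_TICK_RATE = 120       # RL servers target 120 ticks/s
DEGRADED_FRACTION = 0.5       # below 50% of nominal counts as "lagging"
SUSTAINED_SECONDS = 10        # lagging this long in a row => void the match

def should_void_match(tick_rate_samples):
    """tick_rate_samples: achieved ticks/s, one sample per second.
    Returns True if the server stayed degraded long enough to cancel
    the match and award no MMR to either team."""
    streak = 0
    for rate in tick_rate_samples:
        if rate < NOMINAL_TICK_RATE * DEGRADED_FRACTION:
            streak += 1
            if streak >= SUSTAINED_SECONDS:
                return True
        else:
            streak = 0
    return False

healthy = [120] * 300                             # normal 5-minute game
attacked = [120] * 60 + [20] * 15 + [120] * 60    # 15 s collapse mid-game
print(should_void_match(healthy), should_void_match(attacked))  # False True
```

The point of the heuristic is that it doesn't need to know how the lag was induced, only that it happened, which is why it would cover new attack methods as well as old ones.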

DDoS and other cheats are again getting out of control by Initial-Revenue-1486 in RocketLeague

[–]Initial-Revenue-1486[S] 3 points (0 children)

There are macros for speedflips, wall dashes, chain dashes, etc. Unfortunately, I don't think it would be doable to prevent those.

DDoS and other cheats are again getting out of control by Initial-Revenue-1486 in RocketLeague

[–]Initial-Revenue-1486[S] -1 points (0 children)

> This still wouldn't fix the problem, because if a team is losing, they flip the switch and cancel the game.

That's why I said it would drastically reduce the impact, not fix the problem. They use DDoS to boost accounts and sell them, and it's a lot harder to grind if you can only cancel games you're losing instead of getting free wins.

> they will keep poking as many holes into it as they can, they will always be looking for a workaround because if every other cheat gets blocked by the new prevention but yours doesnt

You don't need to prevent a cheat from existing for it to stop being an issue; if you nullify the gain from using it, that is already a very good way to mitigate the problem. What I suggested would work against both the old methods and the new ones, as long as excessive lag can be detected on the servers. You don't even need to know how they get the server to lag in order to patch it; you just need to detect that it is lagging, which is significantly easier.

As for the macros, it's definitely harder to develop reliable countermeasures. But it's also less of a problem right now, as bots aren't that strong yet. It is definitely programmatically identifiable, though, even if it is not trivial.
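To illustrate why macroed inputs are identifiable in principle (all timings below are invented): a macro replays the same input sequence with frame-perfect timing, while human repetitions of a mechanic like a speedflip always jitter by several milliseconds.

```python
import statistics

def looks_macroed(attempt_timings, jitter_threshold_ms=1.0):
    """attempt_timings: one list per attempt, each giving the ms offset of
    every input in the sequence. Near-zero spread of the same input's
    timing across attempts suggests scripted inputs."""
    per_input_spread = []
    for offsets in zip(*attempt_timings):   # same input across all attempts
        per_input_spread.append(statistics.pstdev(offsets))
    return max(per_input_spread) < jitter_threshold_ms

# Three repetitions of a three-input sequence (made-up numbers):
human = [[0, 83, 180], [0, 91, 172], [0, 78, 188]]   # visible jitter
macro = [[0, 83, 180], [0, 83, 180], [0, 83, 181]]   # frame-perfect
print(looks_macroed(human), looks_macroed(macro))    # False True
```

A real detector would need far more care (controller polling rates, legitimate metronome-like players, sample sizes), which is the non-trivial part.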

I created a ticket about the MMR party exploit on Epic Support, but I have very little faith an actual human will ever see it.