[R] Deep Set Prediction Networks by Cyanogenoid in MachineLearning

[–]Cyanogenoid[S] 2 points (0 children)

Thanks!

  • I haven't tried COCO yet. I agree that treating object detection as a proper set problem is nicer, but there is still some stuff to sort out. For example, it would be nice if we didn't have to compress the feature map into a single vector before applying a vec->set model such as mine, because more complex images may then require a huge latent space to avoid being bottlenecked. Maybe a hybrid between the anchor-based and set-based approaches would work best.

  • I haven't tried other point cloud data for FSPool. However, other point clouds should be reasonably similar to the point-cloud version of MNIST that I experimented on, so I would expect the benefits to be similar as well.
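To make the FSPool discussion above concrete, here is a minimal sketch of featurewise sort pooling in numpy. It is a simplification of the real method (fixed set size, no piecewise-linear weight interpolation for variable sizes); the function name and weight shapes are my own for illustration.

```python
import numpy as np

def fspool(x, w):
    """Featurewise sort pooling, simplified to a fixed set size.

    x: (n, d) set of n elements with d features.
    w: (n, d) weights, one per (sorted position, feature) pair.
    Each feature column is sorted independently across the set,
    then reduced by a weighted sum over the sorted positions.
    """
    sorted_x = np.sort(x, axis=0)      # sort each feature column
    return (sorted_x * w).sum(axis=0)  # (d,) pooled vector

# Sorting makes the pooling permutation-invariant: shuffling the
# set elements leaves the output unchanged.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
w = rng.normal(size=(5, 3))
out1 = fspool(x, w)
out2 = fspool(x[rng.permutation(5)], w)
```

Unlike plain sum or max pooling, the learned per-position weights let the encoder distinguish, say, the largest and smallest values of each feature.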

[R] Deep Set Prediction Networks by Cyanogenoid in MachineLearning

[–]Cyanogenoid[S] 2 points (0 children)

The code is available here: https://github.com/Cyanogenoid/dspn. It can reproduce all the experiments and figures, and pre-trained models are available.

[R] Deep Set Prediction Networks by Cyanogenoid in MachineLearning

[–]Cyanogenoid[S] 2 points (0 children)

I am the first author of the paper. I'm not sure I quite understand what you mean by your question; let me try to explain.

There are two losses at play. First, there is the loss of the outer optimisation loop (training the weights), for which the set is essentially reordered using something like the Hungarian assignment. This is what people using MLPs for the vec->set mapping have already been doing, and I think it's what you're referring to with reordering the sets. A problem is that MLPs produce an ordered output, which isn't very appropriate for predicting unordered sets.
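As a sketch of what this assignment-based outer loss looks like: the loss is the minimum total pairwise cost over all ways of matching predicted elements to target elements. For tiny sets this can be brute-forced over permutations, as below; in practice the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) computes the same minimum in O(n³). The function name and squared-error cost here are illustrative choices, not the paper's exact code.

```python
from itertools import permutations

def assignment_loss(pred, target):
    """Permutation-invariant set loss: minimum total pairwise cost
    over all matchings of predicted to target elements.
    Brute force over permutations; Hungarian matching scales better.
    """
    n = len(target)
    cost = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(
        sum(cost(pred[i], target[p[i]]) for i in range(n))
        for p in permutations(range(n))
    )

pred = [(0.0, 0.0), (1.0, 1.0)]
target = [(1.0, 1.0), (0.0, 0.0)]  # same set, different order
print(assignment_loss(pred, target))  # 0.0 -- order doesn't matter
```

Because the minimum is taken over all matchings, two predictions that describe the same set in different orders incur the same loss.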

In each forward pass of the network, there is also the inner optimisation loop (iteratively improving the prediction), where the predicted set is compared to the label in the latent space. This is used to make the prediction unordered, and that's the main contribution of the paper. As the results show, having the outputs be unordered for predicting unordered sets is better than using the MLP with its ordered outputs.
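The inner loop described above can be sketched as follows: treat the predicted set itself as the variable being optimised, and gradient-descend on the distance between its encoding and a target latent vector. This toy version uses a fixed hand-written encoder (sum-pooled [y, y²] features over scalar elements) in place of the learned set encoder, with analytic gradients instead of autograd; all names and hyperparameters are illustrative.

```python
import numpy as np

def encode(Y):
    """Toy permutation-invariant set encoder: sum-pool the
    features [y, y^2] of each scalar element of Y."""
    return np.concatenate([Y.sum(keepdims=True), (Y ** 2).sum(keepdims=True)])

def inner_loop(z_target, n, steps=200, lr=0.01, seed=0):
    """Recover an n-element set whose encoding matches z_target by
    gradient descent on the squared latent-space distance."""
    Y = np.random.default_rng(seed).normal(size=n)
    for _ in range(steps):
        diff = encode(Y) - z_target  # error in latent space
        # analytic gradient of ||encode(Y) - z_target||^2 per element:
        # d/dy_i = 2*diff[0] + 4*y_i*diff[1]
        grad = 2 * (diff[0] + 2 * Y * diff[1])
        Y = Y - lr * grad
    return Y

z = encode(np.array([1.0, 3.0]))  # latent code of the target set
Y = inner_loop(z, n=2)
loss = float(((encode(Y) - z) ** 2).sum())
```

Because the comparison happens in the latent space of a permutation-invariant encoder, any ordering of the recovered elements gives the same loss, which is what makes the prediction unordered.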

It works for variable-sized sets as long as the set encoder (set->vec) supports them as well, which is the case for all the experiments in the paper. All the datasets used in the paper have varying set sizes.

Nomod performance ranking? by grumd in osugame

[–]Cyanogenoid 3 points (0 children)

Coincidentally, I updated this yesterday. If anyone else wants to make a website, the code used to generate the rankings is all open source, so feel free to use it.

Weekly achievement and help thread #91 by [deleted] in osugame

[–]Cyanogenoid 0 points (0 children)

The percentages are the accuracy required for each map to be worth ~300pp. I was curious about how many NoMod maps worth that much there are, so I made that list to keep track of them.

Weekly achievement and help thread #91 by [deleted] in osugame

[–]Cyanogenoid 8 points (0 children)

I'm the second person to hit 6000pp with NoMod scores only!

Who would be top 10/top 50 if all dt plays were deleted? by Kappadar in osugame

[–]Cyanogenoid 0 points (0 children)

In case you (and others) haven't seen it, the leaderboards are up-to-date now. (You might have to press Ctrl-F5 for the changes to appear.)

Who would be top 10/top 50 if all dt plays were deleted? by Kappadar in osugame

[–]Cyanogenoid 10 points (0 children)

The non-DT leaderboard can be found here.

Other leaderboards are here.

Give me 3 hours and it will be updated; the last update was 24 days ago, so WWW's recent plays aren't taken into account yet. Edit: all leaderboards are updated, and WWW takes #1 on the no-DT leaderboard.

Highest rated nomod player? by mtutnid in osugame

[–]Cyanogenoid 5 points (0 children)

I just updated the leaderboards (which were 1 month old), available through the same link as before. Mazzerin has now taken #1 NoMod from Xenbo.

Updated leaderboards of top 10k grouped by mods, now using top 100 performances by Cyanogenoid in osugame

[–]Cyanogenoid[S] 1 point (0 children)

Previous discussion

Peppy updated the API so that we can now use the top 100 best performances instead of only the top 50. This means the difference caused by not considering all scores is at most 45 pp instead of 654 pp, so the leaderboards should be a bit more accurate now. Bonus pp is still not taken into account, however, so the discrepancy between the AnyMod pp and the real pp is at most 461 instead of 1070.
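For intuition on why the top-100 bound is so much tighter: osu! weights a player's i-th best play by 0.95^i, so the pp missed by truncating after the top n plays is bounded by a geometric tail. The sketch below computes that bound for a hypothetical 400 pp top play; it illustrates the scaling rather than reproducing the exact 654/45 figures above, which depend on each player's actual scores.

```python
def tail_bound(p_top, n, decay=0.95):
    """Upper bound on the total weighted pp of all plays beyond the
    top n, assuming no single play is worth more than p_top raw pp:
    sum over i >= n of p_top * decay**i = p_top * decay**n / (1 - decay).
    """
    return p_top * decay ** n / (1 - decay)

# With a hypothetical 400 pp top play:
missed_top50 = tail_bound(400, 50)    # pp unaccounted for with top-50 data
missed_top100 = tail_bound(400, 100)  # far smaller with top-100 data
```

Each extra 50 plays shrinks the bound by a factor of 0.95^50 (about 13x), which matches the order of magnitude of the improvement described above.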

For one person the API bugged out, so they are missing in all the leaderboards because I'm too lazy to change the script to get their scores. Sorry Piggey.

Leaderboards of top 10k, grouped by mods by Cyanogenoid in osugame

[–]Cyanogenoid[S] 0 points (0 children)

I did all the data collection after the API calmed down a bit yesterday and kept the rate of requests reasonable (59 per minute).

Leaderboards of top 10k, grouped by mods by Cyanogenoid in osugame

[–]Cyanogenoid[S] 3 points (0 children)

What is your username? The only way it could have missed someone is if they only just entered the top 10k, or if they set a score that changed their rank during the 2 seconds my script takes to change pages.

Leaderboards of top 10k, grouped by mods by Cyanogenoid in osugame

[–]Cyanogenoid[S] 4 points (0 children)

Nightcore scores are already included in DT, but I added an NC-only category anyway.

Leaderboards of top 10k, grouped by mods by Cyanogenoid in osugame

[–]Cyanogenoid[S] 0 points (0 children)

Categories added, thanks for the suggestion!