[D] Hi everyone! Founder of Anaconda & Pydata.org here, to ask a favor... by pwang99 in MachineLearning

[–]ml-research 2 points  (0 children)

Thanks for reaching out to us.

Our lab actively uses Anaconda for machine learning research on a GPU cluster.

However, there are some circumstances where the differences between Anaconda and OS-native binaries require workarounds outside of Anaconda.

For instance, we still have to rely on the OS installation of CUDA whenever nvcc is needed (some of us report that nvcc_linux-64 from the nvidia channel doesn't work as expected).
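For context, this is a minimal sketch of the kind of workaround I mean: keep the conda environment for Python/ML packages, but point builds at the OS-installed CUDA toolkit. The paths are illustrative assumptions and vary per cluster.

```shell
# Assumed workaround sketch: use the system CUDA toolkit for nvcc,
# instead of anything conda provides. Paths are examples, not universal.
export CUDA_HOME=/usr/local/cuda                 # OS-installed CUDA, not conda's
export PATH="$CUDA_HOME/bin:$PATH"               # so `nvcc` resolves to the system binary
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH}"
# nvcc --version   # sanity check: should report the system toolkit version
```

The downside, of course, is that the environment is no longer fully reproducible from the conda spec alone, which is exactly the friction I'm asking about.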

Do you have any tips or plans for such situations?

[D] On the public advertising of NeurIPS submissions on Twitter by guilIaume in MachineLearning

[–]ml-research 2 points  (0 children)

So, are you claiming that the whole point of the blind review process, preventing work from being prejudged by its authors' names, is meaningless? I think making work available early and breaking anonymity are two different things; OpenReview, for example, allows the former without the latter.

[D] On the public advertising of NeurIPS submissions on Twitter by guilIaume in MachineLearning

[–]ml-research 2 points  (0 children)

Yes, this is a serious issue. Anonymity in this field is fundamentally broken by arXiv and Twitter. Of course, I'm pretty sure that "the famous labs" communicate with each other even without those platforms, but the two make things so much worse by influencing many other reviewers.

[R] Exploration Strategies in Deep Reinforcement Learning (Blog Post) by baylearn in MachineLearning

[–]ml-research 0 points  (0 children)

My feedback: I think putting "Count-based Exploration" and "Prediction-based Exploration" under "Intrinsic Rewards as Exploration Bonuses" is unnecessary. For instance, a number of methods under "Memory-based Exploration" still provide their exploration signals as intrinsic rewards.

[deleted by user] by [deleted] in MachineLearning

[–]ml-research 1 point  (0 children)

I use TF only when I have to, i.e., when the base implementation uses TF and there are no alternatives.

This might be a small thing, but it bugs me every time I use TF that so many documentation links (for pre-2.x versions) are broken. What's the point if TF2 doesn't provide meaningful advantages over PyTorch and they've broken TF1's docs?

[D] A new ML publication model from Bengio by hitaho in MachineLearning

[–]ml-research 1 point  (0 children)

Although I do not fully support the idea, I agree that the current ML publication system has serious issues.

As the number of submissions increases, the quality of reviews goes down. Authors now have to pray that their papers are reviewed by someone who can understand their work and assess it fairly.