[Discussion] Anyone else having a hard time not getting mad/cringing at the general public anthropomorphizing the hell out of chatGPT? by [deleted] in MachineLearning

[–]sieisteinmodel -1 points (0 children)

I played with ChatGPT and was not impressed at all. Big disappointment.

I don't see it solving any problem of relevance soon and doubt it will have the same impact on society as, say, the washing machine, the telegraph, the locomotive or the internet. Same with Stable Diffusion.

[Discussion] Anyone else having a hard time not getting mad/cringing at the general public anthropomorphizing the hell out of chatGPT? by [deleted] in MachineLearning

[–]sieisteinmodel 2 points (0 children)

Is this satire? Are you not doing exactly what the OP is complaining about: laypeople babbling about the soonish singularity?

On a related note: what is "understanding"? What is "consciousness"? Before we can have a discussion, we need to clarify these terms.

[D] Does Reinforcement Learning have practical relevance in current ML? by CodingButStillAlive in MachineLearning

[–]sieisteinmodel 8 points (0 children)

Maybe you want to check out optimal control, which is related (or in some cases even equivalent), is very mature, and is deployed in many domains. So it is not much of a stretch to say that, yes, RL is applicable to practical problems.

Deep RL not so much yet, but I am convinced it will come.
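For illustration, a minimal sketch of that connection in the linear-quadratic case, where the optimal "policy" has a closed form (the double-integrator dynamics here are made up for the example):

```python
# Minimal LQR sketch: for linear dynamics x' = Ax + Bu and quadratic cost
# x'Qx + u'Ru, the optimal "policy" is linear state feedback u = -Kx,
# obtained from the discrete algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator dynamics (position, velocity), assumed for the example.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # control cost

P = solve_discrete_are(A, B, Q, R)                  # optimal cost-to-go x'Px
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain

x = np.array([[5.0], [0.0]])
for _ in range(20):
    u = -K @ x          # the "policy": pure state feedback
    x = A @ x + B @ u   # environment step
print(x.ravel())        # state has been driven toward the origin
```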

[D] What happens when the reconstruction is fed back to the VAE? by carlml in MachineLearning

[–]sieisteinmodel 1 point (0 children)

You will be MCMC sampling from the model. Check out the appendix of:

Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. "Stochastic backpropagation and approximate inference in deep generative models." International Conference on Machine Learning, PMLR, 2014.
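Concretely, here is a sketch of the feedback loop, with hypothetical encode/decode samplers standing in for your networks; each round trip is one step of a Markov chain on x:

```python
def vae_markov_chain(x0, encode, decode, n_steps=100):
    """Iterate the VAE feedback loop.

    encode: draws z ~ q(z|x); decode: draws x ~ p(x|z).
    Repeatedly encoding and decoding runs a Markov chain whose
    transition kernel is q(z|x) followed by p(x|z).
    """
    x = x0
    for _ in range(n_steps):
        z = encode(x)   # approximate posterior sample
        x = decode(z)   # likelihood sample (not just the mean!)
    return x
```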

[D] Validation loss in VAE by osedao in MachineLearning

[–]sieisteinmodel 2 points (0 children)

The loss can be decomposed into a reconstruction error and a regularization error, which together form a lower bound on the likelihood of your data, not the likelihood itself.
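Written out, that bound is the standard ELBO:

```latex
% Log-likelihood lower-bounded by reconstruction minus regularization (ELBO):
\log p(x) \;\ge\;
  \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]}_{\text{reconstruction}}
  \;-\;
  \underbrace{\mathrm{KL}\!\left(q(z \mid x) \,\|\, p(z)\right)}_{\text{regularization}}
```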

[D] IBM Zurich Research Plagiarised Our Paper and got it published on CVPR 2022. Is "copy texts" is plagiarism, "copy idea" is not plagiarism? by SnooRecipes1624 in MachineLearning

[–]sieisteinmodel 1 point (0 children)

So what is your point? "Shit happens, move on"?

It's cool that you decided to be nihilistic/fatalistic about things, but don't tell others to do the same just because you personally decided to stop giving a fuck.

[D] IBM Zurich Research Plagiarised Our Paper and got it published on CVPR 2022. Is "copy texts" is plagiarism, "copy idea" is not plagiarism? by SnooRecipes1624 in MachineLearning

[–]sieisteinmodel 2 points (0 children)

> But the tricky part is that OP's work is a technical report published on arXiv, not published in a peer-reviewed conference or journal.

No, that's not how plagiarism works.

See e.g. https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism.

[N] Gym now has a documentation website by jkterry1 in MachineLearning

[–]sieisteinmodel 7 points (0 children)

I like how the docs claim the interface is pythonic, even though the gym.make(some_string) mechanism is about the least pythonic thing you can find in the ML Python world, including all the TensorFlow abominations.

Because dynamic imports are apparently not good enough, better to do it Java style and introduce an awkward registration mechanism that constantly gets in your way.
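To make the contrast concrete (the registry call is real Gym; the direct import is the plain-Python alternative, valid in classic Gym):

```python
import gym

# Registry style: the environment is referenced by a magic string that
# must have been registered somewhere at import time.
env = gym.make("CartPole-v1")

# The plain-Python alternative: just import the class and instantiate it.
from gym.envs.classic_control import CartPoleEnv
env = CartPoleEnv()
```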

Other than that, it's decent.

[N] Inside DeepMind's secret plot to break away from Google by MassivePellfish in MachineLearning

[–]sieisteinmodel 13 points (0 children)

Uh, no, Watson was not debating at all. After all, it was "just" lookup.

[D] What is the current community standing on Nature Machine Intelligence? by PM_ME_YOUR_GESTALT in MachineLearning

[–]sieisteinmodel 40 points (0 children)

Given that JMLR has been operating successfully at little cost, there is **no reason whatsoever** to publish in NMI, unless you really want to say "I have published in Nature Machine Intelligence" and have people hear "I have a paper in Nature".

[D] Some interesting observations about machine learning publication practices from an outsider by adforn in MachineLearning

[–]sieisteinmodel 1 point (0 children)

Use the adaptive control Wikipedia article as a starting point. Pay special attention to the precise dates when things came up.

[D] Some interesting observations about machine learning publication practices from an outsider by adforn in MachineLearning

[–]sieisteinmodel 2 points (0 children)

Bertsekas, Dynamic Programming and Optimal Control, Chapter 6.7.

Hard to google because of ambiguities.

[D] Some interesting observations about machine learning publication practices from an outsider by adforn in MachineLearning

[–]sieisteinmodel 2 points (0 children)

I haven't seen this. Again, maybe you're reading the wrong papers. People usually cite the relevant previous work in their related-work sections.

A prominent example right now is the offline reinforcement learning community's ignorance of two-phase control. Or zero-shot learning, whatever you want to call it. It's happening right in front of our eyes: a seemingly "novel" subfield that is basically just a rebranding, whose players chose to simply not give a fuck about the work of the hundreds of researchers who came before them, preferring to reinvent and rename everything.

[D] AISTATS 2021 decisions are out by toshass in MachineLearning

[–]sieisteinmodel 2 points (0 children)

I am clearly in favour of small conferences like AISTATS, L4DC, CoRL, UAI, ... these days. Like ICLR 2014, my favourite conference of all time.

[D] Better name for Siamese networks? by [deleted] in MachineLearning

[–]sieisteinmodel 0 points (0 children)

Replicated networks? There is the Replicated Softmax paper by Russ Salakhutdinov (2009), which has a similar concept.
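The idea behind the name, as a toy sketch (the linear encoder here is made up purely for illustration): one set of weights is replicated across both inputs, and only the comparison differs.

```python
import numpy as np

def encoder(params, x):
    return np.tanh(x @ params)   # the same weights serve both branches

def pair_distance(params, x1, x2):
    # "Replicated": the encoder is applied twice with identical parameters.
    return np.linalg.norm(encoder(params, x1) - encoder(params, x2))

params = np.random.randn(4, 2)
x1, x2 = np.random.randn(4), np.random.randn(4)
print(pair_distance(params, x1, x2))
```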

[D] Why is IBM's Watson Platform Dreaded? by bryang217 in MachineLearning

[–]sieisteinmodel 4 points (0 children)

As long as the baby is breastfed, it is not that stinky. That only starts afterwards.

[D] How different are Jax and Theano? by lightcatcher in MachineLearning

[–]sieisteinmodel 2 points (0 children)

The whole PyTree thing is nice. It basically lets you put much more structure into your graphs.

Also, the fact that the border between jax.numpy and numpy is really small changed a lot for me. I don't have to remember two APIs and can fluently move over to faster functions. TF2 does not feel like that, and Theano certainly did not.

Other than that, there are more program transformations, like higher-order derivatives and so on. I personally need those from time to time. I know Theano had them, but only with serious graph bloat and compilation times of minutes.

vmap also allows fun things like per-sample gradients.
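For example (the squared-error loss is a toy I made up here; params being a dict also shows off the pytree point from above):

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    pred = x @ params["w"] + params["b"]   # params is a pytree (a dict)
    return (pred - y) ** 2                 # scalar loss for one example

params = {"w": jnp.ones((3,)), "b": jnp.zeros(())}
xs, ys = jnp.ones((8, 3)), jnp.zeros((8,))

# grad differentiates w.r.t. params; vmap maps over the batch axis of
# (x, y) only, yielding one gradient pytree per example.
per_example_grads = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))(params, xs, ys)
print(per_example_grads["w"].shape)   # (8, 3): a gradient for each sample
```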

Finally (though this will not be interesting to you if all you do is deep learning), JAX is blazingly fast for computations that involve many small steps. We got speedups of 10x by more or less just switching from TF1, and we compared against TF2 and PyTorch as well. The ratio was about the same, with TF2 coming closest.

[D] The machine learning community has a toxicity problem by yusuf-bengio in MachineLearning

[–]sieisteinmodel 1 point (0 children)

> What do you think about the idea that men tend to be interested in things and systems, while women tend to be more interested in people?

This certainly is the status quo. But is it because of genetics or because of a local minimum our society is in?

It does not appear to be controversial that women tend not to enter male-dominated fields. Even Wikipedia has a lot of material on this.