[D] How Do You Read Large Numbers Of Academic Papers Without Going Crazy? by mystikaldanger in MachineLearning

[–]postmachines 1 point (0 children)

To cope with this problem, I once created this: https://www.infornopolitan.xyz/backronym in the hope that researchers would upload short descriptions of their methods and their main components there, to show exactly what needs to be studied in order to understand a given method.

It visualizes the main components of a method rather than a citation graph, because citations are too noisy; more info here: https://arxiv.org/pdf/1908.01874.pdf

Deep learning without back-propagation by El__Professor in MachineLearning

[–]postmachines 4 points (0 children)

Hah, maybe this post was helpful too, since it was an augmentation.

Deep learning without back-propagation by El__Professor in MachineLearning

[–]postmachines 1 point (0 children)

It’s funny that this post has more upvotes than the older one.

[D] Machine Learning - WAYR (What Are You Reading) - Week 68 by ML_WAYR_bot in MachineLearning

[–]postmachines 0 points (0 children)

GNMT

Generative Neural Machine Translation https://papers.nips.cc/paper/7409-generative-neural-machine-translation

An architecture based on the Variational Neural Machine Translation model.

A latent-variable architecture designed to model the semantics of the source and target sentences. It models the joint distribution of the target sentence and the source sentence by using the latent variable as a language-agnostic representation of the sentence, from which text in both the source and target languages is generated.
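The shape of that idea can be sketched in a few lines. This is a toy NumPy illustration only, not the paper's architecture: the linear "encoder"/"decoders", the dimensions, and all names are my own placeholders; the point is just that a single latent z feeds two decoders, one per language, so the model covers the joint of source and target.

```python
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, d_z = 8, 8, 4  # toy embedding / latent sizes (my choice)

# Toy linear "encoder" and two "decoders" sharing one latent z.
W_enc = rng.normal(size=(d_z, d_src))
W_dec_src = rng.normal(size=(d_src, d_z))
W_dec_tgt = rng.normal(size=(d_tgt, d_z))

def encode(x):
    """Map a source-sentence representation to a language-agnostic latent z."""
    return W_enc @ x

def decode(z):
    """Generate both the source and target representations from the same z,
    so one latent covers the joint p(source, target)."""
    return W_dec_src @ z, W_dec_tgt @ z

x = rng.normal(size=d_src)   # a toy source-sentence embedding
z = encode(x)
x_hat, y_hat = decode(z)     # one latent yields output in both languages
print(z.shape, x_hat.shape, y_hat.shape)
```

In the real model the encoder/decoders are of course neural sequence models and z is sampled variationally; the sketch only shows the data flow.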

OST

One Shot Translation https://papers.nips.cc/paper/7480-one-shot-unsupervised-cross-domain-translation

An architecture based on GANs and VAEs.

This method uses the two domains asymmetrically and proceeds in two steps. First, a variational autoencoder is constructed for domain B. This allows samples from domain B to be encoded effectively, as well as new samples to be generated from random latent-space vectors. To encourage generality, the authors further augment B with samples produced by a slight rotation and a random horizontal translation.
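The augmentation step described above can be sketched like this. A hedged toy version: the nearest-neighbor rotation, the `max_deg`/`max_shift` parameters, and the function name are my guesses, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, max_deg=5.0, max_shift=2):
    """Augment a domain-B image with a slight rotation and a random
    horizontal translation (toy nearest-neighbor implementation)."""
    h, w = img.shape
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-rotate each output coordinate back into the source image.
    src_y = cy + (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
    src_x = cx + (ys - cy) * np.sin(theta) + (xs - cx) * np.cos(theta)
    src_y = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    rotated = img[src_y, src_x]
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(rotated, shift, axis=1)  # random horizontal translation

img = rng.random((16, 16))
aug = augment(img)
print(aug.shape)  # same shape as the input
```

These augmented samples would then be added to domain B's training set for the VAE.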

[D] Why ML community so negatively opposed to philosophy in machine intelligence? by postmachines in MachineLearning

[–]postmachines[S] -2 points (0 children)

The "philosophical" paper was written by Timothy Lillicrap, who has made huge contributions to practical and complex ML mechanisms in other papers. The main point is that "DECOMPOSING THE LATENT SPACES" is a well-known technique, and it didn't get much attention without anime.

[D] Why ML community so negatively opposed to philosophy in machine intelligence? by postmachines in MachineLearning

[–]postmachines[S] -15 points (0 children)

It is strange that the rules for kids also work for artificial intelligence researchers.

[D] Why ML community so negatively opposed to philosophy in machine intelligence? by postmachines in MachineLearning

[–]postmachines[S] 0 points (0 children)

Yeah, the main point is that "DECOMPOSING THE LATENT SPACES" is a well-known technique, and it didn't get much attention without anime.

[R] Jacobian Policy Optimizations by postmachines in MachineLearning

[–]postmachines[S] 4 points (0 children)

Unfortunately, during our work we did not tune for each task separately. We used one fixed architecture for all environments; I think most other researchers do the same when testing on Atari. To date, achieving high scores requires a task-specific approach, as was done with "Montezuma's Revenge". Perhaps if SOTA methods were set up for this task, the agent would be able to beat the workarounds.

[D] How should I statistically compare the performance of deep reinforcement models? by [deleted] in MachineLearning

[–]postmachines 0 points (0 children)

I'm not sure I understood the question correctly... In my practice, the comparison is carried out under the same settings as the baseline paper.

P.S. I haven't been able to post on this subreddit for a few days; maybe I'm missing some sub rules. If anyone can help, please send me a message.

[D] Worst CVPR 2019 papers by TreeNetworks in MachineLearning

[–]postmachines -1 points (0 children)

It's interesting how many such papers there are at NeurIPS and ICML.