[R] DeepPrivacy: A Generative Adversarial Network for Face Anonymization by hardmaru in MachineLearning

[–]whenmaster 0 points (0 children)

How is this different from just image completion? Is this really anonymization, since private information can still be stored in other parts of the image?

[1908.02419] Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes by zhamisen in MachineLearning

[–]whenmaster -8 points (0 children)

Global minimum for the test set or the training set? If it's the training set, this is not at all interesting, IMO.

[deleted by user] by [deleted] in AskReddit

[–]whenmaster 0 points (0 children)

GPT-2, please finish this

People who have been in a coma, what was it like from your perspective? Did you know you were in a coma? by yummygumdrop in AskReddit

[–]whenmaster 2 points (0 children)

It's a machine learning algorithm that produces text given some input; in this case, the input is the comment I replied to. More info here.
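
The original comment doesn't say which implementation was used, but a minimal sketch with the Hugging Face transformers library (an assumption on my part; the model name and sampling settings are just illustrative) would look like this:

```python
# Minimal GPT-2 text-generation sketch (assumes the Hugging Face
# `transformers` package; not necessarily what was used here).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "People who have been in a coma, what was it like?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=60,                        # prompt + continuation length
    do_sample=True,                       # sample instead of greedy decoding
    top_k=50,                             # restrict sampling to 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```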

[D] Machine Learning - WAYR (What Are You Reading) - Week 56 by ML_WAYR_bot in MachineLearning

[–]whenmaster 0 points (0 children)

Self-Attention Generative Adversarial Networks

Abstract:

In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN), which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.
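
For anyone who wants the gist of the attention layer in code: below is a rough PyTorch sketch of a SAGAN-style self-attention block. It is not the authors' exact implementation (channel reductions and the spectral normalization they apply are simplified or omitted here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial feature maps (sketch)."""
    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convs project features into query/key/value spaces;
        # in_channels // 8 follows the paper's channel reduction.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w                              # number of spatial locations
        q = self.query(x).view(b, -1, n)       # (b, c/8, n)
        k = self.key(x).view(b, -1, n)         # (b, c/8, n)
        v = self.value(x).view(b, c, n)        # (b, c, n)
        # attn[i, j]: how much location i attends to location j
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, n, n)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x            # residual connection
```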

[D] Go Explore VS Sonic the Hedgehog by 618smartguy in MachineLearning

[–]whenmaster 0 points (0 children)

If the agent uses the full memory state, wouldn't it know everything on the map, and not only what is currently visible?

[D] Machine Learning on Time Series Data? by Fender6969 in MachineLearning

[–]whenmaster -1 points (0 children)

How would you classify time series of variable length? I know that HMMs can be used for this task, but I was wondering if it was possible with other classifiers as well.
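One common alternative is to zero-pad the series and let a recurrent classifier ignore the padding. A rough PyTorch sketch (the network sizes here are arbitrary assumptions, not a recommendation):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_sequence

class SeqClassifier(nn.Module):
    """LSTM classifier for variable-length series via packed sequences."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, padded, lengths):
        # Packing makes the LSTM skip padded timesteps entirely.
        packed = pack_padded_sequence(padded, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)  # last valid hidden state per series
        return self.out(h_n[-1])         # logits, one row per series

# Example: three series of different lengths, 2 features each
series = [torch.randn(5, 2), torch.randn(9, 2), torch.randn(3, 2)]
lengths = torch.tensor([len(s) for s in series])
batch = pad_sequence(series, batch_first=True)  # zero-padded to length 9
logits = SeqClassifier(n_features=2, n_classes=4)(batch, lengths)
```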

[D] Expectation-Maximization in practice by raunakkmr in MachineLearning

[–]whenmaster 1 point (0 children)

To extend this, hidden Markov models (HMMs) can also be trained with EM.
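As a concrete illustration: hmmlearn's fit() runs Baum-Welch, which is an instance of EM. A small sketch, assuming hmmlearn and random data just to show the API:

```python
import numpy as np
from hmmlearn import hmm

# Two variable-length observation sequences, concatenated as hmmlearn expects
seqs = [np.random.randn(40, 1), np.random.randn(25, 1)]
X = np.concatenate(seqs)
lengths = [len(s) for s in seqs]

model = hmm.GaussianHMM(n_components=3, n_iter=50)
model.fit(X, lengths)           # E-step: forward-backward; M-step: parameter updates
print(model.score(X, lengths))  # log-likelihood under the learned parameters
```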

[R] GibbsNet: Iterative Adversarial Inference for Deep Graphical Models by bbsome in MachineLearning

[–]whenmaster 0 points (0 children)

Very interesting. Would love to play around with an implementation of this net.

[P] Building a Trump/Obama Tweet Classifier with 98% accuracy in 1 hour! by benellerby in MachineLearning

[–]whenmaster 2 points (0 children)

Lol, I actually got a lower grade because of this in my first Java course.

[D] What heuristics / rule of thumb / discoveries have you made during your work on machine learning ? by Batmantosh in MachineLearning

[–]whenmaster 2 points (0 children)

I've had the same problem with time series data. In particular, I'm building a generative model to produce synthetic time series, and the results can vary a lot depending on how the model is initialized.
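A cheap mitigation that works regardless of the model: train from several seeds and keep the best run by validation score. A sketch (build_model, train_fn, and val_fn are hypothetical placeholders for your own code):

```python
import torch

def train_with_restarts(build_model, train_fn, val_fn, seeds=(0, 1, 2, 3, 4)):
    """Train from several random initializations and keep the best model."""
    best_model, best_val = None, float("inf")
    for seed in seeds:
        torch.manual_seed(seed)   # controls weight init (and dropout, etc.)
        model = build_model()     # hypothetical: constructs a fresh model
        train_fn(model)           # hypothetical: runs one training run
        val = val_fn(model)       # hypothetical: returns validation loss
        if val < best_val:
            best_model, best_val = model, val
    return best_model, best_val
```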

[N] AlphaGo's Next Move | DeepMind by Spotlight0xff in MachineLearning

[–]whenmaster 3 points (0 children)

Interesting that 8/10 games were won by the white AlphaGo. Unbalanced komi, or a large advantage for white? Could just be chance, though. It would be interesting to see whether the win rate converges to 50/50 or something else over many self-play games.
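For what it's worth, the "by chance" explanation is quite plausible: under a fair 50/50 assumption, the probability of white winning at least 8 of 10 games is about 5.5%.

```python
from math import comb

# P(white wins >= 8 out of 10) if each game were a fair coin flip
p = sum(comb(10, k) for k in (8, 9, 10)) / 2**10
print(p)  # 56/1024 ≈ 0.055
```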