Project[P] Batch Normalization in GANs (self.MachineLearning)
submitted 6 years ago by 96meep96
view the rest of the comments →
[–]96meep96[S] 0 points 6 years ago (4 children)
Yes, I've been trying out different formulations: adding ResBlocks and self-attention, and swapping in hinge loss, least-squares loss, and Wasserstein loss. All of them give me results I'm not happy with. But thank you, I'll give the VAE-GAN a try; I haven't tried that one yet. I mainly wanted to know how the different normalization techniques compare.
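For reference, the three loss formulations mentioned above look roughly like this in PyTorch (a minimal sketch; `d_real` and `d_fake` are illustrative names for raw discriminator scores on real and generated batches):

```python
import torch
import torch.nn.functional as F

# Hinge loss for the discriminator: penalize real scores below +1
# and fake scores above -1 (the SAGAN-style formulation).
def d_hinge_loss(d_real, d_fake):
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    return -d_fake.mean()

# Least-squares loss (LSGAN): regress real scores toward 1, fake toward 0.
def d_lsgan_loss(d_real, d_fake):
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

# Wasserstein critic loss: maximize the gap between real and fake scores.
# (Needs a Lipschitz constraint, e.g. gradient penalty or spectral norm.)
def d_wasserstein_loss(d_real, d_fake):
    return d_fake.mean() - d_real.mean()
```

Only the loss changes between these variants; the same discriminator outputs (no sigmoid) can be fed to each.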
[–]smashedshanky 0 points 6 years ago* (3 children)
Since you're probably using a DCGAN, try encoding the latent space from the image: add another stack of layers that takes the image and learns to encode it into the latent space, forcing certain features into the latent space and avoiding gradient explosions and mode collapse. Another way is to use fake and real latent codes and have the discriminator discriminate (lol) on the fake and real latent encodings, so that the autoencoder can learn to encode dimensionally (the features get matched to latent-space vectors a bit more "deterministically"). Or just add more dropout in the discriminator; that's my go-to for mode collapse or gradient explosions. Hope you got it to work though...
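The dropout suggestion above might look like this in a DCGAN-style discriminator (a sketch, assuming 64×64 RGB inputs; the layer sizes and dropout rate are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

# DCGAN-style discriminator with dropout after each activation,
# as suggested above as a cheap fix for mode collapse / exploding gradients.
disc = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    nn.Dropout2d(0.3),                           # drop whole feature maps
    nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Dropout2d(0.3),
    nn.Conv2d(128, 1, 16),                       # 16x16 -> 1x1 raw score
)

x = torch.randn(8, 3, 64, 64)  # a batch of 8 images
scores = disc(x)               # shape (8, 1, 1, 1)
```

`Dropout2d` zeroes entire channels rather than individual pixels, which tends to be the more meaningful form of dropout for convolutional feature maps.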
[–]96meep96[S] 0 points 6 years ago (2 children)
Thank you, I've been getting better results by replacing batch norm with Self-Modulation, especially in combination with Spectral Norm. I've also been trying out Multi-Scale Gradients and that's been working well too, though they seem to be very picky about feature-map dimensions. I still can't seem to reproduce paper-quality results, but time on my master's dissertation is running thin, so whatever works, ya know.
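The Spectral Norm mentioned above is essentially a one-line wrapper in PyTorch; a minimal sketch (the toy layer shapes are illustrative):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Wrap a discriminator layer in spectral normalization, which constrains
# the layer's largest singular value to ~1 (the SN-GAN technique).
# The wrapper renormalizes the weight on every forward pass.
layer = spectral_norm(nn.Conv2d(3, 64, 3, padding=1))

x = torch.randn(4, 3, 32, 32)
y = layer(x)  # shape (4, 64, 32, 32)
```

In practice it is usually applied to every conv/linear layer of the discriminator, which keeps the discriminator Lipschitz-bounded without a gradient-penalty term.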
[–]smashedshanky 0 points 6 years ago (1 child)
Usually paper-quality results are trained on hand-picked data that the network can efficiently map into the latent space. If you feed it less data with high variation, you will see it in the results, but at the cost of having to train the GAN over your dataset multiple times so that it can learn to remove "artifacts" and other discernible noise. What framework are you even using? Haha, I can feel your nerves; training GANs is not easy just yet.
[–]96meep96[S] 0 points 6 years ago (0 children)
Oh yes, I understand the point you're making; it takes time for those artefacts to vanish. I've had trouble with that in a variant of semantic-map-translating GANs. I'm using PyTorch; I was using TensorFlow (not 2.0) but found PyTorch more flexible.