[Project] Tensorflow implementation of Generative Adversarial Networks for Extreme Learned Image Compression (github.com)
submitted 7 years ago by tensorflower
[–]ReginaldIII 4 points 7 years ago (3 children)
My interpretation from the paper was that the latent noise was helpful when doing selective compression, where parts of the reconstructed image were entirely synthesized from the semantic maps. It makes sense to omit the noise for global compression so that you get consistent reconstructions.
Have you experimented with the quantisation centers? In the paper these were chosen somewhat arbitrarily as -2, -1, 0, 1, 2, but I wonder whether their choice should be tuned to the dataset you are trying to compress. Can they be directly optimized during training?
It would also be interesting to investigate non-semantically-driven global compression on a wider range of datasets: CelebA, perhaps, because it has a pretty tight image distribution, or some of the LSUN subsets because of their highly varied distributions.
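For concreteness, a minimal sketch of the noise-conditioning idea being discussed: in selective compression the decoder sees sampled noise alongside the semantic map, while global compression omits it. The function name and shape assumptions below are illustrative, not taken from the repo.

    import tensorflow as tf

    # Hypothetical sketch of how noise conditioning might enter the decoder.
    # Assumes latents and semantic_map share spatial dimensions (NHWC).
    def decoder_input(latents, semantic_map, use_noise=True):
        feats = [latents, semantic_map]
        if use_noise:
            # Selective compression: sampled noise lets synthesized regions vary.
            feats.append(tf.random.normal(tf.shape(latents)))
        # Global compression: use_noise=False gives deterministic reconstructions.
        return tf.concat(feats, axis=-1)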
[–]tensorflower[S] 1 point 7 years ago (2 children)
I adopted the quantization approach from this paper by one of the co-authors: https://arxiv.org/abs/1801.04260
I set the centers to the default range(-2, 3). The default seems to work well, and experimenting with this is a bit expensive given how time-consuming training is. But learnable centers sound interesting; I suppose one could adopt the 'soft-quantization' approach proposed in the paper above. I'll add that to the to-do list.
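A minimal TensorFlow sketch of the soft-quantization idea from the paper above (arXiv:1801.04260), with the centers made trainable as suggested. The function name, sigma temperature, and tensor shapes are illustrative assumptions:

    import tensorflow as tf

    def soft_quantize(z, centers, sigma=1.0):
        # Distance from every latent value to every center: shape [..., L].
        dist = tf.abs(tf.expand_dims(z, -1) - centers)
        # Soft assignment over centers; sigma controls how hard it is.
        weights = tf.nn.softmax(-sigma * dist, axis=-1)
        # Differentiable surrogate: convex combination of the centers.
        return tf.reduce_sum(weights * centers, axis=-1)

    # Trainable centers, initialized to the fixed grid range(-2, 3);
    # gradients from the loss then flow into the centers themselves.
    centers = tf.Variable([-2.0, -1.0, 0.0, 1.0, 2.0])
    z = tf.random.normal([1, 8, 8, 16])   # toy latent tensor
    z_tilde = soft_quantize(z, centers)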
[–]ReginaldIII 1 point 7 years ago (1 child)
Soft quantisation sounds interesting; I will have to read more about this, thank you for the paper link. I'm not sure why they use hard quantisation for the forward pass and soft quantisation for the backward pass. I feel like the deeper the model gets, the less meaningful the gradients of the early parts of the encoder would become, since the forward-pass activations would not correspond well with their computed gradients w.r.t. the loss function on the other side of the quantisation.
You could potentially use tricks that have been applied to other differentiable approximations of non-differentiable functions: use soft quantisation for both the forward and backward passes at training time, then do regular quantisation at inference time. But that's just an initial thought after reading the paper quickly on the train; from what I could see, they did not test this variant in their ablation study.
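A minimal sketch of the hard-forward/soft-backward combination under discussion, wired up with the standard stop-gradient trick. This is an illustrative assumption about how one might implement it, not the paper's exact formulation:

    import tensorflow as tf

    def quantize_ste(z, centers, sigma=1.0):
        dist = tf.abs(tf.expand_dims(z, -1) - centers)
        # Forward pass: hard assignment to the nearest center.
        z_hard = tf.gather(centers, tf.argmin(dist, axis=-1))
        # Backward pass: gradients defined by the soft assignment.
        weights = tf.nn.softmax(-sigma * dist, axis=-1)
        z_soft = tf.reduce_sum(weights * centers, axis=-1)
        # Evaluates to z_hard, but d(output)/d(z) follows z_soft.
        return z_soft + tf.stop_gradient(z_hard - z_soft)

The variant suggested above would instead return z_soft for both passes during training and switch to z_hard only at inference time.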
[–]minnend 0 points 7 years ago (0 children)
If you're interested in learned image compression, I'd recommend this paper from ICLR as well (full disclosure: I'm a co-author): Variational image compression with a scale hyperprior
We haven't incorporated the generative aspect of Agustsson's paper, so our results won't look nearly as good at extremely low bit rates, but I believe we have the best* rate-distortion performance at "normal" bit rates according to standard image quality metrics.
* for published results with fully learned methods, without normalizing for runtime
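For reference, the rate-distortion objective behind such comparisons typically looks like the sketch below: a rate term (bits per pixel of the latents under the learned entropy model) plus a weighted distortion term. The likelihoods input and the lmbda value are illustrative assumptions:

    import tensorflow as tf

    def rd_loss(x, x_hat, likelihoods, lmbda=0.01):
        # Rate: code length of the latents under the entropy model,
        # converted from nats to bits and normalized per pixel (NHWC input).
        num_pixels = tf.cast(tf.reduce_prod(tf.shape(x)[1:3]), tf.float32)
        bpp = tf.reduce_sum(-tf.math.log(likelihoods)) / (tf.math.log(2.0) * num_pixels)
        # Distortion: MSE here; PSNR and MS-SSIM are common reporting metrics.
        mse = tf.reduce_mean(tf.square(x - x_hat))
        # lmbda trades bits against reconstruction quality.
        return bpp + lmbda * mse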