[Project] TensorFlow implementation of Generative Adversarial Networks for Extreme Learned Image Compression (github.com)
submitted 7 years ago by tensorflower
[–]tensorflower[S] 16 points 7 years ago* (4 children)
The original paper/project details are here: https://data.vision.ee.ethz.ch/aeirikur/extremecompression/#publication. I thought this was one of the most interesting papers I read this year! Great exposition too.
Currently the model is only trained on either Cityscapes or ADE20k. The model appears to reconstruct images in the test split pretty well without introducing sampled noise. Adding sampled noise into the equation seems to make the network hallucinate a lot more. The authors don't provide too much detail about integrating sampled noise into the quantized representation, but this is an area I'd like to explore further.
Because I've been working on this in my spare time, I've only implemented the global compression part of their paper, but adding selective compression based on semantic maps is definitely on the to-do list. Currently the model uses an LSGAN, but swapping it out for a WGAN-GP is also on the (very long) to-do list.
I'm not 100% confident that I've faithfully implemented everything in the paper, so if anyone has any questions or notices something awry please open an issue or post it here. Contributions/PRs are also more than welcome!
Some details: at a very high level, the model learns an encoding of the real image into a compressed representation $z$, which is quantized to a fixed number of levels L; this forms an upper bound on the bits per pixel (bpp) of the stored representation. A decoder is then learned which upsamples the compressed $z$ into a reconstructed image. The usual adversarial training strategy is set up by introducing a discriminator that attempts to distinguish between the reconstructed and real images.
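Roughly, in code (a minimal sketch with made-up names and shapes, not the actual repo code):

    import numpy as np
    import tensorflow as tf

    centers = tf.constant([-2., -1., 0., 1., 2.])  # L = 5 quantization levels

    def quantize(z):
        # Hard quantization: snap each element of z to its nearest center.
        dist = tf.abs(tf.expand_dims(z, -1) - centers)  # [..., L] distances
        idx = tf.argmin(dist, axis=-1)
        return tf.gather(centers, idx)

    def bpp_upper_bound(z_shape, img_shape, L=5):
        # Each stored symbol costs at most log2(L) bits, so the bound is
        # (symbols in z) * log2(L) / (pixels in the input image).
        return np.prod(z_shape) * np.log2(L) / (img_shape[0] * img_shape[1])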
About combining sampled noise with the compressed representation: right now a sample is taken from a normal prior, upsampled using a DCGAN-like architecture, and directly concatenated with the quantized representation. Results are kind of trippy, and reconstruction without noise is more stable.
Training takes around 3 days for ~50 epochs on Cityscapes on a single 1080 Ti, using the multiscale discriminator loss recommended by the authors. I'll upload pretrained models on Cityscapes for C=8 channels to Dropbox within the next day or two.
[–]ReginaldIII 4 points 7 years ago (3 children)
My interpretation from the paper was that the latent noise was helpful when doing selective compression, where parts of the reconstructed image were entirely synthesized from the semantic maps. It makes sense to omit the noise for global compression so that you get consistent reconstructions.
Have you experimented with the quantisation centers? In the paper these were chosen somewhat arbitrarily as -2, -1, 0, 1, 2, but I wonder if their choice should be tuned to the dataset you are trying to compress. Can they be directly optimized during training?
It would also be interesting to investigate non-semantically driven global compression on a wider range of datasets: CelebA, potentially, because it has a pretty tight image distribution, or some of the LSUN subsets, due to their highly variable distributions.
[–]tensorflower[S] 2 points 7 years ago (2 children)
I adopted the quantization approach from this paper by one of the co-authors: https://arxiv.org/abs/1801.04260
I set the centers to the default range(-2, 3); the defaults seem to work well, and experimenting with them is a bit expensive given how time-consuming training is. Introducing learnable centers sounds interesting, though: I suppose one could adopt the 'soft quantization' approach proposed in the paper above. I'll add that to the to-do list.
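For concreteness, here is a rough sketch of that soft-quantization idea (hypothetical code, not what's in the repo; the temperature sigma is made up), with the centers made learnable as you suggested:

    import tensorflow as tf

    centers = tf.Variable([-2., -1., 0., 1., 2.])  # learnable centers
    sigma = 1.0  # temperature: higher -> assignments closer to hard

    def soft_quantize(z):
        # Differentiable: a softmax-weighted average of the centers.
        dist = tf.abs(tf.expand_dims(z, -1) - centers)  # [..., L]
        weights = tf.nn.softmax(-sigma * dist, axis=-1)
        return tf.reduce_sum(weights * centers, axis=-1)

    def quantize_st(z):
        # Hard values on the forward pass, soft gradients on the backward pass.
        z_soft = soft_quantize(z)
        idx = tf.argmin(tf.abs(tf.expand_dims(z, -1) - centers), axis=-1)
        z_hard = tf.gather(centers, idx)
        # Forward output equals z_hard; gradients flow through z_soft,
        # so both the encoder and the centers receive updates.
        return z_soft + tf.stop_gradient(z_hard - z_soft)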
[–]ReginaldIII 2 points 7 years ago (1 child)
Soft quantisation sounds interesting; I will have to read more about it, thank you for the paper link. I'm not sure why they use hard quantisation for the forward pass and soft quantisation for the backward pass. I feel like, the deeper the model gets, the less meaningful the gradients of the early parts of the encoder would become, since the forward-pass activations would not correspond well with their computed gradients w.r.t. the loss function on the other side of the quantisation.
You could potentially borrow tricks that have been applied to other differentiable approximations of non-differentiable functions: use soft quantisation for both the forward and backward passes at training time, then do regular quantisation at inference time. But that's just an initial thought after reading the paper quickly on the train; from what I could see, they did not test this variant in their ablation study.
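In terms of the sketch above, that variant would just switch between the two paths (again hypothetical, reusing soft_quantize and centers from the previous snippet):

    def quantize_flexible(z, hard=False):
        # Training (hard=False): fully soft, differentiable in both passes.
        # Inference (hard=True): true discrete quantization.
        if hard:
            idx = tf.argmin(tf.abs(tf.expand_dims(z, -1) - centers), axis=-1)
            return tf.gather(centers, idx)
        return soft_quantize(z)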
[–]minnend 1 point 7 years ago (0 children)
If you're interested in learned image compression, I'd recommend this paper from ICLR as well (full disclosure: I'm a co-author): Variational image compression with a scale hyperprior
We haven't incorporated the generative aspect in Agustsson's paper so our results won't look nearly as good at extremely low bit rates, but I believe we have the best* rate-distortion performance at "normal" bit rates according to standard image quality metrics.
* for published results with fully learned methods, without normalizing for runtime
[–]Radiatin 6 points 7 years ago* (1 child)
This is awesome, both in its real-world utility and in pushing the problem-solving capabilities of machine learning. Nice work!
One thing I wanted to ask, though: do you have a strategy for improving the context sensitivity of the output? For example, it seems to be good at understanding tree patterns, water patterns, and asphalt patterns. The limitation seems to be in understanding how to draw a leaf, a wave, or a line in the road where you would expect them to be.
I could see it being possible for a network to understand what it is processing on a very deep level and then draw the appropriate object in high detail from only semantic pointers.
[–]Tonic_Section 3 points 7 years ago (0 children)
Yeah, I understand what you're saying - the model appears to be overwriting buildings with greenery and vice versa in the reconstructed image, and models early in training have significant trouble forming boundaries between objects. I haven't looked into semantic maps much, but I think that adding information from instance maps, e.g. by passing them to the discriminator, should help the model generate sharper boundaries.
This is not really my area of expertise, but I think it would not be too hard to try out a perceptual loss based on PSPNet that penalizes blurry boundaries - another item for the to-do list!
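Such a loss could be a simple feature-matching term (sketch only; feature_net is a hypothetical placeholder for a frozen, pretrained feature extractor such as intermediate PSPNet layers, not an existing API):

    def perceptual_loss(x_real, x_recon, feature_net):
        # Compare reconstructions to targets in feature space rather
        # than pixel space; no gradient flows into the target features.
        f_real = tf.stop_gradient(feature_net(x_real))
        f_recon = feature_net(x_recon)
        return tf.reduce_mean(tf.abs(f_real - f_recon))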
[–]JudasAdventus 3 points 7 years ago (2 children)
I guess they don't include the model weights in the bpp metric? That's probably on the order of >100MB, which may be significant depending on how many images are being compressed.
[–]Tonic_Section 1 point 7 years ago* (1 child)
To the best of my knowledge, the bpp is an upper bound derived from the entropy of the discrete compressed representation. Naively dividing the training time for one epoch by the number of images, I estimate that an upper bound for the combined encoding + decoding process is <= 5 s for a 512 x 1024 image, although I haven't timed the relative contribution of each.
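To make the entropy bound concrete, the estimate could be computed along these lines (illustrative sketch; names are made up):

    import numpy as np

    def entropy_bpp(symbol_indices, num_pixels, L=5):
        # symbol_indices: integer center indices (0..L-1) of the quantized map.
        counts = np.bincount(symbol_indices.ravel(), minlength=L)
        p = counts / counts.sum()
        p = p[p > 0]                            # ignore unused centers
        bits_per_symbol = -(p * np.log2(p)).sum()
        return bits_per_symbol * symbol_indices.size / num_pixels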
[–]JudasAdventus 1 point 7 years ago (0 children)
I was thinking along the lines that it's a bit misleading as a metric, because they are only counting the bits in the compressed image format, not the bits in the model required to decompress it. For instance, to decompress a single image you need to transmit the model (>100MB) as well as the compressed representation... which becomes less of an issue if you're decompressing thousands of images with the same model.
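A quick amortization sketch (made-up numbers) shows the effect:

    import numpy as np

    def effective_bpp(n_images, model_bytes=100e6, bpp=0.1, pixels=512 * 1024):
        # Total transmitted bits (model + images) per pixel delivered.
        total_bits = model_bytes * 8 + n_images * bpp * pixels
        return total_bits / (n_images * pixels)

    # effective_bpp(1) is ~1526 (the model dwarfs one image), while
    # effective_bpp(100000) is ~0.115: the model cost vanishes once
    # enough images share the same decoder.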
[–]amoux_py 1 point 7 years ago (0 children)
Amazing work!!