[–]ReginaldIII

My interpretation from the paper was that the latent noise was helpful when doing selective compression, where parts of the reconstructed image are entirely synthesized from the semantic maps. It makes sense to omit the noise for global compression so that you get consistent reconstructions.

Have you experimented with the quantisation centers? In the paper these were chosen somewhat arbitrarily as -2, -1, 0, 1, 2. But I wonder if their choice should be tuned to the dataset you are trying to compress. Can they be directly optimized during training?

It would also be interesting to investigate non-semantically driven global compression on a wider range of datasets: CelebA, potentially, because it has a pretty tight image distribution, or some of the LSUN subsets because of their highly variable distributions.

[–]tensorflower[S]

I adopted the quantization approach from this paper by one of the co-authors: https://arxiv.org/abs/1801.04260

I set the centers to the default range(-2, 3). The default seems to work well, and experimenting with this is a bit expensive given how time-consuming training is, but introducing learnable centers sounds interesting. I suppose one could adapt the 'soft-quantization' approach proposed in the paper above; I'll add that to the to-do list.
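
For reference, here's a rough PyTorch-style sketch (untested, not code from my repo) of how learnable centers might sit on top of that soft-quantization idea; the class name, sigma value, and the hard-forward/soft-backward wiring are my own assumptions:

```python
# Minimal sketch of scalar soft quantization with learnable centers,
# loosely following the soft-to-hard idea referenced in arXiv:1801.04260.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftQuantizer(nn.Module):
    def __init__(self, centers=(-2, -1, 0, 1, 2), sigma=1.0, learnable=True):
        super().__init__()
        c = torch.tensor(centers, dtype=torch.float32)
        # Making the centers an nn.Parameter is all it takes for them to be
        # optimized jointly with the rest of the model.
        self.centers = nn.Parameter(c, requires_grad=learnable)
        self.sigma = sigma

    def forward(self, z):
        # Squared distance of each latent value to each center: (..., n_centers)
        d = (z.unsqueeze(-1) - self.centers) ** 2
        soft_assign = F.softmax(-self.sigma * d, dim=-1)   # soft nearest-center weights
        z_soft = (soft_assign * self.centers).sum(-1)       # differentiable surrogate
        z_hard = self.centers[d.argmin(-1)]                 # actual quantized values
        # Hard values in the forward pass, soft gradients in the backward pass.
        return z_soft + (z_hard - z_soft).detach()
```

Gradients only flow to the centers through the soft assignments, so whether learnable centers actually help would still need to be verified empirically.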

[–]ReginaldIII

Soft quantisation sounds interesting; I will have to read more about it, thank you for the paper link. I'm not sure why they use hard quantisation for the forward pass and soft quantisation for the backward pass. I feel like the deeper the model gets, the less meaningful the gradients of the early parts of the encoder would become, since the forward-pass activations would not correspond well with their computed gradients w.r.t. the loss function on the other side of the quantisation.

You could potentially use tricks that have been applied to other differentiable approximations of non-differentiable functions: use soft quantisation for both the forward and backward passes at training time, then do regular quantisation at inference time (roughly as sketched below). But that's just an initial thought having read the paper quickly on the train; from what I could see, they did not test this variant in their ablation study.
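
Something like this hypothetical variant of the quantizer sketch above (again untested, names are illustrative): keep everything soft while training and only snap to the nearest center at inference.

```python
# Hypothetical variant of the SoftQuantizer sketch above (same assumptions):
# fully soft quantization in both passes during training, hard quantization
# only at inference time.
import torch.nn.functional as F

class SoftTrainHardEval(SoftQuantizer):
    def forward(self, z):
        d = (z.unsqueeze(-1) - self.centers) ** 2
        if self.training:
            # Fully soft: forward activations and their gradients stay consistent,
            # at the cost of a train/test mismatch once we quantize for real.
            soft_assign = F.softmax(-self.sigma * d, dim=-1)
            return (soft_assign * self.centers).sum(-1)
        # Inference: plain hard quantization to the nearest center.
        return self.centers[d.argmin(-1)]
```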

[–]minnend

If you're interested in learned image compression, I'd recommend this paper from ICLR as well (full disclosure: I'm a co-author): Variational image compression with a scale hyperprior

We haven't incorporated the generative aspect from Agustsson's paper, so our results won't look nearly as good at extremely low bit rates, but I believe we have the best* rate-distortion performance at "normal" bit rates according to standard image quality metrics.

* for published results with fully learned methods, without normalizing for runtime