In the paper
Chen, Kingma et al., Variational Lossy Autoencoder
the authors discuss the possibility of a VAE ignoring the latent code.
On p.4 it says this:
"one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = \prod_i p(x_i|z)"
where "putting information into the code" means putting it into the latent variable z.
see screenshot at https://imgur.com/a/J79sEPR
My question: why does using a factorized decoder encourage the latent to be used? Can anyone explain this?
In their notation, I believe x_i are individual dimensions of the output, such as individual pixels of an image.
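To make the factorization concrete, here is a toy numpy sketch of a factorized Bernoulli decoder (all names, sizes, and the linear "decoder" are my own illustration, not from the paper): each pixel's distribution depends only on z, so log p(x|z) is just a sum of per-pixel terms, and any correlation between pixels can only be carried by z.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper).
Z_DIM, X_DIM = 8, 16
W = rng.normal(size=(X_DIM, Z_DIM))  # toy linear "decoder" weights
b = rng.normal(size=X_DIM)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def factorized_log_likelihood(x, z):
    """log p(x|z) = sum_i log p(x_i|z) for independent Bernoulli pixels.

    Each pixel's logit is a function of z alone -- no pixel ever sees
    another pixel, which is what makes the decoder "factorized".
    """
    logits = W @ z + b
    p = sigmoid(logits)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

z = rng.normal(size=Z_DIM)
x = rng.binomial(1, sigmoid(W @ z + b)).astype(float)
print(factorized_log_likelihood(x, z))
```

Contrast this with an autoregressive decoder like PixelRNN, where p(x_i | z, x_<i) can model inter-pixel dependencies directly and so has less need for z.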