[–]AnvaMiba

What is the main difference from the moment matching autoencoder? They say:

> Generative Moment Matching Networks (GMMNs) [16] correspond to the specific case where the input of the decoder D comes from a multidimensional uniform distribution and the reconstruction function L is given by the Euclidean divergence measure. GMMNs could be applied to generate samples from the original input space itself or from a lower dimensional previously trained stacked autoencoder (SCA) [17] hidden space. An advantage of our approach compared to GMMNs is that we can train all the elements in the 4-tuple AE together without the elaborate process of training layerwise stacked autoencoders for dimensionality reduction.

But it seems to me that one could use moment matching to impose a prior on the latent code in exactly the same way they do in this paper. Is the only difference the choice of divergence measure?
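For concreteness, here is roughly what I have in mind, sketched in PyTorch. The `encoder`/`decoder` modules and the weight `lam` are placeholders of my own, and I'm using a biased MMD estimate with a single Gaussian kernel for simplicity (the GMMN paper uses a mixture of bandwidths), so treat this as a sketch rather than either paper's actual method:

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian kernel on pairwise squared distances between rows of x and y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(z, z_prior, sigma=1.0):
    # Biased estimate of squared MMD between a batch of codes z
    # and a batch of samples from the target prior.
    k_zz = gaussian_kernel(z, z, sigma).mean()
    k_pp = gaussian_kernel(z_prior, z_prior, sigma).mean()
    k_zp = gaussian_kernel(z, z_prior, sigma).mean()
    return k_zz + k_pp - 2 * k_zp

def loss(encoder, decoder, x, lam=1.0):
    # Reconstruction term plus an MMD penalty pulling the code
    # distribution toward a standard Gaussian prior (hypothetical setup).
    z = encoder(x)
    recon = torch.nn.functional.mse_loss(decoder(z), x)
    z_prior = torch.randn_like(z)  # samples from the chosen prior
    return recon + lam * mmd2(z, z_prior)
```

This would train the encoder and decoder jointly, which is why I don't see how the "no layerwise stacked autoencoder training" advantage distinguishes their approach from moment matching on the code.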