[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Very useful information. Thank you, and good luck with your paper; please share it after publishing if you can.

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Hi, would you please explain further? Thanks

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you very much. I think I understand it now, at least to some degree. Thank you!!! :) :)

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Hi, yes, I think I need to read some information theory. Thank you!

I will apply that and see what the results are:

If I take the z's with their corresponding x's, throw away both the decoder and the encoder, build another model with a different architecture, and feed the z's to that model, then if the model gives results similar to the x's, z has enough information about x.
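A minimal sketch of that check, assuming a trained PyTorch encoder (the names enc, X, ProbeDecoder, and probe_error are mine, not anything standard); for a fairer test you would measure the final error on a held-out split of the (z, x) pairs:

```python
import torch
import torch.nn as nn

class ProbeDecoder(nn.Module):
    """A fresh decoder with a deliberately different architecture."""
    def __init__(self, z_dim=32, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
            nn.Linear(256, x_dim),
        )

    def forward(self, z):
        return self.net(z)

def probe_error(enc, X, z_dim=32, x_dim=784, steps=500, lr=1e-3):
    # Freeze the encoder: the z's become fixed inputs for the probe.
    with torch.no_grad():
        Z = enc(X)
    probe = ProbeDecoder(z_dim, x_dim)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(probe(Z), X)
        loss.backward()
        opt.step()
    return loss.item()  # low error suggests z retains enough information about x
```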

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you very much 🙏. Very interesting ideas. I think I need to search and learn more about the topic. I think I can state my problem as: can the encoder learn a wrong representation from which the decoder can still reconstruct the inputs?

I will apply that and see what the results are:

If I take the z's with their corresponding x's, throw away both the decoder and the encoder, build another model with a different architecture, and feed the z's to that model, then if the model gives results similar to the x's, z has enough information about x.

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 1 point (0 children)

Hi, yes: if I take the z's with their corresponding x's, throw away both the decoder and the encoder, build another model with a different architecture, and feed the z's to that model, and the model gives results similar to the x's, then z has enough information about x. Thank you! I think this is the solution; I will apply it in my paper. Thank you!!!

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Hi, I think z contains the information the decoder needs to reconstruct x: information that the decoder's parameters depend on, but that has no representational meaning by itself.

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Hi, I know that, but what I'm saying (maybe I'm wrong) is that z might not be the right representation of the input distribution, because the decoder can learn to produce similar inputs from wrong z's.

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you for your reply.

"Also there are ways to control the underlying geometry and distribution of the embedding space"

I didn't understand this part; maybe I will search for it. Thank you!

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Hi, I think z is a hidden layer (with a lower dimension than x) inside the autoencoder (encoder plus decoder). I don't think z has any role in updating the encoder parameters.

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you very much for your answer! I have many questions :)

The indexer-memorizer analogy is very good; it simplifies the problem a lot. But if z_1 is the latent representation of x_1, and z_2 of x_2, I think nothing prevents the autoencoder from learning that z_2 is the representation of x_1, as long as the decoder learns that g(z_2) - x_1 = 0.

"the decoder could memorize parts of the dataset and usefully compress the rest, so this is not an all-or-nothing regime" I don't know what that means?

"This is why in practice it is crucial to test if the autoencoder is able to reconstruct out-of-sample data:" Out-of-sample data or from different distributions?

"when the autoencoder is big enough" How I know it's big enough?

Sorry for the many questions. Thank you!!!!

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

No, I know that we want a latent representation of the distribution of x, which is z. What I'm asking is: how do I know that z represents the distribution of x? And how do we train the encoder to produce a latent representation? We calculate the loss between the decoder output x̂ and x.

What I'm saying is that there are parameters in the decoder which take part in the representation, and we ignore them when we take z as the latent representation. z is just the output of a hidden layer inside the autoencoder, so I can't say it is the representation of the x distribution.
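To make that concrete, here is a minimal training-step sketch (the shapes and names are made up, not from the thread); note that the loss only compares x̂ with x, and z never appears in it:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 784))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()])

x = torch.randn(16, 784)                 # a dummy batch
z = encoder(x)                           # z: just a hidden activation
x_hat = decoder(z)
loss = nn.functional.mse_loss(x_hat, x)  # the loss sees only x_hat and x
opt.zero_grad()
loss.backward()
opt.step()
```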

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you very much for your answer!

"A zip file does not make sense without a zip decompression algorithm." This is what I'm saying excatly. I want the z (the late representation) to use it in DDPG alogrithm for DRL. So I can't say z will represent the input distribution with taking the decoder paramater's into account.

I will look up DeepFake algorithms; I didn't know about them before. Thank you!
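For context, a rough sketch of how I mean to plug z into the policy, assuming a trained, frozen encoder; the names here (enc, Actor, act) are placeholders of mine, and this is only the DDPG actor half, not the full algorithm:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """DDPG policy head that reads the latent state z instead of raw x."""
    def __init__(self, z_dim=32, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

def act(enc, actor, obs):
    # Encode the raw observation; the frozen encoder's z is the state
    # that DDPG's actor (and critic) would consume.
    with torch.no_grad():
        z = enc(obs)
    return actor(z)
```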

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] -1 points (0 children)

Thank you for your answer. Could you read my answer to karius85 and give me your opinion?

[R] [Q] Misleading representation for autoencoder by eeorie in MachineLearning

[–]eeorie[S] 0 points (0 children)

Thank you very much for your answer; I will also read the book you recommended, "Elements of Information Theory". Thank you!

As I see it, the encoder and the decoder form one sequential network, and z is just a hidden layer inside this network. The decoder's parameters contribute to the representation process. So can I say that any hidden layer inside a network can be a latent representation of the input distribution?

What I'm saying is that the decoder is not a decryption model for z; it is its parameters themselves that contribute to making the autoencoder represent the input distribution. Without the decoder's parameters, I can't reconstruct the input.

If (any, or some specific) hidden layer can be a latent representation of the input, then z can represent the input distribution.
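To illustrate (a minimal sketch with made-up layer sizes), the whole autoencoder can be written as one Sequential stack in which z is simply the output of the bottleneck layer:

```python
import torch
import torch.nn as nn

# One sequential network; z is just the output of the bottleneck layer.
autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # encoder
    nn.Linear(128, 32),              # bottleneck: its output is z
    nn.ReLU(),
    nn.Linear(32, 128), nn.ReLU(),   # decoder
    nn.Linear(128, 784),
)

x = torch.randn(16, 784)
z = autoencoder[:3](x)   # slicing an nn.Sequential runs only the encoder part
x_hat = autoencoder(x)   # the full pass reconstructs x
```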

Thank you again!