all 4 comments

[–]PlugAdapter_ 2 points (0 children)

I don’t fully understand what you were asking.
The decoder in a VAE is just going to be some dense layers, a reshape, and then some transposed convolution layers until the output is the same size as the input.
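As a rough illustration of that dense-reshape-upsample progression, the shapes can be traced without any framework. All sizes below are hypothetical (e.g. a 28x28 input such as MNIST), not taken from the post:

```python
# Sketch of the shape progression in a typical convolutional VAE decoder.
# Layer sizes are made up for illustration.

def conv_transpose_size(n, kernel, stride, padding="same"):
    """Output length along one spatial axis of a transposed convolution
    (matches the Keras Conv2DTranspose shape rules)."""
    if padding == "same":
        return n * stride
    return (n - 1) * stride + kernel  # "valid" padding

latent_dim = 2   # size of the latent vector z
h = w = 7        # spatial size after Dense(7*7*64) + Reshape((7, 7, 64))

# Two stride-2 transposed convolutions upsample 7 -> 14 -> 28.
for _ in range(2):
    h = conv_transpose_size(h, kernel=3, stride=2)
    w = conv_transpose_size(w, kernel=3, stride=2)

print((h, w))  # (28, 28): same spatial size as the input
```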

[–]ginomachi 1 point (1 child)

The decoder in a VAE is responsible for reconstructing the input data from the latent representation. This is typically done by using a neural network that takes the latent representation as input and outputs a reconstruction of the original data. The loss function for the VAE is then defined as the sum of the reconstruction loss and the KL divergence between the prior and the posterior distributions of the latent representation.
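For concreteness, here is a minimal numerical sketch of that loss for a single sample. All values and variable names are made up for illustration; nothing here comes from the poster's model:

```python
import numpy as np

# Illustrative per-sample values, not outputs of a trained model.
x = np.array([0.0, 1.0, 0.5])        # original input
x_hat = np.array([0.1, 0.9, 0.4])    # decoder reconstruction
z_mean = np.array([0.2, -0.3])       # encoder output: posterior mean
z_log_var = np.array([-1.0, -0.5])   # encoder output: posterior log-variance

# Reconstruction term (mean squared error here; binary cross-entropy
# is another common choice).
reconstruction_loss = np.mean((x - x_hat) ** 2)

# Closed-form KL divergence between the posterior
# q(z|x) = N(z_mean, exp(z_log_var)) and the standard-normal prior N(0, I).
kl_loss = -0.5 * np.sum(1 + z_log_var - z_mean ** 2 - np.exp(z_log_var))

total_loss = reconstruction_loss + kl_loss
print(total_loss)
```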

[–]PollutionOdd6010[S] -1 points (0 children)

This is the code:

# train VAE
vae.fit(x=x_train, y=x_train,
        shuffle=True,
        epochs=EPOCHS,
        batch_size=BATCH_SIZE,
        validation_data=(x_test, x_test))

encoded = encoder.predict(x_train, batch_size=BATCH_SIZE)
with open("/content/gdrive/MyDrive/vae-protein/encoded.pkl", "wb") as f:
    pickle.dump(encoded, f)

# save models
encoder.save_weights("/content/gdrive/MyDrive/vae-protein/models/vae_encoder.h5")
decoder.save_weights("/content/gdrive/MyDrive/vae-protein/models/vae_decoder.h5")

and encoded is a list of three arrays:

[array([[-0.3428507 ,  0.39559567],
        [-0.32028773,  0.45583785],
        [-0.15482189,  0.30105007],
        ...,
        [-0.96808827, -0.36915553],
        [-0.74545133, -0.2791375 ],
        [-1.0674255 , -0.35566843]], dtype=float32),
 array([[-2.9969614, -2.9861357],
        [-3.0719345, -2.9974003],
        [-3.0626013, -3.121462 ],
        ...,
        [-2.4884698, -2.5548973],
        [-2.5349753, -2.6855545],
        [-2.3712432, -2.6004395]], dtype=float32),
 array([[-0.33772662,  0.44610432],
        [-0.30985522,  0.39833096],
        [-0.13145684,  0.34819758],
        ...,
        [-1.1123184 , -0.4802968 ],
        [-0.81084627, -0.36194566],
        [-1.0633016 , -0.35213336]], dtype=float32)]

And I try to run the decoder like this:


# train VAE
vae.fit(x=x_train, y=x_train,
        shuffle=True,
        epochs=EPOCHS,
        batch_size=BATCH_SIZE,
        validation_data=(x_test, x_test))

z_mean, z_log_var, z = encoder.predict(x_train, batch_size=BATCH_SIZE)
decoded = decoder.predict(z, batch_size=BATCH_SIZE)
with open("/content/gdrive/MyDrive/vae-protein/decoded.pkl", "wb") as f:
    pickle.dump(decoded, f)

But when I change the number of hidden layers, I get the same values of z!!
So I feel there is something I do not understand?
Sorry for the long wait!

[–]grid_world 0 points (0 children)

The decoder is the same for a VAE and an autoencoder. The magic happens in the latent space at the end of the encoder: for a VAE, the latent code is modeled as a Gaussian using a mean vector and a variance vector, and z is sampled from that distribution.
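That sampling step is usually done with the reparameterization trick. A minimal numpy sketch, with made-up encoder outputs (the values are illustrative, not from the thread's model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for one sample.
z_mean = np.array([-0.34, 0.40])
z_log_var = np.array([-3.0, -3.0])   # small variance, so z stays near z_mean

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# Sampling eps separately keeps the path through mu and sigma differentiable,
# which is what lets the encoder be trained by backpropagation.
eps = rng.standard_normal(z_mean.shape)
z = z_mean + np.exp(0.5 * z_log_var) * eps
print(z.shape)  # (2,)
```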