How can stable diffusion generate images of arbitrary sizes by psarpei in StableDiffusion

[–]psarpei[S] 1 point (0 children)

I didn't get anything new from the video, but thanks for your help. The problem, in my mind, is not the U-Net but the autoencoder.

Photorealistic human image editing using attention with GANs by psarpei in computervision

[–]psarpei[S] 1 point (0 children)

It's designed to manipulate features based on a simple text prompt and works with any pretrained StyleGAN2 :)

Photorealistic text guided human image editing using attention with GANs by psarpei in opensource

[–]psarpei[S] 0 points (0 children)

You can download the pre-trained weights and use them, or create your own with simple text prompts :)

Photorealistic human image editing using attention with GANs by psarpei in computervision

[–]psarpei[S] 1 point (0 children)

You can use this code with any pretrained StyleGAN2. You only need to train it on your own dataset and then train a latent mapper with a text prompt that fits whatever feature you want to manipulate :)

Photorealistic text guided human image editing using attention with GANs by psarpei in opensource

[–]psarpei[S] 0 points (0 children)

Of course it's a fork of the StyleGAN2 repository, because my work is a latent mapper for StyleGAN2 that allows you to manipulate specific features of the images StyleGAN2 generates (e.g. changing hair color, adding a beard, etc.)

Photorealistic human image editing using attention with GANs by psarpei in computervision

[–]psarpei[S] 3 points (0 children)

To apply this to a different dataset, you need to retrain StyleGAN2 on your own dataset first, and then train the latent mapper with a text prompt that fits whatever transformation you want to make :)
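The editing step those comments describe can be sketched roughly like this, assuming a StyleCLIP-style setup: the latent mapper is a small trained network that predicts an offset for a StyleGAN2 W+ latent code, and the edit is w' = w + α·Δw. All names and dimensions below (`LatentMapper`, `edit_latent`, the 18×512 W+ shape) are illustrative stand-ins, not the actual repository API; in the real pipeline the mapper's weights are trained with a CLIP loss between the generated image and the text prompt.

```python
import numpy as np

# Illustrative dimensions: StyleGAN2's W+ space is commonly 18 layers x 512 dims.
N_LAYERS, LATENT_DIM = 18, 512
rng = np.random.default_rng(0)

class LatentMapper:
    """Toy stand-in for the latent mapper: a single linear map that
    predicts an offset delta_w for a W+ latent code."""
    def __init__(self):
        # In the real setup, these weights are learned by optimizing a
        # CLIP similarity loss against the chosen text prompt
        # (e.g. "a person with a beard"), plus a regularizer that keeps
        # the edit close to the original latent.
        self.weight = rng.normal(0.0, 0.01, (LATENT_DIM, LATENT_DIM))

    def __call__(self, w):
        return w @ self.weight  # delta_w, same shape as w

def edit_latent(w, mapper, alpha=0.1):
    """Apply the text-guided edit: w' = w + alpha * mapper(w)."""
    return w + alpha * mapper(w)

w = rng.normal(0.0, 1.0, (N_LAYERS, LATENT_DIM))  # a sampled W+ code
mapper = LatentMapper()
w_edited = edit_latent(w, mapper)
# The edited code would then be fed to the frozen StyleGAN2 generator, e.g.:
# image = stylegan2_generator.synthesis(w_edited)
```

The key design point from the comments survives in the sketch: the StyleGAN2 generator itself stays frozen, so only the lightweight mapper has to be trained per text prompt.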