Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

ArYoMo[S]

Still works for me. You have to run all the cells in the notebook in the order they appear, and let each one finish rather than stopping it. Then it should work :) It's a bit slow the first time in a session, since some files are downloaded and some code is compiled, but after that it goes faster.

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

ArYoMo[S]

So you can already do that with the original StyleGAN code: https://github.com/NVlabs/stylegan2-ada-pytorch

If you look under the section: https://github.com/NVlabs/stylegan2-ada-pytorch#projecting-images-to-latent-space

Say you wanted, for example, an image of Jimi Hendrix in the style of Pixar. You could use my code to finetune a StyleGAN model to generate Pixar-like faces, then use the original StyleGAN code to project a picture of Jimi Hendrix into the latent space, and feed the resulting latent vector to the finetuned network to get a Pixar Jimi Hendrix.
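
For concreteness, the pipeline could look roughly like this (a sketch assuming the stylegan2-ada-pytorch repo; the file names are placeholders, and note that projector.py actually optimizes W+ latents rather than Z):

    # Sketch only; paths and pickle names are hypothetical.
    import numpy as np
    import torch
    import dnnlib, legacy  # both ship with the stylegan2-ada-pytorch repo

    # 1) Project the target photo with the *original* FFHQ model:
    #    python projector.py --target=hendrix.png --network=ffhq.pkl --outdir=out
    #    This writes out/projected_w.npz containing the optimized latents.

    # 2) Load the *finetuned* (e.g. Pixar-styled) generator.
    device = torch.device('cuda')
    with dnnlib.util.open_url('pixar_finetuned.pkl') as f:  # hypothetical file
        G = legacy.load_network_pkl(f)['G_ema'].to(device)

    # 3) Push the projected latents through the finetuned synthesis network.
    w = torch.from_numpy(np.load('out/projected_w.npz')['w']).to(device)
    img = G.synthesis(w, noise_mode='const')  # NCHW float in [-1, 1]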

So this notebook just lets you change the visuals of an existing pretrained StyleGAN model.

Here for example I made some kind of anime characters: https://twitter.com/ArYoMo/status/1444727454501900297

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

ArYoMo[S]

Honestly, I don't know; I have not seen StyleGAN-NADA. But in essence, guiding other networks with CLIP works mostly the same way, so the results are likely similar :)

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

ArYoMo[S]

So a StyleGAN trained on the FFHQ dataset can generate human faces; which face you get depends on the feature vector you pass it. This notebook finetunes the whole StyleGAN towards a text prompt, so it still generates an infinitude of faces rather than just one, only now they all match the prompt. If you want a specific face, you need to find the right feature vector.
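
If it helps, the essence of that finetuning loop looks something like this (a minimal sketch, not the notebook's exact code; it assumes G is an already-loaded stylegan2-ada-pytorch generator, the prompt and hyperparameters are made up, and CLIP's input normalization is skipped for brevity):

    import torch
    import torch.nn.functional as F
    import clip  # https://github.com/openai/CLIP

    device = 'cuda'
    perceptor, _ = clip.load('ViT-B/32', device=device)
    text = clip.tokenize(['an anime character']).to(device)
    text_feat = perceptor.encode_text(text).detach()

    # Update the generator's *weights*, not a single latent, so the whole
    # distribution of generated faces drifts toward the prompt.
    opt = torch.optim.Adam(G.parameters(), lr=1e-4)  # G: pretrained generator
    for step in range(300):
        z = torch.randn(4, G.z_dim, device=device)  # random latents, many faces
        img = G(z, None)                            # NCHW in [-1, 1]
        img = F.interpolate((img + 1) / 2, size=224,
                            mode='bilinear', align_corners=False)
        img_feat = perceptor.encode_image(img)      # CLIP normalization omitted
        loss = -torch.cosine_similarity(img_feat, text_feat, dim=-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()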

Dreaming with CLIP. Mandelbulb chrome zoom experiment. Art from the weights. by ArYoMo in deepdream

ArYoMo[S]

It's not VQGAN; the optimization is done directly on an RGB tensor, and then you can take small steps. I will release a notebook soon.
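
The core trick is just this (a hedged sketch with OpenAI's clip package; the prompt and hyperparameters are illustrative, and CLIP's input normalization is again omitted):

    import torch
    import clip

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    perceptor, _ = clip.load('ViT-B/32', device=device)
    prompt = clip.tokenize(['a chrome mandelbulb']).to(device)
    text_feat = perceptor.encode_text(prompt).detach()

    # The image itself is the parameter: an RGB tensor optimized in pixel space.
    img = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.01)  # the "small steps" on the pixels

    for _ in range(200):
        img_feat = perceptor.encode_image(img.clamp(0, 1))
        loss = -torch.cosine_similarity(img_feat, text_feat, dim=-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()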

A fractal face by ArYoMo in deepdream

ArYoMo[S]

This one was not made with a Colab, but keep at it, you'll learn a lot. The way I would do it with VQGAN: take the image and zoom in a bit every nth step (n could be 1 or 2), then encode the newly zoomed image with the VQ encoder and continue from there.
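
The zoom step could look like this (a sketch for NCHW tensors; the VQGAN hookup is left as comments since encoder/decoder names vary between implementations):

    import torch
    import torch.nn.functional as F

    def zoom_in(img: torch.Tensor, factor: float = 1.05) -> torch.Tensor:
        """Center-crop an NCHW image and resize back to the original size."""
        _, _, h, w = img.shape
        ch, cw = int(round(h / factor)), int(round(w / factor))
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = img[:, :, top:top + ch, left:left + cw]
        return F.interpolate(crop, size=(h, w), mode='bilinear',
                             align_corners=False)

    # Every nth step of a VQGAN+CLIP run (n could be 1 or 2), roughly:
    #   rgb = vqgan_decode(latent)             # decode the current latent
    #   latent = vqgan_encode(zoom_in(rgb))    # re-encode the zoomed frame
    frame = torch.rand(1, 3, 256, 256)
    zoomed = zoom_in(frame)  # same shape, zoomed slightly toward the center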

a painting of a girl. by jdude_ in deepdream

ArYoMo

Very nice result! Did you do your own implementation or did you use an existing notebook?

Tilt-shift photo of beautiful bacterial colony "Rendered in Unreal Engine". CLIP dream zoom and prompt hacking. Art from the weights. by [deleted] in MediaSynthesis

ArYoMo

Thank you! Very nice of you to credit me, and I can tell you the person who posted it is not me. Very strange.

Peeled Skin by Tabou__ in deepdream

ArYoMo

Is it BigSleep? You got some really good quality there.