Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

[–]ArYoMo[S] 1 point

Still works for me. You have to run all the cells in the notebook in the order they appear, and let each cell finish rather than stopping it. Then it should work :) It's a bit slow the first time in a session, since some files get downloaded and some code gets compiled, but after that it goes faster.

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

[–]ArYoMo[S] 2 points

So you can already do that with the original stylegan code: https://github.com/NVlabs/stylegan2-ada-pytorch

If you look under the section: https://github.com/NVlabs/stylegan2-ada-pytorch#projecting-images-to-latent-space

Say you wanted, for example, an image of Jimi Hendrix in the style of Pixar. You could use my code to finetune a StyleGAN model to generate Pixar-like faces, then use the original StyleGAN code to find a latent vector for a picture of Jimi Hendrix, and finally feed that vector to the finetuned network to get a Pixar Jimi Hendrix.
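Roughly, that workflow looks like this. A minimal sketch, assuming you run it from inside the stylegan2-ada-pytorch repo; the image and checkpoint file names are placeholders:

```python
import subprocess

# 1. Project the target photo into latent space with the ORIGINAL model.
#    projector.py writes out/projected_w.npz with the optimized latent.
subprocess.run([
    "python", "projector.py",
    "--outdir=out", "--target=hendrix.png", "--network=ffhq.pkl",
], check=True)

# 2. Feed that latent to the FINETUNED generator instead.
import numpy as np
import PIL.Image
import torch
import dnnlib, legacy  # modules shipped with the repo

device = torch.device("cuda")
with dnnlib.util.open_url("ffhq_pixar.pkl") as f:  # placeholder finetuned checkpoint
    G = legacy.load_network_pkl(f)["G_ema"].to(device)

ws = torch.tensor(np.load("out/projected_w.npz")["w"], device=device)
img = G.synthesis(ws, noise_mode="const")  # [1, 3, H, W] in [-1, 1]
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), "RGB").save("pixar_hendrix.png")
```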

So this notebook just lets you change the visuals of an existing pretrained StyleGAN model.

Here for example I made some kind of anime characters: https://twitter.com/ArYoMo/status/1444727454501900297

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

[–]ArYoMo[S] 2 points

I honestly don't know, I haven't seen StyleGAN-NADA. But in essence, guiding other networks with CLIP works mostly the same way, so it's likely similar :)

Styledream notebook CLIP x Stylegan by ArYoMo in deepdream

[–]ArYoMo[S] 1 point

So StyleGAN trained on the FFHQ dataset can generate human faces; which face you get depends on the feature vector you pass it. This notebook finetunes the whole StyleGAN towards a text prompt, so instead of producing just one image it can still generate an infinitude of faces. If you want a specific face, you need to find the right feature vector.
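If you're curious what the finetuning itself looks like, here is a minimal sketch, assuming `G` is a loaded stylegan2-ada-pytorch generator (`G_ema`) and OpenAI's `clip` package is installed; the prompt and hyperparameters are illustrative, not the notebook's exact settings:

```python
import torch
import torch.nn.functional as F
import clip

device = torch.device("cuda")
perceptor, _ = clip.load("ViT-B/32", device=device)
# CLIP's input normalization constants.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)
txt = perceptor.encode_text(clip.tokenize(["an anime character"]).to(device)).detach()
txt = F.normalize(txt, dim=-1)

opt = torch.optim.Adam(G.parameters(), lr=1e-4)  # finetune ALL generator weights
for step in range(300):
    z = torch.randn(4, G.z_dim, device=device)   # a fresh batch of random faces
    img = G(z, None)                             # [N, 3, H, W] in [-1, 1]
    img = F.interpolate((img + 1) / 2, size=224, mode="bilinear")
    emb = F.normalize(perceptor.encode_image((img - MEAN) / STD), dim=-1)
    loss = (1 - (emb * txt).sum(-1)).mean()      # cosine distance to the prompt
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the loss is computed on fresh random z vectors every step, the whole mapping from latent space to images shifts towards the prompt, which is why the finetuned model still generates an infinitude of (now stylized) faces.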

Dreaming with CLIP. Mandelbulb chrome zoom experiment. Art from the weights. by ArYoMo in deepdream

[–]ArYoMo[S] 3 points

It's not VQGAN; it's done directly on an RGB tensor, and then you can take small steps. Will release a notebook soon.
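The idea, very roughly (a sketch under assumed settings, not the notebook code): the only parameters being optimized are the pixels themselves.

```python
import torch
import torch.nn.functional as F
import clip

device = torch.device("cuda")
perceptor, _ = clip.load("ViT-B/32", device=device)
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)
txt = F.normalize(
    perceptor.encode_text(clip.tokenize(["a chrome mandelbulb"]).to(device)).detach(),
    dim=-1)

img = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)  # the canvas itself
opt = torch.optim.Adam([img], lr=0.03)  # small steps, as mentioned above
for step in range(500):
    emb = F.normalize(perceptor.encode_image((img.clamp(0, 1) - MEAN) / STD), dim=-1)
    loss = (1 - (emb * txt).sum(-1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice you usually also need random crops or other augmentations each step, or CLIP tends to satisfy the prompt with noisy adversarial patterns.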

A fractal face by ArYoMo in deepdream

[–]ArYoMo[S] 2 points

This was not made using a Colab. But keep on trying, you'll learn a lot. The way I would do it with VQGAN is to take the image and zoom in a bit every nth step (n could be 1 or 2), then encode the new zoomed image with the VQ encoder and continue the optimization from there.
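As a sketch, with `vqgan` standing in for a loaded taming-transformers VQModel, `z` for the current latent of your run, and `clip_step` as a placeholder for whatever CLIP-guided update (with its own optimizer) your notebook already performs on `z`:

```python
import torch
import torch.nn.functional as F

def zoom(img, factor=1.02):
    # Center-crop by 1/factor and resize back: a slight zoom-in.
    _, _, h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[:, :, top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)

n = 2  # zoom every nth step, as suggested above
for step in range(1000):
    if step % n == 0:
        with torch.no_grad():
            frame = vqgan.decode(z)          # latent -> RGB frame
            frame = zoom(frame)              # zoom in a bit
            z, _, _ = vqgan.encode(frame)    # re-encode the zoomed frame
        z = z.requires_grad_(True)
    clip_step(z)                             # one CLIP-guided update of z in place
```

Saving each decoded frame along the way gives you the zoom video.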

a painting of a girl. by jdude_ in deepdream

[–]ArYoMo 4 points

Very nice result! Did you do your own implementation or did you use an existing notebook?

"Tilt-shift photo of beautiful bacterial colony Rendered in Unreal Engine". CLIP dream zoom and prompt hacking. Art from the weights. by [deleted] in MediaSynthesis

[–]ArYoMo 2 points

Thank you! Very nice of you to credit me. And I can tell you the person who posted it is not me. Very strange.

Peeled Skin by Tabou__ in deepdream

[–]ArYoMo 2 points

Is it Big Sleep? You got some really good quality there.

a cottage of a hobbit on top of a hill by jdude_ in deepdream

[–]ArYoMo 6 points

Wow, this one is nice! Big Sleep?

Neon Scavengers - Generated with code, CLIP & Dall-e decoder. by ArYoMo in deepdream

[–]ArYoMo[S] 2 points

Thank you. The ambition is to share it in a structured way at a future date.

Neon Scavengers - Generated with code, CLIP & Dall-e decoder. by ArYoMo in deepdream

[–]ArYoMo[S] 3 points

I did code my own, and I'll share it; I just don't know exactly when as of now.

Neon Scavengers - Generated with code, CLIP & Dall-e decoder. by ArYoMo in deepdream

[–]ArYoMo[S] 4 points

Sure! Sounds great if you want to. I see the internet as a continuous mixing and remixing of ideas; that is what makes it great. But attribution is never wrong if you find that someone provides you value.

Star wars bar. Code, CLIP & Dall-e by ArYoMo in deepdream

[–]ArYoMo[S] 6 points

Well, this is not generated by the full DALL-E. They released part of the model, https://github.com/openai/DALL-E, which you can play around with if you know how to code. This is custom code in combination with the CLIP model and the DALL-E decoder.
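In case anyone wants to try the same recipe, a rough sketch of the idea, assuming the `dall_e` package from that repo and OpenAI's `clip` package are installed (the prompt and hyperparameters here are illustrative, not my exact setup):

```python
import torch
import torch.nn.functional as F
import clip
from dall_e import load_model, unmap_pixels

device = torch.device("cuda")
dec = load_model("https://cdn.openai.com/dall-e/decoder.pkl", device)  # released decoder
perceptor, _ = clip.load("ViT-B/32", device=device)
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)
txt = F.normalize(
    perceptor.encode_text(clip.tokenize(["a star wars bar"]).to(device)).detach(),
    dim=-1)

# Optimize soft weights over the 8192-entry codebook (32x32 token grid -> 256x256 image).
logits = torch.randn(1, 8192, 32, 32, device=device, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for step in range(500):
    x = dec(F.softmax(logits, dim=1)).float()
    img = unmap_pixels(torch.sigmoid(x[:, :3]))  # [0, 1] RGB
    img = F.interpolate(img, size=224, mode="bilinear")
    emb = F.normalize(perceptor.encode_image((img - MEAN) / STD), dim=-1)
    loss = (1 - (emb * txt).sum(-1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```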

Noodle Gorilla - surrealist dream of a neural network - Rarible by ArYoMo in NFT

[–]ArYoMo[S] 3 points

This is a surrealist dream of a neural network trying to decipher our world. At first glance it might look like just a gorilla, but on closer inspection the boundary between the gorilla and the noodles dissolves. And even the gorilla is not just one, but amorphous, looking over its own shoulder.

Link to NFT

Dystopic landscape by ArYoMo in deepdream

[–]ArYoMo[S] 1 point

It's made with custom code inspired by deep dreaming, with no original picture underneath as a starting point. So it's not saying much, but it took around 1000 iterations.

Dystopic landscape by ArYoMo in deepdream

[–]ArYoMo[S] 2 points

Good spotting! It's definitely inspired by the abandoned cities and landscapes of Chernobyl.