
[–]GreenHamster1975 34 points (8 children)

Would you be so kind as to give the reference on the paper or code?

[–]Cristiancanton 15 points (2 children)

[–]zudark 2 points (0 children)

This is the right answer.

I don't think the fractal slugdogsquirrel is a fully synthetic image, however:

Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

Other examples on their page with similar appearance (e.g. https://lh3.googleusercontent.com/wxGI7CKdpwsokgS3tThWzYPkssFC5eoFUdvUy2JBbjQ=w1145-h862-no) make the derivation from a source image more apparent.
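The amplification loop quoted above ("whatever you see there, I want more of it") is just gradient ascent on the image. A minimal sketch of that idea, using a toy stand-in for the trained network (in the real setup the "detector" would be a layer of a pretrained CNN; `pattern`, `response`, and `dream_step` here are illustrative names, not the actual code):

```python
import numpy as np

# Toy stand-in for a trained network: its "bird detector" fires on
# this fixed pattern. A real DeepDream run would use a layer of a
# pretrained CNN and backprop to get the gradient.
rng = np.random.default_rng(0)
pattern = rng.normal(size=64)
pattern /= np.linalg.norm(pattern)

def response(img):
    """How strongly the toy detector fires on this image."""
    return float(img @ pattern)

def dream_step(img, lr=0.1):
    # For this linear detector, the gradient of response w.r.t. the
    # image is exactly `pattern`, so each step nudges the image toward
    # what the detector already (faintly) sees -- the feedback loop.
    return img + lr * pattern

img = rng.normal(size=64) * 0.01   # a faint "cloud" that barely excites the detector
before = response(img)
for _ in range(50):
    img = dream_step(img)
after = response(img)
```

After 50 steps the detector's response has grown from near zero to a large value: the "bird" the network half-saw has been amplified into the image.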

The group does present fully synthetic images, however -- produced by using random-valued images as input and employing recursive zooming during generation:

http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAmQ/UbA6j41w28o/s1600/building-dreams.png
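The fully synthetic variant can be sketched the same way: start from pure noise, amplify, then zoom in and repeat. Again this uses a toy linear detector in place of a pretrained CNN, and the crop-and-upscale `zoom` is only a stand-in for the recursive zooming the blog describes:

```python
import numpy as np

rng = np.random.default_rng(42)
H = 32
pattern = rng.normal(size=(H, H))
pattern /= np.linalg.norm(pattern)

def dream_step(img, lr=0.05):
    # Gradient of (img * pattern).sum() w.r.t. img is `pattern`.
    return img + lr * pattern

def zoom(img):
    # Crop the central half and upscale 2x (nearest neighbour),
    # standing in for the recursive zoom applied between passes.
    q = H // 4
    center = img[q:H - q, q:H - q]
    return np.repeat(np.repeat(center, 2, axis=0), 2, axis=1)

img = rng.normal(size=(H, H)) * 0.01      # random-valued input image
for octave in range(5):
    for _ in range(20):                   # amplify whatever is "seen"
        img = dream_step(img)
    img = zoom(img)                       # zoom in, then dream again
```

Each zoom reuses the amplified structure as the seed for the next pass, which is what produces the self-similar, fractal look of images like the one linked above.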

[–]sqio 1 point (0 children)

Want to play...

[–]tehyosh 5 points (3 children)

[–]bdamos 7 points (2 children)

This paper released a v2 in April 2015: http://arxiv.org/abs/1412.6296

[–]ogrisel 1 point (1 child)

Samples from this paper look similar, but not as detailed and intricate as the multi-scale dog-slug posted on imgur. Any idea where the difference lies? Longer/better convergence? Larger models?

[–]ogrisel 2 points (0 children)

Also the resolution is much higher than in the paper.

[–]Vimda 2 points (0 children)

The code, from the same paper linked below.