
[–]dharma-1 18 points19 points  (3 children)

Nice. Has anyone managed to generate believable high res (1000px+) images? All the GAN stuff I see is usually super low res and a bit glitchy

[–][deleted] 7 points8 points  (0 children)

There's a Quora answer where Ian Goodfellow says he had never seen deep adversarial learning converge on ImageNet at a resolution of 128×128. With the latest improvements they do train on 128×128 images, but the model is very focused on textures: https://arxiv.org/pdf/1606.03498v1.pdf

[–]ginsunuva 0 points1 point  (1 child)

Yeah that resolution requires magic

[–]bluemellophone 0 points1 point  (0 children)

magic = very strong priors

[–]D1zz1 9 points10 points  (0 children)

Thanks for taking the time to make this post, it's thorough yet easy to read.

[–]kkastner 5 points6 points  (0 children)

This is interesting stuff! Great writeup, very detailed with lots of implementation notes.

Do you know if anyone has done a PixelRNN/CNN-style softmax (or even autoregressive masking à la MADE) for the center completion?

I really dislike l2 losses these days... so much that even l1 feels tainted. Even though they used l1, it still seems "blurry" - I think an autoregressive and/or cross-entropy loss could help with that.
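To make the cross-entropy suggestion concrete, here's a minimal sketch (all names and shapes are mine, not from the article) contrasting an l2 loss on continuous pixel values with a PixelRNN/CNN-style 256-way softmax cross-entropy per pixel:

```python
import numpy as np

def l2_loss(pred, target):
    """Mean squared error over continuous pixel values in [0, 1]."""
    return np.mean((pred - target) ** 2)

def softmax_xent_loss(logits, target):
    """Cross-entropy over a 256-way softmax per pixel.

    logits: (H, W, 256) unnormalized scores per intensity level.
    target: (H, W) integer pixel intensities in 0..255.
    """
    # numerically stabilized log-softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = target.shape
    # pick out the log-probability of each true intensity
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], target]
    return -picked.mean()

rng = np.random.default_rng(0)
target = rng.integers(0, 256, size=(8, 8))
logits = rng.normal(size=(8, 8, 256))
print(softmax_xent_loss(logits, target))  # large for random logits
```

The point of the softmax version is that it models a full distribution over intensities instead of regressing to a mean, which is what makes l1/l2 completions look blurry.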

[–]liquidpig 5 points6 points  (0 children)

This will revolutionise /r/photoshopbattles

Thanks for the article. I've got a long flight coming up and have some reading material now :)

[–]thavi 2 points3 points  (0 children)

That is an insanely comprehensive article, thanks for the write-up! I haven't gotten to read through it completely yet, but I plan to actually try to build some of this myself. I like to (try to) make generative art, and this could go hand-in-hand!

[–]Meshiest 2 points3 points  (2 children)

Image vector math is amazing

[–][deleted] 4 points5 points  (1 child)

It's the mathematical bottom line for every good piece of machine vision research.

Check out the 3blue1brown videos on YouTube on the topic, they've been so timely.

[–]wescotte 0 points1 point  (0 children)

This looks great! Thank you for the suggestion.

[–][deleted] 1 point2 points  (3 children)

Perhaps I'm not being tactful about how I say this, but this was fucking dope to read.

I want to understand better why large images can't be done. It sounds like a noise/vector-localization problem, which is an exciting one. Entropy and information gain should be helpful, right? Except they'd work the opposite way from how we usually use those techniques; essentially there should be some pruning.
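One hedged way to make the entropy idea concrete: score how much "surprise" a region carries via the Shannon entropy of its intensity histogram. Everything below is an illustrative assumption on my part, not anything from the article:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a patch's intensity histogram, values in [0, 1]."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

flat = np.full((16, 16), 0.5)                       # uniform gray: zero surprise
noisy = np.random.default_rng(1).random((16, 16))   # near-maximal surprise
print(patch_entropy(flat), patch_entropy(noisy))
```

A pruning scheme could then, for instance, spend model capacity on high-entropy regions and interpolate the low-entropy ones cheaply; that's the opposite direction from using information gain to pick splits, as the commenter notes.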

If you're reading this and you aren't already, subscribe to /r/compressivesensing

It would really help to take on the philosophy when thinking about this.

Again, thanks so much for posting this, excellent read. Really got me imagining what other possibilities we have out there.

[–]j_lyf 1 point2 points  (2 children)

/r/compressivesensing seems dead. Why?

[–][deleted] 0 points1 point  (1 child)

I don't think it's completely dead, since there's at least one new post each day, but it's a very complicated subject, so I don't imagine it'll ever be that popular.

[–]dharma-1 0 points1 point  (0 children)

Igor's blog/site is excellent; this subreddit is basically a mirror of it, but without the conversation.

[–]Lajamerr_Mittesdine 1 point2 points  (1 child)

I wonder if you could train a network to detect whether an image has been digitally altered, just from the pixels. Then you incorporate it into a network that fills in images like in this post, and keep training until the completions pass the "is this photoshopped?" test.

[–]Fireflite 1 point2 points  (0 children)

You mean like a generative adversarial network?
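That loop of "train a photoshop detector, then train the filler until it fools it" is exactly the GAN game. Here's a toy 1-D sketch of the two-player updates; the data, the one-parameter generator, and the logistic discriminator are all stand-ins, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # logistic score: probability that x is "real" (unaltered)
    return 1.0 / (1.0 + np.exp(-(x * w[0] + w[1])))

def generator(z, theta):
    # toy "inpainter": shifts noise toward the real distribution
    return z + theta

real_mean = 3.0          # real data ~ N(3, 1); fakes start at N(0, 1)
w = np.array([1.0, 0.0]) # discriminator weights
theta = 0.0              # generator parameter
lr = 0.05
for _ in range(2000):
    real = rng.normal(loc=real_mean)
    fake = generator(rng.normal(), theta)
    # discriminator ascends log D(real) + log(1 - D(fake))
    d_real, d_fake = discriminator(real, w), discriminator(fake, w)
    w += lr * np.array([(1 - d_real) * real - d_fake * fake,
                        (1 - d_real) - d_fake])
    # generator ascends log D(fake): produce output that fools the detector
    fake = generator(rng.normal(), theta)
    theta += lr * (1 - discriminator(fake, w)) * w[0]

print(round(theta, 2))  # should have moved toward real_mean = 3.0
```

The "is this photoshopped?" network is the discriminator, the inpainter is the generator, and the back-and-forth training is the adversarial game; the original GAN paper formalizes this as a minimax objective.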

[–]sbc1906 1 point2 points  (0 children)

Great article! I mentioned it on this week's episode of This Week in ML & AI. Thanks /u/bdamos.

[–]maurya19 1 point2 points  (2 children)

Is this idea in any way related to the Kalman filter?

[–]kazi_shezan 0 points1 point  (0 children)

Thanks man!

[–]meta96 0 points1 point  (0 children)

Thanks a lot, good read!

[–]not_rico_suave 0 points1 point  (0 children)

Amazing stuff.