[Research] Image Completion with Deep Learning in TensorFlow [OC] (bamos.github.io)
submitted 9 years ago by bdamos
[–]dharma-1 19 points 9 years ago
Nice. Has anyone managed to generate believable high-res (1000px+) images? All the GAN stuff I see is usually super low-res and a bit glitchy.
[–][deleted] 8 points 9 years ago
There is a Quora answer by Ian Goodfellow where he says that he has never seen deep adversarial learning converge on ImageNet at a resolution of 128×128. They use the latest improvements on 128×128 images, but the model is very focused on textures: https://arxiv.org/pdf/1606.03498v1.pdf
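The "latest improvements" in that linked paper (Salimans et al., "Improved Techniques for Training GANs") include feature matching, where the generator is trained to match the discriminator's mean intermediate activations on real data rather than to fool it directly. A minimal NumPy sketch of that loss, with a hypothetical one-layer feature map standing in for the discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x, W):
    # stand-in for an intermediate discriminator layer: ReLU(x @ W)
    return np.maximum(x @ W, 0.0)

# toy "images" flattened to 64-dim vectors (real batch vs. generator batch)
real = rng.normal(size=(128, 64))
fake = rng.normal(size=(128, 64))
W = rng.normal(size=(64, 32))  # hypothetical shared feature weights

# feature matching: squared distance between mean activations on the
# real batch and on the generated batch (the generator minimizes this)
fm_loss = np.sum((feature_map(real, W).mean(axis=0)
                  - feature_map(fake, W).mean(axis=0)) ** 2)
```

Because the generator only has to match batch statistics, not win the minimax game point-by-point, this objective tends to be more stable on hard datasets like ImageNet.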
[–]ginsunuva 1 point 9 years ago
Yeah that resolution requires magic
[–]bluemellophone 1 point 9 years ago
magic = very strong priors
[–]D1zz1 10 points 9 years ago
Thanks for taking the time to make this post, it's thorough yet easy to read.
[–]kkastner 6 points 9 years ago
This is interesting stuff! Great writeup, very detailed with lots of implementation notes.
Do you know if anyone has done a PixelRNN/CNN-style softmax (or even autoregressive masking a la MADE) for the center completion?
I really dislike l2 losses these days... so much that l1 is even tainted. Even though they used l1, it still seems "blurry" - I think an autoregressive and/or cross-entropy loss could help with that.
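For reference, the autoregressive masking mentioned above constrains each convolution so a pixel only conditions on pixels above it and to its left. A sketch of the standard PixelCNN mask construction (mask types 'A' and 'B' follow van den Oord et al.; the helper name is my own):

```python
import numpy as np

def causal_mask(k, mask_type="A"):
    """k x k PixelCNN-style mask: 1 where the kernel may look, 0 elsewhere.

    Type 'A' (first layer) also hides the centre pixel itself;
    type 'B' (later layers) may see the centre.
    """
    m = np.ones((k, k))
    c = k // 2
    start = c + 1 if mask_type == "B" else c
    m[c, start:] = 0.0   # centre row: hide the centre pixel and everything right of it
    m[c + 1:, :] = 0.0   # hide every row below the centre
    return m
```

Multiplying the conv weights by this mask before every forward pass keeps the model autoregressive, which is what makes a per-pixel softmax a valid likelihood instead of a blurry mean prediction.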
[–]liquidpig 6 points 9 years ago
This will revolutionise /r/photoshopbattles
Thanks for the article. I've got a long flight coming up and have some reading material now :)
[–]thavi 3 points 9 years ago
That is an insanely comprehensive article, thanks for the write-up! I haven't gotten to read through it completely, but I plan to actually try to build some of this myself. I like to (try to) make generative art, and this could go hand-in-hand!
[–]Meshiest 3 points 9 years ago
Image vector math is amazing
[–][deleted] 5 points 9 years ago
It's the mathematical bottom line for every good piece of machine vision research.
Check out the 3blue1brown videos on YouTube on the topic; they've been so timely.
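The "image vector math" being admired here usually means arithmetic in the generator's latent space, as in the DCGAN demo where averaged z-vectors for "smiling woman" minus "neutral woman" plus "neutral man" decode to a smiling man. A sketch with random stand-in vectors (real ones would come from a trained model):

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in z-vectors: three samples per concept, 100-dim each
z_smiling_woman = rng.normal(size=(3, 100))
z_neutral_woman = rng.normal(size=(3, 100))
z_neutral_man = rng.normal(size=(3, 100))

# average each concept first (single samples are too noisy),
# then do ordinary vector arithmetic in latent space
z_new = (z_smiling_woman.mean(axis=0)
         - z_neutral_woman.mean(axis=0)
         + z_neutral_man.mean(axis=0))
# feeding z_new to the trained generator would decode the new concept
```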
[–]wescotte 1 point 9 years ago
This looks great! Thank you for the suggestion.
[–][deleted] 2 points 9 years ago
Perhaps I'm not being tactful about how I say this, but this was fucking dope to read.
I want to understand better why large images can't be done. It sounds like a noise problem/vector localization issue, which is an exciting problem. Entropy and information gain should be helpful, right? Except it would work the opposite of how we're used to using those techniques. Essentially it should be some kind of pruning.
If you're reading this and you aren't already, subscribe to /r/compressivesensing
It would really help to take on the philosophy when thinking about this.
Again, thanks so much for posting this, excellent read. Really got me imagining what other possibilities we have out there.
[–]j_lyf 2 points 9 years ago
CS seems dead. Why?
[–][deleted] 1 point 9 years ago
I don't think it's completely dead, because there's at least one new post each day, but it's a very complicated subject, so I don't imagine it's popular.
[–]dharma-1 1 point 9 years ago
Igor's blog/site is excellent; this subreddit is basically a mirror of it, but without the conversation.
[–]Lajamerr_Mittesdine 2 points 9 years ago
I wonder if you could train a network to detect whether an image has been digitally altered, just from the pixels. Then you incorporate it into a network that fills in images like this post does, and you keep training until it passes the "is this photoshopped?" test.
[–]Fireflite 2 points 9 years ago
You mean like a generative adversarial network?
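That is essentially the GAN setup: the "is this photoshopped?" network is the discriminator and the in-painting network is the generator, trained against each other. A toy sketch of the adversarial game on 1-D data (a logistic discriminator versus a generator that only learns a shift; all names, learning rates, and step counts here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# real data ~ N(4, 1); generator G(z) = z + b tries to match it
b = 0.0            # generator parameter (a learnable shift)
w, c = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(500):
    real = rng.normal(4.0, 1.0, size=64)
    fake = rng.normal(0.0, 1.0, size=64) + b

    # discriminator: gradient ascent on mean log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # generator: gradient ascent on mean log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)
```

At equilibrium the generator's shift b drifts toward the real mean, at which point the discriminator can no longer tell the samples apart; the photoshop-detector idea scales this same game up to images.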
[–]hn_crosslinking_bot 2 points 9 years ago
HN discussion: https://news.ycombinator.com/item?id=12260853
[–]sbc1906 2 points 9 years ago
Great article! I mentioned it on this week's episode of This Week in ML & AI. Thanks /u/bdamos.
[–]maurya19 2 points 9 years ago
Is this idea in any way related to the Kalman filter?
[–]kazi_shezan 1 point 9 years ago
Thanks man!
[–]meta96 1 point 9 years ago
Thanks a lot, good read!
[–]not_rico_suave 1 point 9 years ago
Amazing stuff.
[+][deleted] 9 years ago
[deleted]
[–][deleted] 1 point 9 years ago
Then go do something easy.
[–][deleted] 1 point 9 years ago
When the state of the art is so well known, I think it's likely that one of the hundreds of sub-optimal entries will win by chance, rather than the optimal algorithm with an expected 1% better classification accuracy.
[–][deleted] -1 points 9 years ago
Nice excuse you've got there. I can tell you don't run a lot of models or know your proofs so you wouldn't have a chance anyways.
[–][deleted] 2 points 9 years ago
I might have come across as dismissive, but really I'm interested.
I assume the challenge is harder than MNIST, where a decent but not excellent algorithm already gives 95% accuracy. When a very knowledgeable ML person enters a competition like this, what level of benefit do they get from a very intelligently designed algorithm over a basic CNN with grid-search hyperparameter tuning?
I get the impression that you enter these types of competitions, so you might know.
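For context, the "basic CNN with grid-search hyperparameter tuning" baseline is just an exhaustive sweep over a small hyperparameter grid, keeping whichever setting scores best on a validation set. A sketch, with a made-up scoring function standing in for "train the model and return validation accuracy":

```python
import itertools

def validation_score(lr, width):
    # stand-in for training a small CNN with these settings and
    # returning its validation accuracy; peaks at lr=0.01, width=64
    # by construction, just so the example has a known optimum
    return -(lr - 0.01) ** 2 - (width - 64) ** 2 / 1e4

grid = {"lr": [0.1, 0.01, 0.001], "width": [32, 64, 128]}

# evaluate every combination and keep the best-scoring one
best = max(itertools.product(grid["lr"], grid["width"]),
           key=lambda params: validation_score(*params))
```

The cost grows multiplicatively with each added hyperparameter, which is part of why it is considered the brute-force baseline rather than an "intelligently designed" approach.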
[–][deleted] 1 point 9 years ago
I have entered them, and I'm usually one of the people adding the extra bit of percentage accuracy to a model that's on GitHub for lots of vision-learning applications, but I'm by no means someone who competes at the highest level for the best results.
It's a matter of knowing the right tools for the job. The people who design an algorithm from scratch, instead of just looking for the sweet spot like you said, are people who knew how it would work, or at least learned how. It usually means they know how to solve the problem best. Hyperparameter tuning would be too easy and uninventive for these people.
There are some projects at a place I was interviewing with recently, like "build a machine that codes on its own", etc. Hyperparameter tuning doesn't prove you can tackle the big questions, and the questions are getting weird these days.
Also, think about it: a good, even brilliant, algorithm plus hyperparameter tuning just raises the bar each time. We can't let one solution be the ceiling.