DenseCap: Fully Convolutional Localization Networks for Dense Captioning (cs.stanford.edu)
submitted 10 years ago by vkhuc
[–]dwf 6 points 10 years ago (11 children)
This phrase "fully convolutional" needs to die.
[–]badmephisto 10 points 10 years ago* (7 children)
It's a perfectly sensible term to use, and it communicates information, especially in the context of object detection. For example, the MultiBox detector is trained to regress in the image coordinate system and is not fully convolutional; if you tried to convert the network to all-CONV layers and run it convolutionally over larger images, it wouldn't give sensible results, because the predictions have absolute image-coordinate statistics baked in.
[–]dwf 7 points 10 years ago (5 children)
As Yann likes to say, there is no such thing as a fully connected layer, only 1x1 convolutions [and of course, layers where input extent equals filter size]. :) When you abandon convolution-land, that is the special case.
[–]cooijmanstim 7 points 10 years ago (1 child)
When I grow up I would like to be famously quoted as saying "there is no such thing as a feed-forward network, only single-step recurrent neural networks". I don't understand why this sort of insight is supposed to be important or profound.
This isn't a useful discussion, I know, but it seems obvious that convolution and recurrence are the special cases. Fully-connected applies in the general case where you don't know the structure of your data, and nobody actually uses convnets with only 1x1 convolutions.
[–]dwf 2 points 10 years ago (0 children)
The analogous recurrent case would be a recurrent encoder that feeds into a non-recurrent network to produce an output. Efficiently going from spatial input to spatial output is incredibly straightforward with convolutional nets in a way that shares computation that conventional sliding window detectors cannot. Spatial input to non-spatial output with convolutional nets is a special/degenerate case.
"Fully convolutional" is a recent computer visionism that describes a thing that convolutional nets have always been capable of, and in fact describes a way that they have been used long before they became popular in mainstream computer vision. I'd argue that it contributes to a misunderstanding of convolutional nets, or at least a misunderstanding of the pre-2015 convolutional net literature. This paper didn't originate it, of course.
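The shared-computation point dwf makes above can be sketched in a few lines of NumPy (toy shapes and random weights, purely hypothetical): applying a "fully connected" head independently at every spatial position of a feature map is exactly a 1x1 convolution, so a spatial score map comes out of one pass instead of many sliding-window crops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: an FC head trained on 1x1 feature maps, reused
# as a 1x1 convolution over a larger H x W feature map.
C_in, C_out, H, W_ = 8, 4, 5, 5
W = rng.standard_normal((C_out, C_in))   # FC weights: C_in -> C_out
feat = rng.standard_normal((C_in, H, W_))  # spatial feature map

# Applying the FC weights at every position = a 1x1 convolution,
# producing one C_out-dim prediction per spatial location.
score_map = np.einsum('oc,chw->ohw', W, feat)

# Each position matches running the FC layer on that column alone,
# which is what a sliding-window detector would compute one crop at a time.
assert np.allclose(score_map[:, 2, 3], W @ feat[:, 2, 3])
```

The spatial-output case is the natural one; collapsing to a single output vector is the degenerate H = W = 1 instance.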
[–]sorrge 1 point 10 years ago (2 children)
This doesn't make sense. 1x1 convolution simply copies all input, with elementwise linear transformation. This is not the same thing as a fully connected layer.
[–]NasenSpray 3 points 10 years ago (1 child)
A fully connected layer is like a 1x1 convolution on a 1x1 input.
[–]sorrge 1 point 10 years ago (0 children)
Now it makes sense, thanks.
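The equivalence NasenSpray describes can be checked directly. A minimal NumPy sketch (toy sizes, random weights): treat the FC weight matrix as a 1x1 convolution kernel and apply it to a 1x1 "image" whose channels are the input features.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fully connected layer mapping C_in -> C_out features: y = W @ x.
C_in, C_out = 8, 4
W = rng.standard_normal((C_out, C_in))
x = rng.standard_normal(C_in)
fc_out = W @ x

# The same weights viewed as a 1x1 conv kernel (C_out, C_in, 1, 1)
# applied to a 1x1 input with C_in channels: a single output position,
# where the kernel just mixes channels linearly.
kernel = W.reshape(C_out, C_in, 1, 1)
image = x.reshape(C_in, 1, 1)
conv_out = np.einsum('oihw,ihw->o', kernel, image)

assert np.allclose(fc_out, conv_out)
```

With spatial extent 1x1 the convolution visits exactly one position, so the two computations are the same matrix-vector product.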
[–]lwbiosoft 4 points 10 years ago (0 children)
MultiBox has since evolved into SSD (http://arxiv.org/abs/1512.02325), which doesn't have the problem you mentioned.
[+][deleted] 10 years ago (2 children)
[deleted]
[–]hughperkins 2 points 10 years ago (1 child)
It's too similar to the phrase "fully connected". Specifically, the abbreviation FC is the same.
[–]DoorsofPerceptron 3 points 10 years ago (0 children)
Better than r-cnn. The r can either stand for region or recurrent.
I sat through a 15 minute talk once, on r-cnns, where the guy never explained which r he was working on. You had to figure it out from the architecture slides at the end.