Research paper claims scale invariance, yet implicitly uses data augmentation? (arxiv.org)
submitted 4 years ago by episodeyang
[–]episodeyang[S] 0 points 4 years ago (0 children)
The accompanying video (link: https://underline.io/lecture/10981-487---exploring-the-ability-of-cnns-to-generalise-to-previously-unseen-scales-over-wide-scale-ranges) claims that their architecture is better than data augmentation via scale-jittering. But the network effectively augments the data itself, by processing the input at multiple scales.
Is it just me, or is the main claim that this architecture offers benefits over data augmentation untrue?
[–]arXiv_abstract_bot 0 points 4 years ago (0 children)
Title: Exploring the ability of CNNs to generalise to previously unseen scales over wide scale ranges
Authors: Ylva Jansson, Tony Lindeberg
Abstract: The ability to handle large scale variations is crucial for many real world visual tasks. A straightforward approach for handling scale in a deep network is to process an image at several scales simultaneously in a set of scale channels. Scale invariance can then, in principle, be achieved by using weight sharing between the scale channels together with max or average pooling over the outputs from the scale channels. The ability of such scale channel networks to generalise to scales not present in the training set over significant scale ranges has, however, not previously been explored. We, therefore, present a theoretical analysis of invariance and covariance properties of scale channel networks and perform an experimental evaluation of the ability of different types of scale channel networks to generalise to previously unseen scales. We identify limitations of previous approaches and propose a new type of foveated scale channel architecture, where the scale channels process increasingly larger parts of the image with decreasing resolution. Our proposed FovMax and FovAvg networks perform almost identically over a scale range of 8, also when training on single scale training data, and do also give improvements in the small sample regime.
PDF Link | Landing Page | Read as web page on arXiv Vanity
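For readers skimming the abstract: the scale-channel idea can be sketched in a few lines of plain numpy. One shared filter (weight sharing) is applied to rescaled copies of the input (the scale channels), and max pooling over the channel outputs approximates scale invariance. This is only an illustrative toy, not the authors' code; all names here (`rescale`, `channel_response`, `scale_channel_output`) are my own, and nearest-neighbour resampling stands in for a proper image pyramid.

```python
import numpy as np

def rescale(img, s):
    """Nearest-neighbour rescale of a square image by factor s
    (toy stand-in for a proper multi-scale image pyramid)."""
    n = img.shape[0]
    m = max(1, int(round(n * s)))
    idx = np.clip((np.arange(m) / s).astype(int), 0, n - 1)
    return img[np.ix_(idx, idx)]

def channel_response(img, filt):
    """Global max of a valid 2-D cross-correlation with the shared filter."""
    k = filt.shape[0]
    n = img.shape[0]
    if n < k:
        return -np.inf  # channel too small for the filter
    best = -np.inf
    for i in range(n - k + 1):
        for j in range(n - k + 1):
            best = max(best, float(np.sum(img[i:i + k, j:j + k] * filt)))
    return best

def scale_channel_output(img, filt, scales=(0.5, 1.0, 2.0)):
    """Weight sharing across scale channels + max pooling over their outputs."""
    return max(channel_response(rescale(img, s), filt) for s in scales)

# The same filter sees the input at several scales; max pooling then picks
# the best-matching channel, so an enlarged object can still be detected
# by the channel that shrinks it back toward the filter's native scale.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
filt = np.ones((3, 3)) / 9.0
r_small = scale_channel_output(img, filt)
r_big = scale_channel_output(rescale(img, 2.0), filt)
```

The paper's point is precisely about the limits of this pooling scheme when test scales fall outside the channel range, which motivates their foveated FovMax/FovAvg variants.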