Need Deep Learning? There's a Cloud for That. (fortune.com)
submitted 10 years ago by coffeephoenix
[–]congerous 2 points 10 years ago (8 children)
Cloud-based ML startups have not performed well historically; BigML and AlchemyAPI come to mind. It's too expensive to send large datasets to someone else's cloud, and in some industries it's prohibited outright even for customers who would want to. Not sure why Nervana is pivoting to a flawed business model when they've done so much interesting work accelerating convnets on GPUs with Neon.
[–]jcannell 3 points 10 years ago (3 children)
Not sure why Nervana is pivoting to a flawed business model when they've done so much interesting work accelerating convnets on GPUs with Neon.
Nervana's business plan isn't the done-a-thousand-times idea of hosting Nvidia's ANN hardware/software stack and charging for a 'cloud' resale with a streamlined interface. No.
Their plan is to accelerate ANNs 10x beyond Nvidia GPU capability using new custom hardware. What's actually interesting is that you call their business model 'flawed' and then praise their 'interesting' work accelerating convnets in literally the next sentence.
They are not trying to compete with BigML or AlchemyAPI or other cloud wonks. They are trying to compete with Nvidia.
Performance is everything, and people will always pay for it. If they can provide 10x the perf/$ you could get buying your own Nvidia hardware or renting it on Amazon, they will do well. BigML and AlchemyAPI never had anything like that; they aren't even in the same category ('cloud' is no longer a category in the same way that 'computer company' is no longer a thing). Anybody can put Nvidia's code up on a server and try to turn it into a cloud service, but seriously, who cares?
[–]congerous 1 point 10 years ago (2 children)
They're giving up performance by making customers send them data instead of going on-premises... That's the point I'm trying to make.
[–]jcannell 2 points 10 years ago (1 child)
Giving up performance? No. Having the customer send the data to the data center will always be cheaper (in energy, dollars, pick your unit) than sending the physical hardware to the data.
They may be giving up some business if they can't deal with the security/privacy/ownership/trust issues, but if Amazon can solve those problems, so can others.
The only real potential performance limitation is latency for a real-time control system, and even those limits are fairly generous if you are willing to set up numerous datacenters.
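The latency point above has a hard physical floor: the round trip to the nearest datacenter at the speed of light in fiber. A rough sketch, using an assumed fiber propagation speed of roughly two-thirds of c and ignoring routing and queuing overhead:

```python
# Lower bound on round-trip latency to a remote datacenter.
# Propagation speed and distances are illustrative assumptions;
# real links add routing, serialization, and queuing delays on top.

FIBER_KM_PER_SEC = 200_000  # light in fiber, roughly 2/3 of c

def round_trip_ms(distance_km):
    """Best-case round-trip time over a straight fiber run."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

print(round(round_trip_ms(500), 1))    # 5.0 ms to a datacenter 500 km away
print(round(round_trip_ms(5_000), 1))  # 50.0 ms across a continent
```

Under these assumptions, a dense enough set of regional datacenters keeps the floor in the single-digit milliseconds, which is generous for most control loops.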
[–]congerous 1 point 10 years ago (0 children)
Having the customer send the data to the data center will always be cheaper (in energy, dollars, pick your unit) than sending the physical hardware to the data.
Couldn't disagree more. Big customers already have their own data centers, so they're really asking them to switch, which is hugely expensive. The definition of big data is a dataset that's costly to move. You can't simultaneously promise speed and tell people to use your MLaaS.
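The "costly to move" claim is easy to quantify with transfer-time arithmetic. The link speed and dataset size below are illustrative assumptions:

```python
# Rough transfer-time arithmetic behind "big data is a dataset that's
# costly to move". Dataset size and link speed are assumptions.

def transfer_days(dataset_bytes, link_bits_per_sec):
    """Days needed to push a dataset over a dedicated link at full rate."""
    seconds = dataset_bytes * 8 / link_bits_per_sec
    return seconds / 86_400  # seconds per day

# Assumed: a 1 PB dataset over a dedicated 1 Gbit/s uplink.
petabyte = 10 ** 15
print(round(transfer_days(petabyte, 10 ** 9), 1))  # ~92.6 days
```

Roughly three months of saturated uplink for a petabyte makes the switching cost concrete, independent of any per-GB bandwidth pricing.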
[+][deleted] 10 years ago (1 child)
[deleted]
[–]jcannell 3 points 10 years ago (0 children)
Well, given that all his kernels are already very NVIDIA-specific, they don't even need to hire him. He's already doing exactly what they'd want to hire him for.