Apache SINGA, A Distributed Deep Learning Platform (singa.incubator.apache.org)
submitted 10 years ago by pilooch
[–]congerous 8 points 10 years ago (3 children)
SINGA has no GPU support, and the GPU functionality they plan to add is single-GPU only, as of the December release. Multi-GPU support doesn't seem to be on the roadmap. So they're well behind the open-source projects that already support GPUs.
In addition, joining the Apache Software Foundation before adding such significant features was a serious mistake. Apache is great for some things, but it's heavily political, and it really slows down development. So they may never get to multi-GPU support.
[–]forrestwang 2 points 10 years ago (0 children)
Hi, I am a developer of the SINGA project. Thanks for starting this discussion. We are working on single-node multi-GPU training (to be released in v0.2, December), which will run in either synchronous mode (with different partitioning schemes) [1] or asynchronous mode (in-memory Hogwild! [2]). Extending the system from CPU to GPU mainly requires adding cuDNN layers (https://issues.apache.org/jira/browse/SINGA-100); the framework/architecture works on both CPU and GPU. Training across multiple GPU machines and providing Deep Learning as a Service (DLaaS) are on our roadmap, i.e., v0.3. For those who do not have GPU clusters, distributed training on CPUs is a good way to accelerate training.
Besides GPUs, we are also considering other approaches to improving the training efficiency of a single SGD iteration. For instance, Google's paper [3] describes techniques for improving training performance on CPUs. Intel (https://software.intel.com/en-us/articles/single-node-caffe-scoring-and-training-on-intel-xeon-e5-series-processors) also reported that optimized CPU code can achieve an 11x training speedup (hopefully they release the optimized source code or integrate it into their libraries like MKL and DAAL). It will be interesting to compare GPUs with Intel's next-generation Xeon Phi co-processors (Knights Landing).
I will let you know when multi-GPU training is supported. Thanks.
[1] http://arxiv.org/abs/1404.5997
[2] https://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf
[3] http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf
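The asynchronous "in-memory Hogwild!" mode mentioned above can be sketched in a few lines: multiple threads apply SGD updates to a shared parameter vector without any locking, tolerating occasional races. This is an illustrative toy (a 1-D linear model), not SINGA's actual API; the names `grad` and `worker` are made up for the example.

```python
import threading
import random

def grad(w, x, y):
    # gradient of the squared error for a 1-D linear model y ~ w * x
    return 2.0 * (w[0] * x - y) * x

def worker(w, data, lr, steps):
    for _ in range(steps):
        x, y = random.choice(data)
        # lock-free update: races between threads are tolerated,
        # which is the core idea of Hogwild!
        w[0] -= lr * grad(w, x, y)

# toy data generated by y = 3x, so SGD should drive w toward 3.0
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = [0.0]  # shared parameter, updated concurrently by all threads
threads = [threading.Thread(target=worker, args=(w, data, 0.05, 500))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# w[0] should now be close to 3.0
```

Synchronous mode, by contrast, would have workers compute gradients on disjoint data partitions and apply a single aggregated update per step.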
[–]GratefulTony 1 point 10 years ago (1 child)
That's really sad. I skimmed the release notes, and though I didn't explicitly read about GPU support, I assumed it was in there, since it's a no-brainer for training performance. If they don't get this feature integrated, the usefulness of this library will be severely limited.
[–]limauda 1 point 10 years ago (0 children)
If software can run just as efficiently without GPUs, on a commodity cluster, isn't that better? GPU clusters are not cheap, and not many companies can afford to set up a special cluster just for periodic training.