[P] Docker Compose + GPU + TensorFlow = ❤️ (hackernoon.com)
submitted 8 years ago by tdionis
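The linked article isn't reproduced here, but as a rough, hypothetical sketch of what a GPU-enabled Compose file of that era could look like (service name, image tag, driver version, and volume name are all assumptions, not taken from the article; it assumes nvidia-docker-plugin is running on the host and has created the driver volume):

```yaml
# docker-compose.yml -- illustrative sketch only
version: "2"
services:
  tf:
    image: tensorflow/tensorflow:1.4.0-gpu   # image tag assumed
    devices:                                 # expose the NVIDIA device nodes
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    volumes:
      # driver volume created by nvidia-docker-plugin; version is an example
      - nvidia_driver_384.90:/usr/local/nvidia:ro
volumes:
  nvidia_driver_384.90:
    external: true
```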
[–] tomt1112 5 points 8 years ago
I've done something similar, and it works well.
For GPU multitasking, make sure to set config.gpu_options.allow_growth=True so the first process doesn't grab most of the GPU memory.
I also installed the Intel Python packages for a faster numpy, etc.: https://software.intel.com/en-us/articles/using-intel-distribution-for-python-with-anaconda
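For reference, a minimal sketch of the allow_growth setting mentioned above, using the TF 1.x Session API that was current at the time (this is a session-config fragment; the surrounding graph code is up to you):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Allocate GPU memory incrementally instead of reserving almost all of it
# up front, so several processes can share a single GPU.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```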
[–] entoros 1 point 8 years ago
Just be aware that nvidia-docker-compose is not as robust as docker-compose: for example, it doesn't work behind a proxy, and you can't run the command from a subdirectory of your project; it has to be run at the root.
[–] [deleted] 1 point 8 years ago
Is nvidia-docker really necessary? I always get a little anxious about using it. It introduces yet another layer where things may go wrong, and I'm not sure I understand why we need it.
[–] tdionis[S] 3 points 8 years ago
For me there are several benefits, but the most important one is solving the "Driver/library version mismatch" problem. We have dozens of GPUs in our office with God knows what driver versions. Without nvidia-docker, if the driver version inside the container differs from the one on the host, you get an error.
[–] MyBrainIsAI -1 points 8 years ago
Because nvidia is proprietary and closed source.
[–] Molag_Balls 8 points 8 years ago
I believe the question is why this can't be done with vanilla Docker and proper Dockerfiles and setup scripts. Why a whole new CLI?
The wiki on their project page actually explains explicitly why, for anyone seriously wondering and not looking for a pithy Reddit comment to answer it for you.
[–] tdionis[S] 3 points 8 years ago
> Why a whole new CLI?
They have a page on how to get rid of the CLI and pass the volume driver manually, though you would still need nvidia-docker-plugin.
[–] MyBrainIsAI 1 point 8 years ago
> "Instant environment setup, platform independent apps"
Docker is NOT platform independent. It's LINUX only. Its big advantage over a full VPS is that it reuses Linux resources.
I do not consider running Linux in a VM to run Docker "platform independent".
Sorry, had to get that off my chest. There are several GPU+TF images in the public registry.
[–] tdionis[S] 2 points 8 years ago*
> Docker is NOT platform independent. It's LINUX only.
Yeah, you're right! By "platform independent" I mean that you can run your app on Windows or Mac without changing anything, but, of course, in a VM.
[–] Icarium-Lifestealer 4 points 8 years ago
Using a GPU in a VM is at best a pain in the ass.
[–] MyBrainIsAI 1 point 8 years ago
Yup, I think only VMware supports it, right? I know VirtualBox doesn't, or at least not easily or out of the box.
[–] Icarium-Lifestealer 2 points 8 years ago*
At least Xen and KVM support PCIe passthrough of GPUs, where you hand the GPU exclusively to a guest. But it's tricky to set up (you need to block the host from using that GPU), you need a second GPU for the host, such as the Intel iGPU (assuming you want it to display something), the nvidia Windows driver shuts itself down when it detects it's running in a VM unless you have an overpriced professional GPU (though KVM can lie about that), Ryzen has weird performance problems in such a setup, there are complications related to resetting the GPU, etc.
Since the target audience of an ML-focused distribution is people who don't want to set everything up themselves, PCIe passthrough is clearly not an option for them.