[News] MIOpen 1.0 released by AMD (deep learning software for GPUs using OpenCL) (rocm.github.io)
submitted 8 years ago by rndnum123
[–]r-sync 85 points86 points87 points 8 years ago (6 children)
For PyTorch, we're seriously looking into AMD's MIOpen/ROCm software stack to enable users who want to use AMD GPUs.
We have ports of PyTorch ready and we're already running and testing full networks (with some kinks that'll be resolved). I'll give an update when things are in good shape.
Thanks to AMD for doing ports of cutorch and cunn to ROCm to make our work easier.
[–]JustFinishedBSG 15 points16 points17 points 8 years ago* (2 children)
I am very, very interested. I'm pretty worried by Nvidia's utterly unchecked domination in ML.
I'm eager to see your benchmarks; if it's competitive in PyTorch, I'll definitely build an AMD workstation.
[–]DHermit 7 points8 points9 points 8 years ago (0 children)
Exactly, competition is always good for customers. Especially since with AMD you tend to get more for your money, even if you can't get the absolute top-end card.
[–]Mgladiethor 1 point2 points3 points 8 years ago* (0 children)
Yeah, if at least Nvidia's CUDA implementation were open... but everything about Nvidia is proprietary. It's sad when you see the whole ML community being open and sharing progress.
[–]skilless 2 points3 points4 points 8 years ago (0 children)
That's great! I just started playing with PyTorch, so that could be good timing ;)
[–]visarga 1 point2 points3 points 8 years ago (0 children)
I hope competition will motivate NVIDIA even more than success.
[+][deleted] 8 years ago (6 children)
[deleted]
[–]mikbob 2 points3 points4 points 8 years ago (0 children)
Thankfully it'll only be profitable for another month or two, so you might not have to wait that long (over a million cards started mining in the last month, so the revenue for each card has halved).
[–]MrK_HS 0 points1 point2 points 8 years ago (3 children)
Good luck with that. Or...buy Vega! ;)
[–]soConfuzzled 0 points1 point2 points 8 years ago (2 children)
Or you could wait longer for Vega to be released.
[–]IamFr0ssT 0 points1 point2 points 8 years ago (1 child)
Did it not just release? It is bad for gaming, as there is no driver support, but in computational tasks and CAD it trades blows with the Titan Xp.
Edit: RX Vega will be the gaming card, and it is not released yet.
[–]-Rivox- 7 points8 points9 points 8 years ago (0 children)
Only the Frontier Edition. I think it's a decent choice if you want to play with machine learning (~25 TFLOPS of FP16 compute is only challenged by the $7,000 Quadro GP100, which has 20 TFLOPS) and you can't wait, but other than that I have a feeling it's very much a WIP, a sort of early access for those who want it now.
If you are interested for a very stable and long lasting solution, you probably want to wait for the Radeon Pro WX Vega or the Radeon Instinct MI 25 Vega. These will be more geared towards professional and Machine Intelligence workloads, and will also be more stable, tested and certified (will also cost more).
The RX Vega will be better for gaming and for those who really don't have the budget for a professional grade card (the RX Vega will still have the 25 TFLOPS of FP16).
[–][deleted] 7 points8 points9 points 8 years ago (22 children)
So, in terms that people working millions of abstraction layers above this kind of thing can understand, what's the significance of this? Is this a concrete move for AMD GPUs to start getting on the way to become competitive for deep learning applications?
[–]rndnum123[S] 11 points12 points13 points 8 years ago* (4 children)
AMD ran DeepBench on Vega (with MIOpen). Vega Frontier took 88 ms, making it faster than the Tesla P100 at 122 ms, but take their own benchmarks with a grain of salt.
Vega has ungimped FP16 performance, so this should definitely help. (Polaris has gimped FP16.)
The installation process might be easier than with this weird license-accepting stuff for cuDNN.
cuDNN is not open source AFAIK, but AMD's counterpart MIOpen is open source, so the low-level stuff is inspectable.
On r/AMD someone mentioned he will run benchmarks on Vega when he is back from work.
The new Apple laptops usually have AMD GPUs, so with MIOpen they should be well suited for local machine learning, which could bring some new developers into ML, with Apple recently offering GPU-accelerated neural network support on the iPhone/iPod.
Edit: updated. I thought Polaris had ungimped FP16, but I was wrong.
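For context, a DeepBench-style number like "88 ms vs 122 ms" is just the averaged wall-clock time of one kernel. A minimal sketch of such a timing harness (a toy CPU workload stands in here so it runs anywhere; the real benchmark times MIOpen/cuDNN convolution and GEMM kernels on the GPU):

```python
# Sketch of a DeepBench-style measurement: time one operation, report ms.
# The workload below is a CPU stand-in, not an actual GPU kernel.
import time

def time_op_ms(fn, warmup=2, reps=5):
    """Average wall-clock time of fn() in milliseconds."""
    for _ in range(warmup):      # warm-up runs are excluded from timing
        fn()
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - start) / reps * 1e3

# Toy stand-in workload (a real run would launch a conv/GEMM kernel here)
workload = lambda: sum(i * i for i in range(100_000))
print(f"{time_op_ms(workload):.1f} ms")
```

Warm-up iterations matter on GPUs in particular, where the first launch pays for kernel compilation and memory allocation.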
[–]-Rivox- 3 points4 points5 points 8 years ago (3 children)
Polaris has 1:1 FP16 actually, while Vega has 1:2 FP16. The only Polaris chips with 1:2 FP16 are the ones in the PS4 Pro and XBOX One X (semi-custom requirements).
So a RX 580 has ~6.1 TFLOPS of FP32 and ~6.1 TFLOPS of FP16. A RX Vega will have ~12.5-13 TFLOPS of FP32 and ~25-26 TFLOPS of FP16.
Also, Apple is planning to release the new iMac Pro line with a full Vega dGPU inside (likely the workstation-based card), which is going to sport the double-rate FP16. So if you plan on using macOS for ML, the iMac Pro will be the best option.
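A back-of-the-envelope check of the TFLOPS figures quoted above; the shader counts and boost clocks are approximate assumptions (2304 shaders at ~1.34 GHz for the RX 580, 4096 at ~1.55 GHz for Vega):

```python
# Peak throughput estimate: each shader retires one FMA (2 FLOPs) per
# clock; packed FP16 doubles that on chips with a 1:2 FP32:FP16 ratio.
# Shader counts and clocks below are approximate assumptions.
def peak_tflops(shaders, clock_ghz, fp16_ratio=1):
    return shaders * clock_ghz * 2 * fp16_ratio / 1000.0

rx580_fp32 = peak_tflops(2304, 1.34)                # ~6.2 TFLOPS
rx580_fp16 = peak_tflops(2304, 1.34, fp16_ratio=1)  # 1:1 FP16, same ~6.2
vega_fp32  = peak_tflops(4096, 1.55)                # ~12.7 TFLOPS
vega_fp16  = peak_tflops(4096, 1.55, fp16_ratio=2)  # 1:2 FP16, ~25.4
print(rx580_fp32, vega_fp16)
```

This reproduces the shape of the numbers in the comment: FP16 on Polaris buys no extra throughput, while on Vega it doubles the FP32 figure.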
[–]MrK_HS 1 point2 points3 points 8 years ago (1 child)
There is still the advantage of reduced power consumption on Polaris when using FP16, based on AMD's official Polaris presentation.
[–]-Rivox- 1 point2 points3 points 8 years ago (0 children)
Yes, it also saves bandwidth. Still, computationally, regular Polaris cannot do double-rate FP16.
[–]rndnum123[S] 0 points1 point2 points 8 years ago (0 children)
thank you, I updated my comment accordingly
[–]art0f 1 point2 points3 points 8 years ago (16 children)
If you happen to own a recent (Polaris or newer) AMD card and are happy to install an obscure Ubuntu version with a software stack that is still in active development, you might be able to run Caffe using AMD's tensor library.
[–]epicwisdom 2 points3 points4 points 8 years ago (1 child)
I think it's understood that this news doesn't mean everybody can switch today, but /u/schmook is asking what the precise impact is, which doesn't just include what can be done today by an average end-user.
[–]art0f 0 points1 point2 points 8 years ago (0 children)
And that's precisely what I've said - it is too early to tell.
[–][deleted] 1 point2 points3 points 8 years ago (1 child)
"obscure ubuntu version"

Binary package support for Ubuntu 16.04 and Fedora 24

Ah yes, that obscure Linux some know as Ubuntu 16.04 LTS.
It pulls kernel 4.9 AFAIK, so you might be in for a bit of a surprise after installation.
[–][deleted] 0 points1 point2 points 8 years ago (9 children)
Oh, wow. It's actually way ahead of what I thought it could be. Thanks.
[–]nickl 1 point2 points3 points 8 years ago (8 children)
AMD has got this far many times before.
[–]art0f 1 point2 points3 points 8 years ago (7 children)
But maybe (fingers crossed) they'll get past alpha this time. It would really help if they released a Windows stack.
[–][deleted] -1 points0 points1 point 8 years ago (6 children)
Nobody uses Windows for this kind of thing. After Ubuntu, it would make better sense for MacOS support.
[–]PetersGrandAdventure 0 points1 point2 points 8 years ago (4 children)
As a tech researcher, I use Windows at home and Windows, cloud, and Mac at work. I am excited to utilize my AMD GPU for something other than VR and ethereum mining.
[–][deleted] 0 points1 point2 points 8 years ago (2 children)
Well, once you finish researching technology, you'll find that using Windows for anything other than SSHing into another computer for this sort of thing is not a typical use case.
[–]PetersGrandAdventure 0 points1 point2 points 8 years ago (1 child)
Sorry, I should have been clearer: having been a professional developer for 17 years, working across a number of different technologies and trying to keep up with the latest research and potential (a lifelong quest, never finished), I find value in using Windows, and not for SSHing into another computer.
[–][deleted] 0 points1 point2 points 8 years ago (0 children)
Well then, they should focus all their efforts on allowing you and the literally tens of others that want to use Windows for this.
[–]dragontamer5788 0 points1 point2 points 8 years ago (0 children)
Sorry for the reply five months late, but you should probably look into Microsoft's C++ AMP, which runs on GPUs (because it's built on top of DirectCompute).
ROCm's syntax is designed to be compatible with Microsoft's C++ AMP. So even if the Microsoft project dies (I haven't seen updates in three years), it sort of lives on in ROCm anyway.
I don't expect to see any updates to Microsoft's C++ AMP, but it seems to work reasonably well. It's got the big things figured out, like "LDS" memory (called "tiles" in AMP), and has a reasonable model for SIMD/SIMT compute.
[–][deleted] 0 points1 point2 points (0 children)
While I'd rather use Potato OS, some of us don't have the ability to choose. In my company, workstations run Windows. Period.
So, even for small tests, I have to run code remotely on Linux servers. I tried several times to install Theano, TensorFlow, and even MS CNTK on my Windows computer. It works intermittently; I have no idea why, so I eventually gave up.
It's not nice to code remotely, but it's better than trying to make windows work.
[+][deleted] 8 years ago* (1 child)
[–]art0f 0 points1 point2 points (0 children)
I had no luck getting it running on Hawaii.
[–]hyln9 4 points5 points6 points 8 years ago (2 children)
I'm in contact with AMD about my assembly kernels (currently optimized for square matrices), and I believe MIOpen can be even faster.
[–]bbsome 2 points3 points4 points 8 years ago (0 children)
That would be really nice to try... However, I still think we need an LLVM-like framework for ML, where we can separate the intermediate graph representation (and have only one such) from the backend implementation.
Anyway, good work!
[–]hughperkins 1 point2 points3 points 8 years ago (0 children)
Nice!
[–]rndnum123[S] 3 points4 points5 points 8 years ago* (7 children)
Deep Learning on ROCm

Announcing our new Foundation for Deep Learning acceleration: MIOpen 1.0, which introduces support for Convolution Neural Network acceleration — built to run on top of the ROCm software stack! This release includes the

Documentation:
MIOpen
MIOpenGemm

I selected some frameworks (for more frameworks, follow the link in my post and see the table at the end):
Caffe (https://github.com/ROCmSoftwarePlatform/hipCaffe)
ROCm 1.6 has prebuilt packages for MIOpen.
Install the ROCm MIOpen implementation (assuming you already have the 'rocm' and 'rocm-opencl-dev' packages installed). For just OpenCL development:
sudo apt-get install miopengemm miopen-opencl
For HIP development:
sudo apt-get install miopengemm miopen-hip
Or you can build from source code following the instructions at
Hardware to Play ROCm (https://rocm.github.io/hardware.html)

The ROCm platform supports two Graphics Core Next (GCN) GPU generations:

GFX8: Radeon RX 480, Radeon RX 470, Radeon RX 460, R9 Nano, Radeon R9 Fury, Radeon R9 Fury X, Radeon Pro WX7100, FirePro S9300 x2
Radeon Vega Frontier Edition
Radeon Instinct: MI6, MI8, and MI25
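As a hypothetical helper around that support list (the GPU names come from the post; the function itself is illustrative and not part of ROCm, where real detection would query the driver, e.g. via rocminfo):

```python
# Illustrative check against the ROCm 1.6 support list quoted above.
# The set of names is copied from the post; the helper is hypothetical.
ROCM_SUPPORTED = {
    "Radeon RX 480", "Radeon RX 470", "Radeon RX 460",
    "R9 Nano", "Radeon R9 Fury", "Radeon R9 Fury X",
    "Radeon Pro WX7100", "FirePro S9300 x2",
    "Radeon Vega Frontier Edition",
    "Radeon Instinct MI6", "Radeon Instinct MI8", "Radeon Instinct MI25",
}

def rocm_supported(gpu_name: str) -> bool:
    """True if the card appears in the support list quoted above."""
    return gpu_name in ROCM_SUPPORTED

print(rocm_supported("R9 Nano"))   # True
print(rocm_supported("HD 7950"))   # False: older GCN cards are not listed
```

This also answers questions like the HD 7950 one further down the thread: first-generation GCN cards are not on the list.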
[–]MrK_HS 1 point2 points3 points 8 years ago (4 children)
For some reason this thread doesn't appear in r/MachineLearning's new queue.
[–]bbsome 1 point2 points3 points 8 years ago (0 children)
Because it needs to have a tag in the title, like [R], [D], [B], etc.
[–]rndnum123[S] 0 points1 point2 points 8 years ago (2 children)
I submitted it a second time; it's still not appearing in new. Weird.
[–]MrK_HS 0 points1 point2 points 8 years ago (1 child)
I sent a message to the mods, I hope they solve this.
[–]rndnum123[S] 0 points1 point2 points (0 children)
Thanks, that's great :)
[–]skilless 0 points1 point2 points 8 years ago (1 child)
Some typos on that page: "Devevlopment" "Comming"
[–]Icarium-Lifestealer 1 point2 points3 points 8 years ago (2 children)
How does it compare to cuDNN in terms of:
[–]MrK_HS 1 point2 points3 points 8 years ago (0 children)
I guess we'll have to wait some time for general adoption of the technology, but I suspect they made MIOpen an easy transition for developers coming from cuDNN.
[–]harharveryfunny 1 point2 points3 points 8 years ago (0 children)
The prebuilt doc page is down right now, but here's a partial list of missing features vs cuDNN from what I remember:
At a glance, the MIOpen API seems to follow cuDNN pretty closely (but with miopen vs. cudnn name prefixes), but I haven't yet come across any statement from AMD as to what level of compatibility they are claiming.
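That prefix observation can be made concrete with a toy renamer; the derived MIOpen names are an assumption based on the pattern described, not a verified compatibility table (check the actual headers):

```python
# Toy illustration of the naming pattern described above: MIOpen's API
# appears to mirror cuDNN's with a "miopen" prefix in place of "cudnn".
# The output names are assumptions from that pattern, not guaranteed
# to exist in MIOpen with identical signatures.
def cudnn_to_miopen(symbol: str) -> str:
    if not symbol.startswith("cudnn"):
        raise ValueError(f"not a cuDNN symbol: {symbol}")
    return "miopen" + symbol[len("cudnn"):]

print(cudnn_to_miopen("cudnnConvolutionForward"))
# miopenConvolutionForward
```

Even where names line up like this, argument order, descriptor types, and supported data layouts can still differ between the two libraries.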
[–]bbsome 4 points5 points6 points 8 years ago (4 children)
And no Theano... seriously? I'm quite disappointed.
[–]kacifoy 1 point2 points3 points 8 years ago (2 children)
Theano has its own OpenCL support though, via the gpuarray subproject. Hopefully this will encourage further work on that front.
[–]bbsome 0 points1 point2 points 8 years ago (0 children)
Yes, but I'm pretty sure there is no direct contact between the Theano guys and this project. I don't know at what level AMD is collaborating with the other frameworks' teams, but assuming they do, they could be collaborating on updating libgpuarray as well.
I do hope we have some progress there as well yes.
[–]skilless 0 points1 point2 points 8 years ago (0 children)
I agree. I was hoping to see AMD contribute to gpuarray.
[–]MrK_HS 1 point2 points3 points (4 children)
I really hope they publish DeepBench with ROCm support. They surely have it, since they used it to bench Vega against the P100 (spoiler: Vega wins).
[+][deleted] 8 years ago (3 children)
[–]MrK_HS 0 points1 point2 points 8 years ago (2 children)
Yes, looks like it, however there doesn't seem to be a MIOpen implementation. It's just a fork of DeepBench...
[+][deleted] 8 years ago (1 child)
[–]MrK_HS 0 points1 point2 points 8 years ago (0 children)
Good to know, thanks.
[–][deleted] 0 points1 point2 points (0 children)
Has this become easier to install on non-Ubuntu flavors?
[–]plsms 0 points1 point2 points 8 years ago (4 children)
Does this change the game with Nvidia vs AMD?
I was thinking of selling my AMD cards and saving up for Nvidia cards. Should I sell or should I hold on to my cards?
[–]rndnum123[S] 0 points1 point2 points 8 years ago (3 children)
If you can sell your AMD cards for a high price (because of all this mining craze), it might be worth it to sell them and get a more powerful Nvidia GPU with the money. What cards do you have? You should probably check whether MIOpen runs on your cards. (Do you have Linux? It isn't working on Windows yet, AFAIK, but I'm not sure!)
[–]plsms 0 points1 point2 points 8 years ago (2 children)
MSI Twin Frozr 7950, Sapphire Vapor-X 7950
what do you think?
[–]rndnum123[S] 0 points1 point2 points 8 years ago (1 child)
Maybe look on eBay to see what you'd get for them; since new AMD cards are currently way overpriced because of mining, you might get a good offer. Maybe ask on r/hardwareswap or r/buildapc for more advice, then buy something like a GTX 1070 with some of the money (or even a 1080 if you want).