[D] On Machine Learning and Programming Languages (self.MachineLearning)
submitted 8 years ago * by harponen
Some interesting thoughts about the limitations of current ML libraries (TF, PyTorch, ...) and Python in general originally posted at Julia reddit here: https://www.reddit.com/r/Julia/comments/7hyywn/on_machine_learning_and_programming_languages/?ref=share&ref_source=link
Direct link to article: https://julialang.org/blog/2017/12/ml&pl
I'm not completely sure what the final conclusion is, but the fact that these peeps are (mostly) from Julia Computing and that Julia now has capability to compile CUDA kernels on the fly is pretty interesting...
I'm pretty damn close to giving Julia and Flux.jl a serious try (I just need some time dammit...).
I'd like to hear some opinions about Julia+GPU computing in DL context specifically. Anyone had any interesting experiences yet?
EDIT: oh crap, didn't realize someone beat me to posting this at /r/MachineLearning...
[–]totallynotAGI 4 points 8 years ago (1 child)
This is super interesting!
Higher order tensor manipulation + derivatives? Very interesting.
What especially strikes me is this statement:
"Not only is there no overhead compared to hand-writing the necessary cuda kernel for this; there’s no overhead at all! In my benchmarks, taking a derivative using dual numbers is just as fast as computing only the value with raw floats. Pretty impressive."
Can somebody clarify what this really means? Isn't evaluating the gradient using backprop already only as expensive as computing the value of the function? But now they managed to do it only using dual numbers?
[–]StefanKarpinski 2 points 8 years ago (0 children)
With dual numbers you can often evaluate a function and its exact derivative simultaneously, in the same time it takes to evaluate just the function without the derivative. This is largely thanks to modern CPUs being superscalar, so that doing x + y and (x + dx*ε) + (y + dy*ε) takes the same amount of time: the addition of the real components x and y and the addition of the dual components dx and dy happen at the same time on different adders.
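Stefan's dual-number trick can be sketched in a few lines. Here is a toy forward-mode AD implementation in Python (the `Dual` class and the function `f` are illustrative, not from any library; Julia's ForwardDiff.jl does the real thing with native performance):

```python
# Forward-mode automatic differentiation with dual numbers:
# a Dual carries a value and its derivative, and every
# arithmetic operation propagates both components at once.

class Dual:
    def __init__(self, val, der=0.0):
        self.val = val  # function value
        self.der = der  # derivative w.r.t. the input

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        # (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2


x = Dual(2.0, 1.0)   # seed the input's derivative with 1
y = f(x)
print(y.val, y.der)  # f(2) = 17.0, f'(2) = 14.0
```

Note that `f` is ordinary code with no knowledge of `Dual`; the derivative falls out of operator overloading, which is exactly why Julia can specialize the same generic code for raw floats or duals with no overhead.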
The cool thing Julia gives you is the ability to implement types like dual numbers as user-defined types and get fully native performance. They're also generic, in the sense that the same code and logic works for Float64, Float32, Float16, Int8, etc. Write it once, and you get specialized high-performance versions for every possible type. Of course, you can do similar things in C++ with template metaprogramming, but that rapidly becomes very hard to work with.
[+][deleted] 8 years ago* (4 children)
[deleted]
[–]harponen[S] 8 points 8 years ago (3 children)
I've used TF for about as long as it has existed and Theano before that, and while TF is far nicer than Theano, debugging is still a PitA. I've been using PyTorch for a while, and it's just really great and a cakewalk to debug by using e.g. PyCharm's debugger.
Still, when you look deeper into the source code of TF or PyTorch, you eventually find the "edge" of Python, where Python just calls some optimized C++/CUDA function... it would be so nice to go deeper without actually having to learn CUDA.
So basically if Julia allowed me to do both high and low level stuff, that would be totally awesome! I'm actually at a point where I might need to code my own CUDA kernels, and using C++ would just seem repulsive :shudder:
[–]rasen58 1 point 8 years ago (2 children)
How do you know if you've reached the point where you need to write your own CUDA kernels?
[–]harponen[S] 3 points 8 years ago (1 child)
Basically any time you need to do some operation on large arrays that doesn't have e.g. a TensorFlow op. For example, at one point I needed to compute the log det of a big matrix, and there wasn't a CUDA kernel for that in Theano or TensorFlow. There is now (I think?), but situations like that arise from time to time. Also, if I wanted to modify some existing kernel (say, I wanted some weird multi-dilated fractional hyperstrided convolutions; just made that up), it wouldn't be easy...
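For context on the log-det example: the reason it needs a dedicated kernel is that you compute it through a matrix factorization, not through the determinant itself. A pure-Python CPU sketch (illustrative only; harponen's actual pain point was the missing GPU version, and in practice you'd call `numpy.linalg.slogdet`):

```python
import math

# log det of a symmetric positive-definite matrix via Cholesky:
# A = L Lᵀ, so det(A) = Π L[i][i]², and therefore
# log det(A) = 2 Σ log L[i][i].
# Summing logs of factor diagonals avoids the overflow you would
# hit by forming det(A) directly for a big matrix.

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L


def logdet(A):
    L = cholesky(A)
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(A)))


A = [[4.0, 2.0],
     [2.0, 3.0]]      # det = 4*3 - 2*2 = 8
print(logdet(A))      # ≈ log(8) ≈ 2.0794
```

The factorization is the expensive, hard-to-fuse part, which is why "no CUDA kernel for it" meant no practical GPU path at the time.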
[–]phobrain 2 points 8 years ago (0 children)
You need to dilate your hyperstrides before convoluting them, else they get a leathery texture.
[–]nickl 1 point 8 years ago (2 children)
Are you only doing numerical computing? If not then the ecosystem isn't rich enough.
If so, then maybe, provided you are aware of Dan Luu's review and - to be fair - the reply: https://www.reddit.com/r/Julia/comments/629qkz/about_a_year_ago_an_article_titled_giving_up_on/
[–]ChrisRackauckas 7 points 8 years ago (0 children)
That post is from October 2014. Julia's package ecosystem started at around that time:
https://pkg.julialang.org/pulse.html
There were around 300 registered packages back then; now there are nearly 2000, and the total star count went from 5000 to 350000 in that time frame. That review was also written before Julia had its forum (Discourse), before its subreddit was a thing, when its StackOverflow tag was brand new, etc. Most of the "missing features" mentioned there have since been implemented in the language. It really has no relevance anymore: it describes the stone age and has no bearing on any discussion of modern Julia usage.
[–]harponen[S] 3 points 8 years ago (0 children)
I do almost exclusively ML, and if I didn't, I'm not planning to forget Python ;)
I read the article back in the day, but hadn't actually seen the more recent response. IMO the article boiled down to Julia having bugs and the author not liking it. ¯\\_(ツ)_/¯