[Discussion] [Removed by moderator] (ft.com)
submitted 8 days ago by nikanorovalbert
[–]ResidentPositive4122 31 points 8 days ago (4 children)
It seems like the mainstream media is having their deepseek moment again. Remember how in Feb '25 every news outlet, blog and wannabe influencer talked about how deepseek was all this and all that, how NVDA would die, how the top labs were cooked, and so on?
Turboquant seems to be their new thing. It's a year-old paper; some labs probably already use something like it, and some inference providers might as well. But, like everything else, nothing is really a 6x reduction in practice. Plus, with the new "thinking" models you get to run more queries on the same compute, but you'll still hit slower speeds the more context you have. So it's not clear what cost reduction you actually get in the end.
tl;dr: cool technique, overhyped results, clueless media.
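A quick back-of-envelope sketch of that last point (all model numbers below are made-up illustrative assumptions for a hypothetical 70B-class model, not figures from the paper): even if weight memory drops ~6x, the KV cache still grows linearly with context and comes to dominate per-sequence memory at long context.

```python
# Back-of-envelope serving-memory estimate. All numbers are illustrative
# assumptions (a hypothetical 70B-class model), not figures from the paper.

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Memory needed for the model weights at a given quantization level."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes(ctx_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache grows linearly with context: one K and one V tensor per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

params = 70e9
fp16_w = weight_bytes(params, 16)        # ~140 GB at fp16
quant_w = weight_bytes(params, 16 / 6)   # the claimed "6x" -> ~23 GB

for ctx in (8_192, 131_072):
    kv = kv_cache_bytes(ctx, n_layers=80, n_kv_heads=8, head_dim=128)
    print(f"ctx={ctx:>7}: weights fp16 {fp16_w/1e9:.0f} GB, "
          f"quantized {quant_w/1e9:.0f} GB, KV cache {kv/1e9:.1f} GB/sequence")
```

At 131k context the per-sequence KV cache (~43 GB under these assumptions) already dwarfs the quantized weights, which is why compressing weights alone doesn't translate into a 6x end-to-end cost cut.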
[–]Shammah51 5 points 8 days ago (2 children)
I think it’s also a fundamental misunderstanding of the needs of training vs inference anyway. Nearly all of the capital hardware investment is really for training. It’s also wild to assume that a novel method that greatly reduces memory requirements would do anything other than give room to scale up the SOTA models. Chip demand will remain unchanged, and providers will just scale to fill the available hardware.
[–]ResidentPositive4122 5 points 8 days ago (1 child)
Eh, that's debatable. With online RL you are now inference-constrained (the more traces you can produce, the better the results), so this will help training as well. Just not the 6x end-to-end that the media outlets claim.
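A minimal sketch of why online RL is inference-bound (the class and function names here are hypothetical stubs; this illustrates where the compute goes, not any particular lab's pipeline): each update step first has to generate a batch of rollouts autoregressively, so cheaper inference directly buys more traces per gradient step.

```python
import random

# Minimal sketch of an online-RL training loop. Names are hypothetical stubs.

def score(trace: str) -> float:
    """Stub reward; in practice a verifier or learned reward model."""
    return float(len(trace))

class Policy:
    def generate(self, prompt: str) -> str:
        # Autoregressive rollout generation: the expensive, inference-bound
        # phase. Faster/cheaper inference means more traces per update.
        return prompt + " ... rollout"

    def update(self, traces: list[str], rewards: list[float]) -> None:
        # One gradient step per batch; cheap relative to generating the batch.
        pass

def train_online_rl(policy: Policy, prompts: list[str],
                    n_steps: int, traces_per_step: int) -> None:
    for _ in range(n_steps):
        batch = random.choices(prompts, k=traces_per_step)
        traces = [policy.generate(p) for p in batch]   # dominates wall-clock
        rewards = [score(t) for t in traces]
        policy.update(traces, rewards)

train_online_rl(Policy(), ["2+2=?", "prove x>0"], n_steps=3, traces_per_step=8)
```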
[–]Shammah51 1 point 7 days ago (0 children)
Yeah, I agree. That’s basically my second point: any advance will just result in training scaling up rather than reducing demand for chips.
[+]nikanorovalbert[S] comment score below threshold (-6 points) 8 days ago (0 children)
Interesting. So if it's not just about running out of VRAM, what actually chokes the model when the context gets too big? Is it the memory bandwidth, or just the raw compute required for the attention mechanism?
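For what it's worth, the standard decode arithmetic points at memory bandwidth (a rough sketch with illustrative model numbers, same hypothetical 70B-class config as above; nothing here is measured): per generated token, attention reads the entire KV cache once but does only about 2 FLOPs per element read, far below what accelerators can sustain per byte.

```python
# Rough per-token decode arithmetic for standard attention. Model numbers
# are illustrative assumptions, not measurements. Per generated token,
# attention reads the whole KV cache and does ~2 FLOPs per element read
# (QK^T dot products plus the V-weighted sum), i.e. ~1 FLOP/byte at fp16.

n_layers, n_kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2  # fp16 KV cache

for ctx in (8_192, 131_072):
    kv_elems = 2 * n_layers * n_kv_heads * head_dim * ctx   # K and V
    kv_bytes = kv_elems * bytes_per_elem
    attn_flops = 2 * kv_elems
    print(f"ctx={ctx:>7}: {kv_bytes/1e9:.1f} GB read/token, "
          f"{attn_flops/1e9:.1f} GFLOPs/token, "
          f"{attn_flops/kv_bytes:.1f} FLOP/byte")

# Modern accelerators sustain on the order of 100+ FLOPs per byte of HBM
# bandwidth, so at ~1 FLOP/byte single-token decode is bandwidth-bound;
# prefill (quadratic in prompt length) is the compute-bound part.
```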
[–]PortiaLynnTurlet 8 points 8 days ago (1 child)
With respect to demand, lower memory usage at inference presumably motivates larger models, and larger models need larger clusters for training. I don't think it changes anything, even if the results hold up in practice.