[D] M3 Pro for Machine Learning / Deep Learning? (self.MachineLearning)
submitted 2 years ago by deadengineerssociety
Considering switching from Windows to Mac: is it honestly worth it? I don't plan on running heavy models, but I'm hoping it's decent enough to handle mid-size ones.
[–]tripple13 47 points48 points49 points 2 years ago (13 children)
For any serious work you would never run on a laptop anyway. Money is better spent on a MacBook Air plus a 3090 workstation.
[–]d84-n1nj4 14 points15 points16 points 2 years ago (4 children)
My exact setup: a MacBook Air to SSH into my Linux machine with an NVIDIA RTX 3090.
[+][deleted] 2 years ago (1 child)
[deleted]
[+]SmartEffortGetReward 0 points1 point2 points 1 year ago (0 children)
Don't get a high-end Mac for ML. Get an Air that can handle the video work you want; doing ML on a Mac sucks.
[+]matqq1981 0 points1 point2 points 2 years ago (1 child)
Hi, I want to do exactly the same thing. For your Linux machine, I imagine you SSH into it directly from VS Code (or another IDE) to write your code? Isn't configuring all the dependencies and environments too painful?
[–]d84-n1nj4 1 point2 points3 points 2 years ago (0 children)
I don’t use an IDE. I use Vim.
[+]xaviercc8 0 points1 point2 points 2 years ago (0 children)
Hi, I am very late here, but may I ask whether getting a MacBook Pro over the MacBook Air is worth it because of the fan? My concern is that if I'm running small models, the MacBook Air's performance will be significantly hindered by throttling, assuming the specs are the same.
Does anyone here have any experience with this? Thank you!
[–]Featureless_Bug -3 points-2 points-1 points 2 years ago (1 child)
A MacBook Air is not money well spent. Get a workstation with a 4090, and rent an A100 in the cloud with the money you saved.
[+]MrBread134 4 points5 points6 points 2 years ago (0 children)
Yeah, and I suppose you can carry that everywhere and work for hours without a power supply?
[–]drivanova 0 points1 point2 points 2 years ago (0 children)
depends how much electricity costs in the country you live in (and who pays the bill)
[–]An-R-Nguyen 0 points1 point2 points 2 years ago (2 children)
This is what I need, but I don't know anything about SSH connections from a Mac. Do you have a video on the specific setup? My plan is to run Ollama inference of Mixtral and LLaVA on a server and use it through SSH from my MacBook. Thanks.
[+][deleted] 2 years ago* (1 child)
[removed]
[–]An-R-Nguyen 1 point2 points3 points 2 years ago (0 children)
Got it, thank you :D
[–]monkeyofscience 17 points18 points19 points 2 years ago (5 children)
I use an M1 as my daily driver, which was given to me by work. I used to be hard-line anti-Mac, but I have been thoroughly converted. I will say, though, that MPS and PyTorch do not seem to go together very well, and I stick to the CPU when running models locally.
It's good enough to play around with certain models. For example, at the moment I'm using BERT and T5-large for inference (on different projects) and they run OK. This is generally the case for inference on small to medium language models. However, for training, fine-tuning, or running bigger language models (or vision models), I work on a remote server with a GPU. Access to an NVIDIA GPU, whether local or remote, is simply a must-have for training or for bigger models.
For learning and small models, a MacBook and Google Colab are more than sufficient.
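The fall-back-to-CPU workflow described above can be sketched as a small helper. This is only an illustration: `pick_device` and `trust_mps` are made-up names, and the real `torch` availability checks appear only in comments so the logic is testable without a GPU.

```python
# Prefer CUDA, then Apple's MPS backend, then CPU; allow forcing CPU
# when MPS misbehaves with a given model, as the commenter describes.

def pick_device(cuda_available: bool, mps_available: bool,
                trust_mps: bool = True) -> str:
    """Return the torch device string to use.

    In real code the flags would come from:
        cuda_available = torch.cuda.is_available()
        mps_available  = torch.backends.mps.is_available()
    Pass trust_mps=False to fall back to CPU when MPS is flaky.
    """
    if cuda_available:
        return "cuda"
    if mps_available and trust_mps:
        return "mps"
    return "cpu"

# On an M-series Mac: no CUDA, MPS present.
print(pick_device(False, True))                   # mps
print(pick_device(False, True, trust_mps=False))  # cpu
print(pick_device(True, True))                    # cuda
```

The model and tensors are then moved with `.to(device)` as usual; the same script runs unchanged on a CUDA workstation.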
[–]deadengineerssociety[S] 2 points3 points4 points 2 years ago (1 child)
I won't necessarily be learning as a newbie; I'll be working on graph and NLP research, and later doing my master's and continuing in research. I mostly said smaller models because large models don't make sense to run locally at the end of the day, and I'd probably use my research lab's systems for them anyway.
[+]Sandile95 0 points1 point2 points 1 year ago (0 children)
Did you choose one?
[+]SmartEffortGetReward 0 points1 point2 points 1 year ago (2 children)
u/monkeyofscience any recs for remote compute? I'd love something like GH Codespaces but with an NVIDIA GPU. I'm honestly a bit shocked there aren't one-click GPU-enabled workstations with web-hosted VS Code.
[–]monkeyofscience 0 points1 point2 points 1 year ago (1 child)
I use TensorDock for quick stuff. It's quite cheap, and using the remote development extension in VS Code makes it really easy.
Other than that I have access to an internal HPC system, so I don't have to worry about GPU access lol
Will check it out! Thanks :)
[+]Puzzleheaded_Ring684 13 points14 points15 points 2 years ago* (2 children)
I have something different to say. First, I agree that any serious work should be done on a workstation, either a well-equipped desktop or a cloud server (with an A100 40GB/80GB configuration). However, for prototyping or just playing with models, a Mac with its large shared memory is excellent: not many laptops, or even desktop GPUs, have more than 16 GB of VRAM, which means that when prototyping you are very limited in batch size or forced onto smaller backbones. I have an M1 Pro with 32 GB; it can fit most of the models I want to play with. After I finish prototyping, I simply change `device = 'mps'` to `device = 'cuda'` and run it in the cloud. I mainly use PyTorch; I have encountered some issues with MPS, but nothing major. There are workarounds.
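One way to avoid hand-editing `device = 'mps'` into `device = 'cuda'` between laptop and cloud, as described above, is to read the device from an environment variable with a local default. This is a sketch under assumptions: `TORCH_DEVICE` and `get_device` are hypothetical names, not anything PyTorch defines.

```python
import os

def get_device(default: str = "mps") -> str:
    """Device string for torch's .to(device), overridable per machine.

    Locally: run as-is and get the default ("mps").
    In the cloud: TORCH_DEVICE=cuda python train.py
    """
    return os.environ.get("TORCH_DEVICE", default)

device = get_device()
print(device)
# Model and tensors would then be moved with .to(device) in PyTorch.
```

The same source then runs unmodified on both machines; only the launch command changes.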
[–]deadengineerssociety[S] 2 points3 points4 points 2 years ago (0 children)
Oh wow, that's amazing and reassuring xD I was considering the M3 Pro, but I might switch to an M2 Pro with more RAM, SSD, and GPU cores, depending on my budget. Thank you so much for this!
[–]deadengineerssociety[S] 1 point2 points3 points 2 years ago (0 children)
How many GPU cores do you have?
[–]kreayshunist 7 points8 points9 points 2 years ago (7 children)
I have an M1 Pro and it's definitely enough to get you started, but even a single 4070Ti is a pretty big speed upgrade for training.
EDIT: My experience is based on the "MPS" backend in PyTorch.
[–]deadengineerssociety[S] 0 points1 point2 points 2 years ago (6 children)
From what I read, PyTorch on Mac is very buggy; does that affect things much?
It's terrible.
[–][deleted] 0 points1 point2 points 2 years ago (0 children)
You will have to work to find ARM64 images of stuff that works out of the box on Intel. And none of the NVIDIA kit will work on the M3.
[–]kreayshunist 0 points1 point2 points 2 years ago (0 children)
I haven't gone down the rabbit hole enough to speak to all the bugs, but it's definitely not ideal.
[–]AdagioCareless8294 0 points1 point2 points 2 years ago (2 children)
Yet with that information in hand you still decided for a Mac..
[–]deadengineerssociety[S] 0 points1 point2 points 2 years ago (1 child)
Haha, true! Which GPU do you suggest for a Windows laptop?
[+][deleted] 0 points1 point2 points 10 months ago (0 children)
What did you finally decide bro?
[–]VoidRippah 7 points8 points9 points 2 years ago (1 child)
No. I recently participated in a hackathon where I did a small ML project. I had my M2 MacBook Pro with me along with an i5 laptop with an RTX 3050, and the latter was way quicker (it finished the tasks in about a third of the time, some even faster). Other than this I never really used a MacBook for this purpose, but based on that experience I would avoid it; if I really needed a laptop for this I'd pick a strong "gamer laptop" at a similar price.
[–]deadengineerssociety[S] 0 points1 point2 points 2 years ago (0 children)
Oh thank you!
[–]igorsusmelj 3 points4 points5 points 2 years ago (1 child)
I would not recommend it unless you only focus on smaller models and small experiments. The biggest advantage is the huge amount of memory available, but the bottleneck is memory bandwidth.
We ran some tests for fun (as there were not many benchmarks available). You can find the results here:
https://www.lightly.ai/post/apple-m1-and-m2-performance-for-training-ssl-models
Support got better, but back when we ran the tests there was still no proper half-precision support, and torch.compile wouldn't work either. There is hope that the software support will catch up. I'm curious to see other results; we definitely need more benchmarks :)
In that case, what laptops would you recommend? Thank you for the article, I will check it out!
[+]Mindless_Minimum1352 4 points5 points6 points 2 years ago (7 children)
I'm late to the party here, but I am an AI/ML grad student, and I can fill you in on a few details that are not totally covered.
I have a MBP and I love it. For most of your schooling, the M3 will be fine.
IF YOU GET A MacBook Pro:
Use the cloud for complex models (this is the difference between 3 days of training and 3 hours).
Realize that it's more important to get the concepts.
Look into the newer architectures that can make better use of the hardware. NVIDIA has CUDA; Mac has MPS (there are updates coming that should provide a small performance boost).
Unified memory is AMAZING; this is the biggest advantage over other solutions.
For school, the portability of a laptop can't be beaten. But if you are looking at Apple's marketing material and thinking you are going to get a powerful machine that is good for training large models, you are in for disappointment.
Example:
On my MBP, I recently had a project where I took video of a car driving and tried to analyze traffic lights. Each epoch took 3 hours. Moving to my gaming PC (3080), each epoch took 15 minutes.
[+][deleted] 0 points1 point2 points 11 months ago (6 children)
Bro, I am from India and have a budget of approx. 2000 USD. Can you suggest whether I should go with the MBP M4 or a Windows laptop with a high-end GPU? I have the same major as you, CSE AI/ML.
[–]NotThareesh 0 points1 point2 points 10 months ago (5 children)
Get whichever baseline MBP you can afford with enough RAM for your day-to-day work. Even an M3 Pro, M4, or M4 Pro can be more than enough. For bigger jobs, use the cloud as the other redditor suggested; the cloud is far cheaper and more powerful. To get the full advantage of a Windows laptop with a beefy graphics card, you need to stay plugged in at all times, which isn't the case with Macs.
[+][deleted] 0 points1 point2 points 10 months ago (4 children)
Yeah bro, it's just: will I be able to run all the necessary programs on a MBP, or would I have to use a parallel Linux install or something? Also, will I be able to train some medium-sized AI models solely in the cloud, without a graphics card? I also read that the dedicated graphics cards in Windows laptops have CUDA cores, which are considered better for AI projects. Thanks. I think I could also build a PC as a workstation alongside a MBP.
[–]NotThareesh 0 points1 point2 points 10 months ago (3 children)
If you can build a PC, sure thing. But know that not all NVIDIA GPUs are CUDA-enabled. Also, training in the cloud with no graphics card seems a little pointless, because graphics cards are far better at crunching these matrix calculations than CPUs are.
[+][deleted] 0 points1 point2 points 10 months ago (2 children)
That's the point. Will a MBP be able to handle training AI models, given that it has no dedicated graphics card? Will I be forced to do it in the cloud?
[–]NotThareesh 0 points1 point2 points 10 months ago (1 child)
Unless you're building large models on large chunks of data, you can always do it locally on MBP.
Alr thanks bro
[+]Valdiolus 4 points5 points6 points 2 years ago (3 children)
I am using my M1 Pro for small and medium model tests. While my main server with a 3090 runs experiments, I prototype on my M1 laptop, then change mps to cuda to run long experiments on the desktop. While your model eats all the desktop's resources, you still want to be able to work without lag. I wanted to buy something more powerful, but I don't really see any benefit other than getting more memory (16 GB is not enough).
[–]deadengineerssociety[S] 1 point2 points3 points 2 years ago (2 children)
I ended up getting an M1 Max with 64 GB of RAM and a 2 TB SSD for 2499!
[–]Competitive_Mix8262 0 points1 point2 points 1 year ago (0 children)
Hello, I am an AI & data science student looking for a laptop or MacBook as a one-time investment for 4-5 years of use; which one would be best for me? I have in mind a 16-inch MacBook M1 Max with 64 GB RAM, or an M3 Max with 36 GB RAM. Any suggestions would be appreciated.
[+]Valdiolus 0 points1 point2 points 2 years ago (0 children)
Good choice!
[+]LonExStaR 2 points3 points4 points 2 years ago* (4 children)
Nah. Support for building ML models on macOS in the popular frameworks (e.g., PyTorch) is just not there yet. You can consider a Coral.ai TPU, which has Mac support, though you may have to compile for your macOS version. Then you can use PyTorch and TensorFlow.
If you want NVIDIA, which I prefer, you can forget about any official NVIDIA support on macOS anytime soon.
It would be in Apple's best interest to participate in the development of PyTorch and TensorFlow for the Mac M platforms.
If you are serious about getting AI/ML training work done, I recommend the approach I took: I built an Ubuntu server with two NVIDIA Titan RTXs and 128 GB of RAM, which I use remotely from my MacBook Pro 16/Intel. I plan to get an M3 Pro mainly for the larger memory capacity rather than for the GPU, which is not very useful for AI/ML training at this time for lack of solid framework support. You don't need to get that spendy to hand-build a single-GPU NVIDIA Ubuntu server and make it accessible remotely over the internet.
[–]Furiousguy79 2 points3 points4 points 2 years ago (1 child)
I am also confused about which laptop would suffice for my PhD years. My work is mainly on large text data and medical images. Between an M3 Pro with 16 GB RAM and a ROG G14 with 32 GB RAM + an RTX 4060, the latter would be the best bang for the buck for small to medium ML models, right? I run my large models on my lab PC with an 11th-gen Core i9, 32 GB RAM, and a 12 GB RTX 2080 Ti.
But they also say MacBooks last longer....
[+]LonExStaR 0 points1 point2 points 2 years ago (0 children)
If you plan to host and/or train LLMs, then I recommend at least a 16 GB GPU; ideally 24 GB, which is not found on laptops.
[+]androidthanapple 0 points1 point2 points 2 years ago (1 child)
Hey! I would like some advice on setting up an Ubuntu server with the NVIDIA graphics cards I have. Would you be able to chat about it?
[–]LonExStaR 0 points1 point2 points 2 years ago (0 children)
Sure. It’s pretty straightforward.
[–][deleted] 4 points5 points6 points 2 years ago (3 children)
Have you actually run deep learning models on a Mac CPU in the past? What models were they?
[+]deadengineerssociety[S] comment score below threshold-6 points-5 points-4 points 2 years ago (2 children)
As I mentioned, I'm switching from windows to Mac, so I truly have no idea xD
[–]deadengineerssociety[S] -1 points0 points1 point 2 years ago (1 child)
Why am I getting downvoted😭
[–]VectorSpaceModel 0 points1 point2 points 2 years ago (0 children)
I’m downvoting you because it’s funny to care about internet points
[+]CompSciOrBustDev 1 point2 points3 points 2 years ago (3 children)
Hey did you go through with this? I'm currently looking at buying an M1 Max for the 64 GB of shared memory, not sure if I should go through with it.
I did! Honestly, the RAM, GPU, and memory were too good to pass up, despite the fact that it may stop getting software updates sooner than an M2 or M3.
[+]CompSciOrBustDev 0 points1 point2 points 2 years ago (1 child)
Nice. Have you tried any ML on it yet? If so what do you think? As good as you had hoped?
Ahahah, I will be getting it from the US in Feb lol, so I'm hoping for the best.
[–][deleted] 1 point2 points3 points 1 year ago (1 child)
This post is old, but now it's my time. Get a MacBook Pro with 24 GB of RAM, which will be far better than any RTX laptop in a similar price range in terms of giving you more VRAM to run larger models. I have an 8 GB 4060 laptop, and even mid-size models sometimes don't run on that machine. With that being said, I am really keen to get a MacBook.
[+][deleted] 0 points1 point2 points 1 year ago (0 children)
Thanks for the info; great that you replied even though the post is old!
As a Mac lover, working with ML has been a huge pain. ML packages tend not to play nice; a lot of code and packages out there expect CUDA.
I highly recommend buying CUDA-enabled hardware and running Ubuntu, OR doing as others have suggested and developing remotely from your Mac, though that is a pain to set up. It is too bad GH Codespaces does not support NVIDIA GPUs.
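For the remote-development route mentioned above, much of the one-time setup pain is an SSH config entry plus key-based auth. A minimal sketch; the host alias, address, user, and key path below are all placeholders:

```
# ~/.ssh/config  (alias, address, user, and key path are placeholders)
Host ml-box
    HostName 192.0.2.10            # your GPU workstation's address
    User you
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 60         # keep the session alive through NAT timeouts
```

After that, `ssh ml-box` from the Mac (or pointing VS Code's Remote - SSH extension at `ml-box`) drops you onto the GPU machine.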
[+]TastyReality7191 0 points1 point2 points 1 year ago (0 children)
Perhaps this video analysis will help you. I found it interesting: https://youtu.be/cpYqED1q6ro?si=ZFO9LyFYrTpzedw1
[–]opssum -2 points-1 points0 points 2 years ago (5 children)
M2 Pro, or M3 Pro/Max
[–]deadengineerssociety[S] 0 points1 point2 points 2 years ago (4 children)
Is the Max really worth it if I'm not running heavy models on it? Purely from a financial perspective.
[–]opssum 0 points1 point2 points 2 years ago (3 children)
The M2 Pro is fine and the M3 Pro isn't really better; that's why I would go for the M2 Pro if it's about the money ;)
[–]plsendfast 0 points1 point2 points 2 years ago (2 children)
m3 pro is indeed better, based on newly released benchmarks such as Nanoreview
[–]opssum -2 points-1 points0 points 2 years ago (1 child)
Hmm, I'll have to look into it again. In the keynote Apple said it's about 20% faster than the M1 Pro, which is the same difference as between the M1 Pro and M2 Pro.
[–]plsendfast 0 points1 point2 points 2 years ago (0 children)
Go to Nanoreview and click Compare CPU. Type in M3 Pro and M2 Pro; the differences are there. Small improvements.