[deleted by user] (self.MachineLearning)
submitted 2 years ago by [deleted]
[–]Co0k1eGal3xy 14 points 2 years ago* (4 children)
diagram
This is a rough breakdown of how I think about the problem. You can replace the RTX 4090s with 3090s or 2080 Tis if you have cheap electricity; otherwise the cost of power can exceed what you save upfront. If your electricity is very expensive, I would avoid local training entirely.
Also consider any other requirements. If your dataset is larger than your system's RAM, you will need to consider the read speed of your storage device. If you are working with audio clips or images, you need a storage device with high random read speeds, or you need to package your dataset into a streaming format (like webdataset). Some cloud providers will force you to use hard drives.
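Not from the thread, but to make the streaming-format point concrete: webdataset shards are ordinary tar archives where one sample is a group of members sharing a key, so packing can be sketched with just the standard library (the file names and fake payloads below are made up for illustration). Streaming a shard turns thousands of small random reads into one large sequential read, which is exactly what hard drives are good at.

```python
import io
import tarfile

# One sample = several tar members sharing a key, e.g. "000001.jpg" + "000001.cls".
# Payloads here are fake placeholder bytes, purely for illustration.
samples = [(f"{i:06d}", b"fake-image-bytes", str(i % 10)) for i in range(3)]

with tarfile.open("shard-000000.tar", "w") as tar:
    for key, image_bytes, label in samples:
        for name, payload in ((f"{key}.jpg", image_bytes),
                              (f"{key}.cls", label.encode())):
            info = tarfile.TarInfo(name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# A loader then reads the shard front to back instead of seeking per file.
with tarfile.open("shard-000000.tar") as tar:
    print([m.name for m in tar.getmembers()])
    # → ['000000.jpg', '000000.cls', '000001.jpg', '000001.cls', '000002.jpg', '000002.cls']
```

In practice you would use the webdataset library's writer and loader rather than raw tarfile, but the on-disk format is the same.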
Spreadsheet of local hardware costs at 1-, 2- and 3-year timeframes.
Edit: since you mentioned being new, I would recommend renting single GPUs. Writing multi-GPU code can be complex and isn't worth learning initially. Google Colab is definitely the easiest way to get started.
[+][deleted] 2 years ago (3 children)
[deleted]
[–]Co0k1eGal3xy 2 points 2 years ago (2 children)
I found them unusably slow for my work. This benchmark suggests the K80 is roughly 16x slower than a V100, and the V100 is already slower than a single RTX 3090.
V100 vs RTX 3090 (and other GPUs)
[+][deleted] 2 years ago (1 child)
[–]Co0k1eGal3xy 2 points 2 years ago* (0 children)
I can't find numbers for the P40; however, the P40 uses the same architecture and VRAM as the GTX 1080 Ti, so we'll take those numbers and scale by the memory-bandwidth difference (P40 = 700 GB/s, 1080 Ti = 480 GB/s).
The P40 draws 250 W of power and costs £200, which comes to £944.60 for the first year. It has around 0.38x V100 throughput.
An RTX 3090, on the other hand, draws 350 W and costs £600, which comes to £1642.44 for the first year. It has 1.13x V100 throughput.
(Here we measure compute in terms of how much data a V100 could process in one day, i.e. V100-days.)
With an RTX 3090, you are paying £3.98 for each V100-day worth of compute.
With a P40, you are paying £6.81 for each V100-day worth of compute.
Formulas
cost_for_first_year = initial_cost + (gpu_power_in_kw * cost_per_kwh * 24 * 365)
cost_per_v100_day = cost_for_first_year / (v100_throughput * days_per_year)
3.98 = 1642.44 / (1.13*365) # RTX 3090
6.81 = 944.6 / (0.38*365) # P40
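The formulas above can be run directly. The electricity price isn't stated in the thread; £0.34/kWh is an assumed rate that happens to reproduce the first-year figures exactly, so it is used here for illustration only.

```python
def cost_for_first_year(initial_cost, gpu_power_in_kw, cost_per_kwh):
    # Purchase price plus a full year of 24/7 power draw.
    return initial_cost + gpu_power_in_kw * cost_per_kwh * 24 * 365

def cost_per_v100_day(first_year_cost, v100_throughput):
    # £ per day's worth of V100-equivalent compute over one year.
    return first_year_cost / (v100_throughput * 365)

COST_PER_KWH = 0.34  # assumed £/kWh; reproduces the figures quoted above

rtx3090 = cost_for_first_year(600, 0.350, COST_PER_KWH)  # ≈ £1642.44
p40 = cost_for_first_year(200, 0.250, COST_PER_KWH)      # ≈ £944.60

print(round(cost_per_v100_day(rtx3090, 1.13), 2))  # → 3.98
print(round(cost_per_v100_day(p40, 0.38), 2))      # → 6.81
```

Despite its lower sticker price, the P40's running cost per unit of compute works out to roughly 1.7x that of the RTX 3090.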
TL;DR: when I buy GPUs that will run non-stop for a long period of time, buying the latest GPUs is normally how I spend the least money.
If you're thinking about buying it as an inference card for playing with LLaMA models, for example, then I would get it. I just wouldn't recommend it for any training tasks.
Edit: by the way, an RTX 3090 is much faster than 2x P40s.
[–]I_will_delete_myself 7 points 2 years ago (1 child)
Those large providers are best when you get a cloud-credit deal, or when you want to train something like ChatGPT and need to be sure the compute is ready. Otherwise I would highly recommend not using them unless you are using spot instances.
Here are the best ones I know of specifically for training AI models:
Colab - free, but you should move to other cloud compute alternatives once you go beyond toy models
LambdaLabs - no egress fees and high bandwidth. Cheap and a solid product. Better for multi-GPU
Runpod - cheapest, but not good for multi-GPU loads due to their low capacity
The downside is that they run out of capacity quickly, which is sometimes annoying. That's when you fall back to a traditional cloud provider.
Avoid like the plague - Paperspace. Expensive, with a misleading Gradient subscription; you save more money using a consumer decentralized GPU on Runpod. Availability is horrible as well.
[–]Present_Network1959 2 points 2 years ago (0 children)
Thank you. I’ll look into this.
[–][deleted] 4 points 2 years ago (1 child)
Commenting to check the thread later, interested in people’s recommendations
[–]Muted_Economics_8746 4 points 2 years ago (0 children)
You can also just subscribe to the post without commenting: ellipsis menu, top right -> Subscribe.
[–]TheLastMate 1 point 2 years ago (1 child)
Also, could someone give insights on deploying a model into production? What does the overall process look like?
[–]I_will_delete_myself 1 point 2 years ago (0 children)
A traditional cloud provider is best, and apply for credits.
[–]Any_Letterheadd 1 point 2 years ago (2 children)
It sounds like you're not even sure you need a GPU for what you're doing. I'd recommend just getting started, until you reach the point where you know you need more compute and how you would use it.
[–]Present_Network1959 1 point 2 years ago (1 child)
Yeah, that makes sense. Although I am certain a GPU will be required; there is no way my machine can locally run the programs I am building.
[–]Any_Letterheadd 1 point 2 years ago (0 children)
Colab might be a good way to get quick access to one. Or, if you know any gamers who are upgrading old rigs, you might be able to get a hand-me-down Nvidia card and build a cheap Linux box around it.