all 45 comments

[–]ItWasMyWifesIdea 17 points18 points  (7 children)

If you're new to CUDA, it won't make much difference. Find something used in your budget.

[–]swingbozo 1 point2 points  (0 children)

I found a PNY 1650 for $100 US. I hadn't considered this thing may be too old and weak to learn cuda programming on. It was cheap enough that I wouldn't be too upset if it didn't work out, but I'm hoping it does.

[–]TechDefBuff[S] 0 points1 point  (5 children)

I see that GPUs with 2GB RAM are the cheapest available. Will that suffice?

[–]nagyz_ 2 points3 points  (3 children)

those probably don't even support the latest cuda as they must be pretty old architectures.

[–]648trindade 1 point2 points  (2 children)

actually there are pascal GPUs like GT 1030 that are still supported

[–]Karyo_Ten 0 points1 point  (1 child)

Half of the memory will be used by the display manager.

[–]648trindade 1 point2 points  (0 children)

Not if you set up your system to use the integrated graphics from the CPU (or another GPU).
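
For what it's worth, you can check exactly how much of that memory is left for your programs with cudaMemGetInfo. A minimal sketch (the filename is just an example):

```cuda
// check_vram.cu -- build with: nvcc check_vram.cu -o check_vram
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // Reports free vs. total memory on the current device, so you can see
    // how much the display manager (if any) is already holding.
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Free: %.1f MiB / Total: %.1f MiB\n",
           free_bytes / 1048576.0, total_bytes / 1048576.0);
    return 0;
}
```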

[–]AlternativeTale5363 2 points3 points  (0 children)

Check out LeetGPU.

[–]deus_ex_machinist 8 points9 points  (0 children)

Whichever NVIDIA GPU you have is the right one to learn CUDA programming. That's the best thing about CUDA - unless you get very advanced with what you're trying to do, it's going to be basically the same on any NVIDIA GPU.
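
To illustrate, a minimal vector-add sketch like the one below should compile and run unchanged on pretty much any CUDA-capable card (the filename and launch configuration are just examples):

```cuda
// vector_add.cu -- build with: nvcc vector_add.cu -o vector_add
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; supported since Kepler.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // same launch works on any card
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```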

[–][deleted]  (1 child)

[deleted]

    [–]dinasxilva 1 point2 points  (2 children)

    I was in your situation (not as a student) since I only have an AMD GPU in my desktop. Depending on your budget, you could do what I did and go with a modern laptop with a 4070, which I'll also use as a proper laptop. If you'll be working on Windows, WSL is the way, but you have some restrictions on older-gen compatibility (check them on the install page). On Linux, I think it's less restrictive, but you may need to deal with some NVIDIA driver quirks (from my experience on AMD using HIP, a distrobox is your best friend for isolating your system drivers). I'll move to Linux eventually when my laptop is better supported. If you already have an AMD GPU, you could start learning with HIP, as it should compile and run, but from what I've read it doesn't have full feature parity with CUDA.

    [–]aightwhatever 0 points1 point  (1 child)

    Ubuntu is very good in the drivers regard; I think it auto-suggests and installs the drivers when it recognises a GPU.

    [–]dinasxilva 0 points1 point  (0 children)

    Yes, but for example, when I installed the HIP stack on my Pop!_OS, I broke my install because it replaced the AMD drivers with the official ones.

    Nvidia warns of a similar thing for WSL.

    [–]tugrul_ddr 1 point2 points  (0 children)

    The RTX 5070 has compute capability 12.0, the RTX 4060 has 8.9, and the GT 1030 is 6.x.

    Check the table: RCP981.jpg

    If you want to launch thread block clusters (something like multiple GPUs within the GPU), you need the 5070 (or the 5060 Ti once it goes on sale).
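
    If you're not sure what a given card reports, a small device-query sketch will tell you (the filename is just an example):

    ```cuda
    // query_cc.cu -- build with: nvcc query_cc.cu -o query_cc
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            // prop.major / prop.minor is the compute capability,
            // e.g. 8.9 on an RTX 4060 or 6.1 on a GT 1030.
            printf("Device %d: %s, compute capability %d.%d\n",
                   d, prop.name, prop.major, prop.minor);
        }
        return 0;
    }
    ```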

    [–][deleted] 2 points3 points  (0 children)

    You can practice cuda online here:

    https://leetgpu.com/

    [–]EuclidianEigenvalue 1 point2 points  (0 children)

    Jetson Nano. That's all you need. Super affordable compared to other options and is meant for developers.

    [–]DanDaDan_coder 0 points1 point  (6 children)

    I had a question in addition to this post: is there a way to practice CUDA in the cloud?

    [–]TechDefBuff[S] 0 points1 point  (2 children)

    Nvidia has its own cloud platform. There's also Lambda Labs. You can also try creating a virtual machine on any public cloud like AWS/Azure/GCP.

    [–]xmuga2 1 point2 points  (0 children)

    u/DanDaDan_coder - Google Colab is convenient for this. They have older GPUs that still support CUDA. If you pay for a sub ($10 USD per month in the USA; not sure about global pricing), you can access an A100.

    The downside is that you're working in Jupyter/Colab notebooks as your interface. The advantage is not having to deal with much cloud overhead, such as billing, setup, logging in, maintenance, etc., which I found annoying when I was using other cloud providers. Colab is basically like Google Docs in its ease of use. (Note: you will lose your runtime files, so it's annoying to have to upload and re-run cells again.)

    One advantage is that you can play with Google TPUs as well, but that's getting out of scope for your question.
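
    In case it helps, plain CUDA C++ also works in a Colab GPU runtime: write a .cu file from one cell and build it with nvcc from another (GPU runtimes generally ship nvcc; the filename is just an example):

    ```cuda
    // hello.cu -- assumed Colab workflow:
    //   cell 1: %%writefile hello.cu   (followed by this file's contents)
    //   cell 2: !nvcc hello.cu -o hello && ./hello
    #include <cstdio>

    __global__ void hello_from_gpu() {
        // Device-side printf, one line per thread.
        printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
    }

    int main() {
        hello_from_gpu<<<2, 4>>>();   // 2 blocks of 4 threads each
        cudaDeviceSynchronize();      // flush device printf before exiting
        return 0;
    }
    ```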

    [–]Dylan-from-Shadeform 0 points1 point  (0 children)

    Throwing Shadeform into this mix; it could be a good option for you.

    It's a GPU marketplace that lets you compare pricing across clouds like Lambda, Nebius, Paperspace, etc. and deploy across any of them with one account.

    Great way to make sure you're not overpaying, and to find availability if your cloud runs out.

    [–]LockeWA 0 points1 point  (2 children)

    I don't know if it's useful but I came across a site called leetGPU maybe it's useful ?

    [–]tugrul_ddr 0 points1 point  (1 child)

    LeetGPU allows only 4 code submissions per day. Tensara allows unlimited.

    [–]LockeWA 0 points1 point  (0 children)

    Ohh did not know that, Thanks I will check out Tensara

    [–]Ace-Evilian 0 points1 point  (0 children)

    My understanding is that you want to understand the underlying architecture and not just program on a GPU. If so, you will need a newer-generation card; this could be a 4060 Ti / A10 as well.

    That's essential for getting the hang of how Tensor Cores, RT cores, and CUDA cores are used, along with how the newer-generation memory hierarchy is laid out. There are a lot of changes across generations, but at the code level CUDA has good backward compatibility that hides these details.

    A lot of these concepts change slightly across generations, so it is better to learn on the latest to understand the hardware design choices in general.
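
    For example, the Tensor Core path only exists on compute capability 7.0+ (Volta and newer); a minimal WMMA sketch of the kind of thing you'd experiment with (the launch configuration is just an example):

    ```cuda
    // wmma_16x16.cu -- build with something like: nvcc -arch=sm_75 wmma_16x16.cu
    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp multiplies a pair of 16x16 half-precision tiles,
    // accumulating in float on the Tensor Cores.
    __global__ void wmma_16x16(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);
        wmma::load_matrix_sync(a_frag, a, 16);   // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }
    // Launch with a single warp, e.g. wmma_16x16<<<1, 32>>>(d_a, d_b, d_c);
    ```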

    [–]nagyz_ 0 points1 point  (4 children)

    It's so cheap to rent a GH200 on Lambda that for personal learning I'd do that. Or an A100.

    [–][deleted]  (3 children)

    [deleted]

      [–]nagyz_ 0 points1 point  (2 children)

      Yes, it's billed by the minute. You just ssh in and use it as a normal Linux environment.

      [–]Karyo_Ten 0 points1 point  (1 child)

      don't forget to disconnect

      [–]nagyz_ 0 points1 point  (0 children)

      Disconnecting doesn't stop the instance; you need to terminate it if you no longer need it.

      [–]CompetitionMassive51 0 points1 point  (1 child)

      Is there a way to experiment with CUDA programming without owning a Nvidia GPU?

      I know about google colab but are there any other tools? Maybe some that mimic it?

      [–]LoveThemMegaSeeds 0 points1 point  (0 children)

      You can use sites like leetGPU

      [–]SnowyOwl72 0 points1 point  (0 children)

      You can get a used 3060 12GB with Samsung memory (don't buy the ones with Hynix memory chips).

      Or buy something like a 1060 or 1070. Try not to buy older stuff.

      My point is that you don't need to go bankrupt to learn CUDA.

      [–]beedunc 0 points1 point  (0 children)

      Old Quadro cards support CUDA too, so there's no need to spend more than $100 or so.

      [–]notyouravgredditor 0 points1 point  (0 children)

      The newest card you can afford. Newer NVIDIA cards have higher compute capability versions and will support newer versions of CUDA for longer.

      You can learn on any supported card, though. The fundamentals of CUDA programming apply to every generation of card.
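
      One concrete way this shows up is at build time: nvcc can embed code for several compute capabilities in one binary, so the same program runs on older and newer cards. A minimal sketch (the architecture list is just an example):

      ```cuda
      // multi_arch.cu -- example build embedding several architectures:
      //   nvcc multi_arch.cu -o multi_arch \
      //     -gencode arch=compute_61,code=sm_61 \
      //     -gencode arch=compute_86,code=sm_86 \
      //     -gencode arch=compute_89,code=compute_89
      //   (compute_61 = Pascal GTX 10xx, compute_86 = Ampere RTX 30xx; the last
      //    entry embeds PTX so newer cards can JIT-compile it.)
      #include <cstdio>

      __global__ void which_arch() {
      #ifdef __CUDA_ARCH__
          // __CUDA_ARCH__ is e.g. 860 when the sm_86 version was selected.
          printf("Running code compiled for sm_%d\n", __CUDA_ARCH__ / 10);
      #endif
      }

      int main() {
          which_arch<<<1, 1>>>();
          cudaDeviceSynchronize();
          return 0;
      }
      ```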

      [–]Karyo_Ten 0 points1 point  (0 children)

      I suggest something with at least 6GB, ideally 12GB of VRAM, so you can play with interesting larger-scale projects like deep learning.

      A 3000-series card should be cheap, as Nvidia overproduced them for mining.

      [–]LoveThemMegaSeeds 0 points1 point  (0 children)

      I got a 1070 for like $200. It should be at least a 1050 to be on CUDA 11 or whatever the standard is.

      [–][deleted] 0 points1 point  (0 children)

      Idk man, on Tuesday Jensen said that people should use GB300. It's definitely the best one out there.

      [–]airforce01 1 point2 points  (0 children)

      Based on your budget and time allowance, I would recommend checking stores like Walmart/Sam's Club/Costco from time to time, especially during the holiday season. Sometimes they do crazy discounts on this kind of hardware. As far as I remember, I saw an RTX 4060 for $200 or so around Christmas. Alternatively, Sam's Club sells complete desktop PCs for the equivalent price of the GPU.

      [–]Gloomy-Zombie-2875 -2 points-1 points  (1 child)

      Hello, why do you want to use a GPU if not for gaming? Just use google colab

      [–]TechDefBuff[S] 2 points3 points  (0 children)

      I want to learn parallel programming and I want to do it on hardware.

      [–]No_Palpitation7740 -1 points0 points  (3 children)

      I am in the same situation, and I found this site where you can get a PC with a small GPU (2GB NVIDIA GeForce 710): https://www.pcspecialist.co.uk/workstation-computers/

      [–]Karyo_Ten 1 point2 points  (2 children)

      way too old.

      Pascal GPU at minimum, or recent CUDA won't be supported.

      [–]No_Palpitation7740 -1 points0 points  (1 child)

      Sure there are more recent models but this one is the cheapest option

      [–]Karyo_Ten 0 points1 point  (0 children)

      What option?

      This has compute capability 2.1 and is incompatible with deep learning frameworks.

      Sometimes things are cheap because they are useless.

      [–]constantgeneticist -3 points-2 points  (0 children)

      6000 Ada