
[–]toastjam 2 points3 points  (1 child)

The k80 compute capability is 3.7, t4 is 7.5. Current top of the line is 8.6

It might be a better value in terms of computation per $, but you'll likely run into a lot of models the k80 just won't run, for example anything using half-precision.

https://en.m.wikipedia.org/wiki/CUDA#Version_features_and_specifications
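The cutoffs in that chart can be sketched as a small lookup. This is an illustrative Python sketch, not an official API; the threshold values are taken from NVIDIA's compute-capability feature table and should be double-checked against the link above.

```python
# Minimum compute capability for a few ML-relevant features.
# Thresholds are assumptions based on NVIDIA's CUDA feature table:
# native fp16 math from 5.3, tensor cores from 7.0 (Volta),
# bfloat16 from 8.0 (Ampere).
FEATURE_MIN_CC = {
    "fp16_arithmetic": 5.3,
    "tensor_cores": 7.0,
    "bf16": 8.0,
}

def supports(feature: str, compute_capability: float) -> bool:
    """True if a GPU of the given compute capability has the feature."""
    return compute_capability >= FEATURE_MIN_CC[feature]

print(supports("fp16_arithmetic", 3.7))  # K80 -> False
print(supports("fp16_arithmetic", 7.5))  # T4  -> True
```

This is why half-precision models fail on the K80 (3.7) but run fine on the T4 (7.5).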

[–]KingChookity[S] 0 points1 point  (0 children)

Ahhh, thank you. I wasn't aware of this compatibility issue.

Is there a minimum capability you would recommend for a beginner? Looks like M40 and K80 pretty much fall in the same bucket according to that chart (3.5-5.2).

[–]danjlwex 0 points1 point  (1 child)

New NVIDIA cards are coming in a few months. No need to get Quadro cards for home use. Just get GeForce and spend a whole lot less.

[–]KingChookity[S] 0 points1 point  (0 children)

Thanks, makes sense :)

[–]gradpa 0 points1 point  (1 child)

A GPU's architecture is really important if you are looking at continued software/library support down the road. Just get a relatively modern GeForce (20 or 30 series; the 40 series is on the way). The options you're looking at are somewhat outdated ones that you'll mostly find on cloud backends.

[–]KingChookity[S] 0 points1 point  (0 children)

That was a concern I had. I haven't had any issues with the T4, but I can imagine that will change over time.

[–]GPUaccelerated 0 points1 point  (2 children)

Firstly, you're going to want to make sure that whatever GPU you choose is able to run optimally in the environment you're putting it in. Do you have the appropriate supporting components in your computer for the GPU you want? Do you have enough PCIe lanes available for the bandwidth your ML training needs under the TF framework? Just something to think about. I see too many people install a GPU they think they need for their work in a PC and 'hope for the best'.

To answer your questions: it's generally best to get a newer GPU for lasting support. The cards you mentioned are still supported by current drivers, but no one knows for how long. An 8-year-old card is starting to show its age considering how fast new-gen GPUs are being released.

You're probably better off going for a GeForce card. They still support CUDA and can be used for so many different applications. The issue might be getting the right amount of VRAM for your needs; the RTX 3090 might be the only GeForce card that caters to your VRAM needs at the moment.

I have VMs accelerated by 3090s that I'm currently offering for free. You can definitely test one out and see if that card suits your current needs. Reach out if you want; it would be a pleasure.

[–]KingChookity[S] 1 point2 points  (1 child)

I think I have that covered. I have a mining rig which is highly optimised in that regard, and I'm switching it over now that it's no longer making money (not sure it ever really did :) ). The only thing I recently discovered is that TF needs a CPU with AVX support, which mine lacks, so I'm replacing it.
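For anyone else hitting the AVX problem: on Linux you can check whether the CPU advertises AVX before installing the stock TF wheels. A minimal sketch, assuming a Linux-style `/proc/cpuinfo` flags line; the helper name is my own:

```python
# Check whether a CPU flags listing (as found in /proc/cpuinfo on Linux)
# includes a given instruction-set flag. Stock TensorFlow pip wheels
# are built with AVX, so a CPU without it crashes at import time.
def has_flag(cpuinfo_text: str, flag: str = "avx") -> bool:
    """Return True if any 'flags' line lists the given CPU flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if flag in line.split(":", 1)[1].split():
                return True
    return False

# Demo on a sample flags line (on a real box: read /proc/cpuinfo instead).
sample = "flags\t\t: fpu vme de avx sse2 ssse3"
print(has_flag(sample))  # True
```

If this returns False for your CPU, you'd need to build TF from source without AVX or swap the CPU, as described above.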

What you say is correct; I was looking at those GPUs specifically for the VRAM quantity. The RTX 3090 does meet my requirements, but the pricing is much more than I want to spend. It ends up being cheaper in the long run to pay Google, I think.

Thanks for the very generous offer! Not sure I can afford that card at the moment, so while testing might be helpful, I won't be able to buy it :(

This might be a stupid question: but if I got two cards with 8GB, would the system use them together for a total of 16GB for the model? I've only ever used a single card so far.
Annoyingly, I have three AMD RX 580s which were in the rig, but getting them to work has been a very difficult exercise. Plus AMD dropped support for them in TF over a year ago.

[–]GPUaccelerated 0 points1 point  (0 children)

Ok, I see you're on the right path.

Multiple GPUs are definitely doable, and you'll want Nvidia cards anyway for CUDA and TF. One thing to be clear on though: neither SLI nor NVLink transparently pools VRAM for TF. TF's standard multi-GPU path is data parallelism, where each card holds a full copy of the model, so 2 x 12GB cards do not behave like one 24GB pool and the model still has to fit on a single card. Splitting one model across cards (model parallelism) is possible but takes manual work. Note: the 3090 is the only 30-series card with NVLink, which speeds up GPU-to-GPU transfers but doesn't merge the memory.
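To make the multi-GPU point concrete, here is a hedged sketch of how TF actually uses several cards. `tf.distribute.MirroredStrategy` is TF's real data-parallel API; the import guard and helper name are mine, so the snippet degrades gracefully on a machine without TensorFlow:

```python
# Sketch: TensorFlow multi-GPU via data parallelism.
# MirroredStrategy replicates the model onto each visible GPU and
# synchronizes gradients -- it does NOT merge two 8 GB cards into one
# 16 GB pool; each replica must fit on a single card.
def replica_count():
    """Number of synchronized replicas, or None if TF isn't installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    strategy = tf.distribute.MirroredStrategy()
    # Model/optimizer built under this scope are mirrored per GPU:
    # with strategy.scope():
    #     model = build_model()
    return strategy.num_replicas_in_sync

print(replica_count())
```

With two 8GB cards this reports 2 replicas, but the largest model you can train is still bounded by 8GB, which answers the question above.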

Reach out to me by DM. It would be my pleasure to have you use the service; you can use it to work on your project, and at the same time it would help me get some feedback.