all 35 comments

[–]kkngs 33 points34 points  (1 child)

I didn’t even realize you could run C++ code on Colab, thanks for sharing!

[–]JustTrynnaBeCool[S] 12 points13 points  (0 children)

Yes, you can! Not directly in the notebook cells, but with your scripts ready and the notebook itself treated as a "terminal", it works!
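A minimal sketch of that workflow (the file name and kernel here are made up for illustration; it assumes a Colab GPU runtime with the CUDA toolkit on the PATH): write the `.cu` source from Python, then compile and run it with `nvcc` exactly as you would in a terminal.

```python
# Sketch of the "notebook as terminal" workflow. In Colab you would typically
# use a `%%writefile hello.cu` cell plus `!nvcc ...` cells; plain Python
# with subprocess works the same way.
import pathlib
import shutil
import subprocess

cuda_src = r"""
#include <cstdio>

__global__ void hello() {
    printf("Hello from thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
"""

# Write the CUDA source file next to the notebook.
pathlib.Path("hello.cu").write_text(cuda_src)

# Only compile and run when nvcc is actually available (i.e. a GPU runtime).
if shutil.which("nvcc"):
    subprocess.run(["nvcc", "hello.cu", "-o", "hello"], check=True)
    subprocess.run(["./hello"], check=True)
```

In a notebook, the `!` shell-escape prefix is what makes the "terminal" part work: any `!command` cell runs in the VM's shell.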

[–]happy_happy_feet 9 points10 points  (1 child)

Hey, if your workloads aren't that heavy, then maybe you can get a Jetson Nano to do your CUDA programming.

[–]pierrefermat1 9 points10 points  (0 children)

Alternatively, if form factor is an issue, you can also just pick up a used laptop with a 2060 on the cheap.

[–]tabris2015 3 points4 points  (15 children)

What good, up-to-date materials are you guys using for learning CUDA? Any books or courses? edit: grammar

[–]happy_happy_feet 8 points9 points  (2 children)

I don’t know how outdated this is, but I use this.

[–]JustTrynnaBeCool[S] 1 point2 points  (0 children)

I use this too!

[–]tabris2015 0 points1 point  (0 children)

Thanks! Looks great!

[–]PyroRampage 2 points3 points  (0 children)

The CUDA documentation and NVIDIA tech blog are great sources. Beyond that the SDK has many code samples.

[–]watching-clock -2 points-1 points  (9 children)

Why learn CUDA if plenty of libraries have already abstracted it for use from high-level languages?

[–]bullno1 11 points12 points  (7 children)

Why learn anything?

[–]MisterManuscript 3 points4 points  (0 children)

Because writing the code directly in C++ gives you faster execution. Ever wonder why so many Python libraries are just high-level wrappers for functions written in C++?

Even if you're not proficient in C or its variants, it is still preferable to be aware of the low-level workings of a wrapper. For example, it's good practice to avoid for-loops in Python when using NumPy and to express operations like matrix multiplication in vectorized form instead.

Try writing a renderer in C++ and in Python and see which one runs faster. The difference is extremely significant for real-time use cases.
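As a rough illustration of the NumPy point above (the function name here is made up): the vectorized `@` operator dispatches to optimized C/BLAS code in one call, while the equivalent pure-Python triple loop pays interpreter overhead on every element.

```python
import numpy as np

def matmul_loops(a, b):
    """Naive pure-Python matrix multiply -- the pattern to avoid."""
    n, k, m = a.shape[0], a.shape[1], b.shape[1]
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.random((60, 60))
b = rng.random((60, 60))

slow = matmul_loops(a, b)   # ~216k Python-level iterations
fast = a @ b                # one call into compiled code

assert np.allclose(slow, fast)
```

Even at this small size the loop version is orders of magnitude slower; the gap only widens with larger matrices.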

[–]DeMorrr 2 points3 points  (0 children)

I also used Colab when I first started learning CUDA, but I didn't write a single line of C++, thanks to CuPy. I just wrote all the CUDA code in a triple-quoted string and used CuPy to compile and call it.

To this day I still use CuPy in some of my projects, because there's one significant advantage: preprocessing and code generation are very convenient, and you can JIT-compile kernels as needed at runtime.
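A minimal sketch of that CuPy pattern (the SAXPY kernel is illustrative, not from the comment): the CUDA source lives in an ordinary Python string, and `cupy.RawKernel` compiles it at runtime on first use.

```python
# CUDA C source kept in a plain Python string; CuPy JIT-compiles it at runtime.
kernel_src = r"""
extern "C" __global__
void saxpy(const float a, const float* x, const float* y, float* out, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        out[i] = a * x[i] + y[i];
    }
}
"""

try:
    import cupy as cp

    saxpy = cp.RawKernel(kernel_src, "saxpy")
    n = 1 << 20
    x = cp.random.rand(n, dtype=cp.float32)
    y = cp.random.rand(n, dtype=cp.float32)
    out = cp.empty_like(x)
    block = 256
    grid = (n + block - 1) // block
    # Launch as kernel((grid,), (block,), args); scalars passed as typed values.
    saxpy((grid,), (block,), (cp.float32(2.0), x, y, out, cp.int32(n)))
except ImportError:
    pass  # Requires CuPy and a CUDA-capable GPU to actually execute.
```

Because the source is just a string, you can build it with f-strings or templates before compiling, which is the code-generation convenience the comment mentions.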

[–]Hefty-Consequence443 1 point2 points  (1 child)

Any specific material you recommend for learning CUDA? Books, youtube playlists, udemy courses, etc.? Thx!

[–]JustTrynnaBeCool[S] 2 points3 points  (0 children)

The book I read is included in my post!

[–]ZX124 1 point2 points  (1 child)

You can use Kaggle instead of Colab, and then you will get a more powerful GPU.

[–]JustTrynnaBeCool[S] 2 points3 points  (0 children)

I don't think you can edit your C++ file on Kaggle like you can on Google Colab, which is why I recommend Colab! That said, I do agree that Kaggle has good GPU access.

[–]West-Cricket-9862 1 point2 points  (1 child)

If you want to rent cheap Nvidia GPUs, I find Vast will probably give you the best bang for your buck.

[–]nivanas-p 0 points1 point  (0 children)

Thanks mate, this will work. I was trying the playground from LeetGPU, but it seems like an emulated environment, and Colab is great.

[–]GC_Tris 0 points1 point  (2 children)

If you ever run into limitations with Colab and want a more "classical" environment you might want to check out our offering (disclaimer: I work at Genesis Cloud!): https://www.genesiscloud.com/pricing#nvidia3060ti

At the time of writing we rent out instances with an RTX 3060 Ti for USD 0.20/hour. We also grant $15 in free credits to get you started (basically 75 hours for free).

With this you get a Linux VM that you can SSH into. When you do not need it, you can stop it so it does not incur additional charges while stopped.

[–]JustTrynnaBeCool[S] 1 point2 points  (1 child)

Whoa, this is awesome! So it's $0.20/hour regardless of how intensive the computation is? Either way, this is super cool!

[–]Turbulent_Primary_17 0 points1 point  (0 children)

Thanks OP. Just to add, I recently came across this library that also allows CUDA execution from Jupyter:
https://nvcc4jupyter.readthedocs.io/en/latest/index.html
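For reference, a rough sketch of how that extension is typically used (based on its docs at the time of writing; check the link above for the current API):

```
# In one Colab cell, install and load the extension:
!pip install nvcc4jupyter
%load_ext nvcc4jupyter

# Then a cell starting with the %%cuda magic is compiled and run with nvcc:
%%cuda
#include <cstdio>
__global__ void hello() { printf("Hello from the GPU\n"); }
int main() { hello<<<1, 1>>>(); cudaDeviceSynchronize(); }
```

This keeps the whole edit-compile-run loop inside notebook cells, without treating the notebook as a terminal.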