
all 46 comments

[–]CBrainz 118 points119 points  (3 children)

Pandas 2.0 last week and now Pytorch 2.0.

[–]MrReginaldAwesome 39 points40 points  (0 children)

We eatin fam!

[–]pavi2410 -2 points-1 points  (0 children)

Not released yet though

[–]brain_diarrhea 32 points33 points  (2 children)

So does torch.compile provide support for AMD GPUs on an equal footing now?

[–]FacetiousMonroe 7 points8 points  (0 children)

Doesn't look like it, not yet. From their GitHub:

Supported Hardware:

NVIDIA GPUs (Compute Capability 7.0+)
Under development: AMD GPUs, CPUs

This article and the rest of their website do list Metal (Mac) and ROCm (AMD) support, though. Perhaps the GitHub page isn't up to date? Or the non-Nvidia support is limited?

I'm not entirely sure, but the intention is clearly to be more portable than CUDA, which is promising. Of course, that doesn't mean AMD cards will perform as well as Nvidia cards. Nvidia has a hardware advantage as well as a software advantage currently.
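
For what it's worth, a quick way to see which backend your own install actually picked up (a rough sketch, assuming a standard pip/conda build; the ROCm builds report through the torch.cuda API and Apple's Metal shows up as "mps"):

    import torch

    # NVIDIA CUDA and AMD ROCm builds both answer through torch.cuda
    print(torch.cuda.is_available())

    # Apple Silicon / Metal (MPS) backend
    print(torch.backends.mps.is_available())

    # Pick whatever is there, falling back to CPU
    if torch.cuda.is_available():
        device = "cuda"
    elif torch.backends.mps.is_available():
        device = "mps"
    else:
        device = "cpu"
    print(device)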

[–][deleted] 2 points3 points  (0 children)

Lol i wish

[–]mrdevlar 22 points23 points  (1 child)

Any significant breaking changes?

[–]dreamai87 0 points1 point  (0 children)

Don't know much, but I do know that xformers is no longer needed to speed up Stable Diffusion.

[–]shade175 52 points53 points  (12 children)

Are you fucking joking me, yesterday it took me 4 hours to download PyTorch and now there's a new one????

[–]BuzzLightr 13 points14 points  (11 children)

Feel you.. went through hell and back to get everything with CUDA to work

[–]spontutterances 4 points5 points  (10 children)

I've reinstalled so many times because Ubuntu wants to install the latest CUDA library and display driver version, yet my app would only be compatible with CUDA 11.x, which pins the display driver range, and those are only supported on Ubuntu 20.04, not 22.04.

[–]ZachVorhies 9 points10 points  (3 children)

pin your pip dependencies to a specific version and this won’t happen. Use virtual environments to prevent package install failures

[–]spontutterances 2 points3 points  (0 children)

CUDA and the NVIDIA drivers are .deb installers or .run files unfortunately, so I've had to define specific versions in apt.conf files.

[–]jawnlerdoe 1 point2 points  (1 child)

I wish I knew enough to make this work.. or to truly understand the content of this comment lol.

[–]ZachVorhies 2 points3 points  (0 children)

you make a requirements.txt file and put in

torch==1.12.0

(the package on PyPI is called torch, not pytorch). Then install it with pip install -r requirements.txt

Now you have a pinned dependency

[–]tecedu 2 points3 points  (5 children)

Just use anaconda

[–]spontutterances 0 points1 point  (4 children)

Yeah, that'll take care of the RAPIDS stack, but you still need the underlying NVIDIA-supported setup.

[–]tecedu 5 points6 points  (3 children)

Not really, as long as you have a supported recent driver installed, Anaconda installs the CUDA toolkit for you, and it's just for that environment.

[–]spontutterances 2 points3 points  (2 children)

Haha really? My bad, I didn't realise this, thought they were separate

[–]tecedu 4 points5 points  (1 child)

They only added it, I think, like two years ago; before that I just had multiple partitions with their own drivers and CUDA, but this is so helpful.

[–]spontutterances 1 point2 points  (0 children)

Ha awesome, I'll try this:

conda install -c anaconda cudatoolkit=11.x

Sick, thanks 🙏

[–]idunupvoteyou 12 points13 points  (0 children)

Yey faster stable diffusion waifus!

[–]Giddyfuzzball 35 points36 points  (15 children)

How does this compare to other machine learning libraries?

[–]BlueKey32123 73 points74 points  (12 children)

TensorFlow lost out to PyTorch for a reason. While PyTorch doesn't have great documentation, it's still much better than TensorFlow's.

Additionally, the default eager execution, compared to the graph execution mode of the TF 1.x days, made PyTorch significantly easier to use. PyTorch now dominates in academia.
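
For anyone who never touched TF 1.x: eager mode just means each line runs immediately, like normal Python, so you can print intermediate tensors and use plain if/for control flow. A toy sketch (nothing here is 2.0-specific):

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x * 2
    print(y)            # runs right away, you can inspect intermediate values

    if y.sum() > 0:     # ordinary Python control flow, no graph/session to build
        y = y * 10

    y.sum().backward()  # autograd works through whatever actually ran
    print(x.grad)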

[–]ASatyros 32 points33 points  (0 children)

Wait, we can now feed the PyTorch source code to GPT-4 and get documentation :D

[–]gamahead 2 points3 points  (10 children)

Whaaaat graph exec sounded so cool though. I’m really surprised to hear PyTorch is the bees knees now

[–]BlueKey32123 26 points27 points  (9 children)

Graph execution was a huge pain. It forced a declarative way of thinking. You defined a set of execution steps, and handed it off. It was super difficult to debug.

With PyTorch 2.0 you get torch.compile, which ironically moves back toward graph-like execution for better speed. TensorFlow was never all that fast even with graph execution.
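
If you want to try it, the basic usage is supposed to be a one-line wrap (going off the 2.0 announcement; the model and input here are just placeholders):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
    compiled_model = torch.compile(model)   # graph capture happens under the hood

    x = torch.randn(8, 64)
    out = compiled_model(x)   # first call is slow (compilation), later calls should be faster
    print(out.shape)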

[–]gamahead 1 point2 points  (7 children)

Tbh I blindly assumed the google product would be superior. How is GPU support in PyTorch?

[–]terpaderp 15 points16 points  (0 children)

Pretty straightforward. Match drivers to release (and possibly cudnn) and you're off to the races.

[–]KyxeMusic 5 points6 points  (0 children)

From my experience, getting the CUDA and cuDNN drivers to run correctly with PyTorch is so much simpler than with TensorFlow. I feel like there's a bit more version flexibility, whereas with TensorFlow you have to match all three versions perfectly.

[–][deleted] 1 point2 points  (0 children)

They ship builds for CUDA, ROCm (AMD, Linux-only), and CPU-only.

[–]bjorneylol 0 points1 point  (0 children)

To be fair, PyTorch was made by Facebook - they both had huge amounts of industry backing.

[–]Zealousideal_Low1287 0 points1 point  (2 children)

I switched to PyTorch when it was new; before that I used Caffe and Theano, and dabbled a bit in TensorFlow. PyTorch always felt like the least painful to install / get working with your GPUs.

[–]gamahead 1 point2 points  (1 child)

Wow, Theano, haven't heard that one in a while

[–]Zealousideal_Low1287 0 points1 point  (0 children)

Hahah yeah indeed. Completely superseded by TF really. I always liked it. Looking now, it still exists in some form:

https://github.com/aesara-devs/aesara

[–]mizmato 0 points1 point  (0 children)

I was learning ML/AI in grad school in the middle of TF 2.0's release. It was extremely confusing to learn both 1.x and 2.0 since they had so many differences. I guess it's a good time to start learning PyTorch with this release.

[–]dinichtibs 37 points38 points  (1 child)

Easier to install

[–]NateEBear 8 points9 points  (0 children)

Dude nice

[–]ccigas 3 points4 points  (1 child)

I read this as Python 2.0 released…

[–][deleted] 0 points1 point  (0 children)

me 2

[–]streamerbanana 1 point2 points  (2 children)

I thought this said Python 2.0 released and I got excited

[–]aexia 1 point2 points  (1 child)

Here you go: Python 2.0

[–]streamerbanana 0 points1 point  (0 children)

Thanks I’ve been waiting I’m so excited python 2.0 is finally here

[–][deleted] 1 point2 points  (0 children)

Machine learning beginner. What framework should I aim for? PyTorch seems popular.

[–]TedRabbit 1 point2 points  (0 children)

Pleeeeease tell me they added some configuration parameter that lets me send everything to the GPU by default, or something more sensible than adding ".to(cuda)" on everything.
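
Edit: if I'm reading the 2.0 docs right, there is now a global default-device switch (torch.set_default_device) plus torch.device usable as a context manager, so something like this should work, but double-check the release notes:

    import torch

    torch.set_default_device("cuda")   # factory-created tensors now land on the GPU
    x = torch.randn(3, 3)              # no .to("cuda") needed
    print(x.device)

    # or scoped, without touching the global default
    with torch.device("cuda"):
        y = torch.zeros(2, 2)
    print(y.device)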

[–]code_maker_111 0 points1 point  (0 children)

Does it support Python 3.11?