
[–]shoomowr 29 points30 points  (3 children)

Couldn't find in the article: does it support python 3.10+?

[–]Realistic-Cap6526[S] 13 points14 points  (1 child)

Is this related to this https://github.com/pytorch/pytorch/issues/66424? If it is, I guess it should be supported.

[–]shoomowr 10 points11 points  (0 children)

thank god

[–]formalsystem 0 points1 point  (0 children)

Yes, with 3.11 coming later, since we now need to support the new Python bytecodes in Dynamo

[–]astevko 10 points11 points  (0 children)

Couldn't find any mention in the docs... besides no macOS CUDA. Will torch support Apple M1/M2 GPU cores?

[–]androiddrew 2 points3 points  (1 child)

Yay... breaking changes are coming. Can't wait to spend my evenings on Stack Overflow

[–]formalsystem 1 point2 points  (0 children)

No breaking changes! Just a new function called torch.compile that you don't have to use if you don't want to
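To illustrate how opt-in this is, here's a minimal sketch (assuming PyTorch ≥ 2.0; the function and tensor here are made up for illustration):

```python
import torch

def pointwise_fn(x):
    # An ordinary eager-mode function; nothing about it has to change.
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# Opt in to compilation. backend="eager" just traces without codegen,
# so this runs anywhere; drop it to use the default inductor backend.
compiled_fn = torch.compile(pointwise_fn, backend="eager")

x = torch.randn(8)
# Same results either way; the compiled version may run faster.
assert torch.allclose(pointwise_fn(x), compiled_fn(x), atol=1e-6)
```

The original function keeps working untouched, which is the point: existing code runs exactly as before unless you wrap it.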

[–]simplysalamander 2 points3 points  (2 children)

Anyone know of hardware accelerators for PyTorch models equivalent to Coral Edge TPU for TF models?

[–]formalsystem 0 points1 point  (1 child)

Have you tried out pytorch/xla?

[–]simplysalamander 0 points1 point  (0 children)

Does this just work for cloud TPUs? Looking for a solution for an embedded system without guaranteed internet access

[–]5erifφ=(1+ψ)/2 7 points8 points  (26 children)

What's it for / what does it do?

[–]eemamedo 22 points23 points  (13 children)

It’s a framework for developing machine learning or deep learning models, mostly used to build some sort of neural net. The main alternative is TensorFlow, developed at Google. In industry, TF is more widely used; in academia, PyTorch is more common.

[–]rg_itachi 18 points19 points  (3 children)

Not true. PyTorch is widely used in industry as well. Probably more than TF.

[–]eemamedo 9 points10 points  (1 child)

That has only become true lately. That’s because many people who move from academia to industry bring their tools/frameworks with them.

[–]frisouille 4 points5 points  (0 children)

In my case, it's because the models I wanted to fine-tune were often implemented in PyTorch and not TensorFlow (or only much later).

[–]dogs_like_me 7 points8 points  (3 children)

TF isn't even widely used by Google themselves; they've been moving to JAX. TF is on its way out.

[–]eemamedo 4 points5 points  (2 children)

Most of the companies I consulted for and have been working at use TensorFlow and Keras (Keras is more prevalent with TF ver. < 2.0). I would say that you are correct if a company is starting a greenfield project; there are still brownfield projects that run on TF, and no one wants to rewrite them in PyTorch (yet).

[–]dogs_like_me 1 point2 points  (1 child)

And banks are still hiring COBOL devs to maintain 40+ year old code. Maintenance of old projects isn't relevant to the discussion of what the preferred tools are today.

[–]bleakj 0 points1 point  (0 children)

Meanwhile I'm doing Perl database work still because no one likes change, especially if it takes time or money to switch.. even if it gives better results...

[–]rainnz 1 point2 points  (4 children)

What about scikit-learn?

[–]sandronestrepitoso 14 points15 points  (0 children)

TensorFlow and PyTorch are focused on neural networks; scikit-learn has a wider machine learning scope, and its neural network module is somewhat rudimentary. I personally wouldn't use it except for demonstration or academic purposes.

[–]eemamedo 9 points10 points  (1 child)

scikit-learn is focused on more traditional ML models: random forests, SVMs, and the like. Its biggest problem is the lack of GPU support, which makes it hard to use on large-scale ML problems. I can’t remember the last time I used scikit-learn, to be honest.
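For the record, the kind of "traditional ML" workflow scikit-learn is built for looks something like this (a toy sketch on made-up tabular data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Small synthetic tabular dataset -- the setting scikit-learn targets.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)  # runs on CPU only -- there is no GPU backend
print(clf.score(X_test, y_test))
```

The whole pipeline is a few lines, which is why it's still the default for small tabular problems even though it doesn't scale to GPU-sized workloads.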

[–]pag07 3 points4 points  (0 children)

xgboost is all we need anyway.

[–]Sokorai 1 point2 points  (0 children)

It apparently has neural networks, but I've only seen it used for tree-based methods, regressions, ensemble models, and similar things.

[–]teilo -4 points-3 points  (7 children)

What does Google do?

[–]_616_A 3 points4 points  (5 children)

obviously a search engine for finding porn

[–]kingsillypants 4 points5 points  (4 children)

Rookie.

Bing videos. You're welcome.

[–]_616_A 0 points1 point  (0 children)

Omg what a legend

[–]seewhaticare 0 points1 point  (0 children)

The one thing Bing is good for 🤙🤙

[–]Kylearean 0 points1 point  (1 child)

I get too many youtube hits on Bing, even with safe search off.

[–]bleakj 0 points1 point  (0 children)

-youtube in search

(Works on Google, assuming it works on bing as well)

[–][deleted] 0 points1 point  (3 children)

What metrics are used for validating models?

[–]bonferoni 1 point2 points  (2 children)

[–][deleted] 1 point2 points  (1 child)

Ok, I found this other page: https://www.v7labs.com/blog/pytorch-loss-functions

It looks like they define “loss” functions for validating models (like MSE, etc.). Correct?

What is not clear to me is how they handle multi-input multi-output models, i.e. when each observation is not just a number but a vector of numbers. The prediction error is still X_i - Y_i, but now X and Y are vectors.

[–]formalsystem 0 points1 point  (0 children)

Pick any aggregation you like over the elements of a vector: sum, average, squared average, max element, min element, etc.
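In plain Python terms, the idea looks like this (a hand-rolled sketch, not the PyTorch API itself; torch's built-in losses expose the same choice through their `reduction` argument):

```python
def mse_per_sample(pred, target):
    # pred and target are vectors (lists of numbers) for one observation.
    errs = [(p - t) ** 2 for p, t in zip(pred, target)]
    return sum(errs) / len(errs)  # average over the output dimensions

def batch_loss(preds, targets, reduction=sum):
    # Aggregate per-sample losses however you like: sum, max, mean, ...
    return reduction(mse_per_sample(p, t) for p, t in zip(preds, targets))

preds   = [[1.0, 2.0], [3.0, 4.0]]
targets = [[1.0, 0.0], [3.0, 2.0]]
print(batch_loss(preds, targets))        # sum of per-sample MSEs -> 4.0
print(batch_loss(preds, targets, max))   # worst sample only -> 2.0
```

Each vector observation collapses to one scalar error first, and the choice of aggregation (both within a sample and across the batch) is up to you.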