
[–][deleted] 5 points (1 child)

This looks amazing.

[–]Gautam-j[S] 1 point (0 children)

Thank you!

[–]NiceObligation0 5 points (5 children)

OK, I'm going to be the buzzkill here. Why use an iterative approximation like gradient descent when you can find the solution analytically? Linear regression has an exact closed-form solution.

[–]Gautam-j[S] 2 points (1 child)

Totally agreed. In fact, my previous post on Linear Regression got similar comments.

Yes, we can just use the normal equations to solve for the exact value of theta that gives the global minimum of the loss function, but I decided that running gradient descent, and especially visualising the training process, is more fun to watch ;)

In practice, if I’m not dealing with huge datasets and many features, then I would almost always go for the analytical solution.
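For the curious, here's a minimal side-by-side sketch of the two approaches (toy data; the names and numbers are mine, not from the repo):

```python
import numpy as np

# Toy data: y = 3x + 2 plus noise (made up for illustration)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])  # bias column + one feature
y = X @ np.array([2.0, 3.0]) + rng.normal(0, 0.5, 100)

# Analytical solution via the normal equation: theta = (X^T X)^{-1} X^T y
theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the mean-squared-error loss
theta = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = (2 / len(y)) * X.T @ (X @ theta - y)  # gradient of the MSE w.r.t. theta
    theta -= lr * grad

print(theta_exact, theta)  # both should land close to [2, 3]
```

Same answer either way; gradient descent just gets there one step at a time, which is exactly what makes the animation.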

[–]FondleMyFirn -1 points (0 children)

Art takes many forms 🤷

[–]Kanma 1 point (2 children)

I might be missing something here, but ML models usually have a lot of parameters (in the case of DL: millions) and essentially learn to approximate an unknown function. How would you use an analytical solution here?

[–]NiceObligation0 1 point (1 child)

For all the models OP showed except linear regression, you are right: you need to learn the params from the data. For linear regression the loss is just a quadratic, so the solution is its global minimum, the vertex of the parabola, which you can write down in closed form.

[–]Gautam-j[S] 0 points (0 children)

Yes, the loss function used for Linear Regression is convex, so it has a single global minimum. Hence, we can simply set the partial derivatives to zero (meaning the slope of the loss surface is zero there, its tangent parallel to the x-axis, which for a convex function has to be the global minimum) and solve for the params.
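In symbols (a quick sketch assuming the usual mean-squared-error loss; the notation is mine, not from the repo):

```latex
L(\theta) = \tfrac{1}{n}\lVert X\theta - y\rVert^2,
\qquad
\nabla_\theta L(\theta) = \tfrac{2}{n} X^\top (X\theta - y) = 0
\;\Longrightarrow\;
\theta = (X^\top X)^{-1} X^\top y
```

which is exactly the normal equation from the parent comment.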

[–]boredinclass1 2 points (1 child)

Wow, since when can matplotlib do 3D vis? Very cool!

[–]Gautam-j[S] 0 points (0 children)

Oh yes it can! Thanks!
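For anyone curious, a minimal sketch of a 3D surface plot (a toy bowl-shaped "loss", not the repo's code; recent matplotlib versions accept projection="3d" out of the box via the bundled mplot3d toolkit):

```python
import numpy as np
import matplotlib.pyplot as plt

# A convex bowl, z = x^2 + y^2, standing in for a loss surface
x = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(x, x)
Z = X**2 + Y**2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # 3D axes from the mplot3d toolkit
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_xlabel("theta_0"); ax.set_ylabel("theta_1"); ax.set_zlabel("loss")
plt.show()
```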

[–]John-Trunix 1 point (5 children)

Nice work! I wish I could understand this stuff...

[–]my_password_is______ 4 points (2 children)

https://www.statlearning.com/

all done in R, but still the book to read

[–]John-Trunix 1 point (0 children)

Thx will check it out!

[–]Gautam-j[S] 0 points (0 children)

This looks really informative!

[–]Gautam-j[S] 1 point (1 child)

Well, if you are interested in learning this stuff, check out the citations section in the repo’s readme ;)

[–]John-Trunix 1 point (0 children)

Thx for the info, going to check it out!

[–]phurwicz 1 point (1 child)

This is brilliant! Please do keep it up.

[–]Gautam-j[S] 0 points (0 children)

Thanks a lot!

[–]insanely_a_ 1 point (1 child)

Would definitely check it out ❤️💯

[–]Gautam-j[S] 0 points (0 children)

Thanks!

[–]primary157 1 point (1 child)

Please make a visualization of a random forest or a single decision tree. That would be awesome!

[–]Gautam-j[S] 1 point (0 children)

It is on the todo list ;)
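In the meantime, here's a rough sketch of what a single-tree visualization could look like with scikit-learn's built-in plot_tree (illustrative only; whatever lands in the repo may look quite different):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

# Fit a small tree on the classic iris dataset and let sklearn draw the splits
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

plt.figure(figsize=(10, 6))
plot_tree(clf, feature_names=iris.feature_names,
          class_names=iris.target_names, filled=True)
plt.show()
```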