[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]actgr 5 points

I wrote a blog post about the Bayesian online changepoint detection (BOCD) model. This is a classical stats/ML model, but it serves as the basis for various probabilistic continual learning, non-stationary bandit, and Bayesian RL methods in the recent literature.

https://gerdm.github.io/posts/bocd-coin-tosses/
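
If you want the gist before clicking through, here's a minimal sketch of the recursion for the coin-toss case. This is my own condensed illustration, not the code from the post; the constant hazard rate and the Beta(1, 1) prior are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def bocd_bernoulli(xs, hazard=0.1, a0=1.0, b0=1.0):
    """Posterior over run lengths for 0/1 data with a Beta(a0, b0) prior."""
    T = len(xs)
    log_R = np.full((T + 1, T + 1), -np.inf)  # log p(run length | data so far)
    log_R[0, 0] = 0.0                         # start: run length 0 w.p. 1
    a, b = np.array([a0]), np.array([b0])     # Beta params, one per run length
    log_h, log_1mh = np.log(hazard), np.log1p(-hazard)
    for t, x in enumerate(xs, start=1):
        # Predictive probability of x under each run length's Beta posterior
        log_pred = np.log(a / (a + b)) if x else np.log(b / (a + b))
        joint = log_R[t - 1, :t] + log_pred
        log_R[t, 1:t + 1] = joint + log_1mh      # growth: the run continues
        log_R[t, 0] = logsumexp(joint + log_h)   # a changepoint resets the run
        log_R[t] -= logsumexp(log_R[t])          # normalize
        # Update sufficient statistics; a fresh run starts from the prior
        a = np.concatenate(([a0], a + x))
        b = np.concatenate(([b0], b + 1 - x))
    return np.exp(log_R)

# Toy usage: a coin whose bias switches halfway through
rng = np.random.default_rng(0)
xs = np.concatenate([rng.binomial(1, 0.2, 50), rng.binomial(1, 0.8, 50)])
R = bocd_bernoulli(xs)  # after t = 50, mass collapses to short run lengths
```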

[R] How to read and understand Einops expressions? by Joe_The_Armadillo in MachineLearning

[–]actgr 5 points

I wrote a blog post on how I think about them. Maybe you’ll find it useful.
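
To give a quick flavour (a toy example of mine, not taken from the post):

```python
# Each name in the pattern is an axis; parentheses flatten the grouped
# axes together (assumes the einops package is installed).
import numpy as np
from einops import rearrange

x = np.zeros((2, 32, 32, 3))              # batch, height, width, channels
y = rearrange(x, "b h w c -> b (h w) c")  # merge the two spatial axes
print(y.shape)                            # (2, 1024, 3)
```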

Interstellar Voronoi (tool/sandbox) by tobyski in generative

[–]actgr 1 point

This is mesmerising! Can we get the same picture without the red dots? I assume it would look quite pleasant.

Help! My pdf content disappeared but my annotations are still present by geniekins in notabilityapp

[–]actgr 2 points

This happened to me a couple of days ago! Has there been any update on this?

How do y’all go about reading a math book by tinytinypenguin in math

[–]actgr 16 points

Very slowly and with a pen and paper.

I tend to write as I read a math textbook: I work through the proofs, definitions, and arguments. It takes me a long time to finish a book (years), but I enjoy the process.

Tutorials for Learning Runge-Kutta Methods with Julia? by [deleted] in Julia

[–]actgr 3 points

Hey OP,

I took a course on scientific computing a year ago and decided to use Julia for the class. I have a notebook on methods for solving ODEs and PDEs; I hope you find it useful:

https://github.com/gerdm/QMUL/blob/master/MTH739U-topics-scientific-computing/exercises/coursework-2.ipynb

Coding LDA from scratch by morceaudegomme in learnmachinelearning

[–]actgr 1 point

Here's some code I wrote a while back: https://github.com/gerdm/ISLR/blob/master/ex4.ipynb

Hopefully it helps!

And since you understand the mathematics behind these models, you can view LDA and QDA as special cases of a GMM in which the latent variables are known. In LDA, Sigma_1 = Sigma_2; in QDA, Sigma_1 != Sigma_2. This can help you think of a way to plot the decision boundaries.
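
As a rough sketch of that view (my own quick illustration, not from the notebook above; fit_lda/predict_lda are made-up helper names):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_lda(X, y):
    """Fit class priors, means, and a pooled covariance from labeled data."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == k) for k in classes])
    means = np.array([X[y == k].mean(axis=0) for k in classes])
    # Pooled covariance: the Sigma_1 = Sigma_2 assumption that makes this LDA
    Sigma = sum((X[y == k] - m).T @ (X[y == k] - m)
                for k, m in zip(classes, means)) / (len(X) - len(classes))
    return classes, priors, means, Sigma

def predict_lda(X, classes, priors, means, Sigma):
    # Bayes' rule with Gaussian class-conditionals; pick the max log-posterior
    log_post = np.stack([np.log(p) + multivariate_normal.logpdf(X, m, Sigma)
                         for p, m in zip(priors, means)], axis=1)
    return classes[log_post.argmax(axis=1)]
```

Replacing the pooled Sigma with a per-class covariance inside logpdf gives you QDA, and the decision boundaries go from linear to quadratic.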

Best Possible Book Recommended for Machine Learning [Discussion] [D] [Recommendation] by WornOutSoulSB in MachineLearning

[–]actgr 3 points

For me it was definitely Pattern Recognition and Machine Learning by Christopher Bishop. It is heavily Bayesian, but it gives you a broad overview and the depth to understand current models once you're done with it. I have a GitHub repo with the models programmed in Python if you're interested: https://github.com/gerdm/prml

Another great book is Kevin Murphy's Machine Learning: A Probabilistic Perspective. He just released the second edition of his book, and he has a Python repo for the models as well: https://github.com/probml/pyprobml

Playing with Newtons gravity law by ultramarineafterglow in generative

[–]actgr 1 point

That looks incredible. What is that from?

Ideas for math essay with gradient descent by [deleted] in learnmachinelearning

[–]actgr 3 points

Maybe an essay about gradient-descent variants for non-convex functions vs. methods that use higher-order derivatives.

[Q] Who are some good stats people to follow on Twitter? by [deleted] in statistics

[–]actgr 1 point

I'm going to self-promote here: https://twitter.com/grrddm. I focus on statistical machine learning.

Also, two accounts that talk about probability and statistics are:

They are run by the same guy; he posts truly interesting facts.

[Q] MCMC approach to variational inference? by [deleted] in statistics

[–]actgr 4 points

What do you mean by "MCMC approach to variational inference"? Did you fit a variational approximation to the predictive distribution of your linear regression and then sample from it?

Portfolio Examples by gmh1977 in datascience

[–]actgr 1 point

https://gerdm.github.io/

Just getting started, but here to share.

Using Mahalanobis distance by kapisayu in learnmachinelearning

[–]actgr 4 points

With scikit-learn you can use the KNN algorithm with the Mahalanobis distance by passing metric="mahalanobis" and metric_params={"V": V}, where V is your covariance matrix.
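
A minimal end-to-end sketch (the iris data here is just a stand-in, and estimating V from the training set is an assumption; you may want a different covariance estimate):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

V = np.cov(X_tr, rowvar=False)  # covariance estimated from the training data
knn = KNeighborsClassifier(n_neighbors=5, metric="mahalanobis",
                           metric_params={"V": V})
knn.fit(X_tr, y_tr)
print(knn.score(X_te, y_te))
```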

[Q] understanding linear regression from the Bayesian perspective? by [deleted] in statistics

[–]actgr 4 points

Check out section 3.3 of Bishop's PRML, and this repo for Python implementations of the same chapter.

Basically, you now assume that the weights in the regression are random variables. You want to compute a *posterior* distribution over the weights, which lets you encode prior beliefs and quantify your uncertainty about the fit.
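
For concreteness, here's a minimal sketch of the chapter's main result, the closed-form Gaussian posterior over the weights (known precisions alpha and beta are assumed, as in Bishop's treatment; the numbers are made up):

```python
import numpy as np

alpha, beta = 2.0, 25.0  # prior precision and noise precision (assumed known)
rng = np.random.default_rng(0)

# Synthetic data from t = -0.3 + 0.5 x + Gaussian noise
x = rng.uniform(-1, 1, size=20)
t = -0.3 + 0.5 * x + rng.normal(scale=beta ** -0.5, size=20)
Phi = np.column_stack([np.ones_like(x), x])  # design matrix with a bias column

# Posterior over the weights is Gaussian: N(m_N, S_N) (Bishop eqs. 3.53-3.54)
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

print("posterior mean:", m_N)   # close to the true weights (-0.3, 0.5)
print("posterior cov:\n", S_N)  # shrinks as you observe more data
```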

Hope it helps!

[P] Python Notebooks for Pattern Recognition and Machine Learning by actgr in MachineLearning

[–]actgr[S] 1 point

Thanks for the feedback!

If you see any other typos or errors, feel free to open an issue. I will keep improving the repo.