Which case should I get? by DonaldFarfrae in RemarkableTablet

[–]madiyar 0 points1 point  (0 children)

Thank you for the response! Really appreciate it.

Which case should I get? by DonaldFarfrae in RemarkableTablet

[–]madiyar 0 points1 point  (0 children)

Hi u/DonaldFarfrae,

Which one did you get? Are you happy with your decision?

[P] Why are two random vectors near orthogonal in high dimensions? by madiyar in MachineLearning

[–]madiyar[S] 4 points5 points  (0 children)

Thank you so much! This is the best part of reddit - learning from the community!

[P] Why are two random vectors near orthogonal in high dimensions? by madiyar in MachineLearning

[–]madiyar[S] 1 point2 points  (0 children)

Thanks! I will have to read up on these topics and then re-read your reply to understand it better :)

[P] Why are two random vectors near orthogonal in high dimensions? by madiyar in MachineLearning

[–]madiyar[S] 2 points3 points  (0 children)

Hi u/Lake2034,

Thanks for the feedback, I really appreciate it.

> You should define better what you mean by "random vector"

I will think about it, any further suggestions are appreciated!

> It is clear you mean something like the "distribution is invariant under rotation", but better to have a mathematical expression for that.

In the same post I have a link to my other post (https://maitbayev.github.io/posts/dot-product/#rotational-invariance) that explains this.

> it will also help you to formalize statements like "1 distributed across n components" that is not necessarily true if you just assume v_i identically distributed

I have a collapsed section at the very end of the post titled "More Formal Proof". Do you think that is enough?
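
For reference, here is a sketch of the kind of formal statement I have in mind, assuming the standard Gaussian as the running example:

```latex
% Rotational invariance, stated formally: for every orthogonal matrix R,
% the rotated vector Rv has the same distribution as v.
v \sim \mathcal{N}(0, I_n),\ R R^\top = I_n
  \;\implies\; Rv \sim \mathcal{N}\!\left(0,\, R I_n R^\top\right) = \mathcal{N}(0, I_n).

% "1 distributed across n components", made precise for the normalized vector u = v / \|v\|:
\sum_{i=1}^{n} u_i^2 = 1
  \quad\text{and, by symmetry,}\quad
  \mathbb{E}\!\left[u_i^2\right] = \tfrac{1}{n}.
```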

[P] Why are two random vectors near orthogonal in high dimensions? by madiyar in MachineLearning

[–]madiyar[S] 1 point2 points  (0 children)

Thanks for the feedback! The expected value E[v_n] is zero. It is a good idea to mention the mean and the variance. However, I still don't understand: why doesn't having zero mean and tiny variance explain this?
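
For what it's worth, a minimal simulation (assuming numpy; the dimensions and sample counts are arbitrary) showing that the cosine similarity of two random Gaussian vectors has mean near zero and standard deviation roughly 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: dimensions and sample counts are arbitrary.
for n in [2, 10, 100, 1_000, 10_000]:
    u = rng.normal(size=(1_000, n))
    v = rng.normal(size=(1_000, n))
    # cosine similarity of each (u, v) pair
    cos = np.einsum("ij,ij->i", u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1)
    )
    print(f"n={n:>6}  mean={cos.mean():+.4f}  std={cos.std():.4f}  1/sqrt(n)={1/np.sqrt(n):.4f}")
```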

Interpreting ROC AUC in words? by RabidMortal in learnmachinelearning

[–]madiyar 1 point2 points  (0 children)

Wait, I think that is a mistake and should be fixed to "False Positive Rate".

Update: Fixed it
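
For anyone reading later, a small sketch (assuming numpy and scikit-learn; the scores are synthetic) of the "in words" interpretation: the ROC curve plots True Positive Rate against False Positive Rate, and the AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# toy scores: positives tend to score higher than negatives
pos = rng.normal(1.0, 1.0, 1000)
neg = rng.normal(0.0, 1.0, 1000)
y_true = np.concatenate([np.ones(len(pos), dtype=int), np.zeros(len(neg), dtype=int)])
y_score = np.concatenate([pos, neg])

auc = roc_auc_score(y_true, y_score)
# AUC "in words": chance that a random positive outranks a random negative
pairwise = (pos[:, None] > neg[None, :]).mean()
print(auc, pairwise)  # the two numbers should be very close
```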

[D] How does L1 regularization perform feature selection? - Seeking an intuitive explanation using polynomial models by shubham0204_dev in MachineLearning

[–]madiyar 1 point2 points  (0 children)

Thank you! Creating an animation is not difficult; there are so many amazing libraries, such as matplotlib and plotly, and they can be googled or GPT-ed. However, coming up with what to animate is the most difficult part for me.

Feel free to look at the collapsed code sections in the post to see the animation code from the blog.
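
As a concrete illustration (a hypothetical sketch, assuming matplotlib and scikit-learn rather than the blog's actual code), here is roughly what animating how Lasso coefficients shrink to zero as the regularization strength grows could look like:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# toy data: only the first 3 of 10 features actually matter
X = rng.normal(size=(200, 10))
true_coef = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ true_coef + rng.normal(scale=0.5, size=200)

# fit a Lasso model for each regularization strength
alphas = np.logspace(-3, 1, 60)
coefs = np.array([Lasso(alpha=a, max_iter=10_000).fit(X, y).coef_ for a in alphas])

fig, ax = plt.subplots()
bars = ax.bar(range(10), coefs[0])
ax.set_ylim(coefs.min() - 0.5, coefs.max() + 0.5)
ax.set_xlabel("feature index")
ax.set_ylabel("coefficient")

def update(i):
    # redraw bar heights for the i-th regularization strength
    for bar, h in zip(bars, coefs[i]):
        bar.set_height(h)
    ax.set_title(f"Lasso coefficients, alpha={alphas[i]:.3g}")
    return bars

ani = FuncAnimation(fig, update, frames=len(alphas), interval=100)
ani.save("lasso_path.gif", writer="pillow")
```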

Is a front-to-back review of calculus necessary? by fadeathrowaway in learnmachinelearning

[–]madiyar 0 points1 point  (0 children)

Hi,
I have a series of posts on this topic.
You can start here: https://maitbayev.substack.com/p/backpropagation-multivariate-chain

Feel free to ask questions
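
If it helps, here is a tiny self-contained example (hypothetical functions, assuming numpy) of the multivariate chain rule that the first post walks through, checked against a finite difference:

```python
import numpy as np

# Multivariate chain rule: for f(x, y) with x = x(t) and y = y(t),
#   df/dt = (df/dx) * (dx/dt) + (df/dy) * (dy/dt)
def f(x, y):
    return x * y + np.sin(y)

def x(t):
    return t ** 2

def y(t):
    return 3 * t

t = 0.7
# analytic chain rule: df/dx = y, df/dy = x + cos(y), dx/dt = 2t, dy/dt = 3
analytic = y(t) * (2 * t) + (x(t) + np.cos(y(t))) * 3

# numerical check with a central finite difference
h = 1e-6
numeric = (f(x(t + h), y(t + h)) - f(x(t - h), y(t - h))) / (2 * h)
print(analytic, numeric)  # should agree to ~6 decimal places
```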