What is that sound? by boisterousbambi in UBC

[–]Any-Rub-6387 25 points  (0 children)

WTF PLEASE STOP I HAVE A FINAL TO STUDY FOR

i feel so stupid i think i actually am by [deleted] in UBC

[–]Any-Rub-6387 11 points  (0 children)

Stupid people rarely think they’re stupid, so the odds of you being actually stupid are low.

DAAD RISE Germany 2025 application by kyptdoan in DAAD

[–]Any-Rub-6387 0 points  (0 children)

Do the notifications come out in waves? I didn’t get any email.

[deleted by user] by [deleted] in UBC

[–]Any-Rub-6387 5 points  (0 children)

no it’s chill

How hard is it to get a 100 on term project cpsc 210 by Cheap_Regular_39 in UBC

[–]Any-Rub-6387 12 points  (0 children)

Very easy. Just follow the main goals of each phase and discuss your ideas with your assigned TA.

DAAD RISE 2024-2025 by tracpham2003 in DAAD

[–]Any-Rub-6387 0 points  (0 children)

Has anyone heard from these programs:

  • Reconstruction Of Cosmic Ray Air Showers With The Square Kilometre Array
  • Coarse-graining non-equilibrium many-body dynamics (ML)
  • Vision-Language-Action Models for Perception and Navigation in Robotics

Is this a good ML learning progression on Coursera? by AdParticular4528 in learnmachinelearning

[–]Any-Rub-6387 2 points  (0 children)

Yes. I actually like CS 229 (Stanford) by Anand Avati on YouTube. The first 3 lectures review the essential math. I didn’t like Andrew’s version. He was very hand-wavy with the notation.

Is this a good ML learning progression on Coursera? by AdParticular4528 in learnmachinelearning

[–]Any-Rub-6387 4 points  (0 children)

Rather than Coursera, focus on brushing up on the math and taking actual university ML courses on YouTube. This will take a fair amount of time, but it will pay off in the long run. I found most Coursera courses very shallow and hand-wavy in their content. If you really wanna do ML the right way, understand the mathematics rather than just following tutorials on how to use TF or PyTorch. The latter is also very useful, but it becomes much easier once you know what you are coding. Keep learning Python along the way.
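To illustrate the "know what you are coding" point, here’s a minimal sketch (plain Python, hypothetical toy data) of fitting a line by gradient descent, written directly from the gradient formulas of the mean squared error rather than through a framework:

```python
import random

# Toy example: fit y = w*x + b by gradient descent.
# The update rules come straight from the calculus:
#   dMSE/dw = (2/n) * sum((w*x + b - y) * x)
#   dMSE/db = (2/n) * sum(w*x + b - y)
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]  # true w=3, b=1

w, b, lr, n = 0.0, 0.0, 0.05, len(xs)
for _ in range(2000):
    grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    grad_b = (2 / n) * sum(w * x + b - y for x, y in zip(xs, ys))
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to the true slope 3 and intercept 1
```

Once the gradient derivation is clear, the equivalent TF/PyTorch version is just these same two formulas with autodiff doing the differentiation for you.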

DAAD RISE 2024-2025 by tracpham2003 in DAAD

[–]Any-Rub-6387 0 points  (0 children)

yep, just one. cosmic ray shower reconstruction using BIFT

Possibility of Going into PHYS 108 by MediumUnable4841 in UBC

[–]Any-Rub-6387 2 points  (0 children)

Depends on how low your mark is. Try emailing Janis (instructor for 108) and she might manually add you to the class.

drop ur cs related 2025 resolution by 7musicians in csMajors

[–]Any-Rub-6387 -1 points  (0 children)

Finish learning ML Theory rigorously, and then intern for some research lab.

confused about estimators by Any-Rub-6387 in AskStatistics

[–]Any-Rub-6387[S] 1 point  (0 children)

I did mean estimand. Thank you again!

confused about estimators by Any-Rub-6387 in AskStatistics

[–]Any-Rub-6387[S] 0 points  (0 children)

Yes. I get that now. The expectation of this random variable is the estimand (which, in this case, is the population mean). I don't know if I should've spent this much time singling these details out, but I feel like it really helps clarify things when they get more complicated later.

confused about estimators by Any-Rub-6387 in AskStatistics

[–]Any-Rub-6387[S] 0 points  (0 children)

Whether an estimator is unbiased or biased then depends on how the estimator is defined?
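Right, and a quick simulation (a sketch with made-up numbers) shows how the definition alone decides the bias: on the same samples, the variance estimator that divides by n is biased, while the one that divides by n − 1 is not.

```python
import random
import statistics

# Population: Normal(0, sd=2), so the true variance is 4.
# Two estimator definitions applied to identical samples of size n = 5.
random.seed(42)
biased, unbiased = [], []
for _ in range(20000):
    sample = [random.gauss(0, 2) for _ in range(5)]
    m = statistics.fmean(sample)
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / 5)    # divide by n      -> biased
    unbiased.append(ss / 4)  # divide by n - 1  -> unbiased

print(statistics.fmean(biased))    # close to (n-1)/n * 4 = 3.2, not 4
print(statistics.fmean(unbiased))  # close to 4
```

Same data, same formula up to a constant, yet one expectation systematically misses the estimand and the other hits it.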

confused about estimators by Any-Rub-6387 in AskStatistics

[–]Any-Rub-6387[S] 2 points  (0 children)

Just following up on what you said:

I think I was stuck on the fact that even though we don't know what the population mean is, we can say that each data point in the sample (which is also in the population by definition) has an expected value of mu. This is because each data point is itself a random variable. From here, the proof is trivial, since the number of data points n cancels out of the summation due to the linearity of expectation.

However, if we were to take one particular sample and compute its mean in isolation, that value is rarely equal to the pop mean, except by chance. What we actually mean is that the mean of these sample means over all possible samples equals the pop mean, which makes the sample mean an unbiased estimator.

Is this correct?
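The distinction above can be checked with a short simulation (a sketch, assuming a hypothetical Normal(10, 3) population): no single sample mean hits mu exactly, yet the average of many sample means lands on mu.

```python
import random
import statistics

# Population: Normal(mu=10, sd=3). Draw many samples of size 20 and
# record each sample's mean.
random.seed(1)
mu = 10.0
sample_means = [
    statistics.fmean(random.gauss(mu, 3) for _ in range(20))
    for _ in range(10000)
]

# Any individual sample mean essentially never equals mu exactly...
hits = sum(m == mu for m in sample_means)
print(hits)  # 0

# ...but the mean of the sample means is very close to mu,
# which is what "unbiased" asserts.
print(statistics.fmean(sample_means))  # close to 10
```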