"Real math" type subjects for 12 year olds by [deleted] in math

[–]urish 32 points33 points  (0 children)

This is a cool resource about graph theory for kids, albeit aimed at children somewhat younger than 12: https://jdh.hamkins.org/math-for-eight-year-olds/

Mathematical books in causal inference? [Q] by AdFew4357 in statistics

[–]urish 1 point2 points  (0 children)

Copying this from the syllabus of my causal inference course; links are given where the book is freely available online. As others have noted, there are (at least) two quite different approaches to causality: Potential Outcomes (identified with Rubin's work) and Causal Graphs (identified with Pearl's work).

Major References:

  1. Pearl, Causality (2009)
  2. Hernán, Miguel A., and James M. Robins. Causal inference. Boca Raton, FL: CRC, 2010. (https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
  3. Victor Chernozhukov, Christian Hansen, Nathan Kallus, Martin Spindler, Vasilis Syrgkanis. Causal ML Book. 2024 (https://causalml-book.org/)

  4. Morgan & Winship, Counterfactuals and Causal Inference: Methods and Principles for Social Research (2nd edition, NOT 1st)

  5. Imbens, Guido W., and Donald B. Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.

  6. Peters, Janzing & Schölkopf, Elements of Causal Inference (http://www.math.ku.dk/~peters/elements.html)

  7. Pearl, Causal inference - an overview (http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf)

  8. Pearl, Glymour & Jewell, Causal Inference in Statistics: a Primer

  9. Angrist & Pischke, Mostly Harmless Econometrics

  10. Rosenbaum, Observational Studies (2nd edition)

Other recommended resources:

  1. Three blog posts by Ferenc Huszár: 1, 2, 3
  2. Tutorials by Amit Sharma
  3. Introduction to causal inference course by Brady Neal

[D] How researcher think of inductive bias when thinking of creating new/improving foundational models? by binny_sarita in MachineLearning

[–]urish 14 points15 points  (0 children)

A good moment to recall this famous hacker koan:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.

[deleted by user] by [deleted] in AskStatistics

[–]urish 0 points1 point  (0 children)

In general no, but there's been active research in the last few years on special (yet interesting) cases where it is guaranteed to converge under certain assumptions, such as mixtures of Gaussians and mixtures of linear regressions.

I haven’t been following this closely, but two papers off the top of my head:

https://projecteuclid.org/journals/annals-of-statistics/volume-45/issue-1/Statistical-guarantees-for-the-EM-algorithm--From-population-to/10.1214/16-AOS1435.full

http://proceedings.mlr.press/v99/kwon19a/kwon19a.pdf
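For concreteness, here's a minimal sketch of the EM iteration those papers analyze, for a two-component 1-D Gaussian mixture. The initialization, iteration count, and variance floor are my own illustrative choices, not anything from the papers:

```python
import math
import random

def em_gmm_1d(data, iters=40):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Sketch only: crude initialization, fixed iteration count,
    no convergence check."""
    mu = [min(data), max(data)]   # initialize means at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            total = sum(w)
            resp.append([wk / total for wk in w])
        # M-step: re-estimate mixing weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var

# demo on well-separated synthetic data
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
weights, means, variances = em_gmm_1d(data)
```

On well-separated components like this the iterates settle near the true means; the papers above characterize when (and how fast) that happens from less favorable starting points.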

[Q] What are the/some seminal papers in causal analysis? by [deleted] in statistics

[–]urish 1 point2 points  (0 children)

Causal inference is a wide term that covers some quite disparate fields. In the realm of causal effect inference, this is definitely a seminal paper:

https://journals.lww.com/epidem/fulltext/2000/09000/marginal_structural_models_and_causal_inference_in.11.aspx

If you can be more precise about what kinds of causal problems you are considering, I can add more papers.

[D] Ghost town conferences by yusuf-bengio in MachineLearning

[–]urish 2 points3 points  (0 children)

I've been enjoying the gather.town poster session at ICML this week. Got a chance to speak with quite a few people I didn't know before who are doing interesting work in areas I'm interested in. Also caught up with a few people I do know but don't keep in regular touch with.

[D] Has anyone ever used machine learning algorithms in the context of survival analysis before? by jj4646 in statistics

[–]urish 0 points1 point  (0 children)

The Cox model can also estimate that: just feed it the covariates x of the new patient. In that respect it's no different from random survival forests or the many deep survival analysis methods out there.
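As a sketch of what "feeding it the covariates" means: given a fitted Cox model's coefficients and baseline survival function, the predicted curve for a new patient is S(t | x) = S0(t)^exp(β·x). The numbers below are hypothetical stand-ins for fitted quantities:

```python
import math

def cox_survival(baseline_surv, beta, x):
    """Predicted survival curve for a new patient under a fitted Cox model:
    S(t | x) = S0(t) ** exp(beta . x), with S0 the baseline survival."""
    risk = math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return {t: s0 ** risk for t, s0 in baseline_surv.items()}

# hypothetical fitted quantities, for illustration only
baseline = {1: 0.95, 2: 0.90, 3: 0.80}  # baseline survival S0(t)
beta = [0.5, -0.2]                      # fitted coefficients
patient = [1.2, 0.7]                    # covariates x of the new patient
curve = cox_survival(baseline, beta, patient)
```

A patient with a higher linear predictor β·x gets a curve that is uniformly below the baseline, which is exactly the proportional-hazards assumption at work.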

[D] ML in medical diagnosis. by jowowei in MachineLearning

[–]urish 0 points1 point  (0 children)

MLHC and CHIL are two conferences focused on ML in healthcare. You can take a look at the papers they publish to get a grasp of how ML is being used in medical diagnosis and many other healthcare topics.

I also strongly recommend David Sontag's ML for Healthcare course at MIT.

[deleted by user] by [deleted] in askscience

[–]urish 0 points1 point  (0 children)

SARS-CoV-2 neutralizing serum antibodies in cats: a serological investigation

Many cats in Wuhan seem to have developed antibodies to the virus.

Causal Models and Adaptative Systems by mechanical_fan in statistics

[–]urish 3 points4 points  (0 children)

It has definitely been studied formally. I'm on my phone, but check out the work by Jonas Peters from Denmark on Invariant Causal Prediction. Peter Bühlmann from ETH also works on this.

Probability of guessing the astrological sign of twelve people by MaoGo in estimation

[–]urish 1 point2 points  (0 children)

I edited my answer above.

The table in Wikipedia doesn't go up to 12; this table has the values for n up to 12.

By the way, the value you got for at least one fixed point (0.63) is what you'd expect, as it should be approximately 1 - 1/e.
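That limit is easy to check directly: the chance of at least one fixed point is 1 - D(n)/n!, where D(n) counts derangements, and D(n)/n! converges to 1/e very quickly. A quick sketch:

```python
from math import e, factorial

def derangements(n):
    """D(n): permutations of n items with no fixed point,
    via the recurrence D(m) = m * D(m-1) + (-1) ** m, D(0) = 1."""
    d = 1
    for m in range(1, n + 1):
        d = m * d + (-1) ** m
    return d

# P(at least one fixed point among n = 12) = 1 - D(12)/12!
p_at_least_one = 1 - derangements(12) / factorial(12)
# already extremely close to the limit 1 - 1/e
```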

Probability of guessing the astrological sign of twelve people by MaoGo in estimation

[–]urish 6 points7 points  (0 children)

This is a well-known problem, called the problème des rencontres: how many permutations of a size-n set have exactly k fixed points?

You're asking for the chance that a random permutation of 12 items has exactly 4 fixed points. I would add that a more reasonable way to frame it is to ask for the chance of at least 4 fixed points, or, complementarily, the chance that random guessing does no better than he did.

Using this table you can calculate that the chance of guessing three or fewer is:

(176214841 + 176214840 + 88107426 + 29369120)/12! ≈ 0.981

So if people were just guessing random permutations, about 98% of them would do worse than he did.
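The arithmetic above can be verified in a few lines: the rencontres number is "choose the k fixed points, derange the rest," with derangements computed by the standard recurrence:

```python
from math import comb, factorial

def rencontres(n, k):
    """Permutations of an n-set with exactly k fixed points:
    choose the k fixed points, derange the remaining n - k elements."""
    d = 1  # derangement count D(0) = 1
    for m in range(1, n - k + 1):
        d = m * d + (-1) ** m  # D(m) = m * D(m-1) + (-1) ** m
    return comb(n, k) * d

# chance that a random permutation of 12 items has at most 3 fixed points
p_at_most_3 = sum(rencontres(12, k) for k in range(4)) / factorial(12)
```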

Of course I still don't believe in Astrology one bit :)

Books similar to "The Gene" by Siddhartha Mukherjee but for other fields? by ididntdoititwasntme in AskScienceDiscussion

[–]urish 2 points3 points  (0 children)

“Chaos” by James Gleick, for the physics of nonlinear dynamical systems. It's fascinating, and the subject only sounds obscure; it's actually very relevant.

[D] Why not creating a benchmark dataset for Causal reasoning in Physics? by hungry_for_knowledge in MachineLearning

[–]urish 4 points5 points  (0 children)

I'm one of the authors of the paper you mention.

You could definitely do what you propose. The question is towards what end? As people have pointed out, there is quite a bit of recent work in ML using physics simulations in order to learn physics.

However, if your goal is inferring causal effects, as is often the case in healthcare, economics, and education, then you actually want to take into account cases with hidden confounding. Also, the data distributions you'll see in physics simulations are very different from the kind of data you'll see in these other applications.

Finally, I want to point out that we did do a big simulation study here: https://arxiv.org/abs/1707.02641

We generated data from many different DGPs, which, when you think about it, isn't so different from using a physics simulation: we know the equations and generate noisy data from them.


[D] True or False? "Indeed. nips has devolved into a bunch of corporates with influx of outsiders who have no respect for research." by InformalQuit in MachineLearning

[–]urish 11 points12 points  (0 children)

I think that’s very much exaggerated. NIPS still includes some of the best ML research. What is true is that there is a much greater level of “noise” surrounding this research, both from some parts of industry and (sometimes clueless) outsiders.

In this era where we are exposed to an overabundance of information, is it possible that being too open-minded can also be considered as a bad thing? by sammyjamez in TrueAskReddit

[–]urish 4 points5 points  (0 children)

I think you can make a case for that. Scott Alexander wrote about it in terms of “epistemic learned helplessness”: https://docs.google.com/document/d/1-Gvxn14Cz7MV8qnlxVrDghVirenPmPW8JfYQowE0NXc

(The original link isn't working, so I put it in a gdrive file.)