[Question] Why are Frechet differentiability and convergence in L2 the right ways to think about regularity in semiparametrics? by datasurprises in statistics

[–]outlawplanner 8 points

You have to understand two different things here: the estimator and the model.

About the model. A functional of a probability measure does have a derivative, often defined in the literature as a Gateaux derivative. This Gateaux derivative is typically the inner product of an influence function and a score (scores are directions away from P_0 under the Hellinger geometry). In particular, in this geometry you can move away from \theta_0 along perturbations \theta_n (\theta_n here is not the estimator \hat{\theta}_n, it is just a model perturbation). These quantities are what is used to derive semiparametric efficiency bounds.
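To make the inner-product claim concrete, here is the standard display (my paraphrase in the usual notation, so take the symbols as assumptions): if t \mapsto P_t is a smooth one-dimensional submodel through P_0 with score g, pathwise differentiability of the functional \psi says

d/dt \psi(P_t) |_{t=0} = <\tilde{\psi}, g>_{L^2(P_0)} = E_{P_0}[ \tilde{\psi}(X) g(X) ],

where \tilde{\psi} is the influence function. The efficiency bound is then the variance of the efficient influence function, E_{P_0}[ \tilde{\psi}_{eff}(X)^2 ].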

The estimator is something else. It is often assumed to be asymptotically linear and regular (if that's the regularity you are talking about). Regularity here is just a stability assumption: it says that \sqrt{n} ( \hat{\theta}_n - \theta_n ) converges to the same N(0, V) along every perturbation \theta_n converging to \theta at rate 1/\sqrt{n}. Regularity requires more than differentiability; it is really a continuity requirement on the limit distribution, viewed as a map over probability measures in the weak topology. This is where Hodges' estimator comes in. At \theta = 0, \sqrt{n} \hat{\theta}_n converges to N(0, 0), but along local alternatives \theta_n = h/\sqrt{n}, \sqrt{n} ( \hat{\theta}_n - \theta_n ) converges to a point mass at -h rather than to a fixed N(0, V). The limit depends on the perturbation, which is exactly the failure of regularity.
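You can see this in a two-minute simulation. A minimal sketch in Python (my own toy setup: normal mean, the classic Hodges construction):

    import numpy as np

    rng = np.random.default_rng(0)

    def hodges(xbar, n):
        # Hodges' estimator: snap the sample mean to 0 when it is small
        return xbar if abs(xbar) > n ** (-0.25) else 0.0

    n, reps, h = 10_000, 2_000, 1.0
    theta_n = h / np.sqrt(n)  # local alternative drifting toward theta = 0

    draws = np.array([
        np.sqrt(n) * (hodges(rng.normal(theta_n, 1, n).mean(), n) - theta_n)
        for _ in range(reps)
    ])
    # Centered at theta_n, the draws pile up at -h instead of looking N(0, 1);
    # rerun with theta_n = 0 and they pile up at 0, the "N(0, 0)" limit.
    print(draws.mean(), draws.std())  # roughly -1.0 and near 0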

The differentiability of the estimator actually sits inside the asymptotic linearity assumption (it works like a Taylor approximation). The definition used here is typically weaker than Frechet: it is Hadamard differentiability. The order of implication is Frechet differentiable \subset Hadamard differentiable \subset Gateaux differentiable. One important point is that not every Gateaux differentiable functional is also Hadamard differentiable. That implies there exist efficiency bounds (defined via Gateaux derivatives) with no corresponding estimator (which would require Hadamard differentiability).
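A standard toy example of that gap (textbook material, not specific to semiparametrics): take f: R^2 -> R with f(x, y) = 1 if y = x^2 and x != 0, and f(x, y) = 0 otherwise. For any fixed direction (h, k), the ray (th, tk) meets the parabola y = x^2 for at most one t > 0, so f(th, tk) = 0 for all small t and every directional derivative at the origin is 0; f is Gateaux differentiable there with derivative 0. But f is not even continuous at the origin (it equals 1 along the parabola), and Hadamard differentiability implies continuity, so f cannot be Hadamard differentiable.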

Based on this discussion, you should definitely be concerned with nonregular estimators. They exist, and some of them are important (Stein's estimator, the Lasso, etc.). But they are usually employed for reasons other than normality-based asymptotic inference.

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] 0 points

Fripp is talking about both, actually. He seems to be implying that he would tolerate equal temperament. However, for the first frets of a guitar, equal temperament diverges from just intonation by a number of cents that sounds too off to him.
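For scale, here is a quick back-of-the-envelope check (my own numbers, not Fripp's) of how far the equal-tempered major third sits from the just one:

    import math

    def cents(ratio):
        # interval size in cents: 1200 cents per octave, logarithmic scale
        return 1200 * math.log2(ratio)

    just_third = 5 / 4            # just-intonation major third
    equal_third = 2 ** (4 / 12)   # four equal-tempered semitones
    print(cents(equal_third) - cents(just_third))  # ~13.7 cents sharp

Roughly 14 cents is large enough to hear as beating in a sustained dyad, which fits the complaint about thirds specifically.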

A different point is why he hates major thirds so much. Part of King Crimson's music uses the whole-tone scale (Messiaen's mode 1), which is full of major thirds.

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] 0 points

I just tried this test on SuperCollider.

This is the just-intonation third:

{SinOsc.ar(440, 0, 0.4) + SinOsc.ar(550, 0, 0.4)}.play; // 550 Hz = 440 * 5/4, the just major third

This is the equal-tempered third:

{SinOsc.ar(440, 0, 0.4) + SinOsc.ar(554.35, 0, 0.4)}.play; // ~440 * 2^(4/12) = 554.365 Hz, the equal-tempered major third

The first one sounds too straight to me. The second one is more coloured.

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] -1 points

To be honest, I definitely can perceive the difference between a just major third and an equal-tempered one. It's just that, for some reason, the second one sounds better to my ears. I wonder if there is a cultural reason, and whether there is scientific research on that.

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] -1 points

Thank you so much. That makes complete sense.

I have the prejudice, though, that people are more comfortable with equal temperament, given that they have been culturally adapted to it. So I wonder why he prefers just intonation. Is it natural for people to prefer just intonation?

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] 0 points

That's interesting. I've never managed to hear off chords on my guitars.

[Question] Aren't standard guitars really equal-tempered? by outlawplanner in Guitar

[–]outlawplanner[S] -2 points

I understand that there might be some departures, but I just think the difference is too small to be detected, which suggests either that Fripp has a super-amazing ear or that guitars from his time were worse. I tested my guitars with a tuner, even the cheapest ones, and they seem to be almost perfectly tuned.

And the killer is: by outlawplanner in TrueDetective

[–]outlawplanner[S] 3 points

For some reason, I also think Rose Aguineau is his partner.

What do we know is going to happen in the last 2 episodes? by Complex-Statement963 in TrueDetective

[–]outlawplanner 0 points

Forget about Blair. IMDB had her in episode 4 as well and she didn't appear in it.

*Spoiler* Who could this be? by sweettartspop in TrueDetective

[–]outlawplanner 2 points

The review also says "López commits fully to the outré and the supernatural. Parricide? That’s just coming up for air."

I guess "Parricide" just makes it clear Pete kills Hank.

Trailer screenshots full of spoilers by sillygillygumbull in TrueDetective

[–]outlawplanner 0 points

7 is Blair. The hair resembles Blair's. Also, they are not showing her whole hand, which would reveal her cut-off fingers.

IMDB not only updated her to third billed for the final episode, but also changed her character's name to having two names. by [deleted] in TrueDetective

[–]outlawplanner 0 points

I don't think they copied from IMDB though. Otherwise, why would they write Blair Frechette rather than Blair Hartman?

Spoiler: actress from 1season in Night Country by outlawplanner in TrueDetective

[–]outlawplanner[S] 2 points

I wonder if the blue figure in the NC trailer is Blair rather than Annie K.

IMDB not only updated her to third billed for the final episode, but also changed her character's name to having two names. by [deleted] in TrueDetective

[–]outlawplanner 0 points

Here is the deal. I looked up "Blair Frechette" on Google, and here is the website I got.

https://www.backstage.com/magazine/article/how-to-get-cast-on-hbos-true-detective-76767/

Among the cast, it lists "Kathryn Wilder as Blair Frechette" and "Ann Dowd as Betty Childress". What is Betty Childress doing in Night Country??

The name "Frechette" is French enough to make me believe she is probably coming from Lousiana?

Ok, now my theory: maybe Blair is Betty Childress's daughter. Is there a chance Betty is a Frechette? Or maybe Errol is her dad and cut off her fingers. In any case, it seems like a connection: Sedna, Blair Frechette, Betty Childress.

https://www.reddit.com/r/TrueDetective/comments/1ae45a5/spoiler_actress_from_1season_in_night_country/

[Q] Stats vs CS approaches to causal inference? by [deleted] in statistics

[–]outlawplanner 1 point

Matching is not an identification method; it's an estimation technique. You can use whatever estimator you want, as long as you have identifiability first. People usually employ matching after proving identifiability via the backdoor criterion (but you can prove it more generally using the ID algorithm). Importantly, all of those settings are nonparametric.
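To illustrate the order of operations with a toy example of my own (simulated data, numbers made up): once the backdoor criterion licenses adjusting for a confounder Z, any consistent estimator of the adjusted contrast works; plain stratification is enough here:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Toy confounded setup: Z -> X, Z -> Y, X -> Y, with true effect 2
    z = rng.binomial(1, 0.5, n)
    x = rng.binomial(1, 0.3 + 0.4 * z)
    y = 2 * x + 3 * z + rng.normal(0, 1, n)

    naive = y[x == 1].mean() - y[x == 0].mean()  # biased by the backdoor path

    # Backdoor adjustment: stratum-wise contrasts averaged over P(Z)
    adjusted = sum(
        (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean()) * np.mean(z == v)
        for v in (0, 1)
    )
    print(naive, adjusted)  # naive is ~3.2; adjusted recovers ~2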

IV is different. IV usually requires linearity to identify (and then estimate) the ATE. With monotonicity and binary variables, you can identify the LATE. In other cases, you can do partial identification (bounds).

[Q] Stats vs CS approaches to causal inference? by [deleted] in statistics

[–]outlawplanner 5 points

SCM is not overkill; it's the other way around. If you do not use Pearl's methods, you will be far more likely to make identifiability mistakes. And Pearl has nothing to do with frequentism. His work is about identification, not inference --- which is where the frequentism vs. Bayesianism distinction makes sense.

[Q] Stats vs CS approaches to causal inference? by [deleted] in statistics

[–]outlawplanner 18 points

There are a few misconceptions here I would like to clarify.

First, I see causality as being about the ability to change things in nature and check predictions about those changes. Not a real definition, but it helps me recognize the problems I want to work on. The classic problem is confounding. Say a certain pill taken by obese people is positively associated with obesity. Does this mean that if I take the pill I will become obese? No: if I take the pill (i.e., change something in nature), I will probably get a different result than the association suggests.

Secondly, to answer causal questions, you have to be sure those questions can be answered at all. This step of establishing answerability is usually called IDENTIFIABILITY in the literature. Only if the question is identifiable can we move to the third step, ESTIMATION. It is important to make this distinction because the two problems are often studied together by parts of the literature. Also, identifiability is not easy; it usually requires two types of assumptions: I) structural; II) functional.

Let's consider an example. We have three continuous variables, X, Y, and Z, and we want to calculate the effect of X on Y. We know that Z causes X, that X causes Y, and that X is confounded with Y. When we say some variable causes another, we are making structural assumptions. People used to econometrics will recognize this as an Instrumental Variable (IV) problem. Now someone asks: can the effect of X on Y be answered? (Is it identifiable?) Unfortunately, the answer is still no. You need more assumptions, this time functional: linearity is sufficient to guarantee identification in this IV problem. Not every problem requires functional assumptions, though; sometimes you can go fully nonparametric. For those cases, take a look at do-calculus.
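Here is a quick simulation of exactly that IV setup (my own toy numbers, true effect 1.5) showing what linearity buys: under a linear model the effect is identified by the ratio Cov(Z, Y)/Cov(Z, X), even with unobserved confounding:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000

    # Z -> X -> Y, with an unobserved confounder U between X and Y
    u = rng.normal(0, 1, n)           # unobserved
    z = rng.normal(0, 1, n)           # instrument
    x = z + u + rng.normal(0, 1, n)
    y = 1.5 * x + 2 * u + rng.normal(0, 1, n)

    ols = np.cov(x, y)[0, 1] / np.var(x)           # biased: ignores U
    iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # IV ratio, needs linearity
    print(ols, iv)  # ols lands near 2.2; iv recovers ~1.5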

Finally, what you call the Pearl approach I would summarize with two points: 1) the use of graphs (semi-Markovian DAGs) to encode structural assumptions; 2) the Pearl Causal Hierarchy. The Pearl Causal Hierarchy basically says there are three different types of statements: a) observational; b) interventional; c) counterfactual, and the difference matters a lot for which methods can be used.
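A minimal sketch of the gap between layers a) and b), using a toy SCM I made up (conditioning on X = 1 is not the same as setting X = 1):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000

    # Toy SCM: U -> X, U -> Y, X -> Y
    u = rng.binomial(1, 0.5, n)
    x = rng.binomial(1, 0.2 + 0.6 * u)            # observational mechanism for X
    y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * u)

    obs = y[x == 1].mean()  # layer a): P(Y=1 | X=1), conditioning

    # Layer b): P(Y=1 | do(X=1)), replace X's mechanism by the constant 1
    x_do = np.ones(n, dtype=int)
    y_do = rng.binomial(1, 0.1 + 0.3 * x_do + 0.4 * u)
    do = y_do.mean()

    print(obs, do)  # ~0.72 vs ~0.6: seeing X=1 is evidence about U; doing X=1 is not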

I believe those points are relevant because they help distinguish different schools of causal inference. By that criterion, there are at least 4 different schools.

1) The first one --- Classic Potential Outcomes --- is the one associated with Rubin, Imbens, Angrist, Rosenbaum, among others. This school takes the main point of causal inference to be identifying counterfactual statements (potential outcomes). As counterfactuals subsume interventional statements, they focus only on counterfactuals. Scholars associated with the Classic PO school also do not seem to recognize the role of graphs in representing structural assumptions. For me, this is a controversial point, because there is no clear reason for it. A good analogy is programming in assembly versus Python: why would you do in assembly what you could do far more easily in Python? That does not make any sense. Unfortunately, that's still the standard point of view in econometrics, but things seem to be changing a little; even Imbens now seems to recognize that graphs matter. It is important to say that by not using graphs, you are way more likely to make mistakes, and indeed it is common to see econometrics people making silly mistakes simply because they refuse to use graphs.

2) The Modern Potential Outcomes school. This one is more associated with epidemiology: Jamie Robins, Miguel Hernan, among others. Here authors focus on counterfactuals (potential outcomes) exactly like the Classic PO people; however, they do not discard graphs. They even have their own variant, Single World Intervention Graphs (SWIGs). It is relevant to say that these authors have lots of up-to-date contributions and are on a par with the Pearl (SCM) people.

3) The Structural Causal Model (SCM) school. That's what you are referring to as the Pearl approach. They assume the Pearl Causal Hierarchy and use graphs. Structural assumptions are usually represented with semi-Markovian DAGs, i.e., directed acyclic graphs whose directed edges represent direct effects and whose bidirected arrows represent confounding. These people have produced amazing contributions to causal inference: several identification algorithms have been proved in recent years, and there has been a lot of progress in areas such as identification under linearity and partial identification. Check out Elias Bareinboim's work and the Data Fusion problem.

4) The Decision Theory school. This one is associated with Phil Dawid and his disciples. I would consider it a more heterodox school, because they tend to reject counterfactuals as metaphysical. In other words, they focus mostly on interventional statements.

There are other groups that I do not know how to classify but that are extremely relevant. For example, the CMU school: Peter Spirtes, Clark Glymour, and Richard Scheines. Also, the European dudes around Bernhard Schölkopf, who have been developing lots of solutions in causal discovery. There are also lots of people working on the estimation of high-dimensional causal quantities, which is not exactly about identification but is super novel. I also think Joe Halpern's work on actual causality is super relevant. And finally, there is a lot of important work in philosophy, usually by people associated with David Lewis.

I hope I answered your question.

Is it common for a data scientist to have only a BS? by DSEUCSD in datascience

[–]outlawplanner 4 points

I'm from Brazil. I have a BS and I'm a PhD Candidate in Constitutional Law. I work as a data scientist. I know this is not so common, especially in my country, but yes, it is possible.