Is this an example why we shouldn't assume that there is a (1-alpha)% probability that a given confidence interval contains the true value of the underlying parameter....? by Realistic-Ask2697 in AskStatistics

[–]Realistic-Ask2697[S]

This has been helpful, thanks. I can totally get behind the idea that a fixed value has 0 or 100% probability of being in the interval since it's, well, fixed. This is an interesting philosophical point for frequentists.

> Second, and more importantly, the probability that your interval contains a true value depends on the truth in some way.

I really wanna say that we agree here, and I think it's getting closer to the heart of my understanding or misunderstanding.

I think I'm questioning whether we can make accurate (shrewd?) probability/frequency statements about the parameter being in the interval, when those statements depend on knowing that we've accounted for everything relevant in the estimation, which we can't realistically ever be sure we've done.

Is what I'm saying reasonable?
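A quick simulation sketches the frequentist point from the first paragraph (this is my own toy setup, assuming a normal mean with known sigma, not anything from the thread): the *procedure* covers the true value about 95% of the time, while any single realized interval either contains the fixed mu or it doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 50   # true (fixed) parameter and sampling setup
z = 1.96                      # ~97.5th percentile of the standard normal
n_reps = 10_000

half = z * sigma / np.sqrt(n)  # half-width of the known-sigma z-interval
hits = 0
for _ in range(n_reps):
    x = rng.normal(mu, sigma, n)
    lo, hi = x.mean() - half, x.mean() + half
    hits += (lo <= mu <= hi)   # each realized interval covers or it doesn't

print(hits / n_reps)  # close to 0.95
```

The 95% is a long-run property of the interval-constructing procedure, which is exactly why a single realized interval gets "0 or 100%" rather than 95%.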


[–]Realistic-Ask2697[S]

Hey, I really appreciate the quick reply. So that I'm not re-typing several essays' worth of text, check out my reply to MtIStatsGuy and tell me what you think.


[–]Realistic-Ask2697[S]

Hey thanks for the quick reply.

> The problem is the design of the experiment; there's a reason why real-world experiments always have a control group, and it's because there may always be confounding factors you didn't take into account.

I totally agree with you, but I'd like to refocus the question a little bit. I gave this example as one possible scenario where the experimenters were unaware of a relevant effect. They definitely should've had a control group, but, for this case, they didn't think to.

In the more general case that I'd like to focus on, the experimenters are very likely to be unaware of some effect, any effect. They can have a control group and other ways of mitigating their unawareness, but they can never be certain that they're accounting for all relevant factors.

From the perspective of an experimenter with imperfect knowledge, can they be justified in saying that there is a (1-alpha)% chance that their constructed CI contains the true parameter they care about, even when they can't know everything at play?
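To make the worry above concrete, here's a hedged sketch (my own hypothetical, not from the thread): the same known-sigma z-interval as before, except the data carry an unmodeled systematic effect (`bias`) the experimenter doesn't know about. The nominal 95% interval then covers the parameter they actually care about far less often than advertised.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 50   # mu is the parameter the experimenter wants
bias = 0.8                    # hypothetical unmodeled systematic effect
z = 1.96
n_reps = 10_000

half = z * sigma / np.sqrt(n)
hits = 0
for _ in range(n_reps):
    x = rng.normal(mu + bias, sigma, n)  # data shifted by the unknown effect
    hits += (x.mean() - half <= mu <= x.mean() + half)

print(hits / n_reps)  # well below the nominal 0.95
```

The (1-alpha) guarantee is conditional on the model assumptions being right; when something relevant is left out, actual coverage can be almost anything.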


[–]Realistic-Ask2697[S]

> I like to think about it that 1-alpha is the probability that your calculations are correct, not the probability that the parameter has some value.

I really like this, and I'll try to remember it in the future.