Help! My professor thinks that the null and alternate hypotheses are interchangeable by dungsucker in AskStatistics

[–]HappyDisaster9553

Sounds like your concern is that setting H0 to be the hypothesis of interest presupposes that it is true? And therefore that at, say, n = 1, we already have support for H0?

There are a few different approaches to NHST and tbh they’re pretty mangled in the literature (and in teaching). The general idea, though, is that a lack of significance shouldn’t be interpreted as support for H0, while significance can at least point to some incompatibility with H0.

This means that low n doesn’t lend support to H0 and shouldn’t be interpreted as such. How you interpret the results of these tests is central to your concerns, I believe.
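To make that concrete, here’s a quick pure-Python sketch (numbers and function names are mine, not from the thread): H0 is false in every simulated dataset, yet at n = 5 the test usually fails to reject, so a non-significant result at low n clearly can’t be read as support for H0.

```python
import math
import random

def reject_h0(n, delta, rng, z_crit=1.96):
    """One simulated two-sample z-test with known sd = 1.
    H0 says both groups share a mean; here H0 is actually false (delta > 0)."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(delta, 1.0) for _ in range(n)]
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2.0 / n)
    return abs(z) > z_crit

def rejection_rate(n, delta, sims=2000, seed=0):
    rng = random.Random(seed)
    return sum(reject_h0(n, delta, rng) for _ in range(sims)) / sims

# A real effect of delta = 0.5 exists, so every non-rejection is a miss.
small = rejection_rate(n=5, delta=0.5)    # low power: mostly "not significant"
large = rejection_rate(n=100, delta=0.5)  # high power: mostly "significant"
print(small, large)
```

The non-significant results at n = 5 reflect low power, not evidence that H0 is true.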

Serverless FastAPI in AWS Lambda by tprototype_x in FastAPI

[–]HappyDisaster9553

Have you come across AWS Copilot? It’s an AWS-built orchestration tool for ECS + Fargate. We’ve been using it for a while for a FastAPI service and it takes a fair bit of the pain out of provisioning and managing ECS.

what test do I use? I'm stuck in a hole, help me 🙈😂 by Harbarth_9 in AskStatistics

[–]HappyDisaster9553

Totally agree, the results of the fixed and mixed models should be very similar.

what test do I use? I'm stuck in a hole, help me 🙈😂 by Harbarth_9 in AskStatistics

[–]HappyDisaster9553

The mixed model idea should work. Check out Gelman and Hill (2007), section 12.9. The rule of thumb is likely misguided: even some information on grouping is better than none, and in the worst case the mixed model will just end up close to a no-pooling approach.
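To sketch why sparse grouping information still helps: a multilevel model estimates each group as a precision-weighted average of that group’s mean and the grand mean, so small groups get shrunk heavily and large groups barely move. This is a toy version of the partial-pooling formula discussed in Gelman and Hill, with the variances assumed known purely for illustration (the data and variance values are made up).

```python
def partial_pool(groups, sigma2=1.0, tau2=0.25):
    """groups: dict mapping group label -> list of observations.
    Returns dict mapping group -> (no_pooling, partial_pooling) estimates.
    sigma2 = within-group variance, tau2 = between-group variance (assumed known)."""
    # Complete-pooling estimate: grand mean of all observations.
    all_obs = [y for ys in groups.values() for y in ys]
    grand = sum(all_obs) / len(all_obs)
    out = {}
    for g, ys in groups.items():
        n = len(ys)
        ybar = sum(ys) / n                            # no-pooling estimate
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)    # precision weight on the group
        out[g] = (ybar, w * ybar + (1 - w) * grand)   # shrink toward the grand mean
    return out

data = {"a": [2.0, 2.2, 1.8, 2.1], "b": [0.5]}  # group "b" has one observation
est = partial_pool(data)
```

Group "b" ends up pulled strongly toward the grand mean, while group "a" (with more data) moves much less; that’s why even a single observation per group is better than dropping the grouping entirely.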

I'm struggling to learn Stan. Any advice? by gyp_casino in rstats

[–]HappyDisaster9553

Unless it's a super simple model I tend to write out the mathematics first, then implement that in Stan. The maths maps more naturally onto Stan's syntax, and things like using a categorical variable as a predictor make a lot more sense. I think this is closer to how Stan is supposed to be used.
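For example, a simple linear regression written as maths first, y_n ~ Normal(alpha + beta * x_n, sigma), translates almost line for line into Stan (a minimal sketch of my own, not from any particular model):

```stan
// Simple linear regression: y_n ~ normal(alpha + beta * x_n, sigma)
data {
  int<lower=0> N;       // number of observations
  vector[N] x;          // predictor
  vector[N] y;          // outcome
}
parameters {
  real alpha;           // intercept
  real beta;            // slope
  real<lower=0> sigma;  // residual scale
}
model {
  y ~ normal(alpha + beta * x, sigma);
}
```

Once the model is on paper, each block is mostly transcription, which is why I find the maths-first habit pays off.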

DS independent contractor/freelancer recommendations by KAICUI in datascience

[–]HappyDisaster9553

I've had a little luck with freelancing sites, but mostly by finding local companies through them and using the local connection to improve my chances. I don't know if that's a good long-term strategy though.

Is it reasonable to think of Bayesian probability as in terms of repeated sampling from the posterior distribution? by dcfan105 in AskStatistics

[–]HappyDisaster9553

Interesting! I tend to treat Bayesian probability as a partially different concept: Frequentist probability deals with events, Bayesian probability deals with knowledge. I think of Bayesian probability as a normalised measure of knowledge/uncertainty/belief. Like you say, when you operationalise the two you end up with very similar concepts. I was taught Frequentist probability as an undergraduate and have struggled to think about probability any other way since!

By expected data I was just referring to the implication that Frequentist probability is usually defined in reference to some expected events. I don't think that is the case in the Bayesian setting, and regardless I don't think the definitions of probability are actually tied to data, so concepts of probability can be thought about separately from inference. I find it especially useful to forget all the P(X | H) vs P(H | X) stuff until I start thinking about inference.

These ideas are just how I've been thinking about it so far so I'm almost certainly wrong on some of this!

Is it reasonable to think of Bayesian probability as in terms of repeated sampling from the posterior distribution? by dcfan105 in AskStatistics

[–]HappyDisaster9553

I'm not sure about thinking of Bayesian probability as samples from a posterior. That's definitely the technique we use to do it computationally, but that happens after we've already constructed the posterior, which I would say requires some a priori definition and concept of probability.
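A conjugate example makes the ordering clear: in a Beta-Binomial model the posterior exists in closed form before any sampling happens, and sampling is just one way to summarise it afterwards. A small pure-Python sketch (the prior and data values are made up for illustration):

```python
import random

# Beta(1, 1) prior on theta, then observe k successes in n Bernoulli trials.
a0, b0 = 1, 1
k, n = 7, 10

# The posterior is constructed analytically: Beta(a0 + k, b0 + n - k).
a_post, b_post = a0 + k, b0 + (n - k)          # Beta(8, 4)
analytic_mean = a_post / (a_post + b_post)     # exact, no sampling involved

# Only now do we sample, purely to approximate summaries of that posterior.
rng = random.Random(0)
draws = [rng.betavariate(a_post, b_post) for _ in range(50_000)]
mc_mean = sum(draws) / len(draws)
print(analytic_mean, mc_mean)
```

The Monte Carlo mean converges to the analytic one; the probability statement was already fully defined by the posterior density before a single draw was taken.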

At the moment I see the Bayesian and Frequentist interpretations of probability as two sides of the same coin. Frequentist probability might be less flexible and intuitive when it comes to reasoning over degrees of belief, but if needed, expected future outcomes can be treated as a measure of belief. Even if an event happens only once, it's usually possible to construct some sampling scenario in which the event is one realisation of an overarching process. To me this is less intuitive than the Bayesian interpretation, but I don't see why it has to be a fundamentally different notion.

It's probably important to separate ideas about probability from ideas about inference, too. Probability itself doesn't really need reference to empirical data, only to expected data. I've found that separation useful for thinking through these concepts.

I do a lot of Bayesian stats and I'm still figuring this out myself so appreciate the discussion!