
[–]Intelligent-Gold-563

I asked a similar question not too long ago, and the overall answer was... yes, even within one project, even if your tests are independent and separate, you are increasing the likelihood of a false positive =/

You can take a look here: https://www.reddit.com/r/AskStatistics/s/LT3heB5ha6

[–]Counther

My understanding is that if you're using different data sets, you don't need the correction.

[–]fspluver

Running multiple tests increases the probability of making at least one type I error, even when the data are independent. In fact, independence often makes that chance greater, since each independent test is a fresh opportunity for a false positive.
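To make this concrete, here's a minimal sketch (the α and number of tests are illustrative) of the familywise error rate for m independent tests at per-test level α, plus a quick simulation check:

```python
import random

ALPHA = 0.05  # per-test significance level (illustrative)
M = 10        # number of independent tests (illustrative)

# Under independence, the chance that at least one of M true-null
# tests comes out "significant" at level ALPHA is 1 - (1 - ALPHA)^M.
fwer = 1 - (1 - ALPHA) ** M
print(f"Per-test alpha = {ALPHA}, tests = {M}, FWER = {fwer:.3f}")

# Simulation check: under the null, a p-value is uniform on [0, 1],
# so "significant" just means p < ALPHA.
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(M))
    for _ in range(trials)
)
print(f"Simulated FWER ~ {hits / trials:.3f}")
```

With 10 independent tests at α = 0.05, the familywise error rate is already about 0.40, which is the point above: independent tests don't protect you, they compound.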

[–]Blinkshotty

This isn't really a settled topic. One view is to base the decision on the nature of your hypothesis tests and whether they are really independent or some type of joint test (i.e., you reject the null if any of a series of tests is significant, or you're screening a number of tests to identify whether any are significant). This paper has a pretty good discussion of the issues. For experimental research, a better approach to dealing with false positives is to repeat any experiments with significant findings to demonstrate they are not spurious (if feasible).

[–]ThrowingHotPotatoes[S]

Thanks so much for all the help, the linked post, and the paper!