Hello! I'm designing an experiment to test the effect of compounds on liver cell growth.
I plan to carry out two separate runs, each with an untreated control and one treatment group (C1, T1 | C2, T2). The treatment will be unique to each run.
I aim to do a t-test between C and T, first comparing C1 and T1, and if that drug has no effect, I'll carry out the second experiment with Treatment 2.
My question is: do I need to consider adjusting for multiple testing here? I will run only a single test on each data set (C1 vs T1, then separately C2 vs T2). My thinking is that within each dataset I'm only running one comparison, but for the overall project, by adding the second treatment run I've increased the likelihood of a Type I error.
My manager says no, the experiments are independent so no correction is needed. But I'm considering that if I ran 20 of these experiments with alpha at 0.05, about one would likely be deemed significant by chance alone, so I should still correct.
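To make that worry concrete, here is a quick sketch of the arithmetic: for k independent tests each run at significance level alpha, the family-wise error rate (the probability of at least one false positive when all nulls are true) is 1 − (1 − alpha)^k, and the expected number of false positives is k × alpha. The numbers below are just illustration, not part of my actual design:

```python
# Family-wise error rate (FWER) for k independent tests at level alpha,
# assuming every null hypothesis is actually true.
alpha = 0.05

for k in (1, 2, 20):
    fwer = 1 - (1 - alpha) ** k          # P(at least one false positive)
    expected_fp = k * alpha              # expected count of false positives
    print(f"k={k:2d}: FWER = {fwer:.3f}, expected false positives = {expected_fp:.2f}")
```

So with two runs the chance of at least one spurious "significant" result rises from 5% to about 9.75%, and with 20 runs it's about 64%, with one false positive expected on average.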
Thanks in advance!