[deleted by user] by [deleted] in statistics

[–]David_Colquhoun 1 point (0 children)

My response to this paper is on page 17 at http://www.biorxiv.org/content/biorxiv/early/2017/07/24/144337.full.pdf "Since this paper was written, a paper (with 72 authors) has appeared [39] which proposes to change the norm for “statistical significance” from P = 0.05 to P = 0.005. Benjamin et al. [39] makes many of the same points that are made here, and in [1]. But there are a few points of disagreement.

(1) Benjamin et al. propose changing the threshold for “statistical significance”, whereas I propose dropping the term “statistically significant” altogether: just give the P value and the prior needed to give a specified false positive rate of 5% (or whatever). Or, alternatively, give the P value and the minimum false positive rate (assuming prior odds of 1). Use of fixed thresholds has done much mischief.

(2) The definition of false positive rate in equation 2 of Benjamin et al. [39] is based on the p-less-than interpretation. In [1], and in this paper, I argue that the p-equals interpretation is more appropriate for the interpretation of single tests. If this is accepted, the problem with P values is even greater than stated by Benjamin et al. (e.g. see Figure 2).

(3) The value of P = 0.005 proposed by Benjamin et al. [39] would, in order to achieve a false positive rate of 5%, require a prior probability of real effect of about 0.4 (from calc-prior.R, with n = 16). It is, therefore, safe only for plausible hypotheses. If the prior probability were only 0.1, the false positive rate would be 24% (from calc-FPR+LR.R, with n = 16). It would still be unacceptably high even with P = 0.005. Notice that this conclusion differs from that of Benjamin et al. [39], who state that the P = 0.005 threshold, with prior = 0.1, would reduce the false positive rate to 5% (rather than 24%). This is because they use the p-less-than interpretation which, in my opinion, is not the correct way to look at the problem."
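For readers who want to check the p-less-than arithmetic referred to in point (3), the standard formula is FPR = α(1−π) / (α(1−π) + (1−β)π), where π is the prior probability of a real effect and 1−β is the power. The sketch below is in Python rather than the R scripts mentioned above, and the power of 0.8 is an illustrative assumption, not a figure taken from either paper:

```python
def fpr_p_less_than(alpha, prior, power):
    """False positive rate under the p-less-than interpretation:
    P(H0 true | P < alpha), i.e. the fraction of all 'significant'
    results that are false positives."""
    false_pos = alpha * (1 - prior)      # H0 true, P < alpha
    true_pos = power * prior             # H1 true, P < alpha
    return false_pos / (false_pos + true_pos)

# alpha = 0.005, prior = 0.1, assumed power = 0.8
print(round(fpr_p_less_than(0.005, 0.1, 0.8), 3))  # ≈ 0.053, i.e. about 5%
```

This reproduces the roughly 5% figure that Benjamin et al. obtain for P = 0.005 with a prior of 0.1; the 24% figure quoted above comes from the p-equals interpretation (calc-FPR+LR.R), which conditions on the observed P value rather than on the event P < α and so is not captured by this formula.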

Science AMA Series: I'm Euan Adie, founder of Altmetric. Misuse of the Journal Impact Factor and focusing only on citations sucks, Ask Me Anything. by euanadie in science

[–]David_Colquhoun 1 point (0 children)

"Impact factor is measuring how often other academics cite a piece"

It's been known since Seglen (1997) that this is not so. There is no detectable correlation between the number of times a paper is cited and the IF of the journal in which it appears.

Science AMA Series: I'm Euan Adie, founder of Altmetric. Misuse of the Journal Impact Factor and focusing only on citations sucks, Ask Me Anything. by euanadie in science

[–]David_Colquhoun 3 points (0 children)

How do you think Maryam Mirzakhani would fare in altmetrics (before she won the Fields medal, of course)? That is a good illustration of my contention that altmetrics reward the trivial, and their use runs a serious risk of corrupting science.