Monday, 12 January 2015

Statistical significance vs. practical significance

Andrew Gelman has a great article on this topic today at his blog, which is HERE.

Okay, so you're too lazy to go look there.

Here it is, then.
Thanks a lot, Andrew. This article is gold.

"You’ve heard it a million times, the idea is that if you have an estimate of .003 (on some reasonable scale in which 1 is a meaningful effect size) and a standard error of .001 then, yes, the estimate is statistically significant but it’s not practically significant.
And, indeed, sometimes this sort of thing comes up (and, irritatingly, such studies get publicity in part because of their huge sample size, which seems a bit unfair in that they need the huge sample size in order to detect anything at all), but not so often.
What is much more common are small studies where estimated effects are statistically significant but the estimates are unrealistically huge (remember, the statistical significance filter).
We’ve spent a lot of space on this blog recently on studies where the noise overwhelms the signal, where any comparisons in the data, statistically significant or not, are essentially meaningless.
But today (actually, in the future, whenever this post appears; I’m actually writing it on 22 Nov), I’d like to focus on a more interesting example where an interesting study was performed on an important topic, the estimate was statistically significant, but I think the estimate is biased upward, for the usual reason of the statistical significance filter.
It’s the story of an early childhood intervention on children that, based on a randomized experiment, was claimed by a bunch of economists to have increased their earnings (as young-adults, 20 years later) by 25% or 42%. Here’s what I wrote:
From the press release: “This study adds to the body of evidence, including Head Start and the Perry Preschool programs carried out from 1962-1967 in the U.S., demonstrating long-term economic gains from investments in early childhood development.” But, as I wrote on an earlier post on the topic, there is some skepticism about those earlier claims.
And this:
From the published article: “A substantial literature shows that U.S. early childhood interventions have important long-term economic benefits.”
From the press release: “Results from the Jamaica study show substantially greater effects on earnings than similar programs in wealthier countries. Gertler said this suggests that early childhood interventions can create a substantial impact on a child’s future economic success in poor countries.”
I don’t get it. On one hand they say they already knew that early childhood interventions have big effects in the U.S. On the other hand they say their new result shows “substantially greater effects on earnings.” I can believe that their point estimate of 25% is substantially higher than point estimates from other studies, or maybe that other studies showed big economic benefits but not big gains on earnings? In any case I can only assume that there’s a lot of uncertainty in this estimated difference.
Here’s the point
The problem with the usual interpretation of this study is not that it’s statistically significant but not practically significant. We’re not talking about an estimate of .003 with a standard error of .001. No, things are much different. The effect is statistically significant and huge—indeed, small sample and high variation ensure that, if the estimate is statistically significant, it will have to be huge. But I don’t believe that huge estimate (why should I? It’s biased, it’s the product of a selection effect, the statistical significance filter).
And all this “statistically significant but not practically significant” talk can completely lead us astray, by leading us to be wary of very small estimates, while what we should really be suspicious of is very large estimates! "
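If you want to see the significance filter in action, here's a quick simulation sketch. The numbers are made up for illustration (a small true effect of 0.1 drowned in a standard error of 0.5), but the point comes through: among the estimates that happen to clear the significance bar, the average is way above the true effect.

```python
# Illustration of the "statistical significance filter":
# in a noisy study, the estimates that come out statistically
# significant are biased upward in magnitude.
# TRUE_EFFECT and SE are hypothetical numbers chosen so that
# noise overwhelms signal, as in Gelman's examples.
import random

random.seed(1)

TRUE_EFFECT = 0.1    # assumed true effect size
SE = 0.5             # assumed standard error of each study's estimate
N_STUDIES = 100_000  # number of simulated replications

significant = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, SE)
    # two-sided test at the 5% level: significant if |estimate| > 1.96 * SE
    if abs(estimate) > 1.96 * SE:
        significant.append(estimate)

mean_sig = sum(abs(e) for e in significant) / len(significant)
print(f"True effect:                      {TRUE_EFFECT}")
print(f"Mean |estimate| among significant: {mean_sig:.2f}")
# The significant estimates average many times the true effect --
# exactly the upward bias (selection effect) Gelman is warning about.
```

Note the mechanics: any significant estimate must exceed 1.96 standard errors in magnitude, so with a tiny true effect and a big standard error, significance *guarantees* a huge (and wrong) estimate.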

Go to his blog and read the comments as well, lazybones!

This is why I highlight his blog on my sidebar!