Dollars increase wealth, but cents don't
Is there a correlation between twenty-dollar bills and money? I think there is. Here's what I did: I took a thousand random middle-class people off the street. I counted how many twenties they had in their wallet, and how much money they had altogether. Then, I ran a regression.
It turns out that there is a strong relationship between twenties and money. The result:
Coefficient of twenties = $19.82 (p=.0000)
Because the coefficient for twenties was very strongly statistically significant, we can say that every twenty-dollar bill increases wealth by about $20.00.
I was so excited by this conclusion that I wondered whether a similar result holds for pocket change. So I added quarters to the regression. The result:
Coefficient of quarters = $0.26 (p=.25)
As you can see, the p-value is only .25, much higher than the .05 we need for statistical significance. Since the coefficient for quarters turns out not to be statistically significant, we conclude that there is no evidence for any relationship between quarters and money.
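The wallet regression is easy to reproduce in simulation. The sketch below is my own: the Poisson rates for bill counts and the $12 spread on "other money" are assumptions I chose so the results roughly mirror the article's numbers (twenties hugely significant, quarters with a coefficient near $0.25 but a large standard error).

```python
# Hypothetical re-creation of the wallet regression. The distributions
# and noise level are my own assumptions, not data from the article.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
twenties = rng.poisson(2, n)           # count of $20 bills per wallet
quarters = rng.poisson(3, n)           # count of quarters per wallet
other = rng.normal(30, 12, n)          # other bills/coins, ~$12 spread

total = 20.0 * twenties + 0.25 * quarters + other

# OLS by hand: total ~ intercept + twenties + quarters
X = np.column_stack([np.ones(n), twenties, quarters])
beta, *_ = np.linalg.lstsq(X, total, rcond=None)
resid = total - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se

print(f"twenties: coef={beta[1]:.2f}, t={t_stats[1]:.1f}")
print(f"quarters: coef={beta[2]:.2f}, t={t_stats[2]:.1f}")
```

The quarter coefficient comes out near its true value of $0.25, but its standard error is around $0.22 – so a perfectly real effect lands nowhere near statistical significance.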
This is surprising – in the popular press, there is a widespread theory that quarters are worth $0.25. But, as this study shows, no statistically significant effect was found, using 1,000 people and the best methodology, so we have to conclude that quarters aren't worth anything.
That sounds ridiculous, doesn't it? We have a regression that shows that quarters are worth about 25 cents, but we treat them as if they're worth zero just because the study wasn't powerful enough to show statistical significance.
But we're biased here because of the choice of example.
So suppose that instead of adding quarters to the equation, we had added something else, something that we could agree was completely irrelevant. Say, number of siblings.
And suppose that we got exactly the same results for siblings: for each sibling in our random subject's family, he winds up with a 26-cent increase in pocket money, with the significance level the same 0.25. (The result is certainly not farfetched: one in four times, we'd find an effect of at least this magnitude by chance alone.)
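That "one in four times" claim is checkable by simulation. This is my own sketch, with assumed distributions for siblings and money: generate data where siblings truly have no effect, run the regression many times, and count how often the t-statistic is at least as extreme as one corresponding to p = .25 (|t| ≈ 1.15).

```python
# Under the null of no sibling effect, how often does a spurious
# t-statistic of at least 1.15 (two-sided p = .25) show up?
# Distributions are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, sims, hits = 1000, 2000, 0
for _ in range(sims):
    siblings = rng.poisson(2, n).astype(float)
    money = rng.normal(50, 15, n)      # unrelated to siblings
    X = np.column_stack([np.ones(n), siblings])
    beta, *_ = np.linalg.lstsq(X, money, rcond=None)
    resid = money - X @ beta
    se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
    hits += abs(beta[1] / se) >= 1.15

frac = hits / sims
print(f"fraction of null simulations with |t| >= 1.15: {frac:.2f}")
```

The fraction comes out near 25%, which is exactly what a p-value of .25 means.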
In this case, if we said "there is no evidence for any relationship between siblings and money," that would be quite acceptable.
What's the difference between quarters and siblings? The difference is that there is a good reason to believe that the quarters result is real, but there is no good reason to believe that the siblings result is. By "good reason," I don't mean just our prior intuitive beliefs. Rather, I mean that there's a good reason based partly on the results of the study itself.
The study showed us that twenty-dollar bills were highly significant. We therefore concluded that there was a real relationship between twenties and wealth. But we know, for a fact, that 80 quarters equal one twenty. It is therefore at least reasonable to expect that the effect of 80 quarters should equal the effect of a twenty – or, put another way, that the effect of one quarter should be 1/80 the effect of a twenty. And that was almost exactly what we found.
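The arithmetic here is worth spelling out, using the article's own estimated coefficients:

```python
# Consistency check using the coefficients reported above.
twenty_coef = 19.82     # estimated effect of one $20 bill
quarter_coef = 0.26     # estimated effect of one quarter

predicted = twenty_coef / 80   # a quarter is 1/80 of a twenty
print(f"predicted quarter effect: ${predicted:.4f}")
print(f"observed quarter effect:  ${quarter_coef:.2f}")
```

The predicted value, about $0.248, is within a penny or two of the observed $0.26 – the "insignificant" coefficient is almost exactly where the significant one says it should be.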
How does it make sense to accept that twenty dollar bills have an effect, but 1/80ths of twenty-dollar bills do not? It doesn't.
If the convention in these kinds of studies is to treat any non-significant coefficient as zero, I think that's wrong. A reasonable alternative, keeping in mind the "sibling" argument, might be that if a factor turns out to be statistically insignificant, and there is no other reason to suggest there should be a link, only then can you go ahead and revert to zero. But if there are other reasons – like if you're analyzing cents, and you know dollars are significant – reverting to zero can't be right.
I argued this same point a little while ago when describing the Massey/Thaler football draft study. That situation was remarkably similar to the pocket money example.
The study was attempting to figure out how strongly an NFL player's production correlates with draft position. They broke production into something similar to dollars and cents.
"Dollars" were the most obvious attributes of the player's skill. Did he make the NFL? Did he play regularly? Did he make the Pro Bowl?
"Cents" is what was left after that. Given that he played regularly but didn't make the Pro Bowl, was he a very good regular or just an average one? If he made the Pro Bowl, was he a superstar Pro Bowler, or simply an excellent player?
The study found strong significance for "dollars" – players drafted early were much more likely to play regularly than players drafted late. They were also more likely to make the Pro Bowl, or to make an NFL roster at all.
But it found less significance for the "cents." The authors did find that players with more "cents" were likely to be better players, but the result was significant only at the 13% level (instead of the required 5%). From this, they concluded:
"there is nothing in our data to suggest that former high draft picks are better players than lower draft picks, beyond what is measured in our broad ["dollar"] performance categories."
And that's got to be just plain wrong. There is not "nothing in the data" to suggest the effect is real. There is actually strong evidence in the data – the significance of the other, broader, measure of skill. If you assume that dollars matter, you are forced to admit that cents matter.
There's an expression, "absence of evidence is not evidence of absence." This is especially true when you find weak evidence of a strong effect. If you find a correlation that's significant in football terms, but not significant in statistical terms, your first conclusion should be that your study is insufficiently powerful to be able to tell if what you found is real. Ideally, you would do another study to check, or add a few years of data to make your study more powerful. But it seems to me that you are NOT entitled to automatically conclude that the observed effect is spurious based on the significance level alone, especially when it leads to a logical implausibility, such as that dollars matter but cents don't.
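To make the power point concrete, here's a back-of-the-envelope sketch (my own, not from any of the studies): a t-statistic grows roughly with the square root of the sample size, so an effect currently sitting at p = .25 would, if real, land comfortably under .05 with four times the data.

```python
# Rough power illustration using the normal approximation to the
# two-sided p-value (fine for samples in the thousands).
from math import sqrt, erfc

def p_two_sided(t):
    # P(|Z| >= t) for a standard normal Z
    return erfc(abs(t) / sqrt(2))

t_now = 1.15                    # roughly p = .25 two-sided
p_now = p_two_sided(t_now)
t_4x = t_now * sqrt(4)          # same true effect, 4x the data
p_4x = p_two_sided(t_4x)

print(f"current sample: t={t_now:.2f}, p={p_now:.3f}")
print(f"4x the sample:  t={t_4x:.2f}, p={p_4x:.3f}")
```

That's the sense in which "not significant" often just means "underpowered": the data didn't change its story, only its volume.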
In this particular study, I think that if you do accept the coefficient for cents at face value, instead of calling it zero, you reach the completely opposite conclusion than the authors do.
The reason I'm writing about this again is that I've just found another instance of it, in the study discussed here (full review to come). The authors come up with an estimate of a coefficient for three separate NBA seasons. The coefficient is (to oversimplify a bit) the amount by which you'd expect a team that's been eliminated from the playoffs to underperform in winning percentage.
Their three results are .220 (significant), .069 (not significant), and .192 (significant).
My conclusion would be to say that the estimated effect of being eliminated in the middle season is much lower than in the other two seasons, and then to ask whether that difference is statistically significant. I would point out, for the record, that the .069 is not significantly different from zero.
But the study's authors go further – they say that the .069 should be arbitrarily treated as if it were actually .000:
"Our results show that teams that were eliminated from the playoffs [in the .069 year] were no more likely to lose than noneliminated teams." [emphasis mine.]
That is simply not true. The results show those teams were .069 more likely to lose than noneliminated teams.
That is: given our prior understanding of how basketball works, given the structure of the study (which I'll get to in a future post), given that the study shows a strong effect for the other years, and given that the effects are all in the same direction, there is certainly enough evidence that the .069 is likely closer to the "real" value than zero is.
In this case, the conclusions of the study -- that the second year is different from the other two -- don't really change. But the conclusion sounds much punchier when the authors say there is no effect, instead of a small one.