Arguments vs. studies
Last post, I argued that if Toyota had raised the price of a Camry by one cent, it would have sold two or three fewer cars last decade. My argument went something like this:
"If Toyota raised the price by $20,000, it would have sold almost no cars, which is four million fewer than it did. That works out to two cars per penny, on average. You could argue that certain pennies had a larger effect, and certain pennies had a smaller effect, but the average has to be two cars per one cent increase. And there has to be at least ONE penny of increase that changes the expected number of cars sold, otherwise you'd never get from four million to zero."
I was inspired by a post from Bryan Caplan that came to the same conclusion with a different argument. Caplan's post was an example, intended as a follow-up to one of his tweets:
"In social science, the best arguments prove more than the best studies. Hands down."
The argument I made convinced some of you. If I had tried to do a *study* to prove the same thing ... well, I couldn't, really. Toyota doesn't vary its price by pennies, and, even if it did, there would certainly not be enough data. And there would be all kinds of other factors to worry about. What if rich people were willing to pay more? What if Toyota raised prices on rainy days, and that accounted for the lower traffic? What about advertising campaigns, and recalls?
It just couldn't be done. If we insisted on a study, rather than an argument, we'd never have an answer.
On the same EconLib blog, David Henderson took Caplan one step further. Studies aren't just worse than arguments, Henderson said; they're almost useless!
"Economist Jeff Hummel said he couldn't think of even one controversial issue that had been resolved with econometrics. The other 4 economists present, including me, immediately started trying to think of counterexamples. The first one that came to my mind was Milton Friedman's consumption function. Jeff agreed that this had resolved an issue but pointed out that Friedman did it simply with data, not with econometrics. The other examples that the other economists came up with were similar: data had resolved the issue but it didn't require econometrics."
This echoes something I've been saying for a long time about sabermetrics: complicated studies aren't needed. There are those who defend academic studies in sabermetrics, claiming they're more rigorous and better evidence than what the "amateur" community has come up with. To them, I have issued a challenge -- show me just *one* academic study, or a study with a complicated methodology, that discovered something that couldn't have been found using simpler methods. To date, nobody has replied.
Henderson's choice of words is interesting: the issue was resolved with "data" rather than "econometrics". I assume that maps onto my distinction between "simple methods" and "complex methods".
If I'm interpreting it right, it goes a long way to explaining why academic journals won't publish studies that don't include regressions. They consider other methods to be just "data"!
I think that's a stunning admission, that fancy methods don't resolve issues. This is economics, a serious academic subject. But almost 100 percent of what gets published in academic journals -- even the most prestigious ones -- cannot resolve any issues! On the other hand, a simple argument in a single blog post can be totally convincing. And so can a simple study, one that's not deemed "rigorous" enough for publication.
But, I think it's true.
For you to decide an issue is "resolved", you need to understand it. Complex statistical studies are very, very difficult to understand, even for people who have been reading them for a long time. Some of the studies I've critiqued on this blog are like that ... it's taken me hours to figure out what's really going on, and what the regression really means.
Take something you've believed for a long time, or something that seems intuitively obvious. Like, say, whether you'll sell fewer cars if you raise the price by a penny. Then someone comes along and says: I did this really complicated study, and I've proved that, on average, two buyers quit over a single penny!
Are you going to change your mind? I bet none of you would. The study might be just plain wrong, and it's too complicated for you to get your head around, so you can't tell. In the best case, you might start to have a bit of doubt, and think, well, if a study shows it, maybe there's something to it. But, probably not.
But other people come along -- me, and Bryan Caplan -- and give you our arguments. Now do you change your mind? Some of you have!
Arguments can change minds. Complicated studies can't.
And this one particular argument, the one about the Camrys, is pretty simple. Even a child, I think, would understand the logic behind it.
Yet ... intelligent people disagree about it, and strongly. I'm absolutely sure it's right. You might be absolutely sure it's wrong. And we might both be of above-average intelligence, with no political stake in the argument, perfectly capable of understanding fairly complex mathematical principles, and both of us well-versed in analytics and sabermetrics.
But even with this simple argument, we can't agree.
If that's the case, how is any complex sabermetric or econometric study going to be convincing?