Monday, November 05, 2012

Arguments vs. studies

Last post, I argued that if Toyota had raised the price of a Camry by one cent, it would have sold two or three fewer cars over the last decade.  My argument went something like this:

"If Toyota raised the price by $20,000, it would have sold almost no cars, which is four million fewer than it did.  That works out to two cars per penny, on average.  You could argue that certain pennies had a larger effect, and certain pennies had a smaller effect, but the average has to be two cars per one cent increase.  And there has to be at least ONE penny of increase that changes the expected number of cars sold, otherwise you'd never get from four million to zero."

I was inspired by a post from Bryan Caplan, which came to the same conclusion by a different argument.  Caplan offered his post as an example to follow up one of his tweets:


"In social science, the best arguments prove more than the best studies. Hands down."

Absolutely right. 

The argument I made convinced some of you.  If I had tried to do a *study* to prove that ... well, I couldn't, really.  Toyota doesn't vary its price by pennies, and, even if it did, there certainly wouldn't be enough data.  And there would be all kinds of other factors you'd have to worry about.  What if rich people were willing to pay more?  What if Toyota raised prices on rainy days, and that accounted for the lower traffic?  What about advertising campaigns, and recalls?

It just couldn't be done.  If we insisted on a study, rather than an argument, we'd never have an answer.

------

On the same EconLib blog, David Henderson took Caplan one step further.  Studies aren't just worse than arguments, Henderson said; they're almost useless!


"Economist Jeff Hummel said he couldn't think of even one controversial issue that had been resolved with econometrics. The other 4 economists present, including me, immediately started trying to think of counterexamples. The first one that came to my mind was Milton Friedman's consumption function. Jeff agreed that this had resolved an issue but pointed out that Friedman did it simply with data, not with econometrics. The other examples that the other economists came up with were similar: data had resolved the issue but it didn't require econometrics."

This echoes something I've been saying for a long time about sabermetrics: complicated studies aren't needed.  There are those who defend academic studies in sabermetrics, claiming that they're more rigorous and better evidence than what the "amateur" community has come up with.  To them, I have issued a challenge -- show me just *one* academic study, or a study with a complicated methodology, that discovered something that couldn't have been found using simpler methods.  To date, nobody has replied.

Henderson's choice of words is interesting: the issue was resolved with "data" rather than "econometrics".  I assume that distinction corresponds to what I'd call "simple methods" versus "complex methods". 

If I'm interpreting it right, that goes a long way toward explaining why academic journals won't publish studies that don't include regressions.  They consider other methods to be just "data"! 

------

I think that's a stunning admission: that fancy methods don't resolve issues.  This is economics, a serious academic discipline.  Yet almost 100 percent of what gets published in academic journals -- even the most prestigious ones -- cannot resolve any issues!  On the other hand, a simple argument in a single blog post can be totally convincing.  And so can a simple study, one that's not deemed "rigorous" enough for publication. 

But, I think it's true. 

For you to decide an issue is "resolved", you need to understand it.  Complex statistical studies are very, very difficult to understand, even for people who have been reading them for a long time.  Some of the studies I've critiqued on this blog are like that ... it's taken me hours to figure out what's really going on, and what the regression really means. 

Take something you've believed for a long time, or something that seems intuitively obvious.  Like, say, whether you'll sell fewer cars if you raise the price by a penny.  Then someone comes along and says: I did this really complicated study, and I've proved that, on average, two buyers quit over a single penny!

Are you going to change your mind?  I bet none of you would.  The study might be just plain wrong, and it's too complicated for you to get your head around, so you can't tell.  In the best case, you might start to have a bit of doubt, and think, well, if a study shows it, maybe there's something to it.  But, probably not. 

But other people come along -- me, and Bryan Caplan -- and give you our arguments.  Now do you change your mind?  Some of you have!

Arguments can change minds.  Complicated studies can't. 

------

And this one particular argument, the one about the Camrys, is pretty simple.  Even a child, I think, would understand the logic behind it. 

Yet ... intelligent people disagree about it, and strongly.  I'm absolutely sure it's right.  You might be absolutely sure it's wrong.  And we might both be of above-average intelligence, with no political stake in the argument, perfectly capable of understanding fairly complex mathematical principles, and both of us well-versed in analytics and sabermetrics.

But it's a simple argument, and we still can't agree.

If that's the case, how is any complex sabermetric or econometric study going to be convincing?  




4 Comments:

At Tuesday, November 06, 2012 12:43:00 PM, Anonymous David said...

Ok, I'll bite. I would say that your division between "arguments" and "studies" is a false dichotomy. Academic studies (good ones at least) are arguments that use data and econometrics to make a very precise argument. I'm sympathetic to the argument that many economic studies are more complicated than necessary, but some complications actually matter.

A good example today (election day!) of a study that requires econometrics to give this sort of precise answer is David Lee's work on the advantage incumbents have in elections (http://www.princeton.edu/~davidlee/wp/RDrand.pdf). The basic argument is in figure 3a: if we look at someone who ran for congress last time and this time, someone who got 51% of the vote does dramatically better than someone who got 49% of the vote.

Now, someone might say: why not just make an argument using the picture and cut the rest of the article?  Or, why not just put the data in "buckets" and then compare?  The reason is that we generally want more than just a result of "there is some difference."  We want to measure the size of the difference, and measuring it through "buckets" turns out not to be the best way.  Linear regression with some polynomials, or local linear regression, does better.
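
To illustrate that point with a minimal sketch (made-up data, not the method or numbers from Lee's paper): compare a naive "bucket" comparison around a 50% cutoff with a local linear fit on each side.  The jump size, slope, noise, bandwidth, and bucket width below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: previous vote share near a 50% cutoff, and a later outcome
# that jumps by 0.10 at the cutoff, on top of an underlying slope plus noise.
vote_share = rng.uniform(0.40, 0.60, 2000)
incumbent = vote_share >= 0.50
outcome = 0.3 + 0.8 * (vote_share - 0.50) + 0.10 * incumbent + rng.normal(0, 0.05, 2000)

# "Bucket" estimate: average outcome just above the cutoff minus just below.
# This picks up part of the underlying slope as well as the jump.
width = 0.02
below = outcome[(vote_share >= 0.50 - width) & (vote_share < 0.50)].mean()
above = outcome[(vote_share >= 0.50) & (vote_share < 0.50 + width)].mean()
print("bucket estimate of the jump:", above - below)

# Local linear estimate: fit a line on each side near the cutoff and compare
# the fitted values *at* the cutoff, which strips out most of the slope.
window = 0.05
left = (vote_share >= 0.50 - window) & (vote_share < 0.50)
right = (vote_share >= 0.50) & (vote_share < 0.50 + window)
fit_left = np.polyfit(vote_share[left], outcome[left], 1)
fit_right = np.polyfit(vote_share[right], outcome[right], 1)
print("local linear estimate of the jump:",
      np.polyval(fit_right, 0.50) - np.polyval(fit_left, 0.50))
```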

For your pennies and cars example, I think the argument is very convincing in saying that "some penny change in price makes a difference." But how much of a difference matters if you're actually deciding what price to set (or which baseball player to hire), and to do that you sometimes need to actually do a study that gets the statistical details right.

 
At Tuesday, November 06, 2012 5:33:00 PM, Blogger pt scott said...

I wholeheartedly agree that you need an argument, and not just a fancy method. Furthermore, complex methods (when not accompanied by a good argument) often just serve to hide the fact that the authors don't fully understand what's going on.

On the other hand, simple methods are often misleading/wrong in economics and social science more broadly. Sometimes fixing problems with simple methods leads to more complex methods. As you say, very few big problems are ever "resolved" by the complex methods, but I don't think that should be taken as an argument against adding complexity. It's an indication that most big issues in social science are extremely hard to tackle empirically and there are still lots of problems even with the most complex methods out there.

 
At Tuesday, November 06, 2012 5:38:00 PM, Blogger Phil Birnbaum said...

David,

I agree with you. I've argued before that an academic study is really an argument. And there are lots of things that can't be argued without data.

But ... if you really want to convince me, the "bucket" demonstration will work better than the regression. And if the regression is very complicated, it won't work at all ... from a Bayesian standpoint, I'll assign a sufficiently high probability to something being wrong that it will barely move my prior.

More complex studies certainly can advance discussion on an issue, and can point to directions for further research ... but it would be rare for them to RESOLVE an issue the way an argument, or a bucket study, can.

 
At Tuesday, November 06, 2012 9:45:00 PM, Blogger Don Coffin said...

I can think of several issues in economics that could have only been addressed sensibly by fairly complex studies. The most famous is probably the effort to determine what impact income-support programs have on incentives to work. The studies here were (a) experimental in nature and (b) required significantly complex statistical (econometric) analyses.

Another example is the employment impact of minimum wage laws. The theoretical argument is *easy*--such laws reduce employment among low-skilled workers. And the initial studies aimed at *quantifying* that effect--but couldn't actually find it. As the studies got more powerful, the results got murkier. So the *argument* still gets made, but the *answer* is still really unknown.

A third (you'll note that all three of these are labor economics issues, which is all but inevitable, because labor economics is my academic field) is the impact of union representation of workers on labor productivity. Again, the *argument* seemed simple (but turned out not to be)--by restricting management discretion, unionization would lead to lower labor productivity. Again, economists undertook studies designed to measure the extent of this, and found, in what seemed at the time to be a paradox, that unionized workers were 15% to 25% *more* productive than were non-union workers in the same industries. While refinements to these studies tended to reduce the magnitude of the union productivity advantage, none of them actually overturned it.

The problem with relying on argument is that our arguments are never complete. And unless we confront them with evidence, we will make mistakes. The problem with "simple" evidential tests is that sometimes the world is complex enough that complex methods are required to tease out what the real effects are.

In baseball, for example, one can ask, "What impact do sacrifice bunts have on the probability that a team will (a) score more runs and/or (b) actually win the game?" Is the argument simple? Well, probably not. Is it easy to confront the situation with data? Well, probably not, as well, because the answer is probably, "It depends on when you bunt, whether the opposing team anticipated it, and on and on and on..."

So while it's nice to think that the world can be understood by using argument and simple analytical techniques, that doesn't always (or, in my opinion, often) work.

 
