Tuesday, October 19, 2010

How should the mainstream media report on sabermetric research?

A lot of ideas for posts to this blog come from mainstream media reports of academic sports studies. Often, those studies turn out to be flawed. Take, for instance, the recent article claiming that .300 hitters hit well over .400 in their last AB because they are highly motivated to succeed. It turns out that the result is caused by selective sampling, and not actually by batters "hunkering down" (as the New York Times put it in its headline).
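
(For readers wondering what the selective sampling looks like, here's a rough toy simulation -- my own sketch, not anything from the paper; the 500 scheduled at-bats, the 10-AB late-season window, and the "sit once you reach .300" rule are all just assumptions for illustration. The only way to cross .300 from below is with a hit, so the at-bat that ends up being a player's "last" is disproportionately a hit, and the group finishing between .300 and .301 shows an inflated last-AB average with no clutch effect at all. The crude stopping rule exaggerates the size of the effect, but it shows where a .400-plus number can come from.)

    import random

    # Toy model of the selection effect (an illustration only, not the paper's
    # method or data). Assumptions: a true-.300 hitter, 500 scheduled at-bats,
    # and a rule that over his last 10 opportunities he sits out as soon as
    # his average reaches .300. Whether an at-bat ends up being his "last"
    # then depends on its outcome.

    TRUE_TALENT = 0.300
    MILESTONE = 0.300
    SCHEDULED_ABS = 500
    LATE_WINDOW = 10  # final at-bats where the sit-on-.300 rule applies

    def simulate_season(rng):
        hits = 0
        abs_taken = 0
        last_ab_was_hit = False
        for i in range(SCHEDULED_ABS):
            # In the late-season window, a player already at .300 or better sits.
            if i >= SCHEDULED_ABS - LATE_WINDOW and hits / abs_taken >= MILESTONE:
                break
            hit = rng.random() < TRUE_TALENT
            hits += hit
            abs_taken += 1
            last_ab_was_hit = hit
        return hits / abs_taken, last_ab_was_hit

    rng = random.Random(0)
    just_over, just_under = [], []
    for _ in range(50_000):
        avg, last_hit = simulate_season(rng)
        if 0.300 <= avg < 0.301:
            just_over.append(last_hit)
        elif 0.298 <= avg < 0.300:
            just_under.append(last_hit)

    print("last-AB average, finished .300-.301:", sum(just_over) / len(just_over))
    print("last-AB average, finished .298-.300:", sum(just_under) / len(just_under))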

However, the media don't normally follow up after a study turns out to be flawed. As a result, there are probably a lot of people running around today believing that batters do actually exhibit clutch behavior when hitting .299. After all, two Ph.D.s said so, it passed peer review, and the New York Times reported it as fact.

That can't be a good thing, to wind up with people believing something that turns out to be false -- not just from the standpoint of sabermetrics, but also from the standpoint of the press.

Is there a better way to report these things?

My first reaction is that the press should report scientific findings the same way they report claims from political think tanks or interest groups -- with a skeptical tone, and with a response from those who might disagree. But that won't happen, for several reasons.

1. The paper is often not yet published and not yet available, so who's going to be able to say what's wrong with it when they can't even see it?

2. It takes a considerable amount of time for anyone else to read and digest the paper, especially if it uses complex methodology. Reporters don't have that kind of time to wait.

3. If the paper has already been peer reviewed and accepted for publication, there is a presumption that the paper is correct. And it's been thoroughly reviewed by experts with doctorates. What could an amateur skeptic bring to the story?

4. It doesn't even matter if the paper is wrong. The story is not "players hit .463 in their final at-bat because they're motivated." The story is "Academics say that players hit .463 in their final at-bat because they're motivated." That's true, and newsworthy, even if the embedded claim turns out to be false, because, at the time of publication, there's a strong possibility it might be true.

5. Reporters rely on friendly sources. They don't want to get a reputation for being hostile to academics who come to them with newsworthy ideas. Would the two authors of the .300 paper have talked to the reporter had they expected to have their paper challenged? I doubt it.

So what's the solution?

One thing I'd like to see is for the press to insist that, if they're going to publish a story, the study has to be publicly available at the time the story comes out. To their credit, the authors of this particular study had a working version of the paper on the web. But, sometimes, the paper won't come out for days or weeks. To me, when someone says, "I've discovered X is true but I won't allow you to see the evidence until next month," that shouldn't be a story. Further, it should be something that *academia* frowns upon. Science, after all, is supposed to be open and free, not something you exploit so that your institution looks good. If you're not going to allow the world to see the evidence until November 22, you shouldn't promote it until November 22.

But, having said that ... I have to admit that if academics *do* promote a finding before the paper comes out, it's still a story -- if the academics are credible, knowledgeable experts.

And, not to sound like I'm bashing academia, but ... when it comes to sabermetrics, the Ph.D. economist is usually *not* the expert -- the sabermetric community is the expert. And that, I think, is how the press needs to see it.

In my experience, the way the mainstream media works is that, when they quote a lower-credentialed party, they will almost always go to the higher-credentialed party for a counterpoint. But when they quote the higher-credentialed party, that's often enough.

And that kind of makes sense. When some amateur insists that Saturn's rings are made of beer, you publish it as a novelty story if it's interesting, but you make sure you quote a real astronomer saying the guy is nuts. On the other hand, when you're writing a story about Saturn on the science page, you quote the astronomer, but, obviously, you don't need to go to the amateur for the "beer" counterpoint.

That's a reasonable way of doing it. But what the press doesn't understand yet is that when you evaluate credentials, you have to give the nod to the *subject matter expert* (as Tango calls it). In this case, that's the sabermetricians. Here, it's the Ph.D. who's the amateur, because the subject matter isn't established economic knowledge -- it's established *baseball* knowledge. Any established sabermetrician would instantly realize that a .463 average for a .300 hitter isn't plausible at all, given what's known about clutch hitting. They'd have been able to provide a decent opposing point of view for the article, and perhaps even convince the reporter to write about the issue with a skeptical eye.

Perhaps, for that to happen, sabermetrics needs to lose a little of its "nerd" image. But, even so, it's a reporter's responsibility, when writing about scientific research, to be aware of what's mainstream expertise and what's not. Academics who don't specialize in sabermetrics, and are making surprising or outlandish claims, definitely fall into the "not" category.



4 Comments:

At Tuesday, October 19, 2010 12:07:00 PM, Anonymous Anonymous said...

Good column. With a little tweaking, it could describe general science reporting in the press. In particular, there is little interest in reporting on the follow-ups to the "big breakthrough" findings, follow-ups that often show that the original study was wrong -- which is especially common with the most dramatic, novel "breakthroughs."

 
At Wednesday, October 20, 2010 2:36:00 AM, Anonymous Anonymous said...

Very nice article, Phil. I don't know much about "applied" economics, econometrics, or whatever it is called, but the field, at least as exemplified by books like Freakonomics, is all about econometricians doing research on all kinds of subject matter that they know nothing about. That seems to be accepted in the field. Obviously it can be problematic, but I don't necessarily think that it is automatically irresponsible (I realize that you are not suggesting that it is).

MGL

 
At Wednesday, October 20, 2010 10:35:00 AM, Blogger Phil Birnbaum said...

Right, I think it's a very positive development that economists are looking to all kinds of subject areas for data.

But, that means they are publishing results in other fields. And that means they need to take those fields seriously, and that they -- along with academia, peer reviewers, and reporters -- need to understand that there is a large body of relevant outside expertise in those subjects, even if they are not academic areas of study.

 
At Thursday, November 04, 2010 6:04:00 AM, Anonymous Research Paper Writing said...

After reading the posts related to this blog topic, I now feel my research is almost complete. Happy to see that. Thanks for sharing this brilliant material.

 
