Why don't reporters consult sabermetricians?
Here's a completely misguided article about baseball production and salary. It's from Forbes magazine, which should know better.
The piece tries to figure out which players are the most overpaid in baseball. To get a measure of production, they compare a player's raw stats to the average at his position. The article doesn't give an actual formula, but it says that
"we compared the major offensive stats--batting average, home runs, runs batted in and OPS ... of each player to the league average for starters at his position. Then we did the same with salaries."
The idea being, if player X hits .300, and the league average is .270, player X's salary should be about 11 percent higher than average (that's just .300 divided by .270).
That's just plain wrong -- a .300 hitter is a lot more than 11% more valuable than a .270 hitter. You have to measure from replacement value. If the best available minor-leaguer would hit .240, then you can see that the .300 hitter is actually twice as valuable as the .270 hitter -- he gains 60 points over replacement, as compared to 30 points. (This doesn't mean his salary should be double; only the part of the salary above replacement level should be double.)
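To make the gap concrete, here's a quick Python sketch using only the hypothetical numbers above (a .300 hitter, a .270 league average, a .240 replacement level); the function names are mine and the figures are purely illustrative.

# Hypothetical batting averages from the example above -- not real data.
LEAGUE_AVG = 0.270
REPLACEMENT = 0.240

def forbes_style_ratio(batting_avg):
    # The Forbes approach: raw stat divided by the league average.
    return batting_avg / LEAGUE_AVG

def value_over_replacement(batting_avg):
    # Measure from replacement level, relative to an average starter.
    return (batting_avg - REPLACEMENT) / (LEAGUE_AVG - REPLACEMENT)

print(forbes_style_ratio(0.300))      # about 1.11 -- "11% better than average"
print(value_over_replacement(0.300))  # 2.0 -- twice as valuable as the average starter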
Of course, batting average isn't a very good proxy for performance; OPS is much better (although there are other measures that are better still).
And, since we're already shooting fish in a barrel ... why would you evaluate a hitter by his RBI? We've known better than to do that for, what, around 30 years now.
This brings up a question, which Tom Tango asks on "The Book" blog: How did this article make it into print? Shouldn't Forbes, which is a pretty sophisticated business magazine, know better than to run a financial analysis that's wrong on so many levels?
The answer, I think, is that writers and editors don't care as much about accuracy when it comes to baseball. I can think of several reasons, off the top of my head. Feel free to suggest more in the comments.
1. People don't think of sabermetrics as a formal field of knowledge, one with subject-matter experts who should be consulted. Forbes would never have dabbled in this kind of layman analysis in the field of, say, engineering. Can you imagine an article on which bridges are most overpriced, based on dividing the price by the total length of the bridge compared to the average length of bridges? That would never happen. Forbes would consult an engineer, who would tell them that there are many, many factors affecting how much it costs to build a bridge, and that their analysis was simplistic, silly, and very wrong.
2. My perception is that among most of the media (the Wall Street Journal among the notable exceptions), sabermetricians are seen as fringe researchers, nerds with the "blogging out of their parents' basement" stereotype. Surely a veteran financial reporter must be more expert at player valuation than an unpaid 24-year-old blogger? Well, no. In this field, it turns out that the experts are among those with the fewest formal credentials. And that might be difficult for the mainstream media to accept.
3. It's only baseball. Remember when Bill James, in the 1991 Baseball Book, reamed out David Halberstam for some awful inaccuracies in "Summer of '49"? James wrote,
"It is frightening to think that Halberstam, one of the nation's most respected journalists, is this sloppy in writing about war and politics, yet has still been able to build a reputation simply because nobody has noticed.
"What seems more likely is that Halberstam, writing about baseball, just didn't take the subject seriously. He just didn't figure that it *mattered* whether he got the facts right or not, as long as we was just writing about baseball."
I think that's what's happening here. Baseball is frivolous, Forbes probably thinks, and so it's not worth caring about getting it right. It's easy to forget, when you're dealing with a subject you don't take seriously, that there are other people who do. Freakonomics thought it was the first to talk about baby-naming trends, and was criticized by serious experts whom it would have been good to consult. I've done things like that myself, assuming that the subject is so trivial that I must be just as much an expert as anyone else. It's usually not the case.
4. Sabermetrics is a field composed more of logic than fact; sabermetric analysis sounds like opinion, not science. What treatments work for hepatitis? I don't know, and Forbes reporters don't know; it's a question of medicine, a question of fact. We can't even guess. You have to ask a doctor; doctors know.
But when it comes to evaluating a player's performance ... well, if you don't think about it too deeply, it seems like you don't need to know anything. Sure, just divide the salary by the numbers ... it seems reasonable and logical, doesn't it, that if you drive in 20 percent more runs you should make 20 percent more money? It doesn't seem like there's an underlying fact there, something that an expert has to tell you. You can come up with something plausible in your head.
And if someone tells you you're wrong, so what? It doesn't look like it's a question that has a right answer. If you can find two financial analysts, one who says a stock is worth $30, the other who says the same stock is worth $10, and it's OK to stick them on consecutive pages of Forbes ... then who's to say that figuring out the value of a player is any different? It's just one opinion instead of another.
This isn't a problem just in sabermetrics ... a few years ago, I read an economist (I think it was Steven Landsburg) with the same complaint. He said (and I'm paraphrasing) that economics is such that everyone thinks they understand it as well as people who have studied it for years -- people who wouldn't dream of having an opinion on a medical issue will confidently come forth with solutions -- wrong solutions -- to whatever economic problem is on today's agenda.
And, again, I think that's because a lot of it is logic. And logic is hard -- it's harder to reason through an argument than to memorize a fact. There are lots of arguments that sound plausible, but are, in fact, flawed. It takes work to figure out which is which, especially when you don't have enough background to be able to avoid the common fallacies, when you might be too proud of your own flawed logic to accept that you might have left something out, and when you might, ignorantly, be rejecting certain principles that sound silly to you but that the field has empirically found to be correct.
It's funny, too, because reporters will be willing to publish their half-baked theories without a second thought -- but when it comes to a specific mathematical calculation, they'll often cite a source, even when the calculation is simple enough to state as fact. It's as if they know their math *facts* might be shaky, but believe their mathematical *arguments* are beyond error.