Tuesday, April 06, 2010

JSE: Rodney Fort on sabermetrics and salary data

The February 2010 issue of the "Journal of Sports Economics" contains eight articles, all on topics of interest to sabermetricians. I'll try to review them all over the next couple of weeks. [2015 update: well, that didn't happen!]

I've already talked about one of them: the one by David Berri and JC Bradbury, "Working in the Land of the Metricians." This post is on one of the other seven articles: "Observation, Replication, and Measurement in Sports Economics," by Rodney Fort.

Dr. Fort's research here shows that MLB salary data, as found on the internet and elsewhere, is of variable reliability. For instance, of the 13 sources of payroll data that Fort found, seven have data for 1999. Their estimates of average player salary are:

$1,567,873
$1,604,972
$1,569,000
$1,733,557
$1,926,565
$1,720,050
$1,609,418

The difference between the high and low is 23 percent, which is a lot. Fort guesses that some of the difference comes down to when in the year the calculations were made, since rosters change continuously, and that seems like a reasonable explanation. In 2009, the USA Today data showed 812 players, as opposed to the 750 who would appear if only opening-day rosters were used.
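Just to make the arithmetic concrete, here's a quick check of that spread in Python, using the seven estimates listed above:

    # The seven published estimates of 1999 average MLB salary.
    estimates = [1567873, 1604972, 1569000, 1733557,
                 1926565, 1720050, 1609418]

    low, high = min(estimates), max(estimates)
    spread = (high - low) / low

    print("low:    $%s" % format(low, ","))     # $1,567,873
    print("high:   $%s" % format(high, ","))    # $1,926,565
    print("spread: %.1f%%" % (spread * 100))    # 22.9% -- call it 23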

Still, for the years where there are multiple estimates, why does every one seem to be different? That's a very good question to be asking.

The article contains multiple (different) estimates for every year between 1976 and 2006. I suspect the discrepancies won't make a huge difference in most studies, but, still, accurate data is better than inaccurate data, and I didn't realize before Dr. Fort's article that the differences were so large.

-----

Fort's article serves as an introduction to his guest-edited "Special Issue on the Challenges of Working With Sports Data," and so he also talks about the sabermetric community, and his personal interactions with us. One of his examples mentions me:


"I have had a couple of informative interactions at ... Phil Birnbaum's [blog] ... Recently, Birnbaum attempted to decompose team valuations from Forbes into cash flow value and "Picasso Value". The posts that followed were all of a mind on improving Birnbaum's estimation approach. I posted a suggestion that his approach did not include other monetary values of ownership (documented in some of my work). Thus, he ran the danger of "inflating" Picasso value. Birnbaum acknowledged the input but the flow of discussion did not change."


Dr. Fort has a point -- he did disagree with my post, and with those of the commenters, and, indeed, the flow of discussion didn't change. In the first post, Fort wrote that Forbes' team values didn't include the value of tax breaks and corporate cross-ownership (such as when a team sells broadcasting rights to its own parent company at a discounted rate, thus keeping the team's stated profits artificially low). I should have acknowledged that in a subsequent comment.

In the second post, Dr. Fort mentioned that again, and I asked him what his estimate of Picasso Value would be in light of his estimates of the cash flow value of owning a team. He argued that in the absence of hard evidence for consumption value (i.e., evidence that team owners are willing to accept less profit for the thrill of owning the team), we should assume that value is zero. There, I disagree -- even accepting Dr. Fort's estimates of other-than-operating-profit value, I think there is still evidence of consumption value. But I should have said that online at the time. Certainly, when an expert like Dr. Fort drops by with some relevant knowledge on an issue, we absolutely should acknowledge the contribution. My apologies to Dr. Fort that such was not the case.
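For readers who don't remember those posts: "Picasso Value" is just the residual left over after subtracting the estimated cash-flow value of the team from its market price. Here's a minimal sketch of Dr. Fort's point in Python, with numbers invented purely for illustration:

    def picasso_value(market_value, cashflow_value, other_monetary_value=0.0):
        # The residual "consumption" value of owning the team. If
        # other_monetary_value (tax breaks, discounted related-party
        # broadcast deals, and so on) is left at zero, everything it
        # represents gets misattributed to the residual -- Fort's
        # "inflating" of the Picasso Value.
        return market_value - cashflow_value - other_monetary_value

    market = 400e6     # hypothetical published team valuation
    cashflow = 250e6   # hypothetical value of operating profits
    other = 100e6      # hypothetical tax breaks, cross-ownership, etc.

    picasso_value(market, cashflow)         # 150 million -- inflated
    picasso_value(market, cashflow, other)  #  50 million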

-----

In another example, Dr. Fort talks about his experience at an American Statistical Association conference, where a bunch of academic statisticians had a session to debate how to figure out the best slugger in baseball history:


"The primary issue for nearly all in the room had to do with holding all else constant (dead ball era, end of the spitball, changes in rules ... and so on), not at all unfamiliar to economists. I interrupted the back-and-forth question and answer session and offered a normalized pay measure: the highest-paid slugger must surely be the best. The rest of the discussion held fast to its previous course ..."


Here, I'd argue that Fort's suggestion is not really answering the question that was asked. The statisticians are asking, what is the proper algorithm for calculating how good the player is? And Fort is saying, "don't bother! The team GMs already know the algorithm. If you want to know who the best player is, just look at their payrolls."

That's fair enough. But it's only natural that the statisticians won't be satisfied by that: they don't just want the answer, they want to know how to calculate it.
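To be fair, Fort's measure is at least trivial to compute, which I suppose is part of his point. Here's a sketch in Python, assuming a pandas table of player-season salaries whose column names ("player", "year", "salary") I've made up:

    import pandas as pd

    def best_slugger_by_pay(seasons: pd.DataFrame):
        # Fort's shortcut: normalize each salary by that year's league
        # average (to hold the salary era constant), then declare the
        # highest normalized salary ever recorded the best slugger.
        seasons = seasons.copy()
        year_avg = seasons.groupby("year")["salary"].transform("mean")
        seasons["norm_pay"] = seasons["salary"] / year_avg
        best = seasons.loc[seasons["norm_pay"].idxmax()]
        return best["player"], best["year"], best["norm_pay"]

But the code just sharpens the objection: the function tells you who, not why.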

This narrative reminds me of an old story: A physics student sits down to an exam, and one of the questions is: "show how you can use a barometer to determine the height of a tall building." The student is stumped at first, but then writes: "Find the superintendent of the building, and say to him: 'If you tell me how tall your building is, I will give you this fine barometer.'"

-----

In both the introduction and conclusions, Fort comments on the issues raised by the Berri/Bradbury paper:

"Berri and Bradbury relate to us their extensive experience with the SABR-metric and APBR-metric communities ... There are tensions and jealousies aroused over the issue of proper credential and peer review. There is the issue of proper assignation of credit; when incorporating a technique or measurement developed outside the [academic] peer-review process, what to do? Finally, there is the issue of dealing online with these fellow travelers who often adopt the anonymity of pseudonyms. ...

"I cannot help but think that part of the tension with the metricians revolves around desires for immortality in the naming of a thing. This seems patently absurd to me ... In sports, the authors point this out for the case of Dobson's creation of batting average in 1872. I do not ever remember seeing anybody cite Gauss (1821) and Karl Pearson (1984) on the origins of the standard deviation or the subsequent coinage of the variance by Ronald Fisher (1918). And I doubt these authors took much umbrage over the fact that their inventions simply became household terms in record time. However perhaps, we are wrong in this; if someone grouses, perhaps we should respond."


Hmmm ... from my standpoint, I don't think naming is the issue at all. I agree with Dr. Fort here that it's perfectly reasonable to omit citations for results that have become household terms. We all talk about "DIPS" or "Runs Created" or "Linear Weights" without mentioning Voros McCracken or Bill James or Pete Palmer every time; that's just normal. And I don't think I've ever seen any sabermetrician anywhere demanding that his particular term be used, or that anything be named after him.

One of the things we *do* expect from the academic community is that they be aware of our own research and conventions, and, unless they disagree with the findings, that they show as much respect for them as they would if the research had been published academically. That's just common sense, isn't it? Take DIPS, for example. Do a couple of quick Google searches, and you should find at least 20 studies testing and confirming the hypothesis. But when Berri and Bradbury write about it, in the very same journal as Fort's article, what do they do? They mention Voros McCracken, and then do a quick regression and announce that this "supports" DIPS. Which is fine, but, if those twenty studies had been published academically, they wouldn't dare omit the citations. The omission gives the reader the (perhaps unintended) impression that the subject hasn't been studied yet.

It would have been very easy for Berri and Bradbury to have quoted existing research in addition to their own. They could have said (taking two of the many existing studies at random), "the DIPS theory was repeatedly tested and supported in numerous studies after McCracken's article; for example, Tippett (2003) and Gassko (2005). A simpler confirmation is that we now calculate the persistence over time of batting average on balls in play, and find it much lower than for other statistics."

They deliberately chose not to do that.
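Incidentally, that persistence check is nothing exotic -- it's just a year-to-year correlation. A minimal sketch, assuming a pandas table of pitcher-seasons with column names I've invented ("pitcher_id", "year", "babip", "so_rate"), and ignoring real-world details like minimum-innings cutoffs:

    import pandas as pd

    def year_to_year_r(seasons: pd.DataFrame, stat: str) -> float:
        # Correlate each pitcher's stat in season N with the same
        # pitcher's stat in season N+1.
        nxt = seasons.copy()
        nxt["year"] -= 1  # shift season N+1 back so it pairs with season N
        paired = seasons.merge(nxt, on=["pitcher_id", "year"],
                               suffixes=("", "_next"))
        return paired[stat].corr(paired[stat + "_next"])

    # The DIPS prediction: BABIP barely persists; strikeout rate does.
    # year_to_year_r(seasons, "babip")    # low correlation
    # year_to_year_r(seasons, "so_rate")  # much higher correlation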

Most interestingly, Dr. Fort writes,


"Do not get me wrong. The internet seems a place of great potential. Suppose the living Nobel Prize winners start a blog. An idea occurs, they debate it, the courageous among us try to contribute, and the discipline moves forward on that issue. Few would doubt the value ... "


And you know what? That's exactly what's happening! Of course, there's no Nobel in Sabermetrics, but the best and brightest minds in the field are already online doing exactly what Dr. Fort wishes the best economists would do. And that's why the field moves so fast.

With the possible exception of computer software development, I bet there is no other field of human knowledge today in which you can see progress move so quickly in real time, where the best minds in the field publish so much excellent, rigorous work so reliably, where collaboration happens so easily and instantly, and where even an unknown can have something to contribute to the work of even the most experienced sabermetrician.

Fort continues,

"In the publish-or-perish world [of academic economists, we] all know why this has not happened yet. The best we get is editorials by Nobel Prize winners, occasionally scolding each other. And the worst we get is anonymous yelling at each other behind pseudonyms in blog discussion areas. .... This may be entertaining ... but I wonder how it will all play out in the rigorous pursuit of answers to sports economics questions."

Well, in economics, maybe; I'll take Dr. Fort's word for it. But in sabermetrics, the internet has indeed provided the ideal breeding ground for the rigorous pursuit of answers.

As Dr. Fort implies, the incentives facing academic economists are very different from those facing amateur sabermetricians, or even professional sabermetricians (who write books or work for teams). To avoid "perishing," and suffering in their career, academics are forced to write formally and rigorously, submit to peer review by two or three colleagues, wait several months (or more) before getting a response, and publish in journals where relatively few people will read their work.

In contrast, non-academic sabermetricians have different goals: a love for baseball, a dream of working for a ballclub, recognition and status in the community, or, yes, even just the thrill of scientific discovery. To those ends, they have incentives to explain their ideas informally, publish online instantly, have their work peer-reviewed within minutes or hours, have all the best researchers in the field be aware of their research almost immediately, and maintain credibility in the community by engaging critics and acknowledging shortcomings in their work.

My prediction: over the next few years, non-academic sabermetrics will gain more and more credibility, as more and more of it comes out of major league front offices, and journalists continue to acknowledge it. In that light, academia will find it harder and harder to ignore the accepted, established and growing body of knowledge on what will appear to be flimsy procedural grounds. (Does anyone outside of academia care that Bill James' ideas weren't peer reviewed?)

My view is that the academics, with incentives to maintain their established and expensive barriers to publication, will never be able to keep up with the freewheeling, open-source dynamism of the non-academic crowd. Eventually, economists will come to the realization that progress in sabermetrics is best made by the non-academic community of sabermetricians, just as the invention of better computers is best done by the non-academic community of computer manufacturers. Academics will still publish papers of interest to sabermetricians, and influence their work, just as materials scientists and computer theoreticians publish work that's of interest to Apple and Google. But the bulk of sabermetrics itself will be seen as coming from non-academic sabermetricians, not academic economists.

It's not a knock against economists, who are certainly as capable of doing sabermetrics as anyone. It's just that academic processes are too rigorous and expensive for a world that moves so fast. Academia, I think, will stick to areas where it has a monopoly, like the more traditional areas of economics, or expensive fields like physics.



5 Comments:

At Wednesday, April 07, 2010 12:59:00 AM, Blogger Don Coffin said...

"...as opposed to the 750 players that would appear if opening day rosters only were used..."

I may have additional comments as I read more. But the "opening day roster" salary data typically does include players on the disabled list as of opening day. So there can easily be more than 750 such players.

 
At Wednesday, April 07, 2010 1:06:00 AM, Blogger Don Coffin said...

"...that they show as much respect for them as they would if the research had been published academically..."

I think part of the problem is a different set of norms about what constitutes published results. Most of us who do "academic" research approach peer-reviewed studies with a different attitude than we do non-peer-reviewed (or non-reviewed) studies. I certainly don't expect people to take the things I write for my blog as seriously as the things I publish in peer-reviewed journals, if only because, to get it published, I have to expose the work to, and respond to, the informed criticism (in the best sense of the word) of other people working in the field. (That's most of the peer reviews, not all of them, by the way.)

 
At Wednesday, April 07, 2010 1:13:00 AM, Blogger Don Coffin said...

Finally, I would suggest that the signal-to-noise ratio is higher in some forms of publishing one's work than in others. My own sense is that it's higher in peer-reviewed work (which does take longer to get into print, although not necessarily longer to be seen). And, in the social sciences, the give-and-take does take place in unpublished, still-in-progress papers such as one finds on the Social Science Research Network (http://www.ssrn.com/), which at the end of 2009 had "265,000 documents and 129,000 authors." So academic research isn't quite as formal and polite as all that.

Not as raucous as some of the exchanges on the web, but, as I think you'd agree, you can also get a lot of truly uninformed comment as well as a lot of useful feedback on the work you do.

 
At Wednesday, April 07, 2010 10:38:00 AM, Blogger Phil Birnbaum said...

Doc,

1. Good catch about the disabled list players. Never thought of that.

2. Agreed that the signal-to-noise ratio is lower on the internet than in academic journals (although that doesn't mean the "signal" is better quality in academia, just that there's more noise). But many of the results of the last 25 years or so have established themselves in the field. My argument is that those particular results -- DIPS, for instance -- deserve the same kind of respect as if they had been published elsewhere.

 
At Wednesday, April 07, 2010 10:39:00 AM, Blogger Phil Birnbaum said...

More comments at Tango's site:

http://www.insidethebook.com/ee/index.php/site/comments/phil_on_rodney_fort_and_academia/

 
