How fast did the market learn from "Moneyball"?
In “Moneyball,” Michael Lewis told the story of how the Oakland A’s were able to succeed on a small budget by realizing that undervalued talent could be picked up cheap. Specifically, GM Billy Beane realized that the market was undervaluing on-base percentage, and acquired hitters who took lots of walks at salaries that undervalued their ability to contribute to victory.
But once “Moneyball” was released in 2003, every GM in baseball learned Beane’s "secret." Furthermore, a couple of his staff members left that year for GM jobs with other teams. In theory, this should have increased competition for high-OBP players, and raised their salaries to the point where the market inefficiency disappeared.
Did that really happen? In “An Economic Evaluation of the Moneyball Hypothesis,” economists Jahn K. Hakes and Raymond D. Sauer say that yes, it did.
Hakes and Sauer ran regressions for each year from 2000 to 2004, trying to predict (the logarithm of) players’ salaries from several variables: on-base percentage, slugging percentage, whether they were free agents, and so on. They found that in each year from 2000 to 2003, slugging percentage contributed more to salary than on-base percentage. But in 2004, the first year after Lewis’s book, the ordering was reversed – on-base percentage was now more highly valued than slugging percentage.
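To make that setup concrete, here’s a rough sketch, in Python, of that kind of regression: log salary on OBP, SLG, and contract-status controls, run one season at a time. The column names and the synthetic data are placeholders of mine, not the paper’s actual dataset or exact specification.

    # Sketch of a Hakes-and-Sauer-style salary regression for one season.
    # The columns and the fake data are illustrative, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "obp": rng.normal(0.340, 0.030, n),        # on-base percentage
        "slg": rng.normal(0.430, 0.060, n),        # slugging percentage
        "free_agent": rng.integers(0, 2, n),       # 1 if salary set on the open market
        "arb_eligible": rng.integers(0, 2, n),     # 1 if arbitration-eligible
        "plate_appearances": rng.integers(200, 700, n),
    })
    # Fake salaries that load on OBP and SLG, just so the regression has something to find.
    df["salary"] = np.exp(13 + 3.0 * df["obp"] + 2.0 * df["slg"]
                          + 0.8 * df["free_agent"] + rng.normal(0, 0.4, n))

    # Predict log(salary) from the rate stats plus contract-status controls.
    model = smf.ols("np.log(salary) ~ obp + slg + free_agent + arb_eligible"
                    " + plate_appearances", data=df).fit()
    print(model.params[["obp", "slg"]])   # the coefficients compared year by year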
"This diffusion of statistical knowledge," they write, " … was apparently sufficient to correct the mispricing of skill."
But I’m not really sure about that.
The main reason is that, taking the data at face value, the 2004 numbers show not the market correcting, but the market overcorrecting.
For 2004, the study shows 100 points of on-base percentage worth 44% more salary, and 100 points of slugging worth only 24% more salary.
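Those percentages map directly onto the regression coefficients: in a log-salary model, 100 points of a rate stat is a 0.100 increase, so a coefficient of β multiplies salary by about exp(0.100 · β). The coefficient values in the sketch below are just the ones implied by the reported 44% and 24%, not numbers copied from the paper’s tables.

    import math

    def pct_per_100_points(beta):
        """Percent salary change implied by a log-salary coefficient beta
        when a rate stat (OBP or SLG) rises by 100 points, i.e. by 0.100."""
        return (math.exp(0.100 * beta) - 1) * 100

    # Illustrative coefficients -- roughly what the reported 44% and 24%
    # figures imply, not values taken from the paper itself.
    beta_obp = 3.65
    beta_slg = 2.15
    print(round(pct_per_100_points(beta_obp)))   # ~44
    print(round(pct_per_100_points(beta_slg)))   # ~24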
At first, that looks like confirmation: a ratio of 44 to 24 (about 1.8) is close to the “correct” ratio of roughly two (for instance, see the last two studies here). But the problem is the “inertia” effects that Hakes and Sauer mention: the market can’t react to a player until his long-term contract is up.
Suppose only half the players in the 2004 sample were signed that year. The other half would have been signed at something like the old, 2002 valuations: a 14% salary increase per 100 points of OBP, but a 23% increase per 100 points of SLG.
So if half the players are at 14:23, and the average of the two halves has to come out to 44:24, the other half must be at about 74:25. That is, the players actually signed in 2004 would have had their on-base valued at roughly three times their slugging, when the correct ratio is only around two. Not only did GMs learn from “Moneyball,” they overlearned. The market is just as inefficient, but in the other direction!
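Here’s that back-of-the-envelope arithmetic spelled out. The 50/50 split between carried-over and newly signed contracts is the same illustrative assumption as above, and averaging the percentage premiums linearly is a rough shortcut, not an exact decomposition of the pooled regression.

    # Back out the implied premiums for players actually signed in 2004,
    # assuming half the sample was still playing under contracts signed
    # at roughly the old (2002) valuations.
    old_obp, old_slg = 14, 23     # % salary gain per 100 points, old contracts
    avg_obp, avg_slg = 44, 24     # % salary gain per 100 points, 2004 sample overall
    share_new = 0.5               # assumed fraction of the sample signed in 2004

    new_obp = (avg_obp - (1 - share_new) * old_obp) / share_new   # = 74
    new_slg = (avg_slg - (1 - share_new) * old_slg) / share_new   # = 25
    print(new_obp, new_slg, new_obp / new_slg)   # 74, 25, about 3:1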
And only free-agent salaries are set in a competitive market; the rest are set unilaterally by the teams, or by arbitrators. If those non-market salaries didn’t adjust instantly, the implied ratio for the contracts that actually repriced in 2004 is even higher than 3:1. In the extreme case, if you assume that only free-agent contracts were affected by the new knowledge, and that free agents made up only half of the players signed that year, then only about a quarter of the sample repriced, and the implied ratio climbs to something like 5:1 or more.
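Spelling that extreme case out with the same rough arithmetic:

    # Same back-out, but now only the newly signed free agents reprice --
    # about a quarter of the sample under the assumptions above.
    share_repriced = 0.25
    fa_obp = (44 - (1 - share_repriced) * 14) / share_repriced   # = 134
    fa_slg = (24 - (1 - share_repriced) * 23) / share_repriced   # = 27
    print(fa_obp, fa_slg, fa_obp / fa_slg)   # roughly a 5:1 ratio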
Could this have happened, that teams suddenly way overcompensated for their past errors? I suppose so. But the confidence intervals for the estimates are so wide that the difference between the 14% OBP increase for 2003 and the 44% increase for 2004 isn’t even statistically significant – it’s only about one standard deviation. So we could just be looking at random chance.
Also, the regression included free agents, arbitration-eligible players, and young players whose salaries are pretty much set by management. This forces the regression to make the implicit assumption that all three groups are rewarded in the same proportions. For instance, if free agents who slug 100 points higher earn 24% more salary, the model assumes that even rookies with no negotiating power get that same 24%. As commenter Guy wrote here, this isn’t necessarily true, and a case could be made that the younger players should have been left out entirely.
Suppose that teams knew the importance of OBP all along, but players didn’t. Then it would make sense for OBP to appear “undervalued” among players whose salaries aren’t set by the market: teams pay those players the minimum they’ll accept, and if high-OBP players don’t know to ask for what they’re worth, they’ll look underpaid relative to their statistics. That could conceivably account for the study’s entire effect, since the study weights market-determined salaries and non-market salaries equally.
So the bottom line is this: if you take the 2004 data at face value, the conclusion is that GMs overcompensated for the findings in “Moneyball” and paid far more than intrinsic value for OBP. But because of the way the study is structured, there is reason to be skeptical of that result.
In any case, the study does seem to suggest that something about salaries in 2004 was very different from the four preceding years. What is that something? I don’t think there’s enough evidence yet to embrace the authors’ conclusions … but if you restricted the study to newly signed free agents, and added 2005 and 2006, I’d sure be interested in seeing what happens.