Tuesday, July 02, 2013

Disputing Hakes/Sauer, part II

(continued from previous post)

To recap: the Hakes/Sauer study ran a regression to predict the log of player salary from last season's "eye" (BB/PA), "bat" (batting average), and "power" (TB/H).  From 2001 to 2006, they got these coefficients:

       eye  bat power
---------------------
2001  0.53 5.28 0.84
2002  1.52 3.64 0.68
2003  2.12 3.07 0.57
2004  5.26 4.14 0.78
2005  4.19 5.38 0.86
2006  2.14 4.66 0.58

The salary return to "eye" was much higher in 2004 and 2005, immediately following the release of "Moneyball."   The authors conclude that's because GMs were quick to adjust their salary offers in light of the new information.
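(If it helps to see the shape of that model concretely, here's a rough Python sketch of the kind of per-season regression they ran.  The data frame and the column names are my own invention, and the published model also controlled for plate appearances, position, and free-agent status, so treat this as an illustration, not their exact specification.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def eye_bat_power_coeffs(df: pd.DataFrame) -> pd.Series:
    # df holds one row per player: last season's counting stats
    # plus this season's salary (hypothetical column names).
    df = df.copy()
    df["eye"] = df["bb"] / df["pa"]      # walks per plate appearance
    df["bat"] = df["h"] / df["ab"]       # batting average
    df["power"] = df["tb"] / df["h"]     # total bases per hit
    df["log_salary"] = np.log(df["salary"])
    model = smf.ols("log_salary ~ eye + bat + power", data=df).fit()
    return model.params[["eye", "bat", "power"]]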

Last post, I argued why, in theory, the results and conclusion are doubtful at best.  Now, I'm going to argue using the data.

------

Let me tell you what I think is *really* going on.  I think it has little to do with the valuation of walks.  I think that, in 2004 and 2005, it just turned out that the players who walked the most happened to be more valuable than normal.  

As an analogy ... suppose the data showed that general managers paid much more for cars in 2004 than in 2003.  That wouldn't be because "Motorball" showed them that automobiles were underpriced.  It would be because, in 2003, they were buying Chevrolets, but in 2004, they were buying Cadillacs.  

Why do I think that?  Because I ran a regression similar to the authors', but predicting performance rather than salary.  And I got similar results.

-------

Here's what I did.  Like the authors, I found all players in 2003 who had at least 130 plate appearances.  And I ran a regression on those players for 2004.  But: instead of trying to predict their 2004 *salary*, I tried to predict their 2004 *performance*.

Specifically, I tried to predict their "batting runs," which is the linear weights measure of their offensive performance.  However, I wanted to use the logarithm, like the authors did.  The problem is that runs can be negative, and you can't take the log of a negative number.

So, I converted batting runs to "runs above replacement," by adding 20 runs per 600 PA, and rounding any negative totals up to zero.  That still leaves a "log of zero" problem.  Since even "zero" players earn a minimum salary worth about 4/5 of a run, I added 0.8 to every player's total.  

Actually, when I got to that point, I figured, why not just convert to dollars?  So I took that "runs above replacement plus 0.8" and multiplied it by $500,000.  Effectively, I converted performance to "free agent value" -- what a team would have paid in salary if they had known in advance what the performance would be.  

And then, of course, I took the logarithm of that earned salary.  
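In code, the conversion looks something like this.  It's just a sketch of the arithmetic I described, with made-up function and variable names:

import numpy as np

def log_earned_salary(batting_runs, pa):
    # Batting runs, shifted to "runs above replacement" by adding
    # 20 runs per 600 PA, with negative totals rounded up to zero.
    runs_above_repl = max(batting_runs + 20.0 * (pa / 600.0), 0.0)
    # Even a "zero" player earns the minimum salary, worth about
    # 0.8 runs, so add that before converting to dollars.
    earned_salary = (runs_above_repl + 0.8) * 500_000
    return np.log(earned_salary)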

(Notes: Unlike the Hakes and Sauer regression, mine didn't include a term for position, and I didn't include year dummies.  Also, I didn't include free-agent status, which doesn't matter much here because I used "earned salary" on the free agent scale for all players.  Oh, and I used only a player's first "stint" both years, just to save myself programming time.  I don't think any of those compromises affect the results too much.  

I did, however, include a variable for plate appearances, as the original study did.)
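Putting those notes together, each pair of seasons got a regression roughly like the sketch below: the same right-hand side as before, plus plate appearances, but with the log of next season's earned salary on the left.  Again, the data frame and column names are just placeholders of mine:

import pandas as pd
import statsmodels.formula.api as smf

def performance_coeffs(pairs: pd.DataFrame) -> pd.Series:
    # "pairs" has one row per player: year-t eye/bat/power/pa and the
    # log of his year-t+1 earned salary (hypothetical column names).
    model = smf.ols("log_earned_salary_next ~ eye + bat + power + pa",
                    data=pairs).fit()
    return model.params[["eye", "bat", "power"]]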

I ran my regression for every pair of seasons from 2000-01 to 2005-06.  Here are my coefficients:

       eye  bat power
---------------------
2001   4.06 5.52 1.43
2002   5.82 3.53 1.20
2003   5.97 4.76 1.46
2004   8.48 4.76 1.46
2005   6.92 2.58 1.08
2006   7.61 5.98 0.82

My coefficients are higher than the originals.  That's because I used a fake salary exactly commensurate with the player's performance.  In real life, much of the performance is random, which means it wouldn't be reflected in salary.

That is: some mediocre players might hit .300 just by luck.  My study takes that .300 at face value.  In real life, though, that player would probably have been paid as the .250 hitter he really is, which means the real-life coefficient would be smaller.

But my point isn't about the actual magnitude of the coefficients -- it's about the year-to-year trend.  For the "eye" coefficient that we're talking about here, what does the Hakes/Sauer study show? 

Small increases up to 2003, then a big jump in 2004, then a small dropoff in 2005, then a bigger dropoff in 2006.  

Mine show almost exactly the same pattern, except for 2006.

Here, I'll put them in the same chart for you:

       H/S   Me
----------------
2001  0.53  4.06
2002  1.52  5.82
2003  2.12  5.97
2004  5.26  8.48
2005  4.19  6.92
2006  2.14  7.61

See how they follow the same trend?  If you're not convinced, compute the correlation between the two columns.  You'll find the correlation coefficient is +.82.  That's significant at the .05 level (actually, .0465).  

(It's cheating to drop the last datapoint ... but, if you drop it anyway, the correlation rises to .96, and is now significant at 1 percent.)
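If you want to check those numbers yourself, a few lines of Python will do it.  The values below are the rounded ones from the chart, so the results may differ slightly in the second decimal:

from scipy.stats import pearsonr

hs = [0.53, 1.52, 2.12, 5.26, 4.19, 2.14]  # Hakes/Sauer "eye" coefficients, 2001-2006
me = [4.06, 5.82, 5.97, 8.48, 6.92, 7.61]  # my "eye" coefficients

r_all, p_all = pearsonr(hs, me)              # all six years
r_five, p_five = pearsonr(hs[:-1], me[:-1])  # dropping 2006
print(r_all, p_all, r_five, p_five)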

------

What does this tell us?  It tells us that the observed effect remains *even when you don't look at teams' salary decisions*.  That is: the spike in 2004-05 is NOT about how much the teams paid the players.  It's about the players' performances.  It just so happens -- whether for random reasons or otherwise -- that walks were a particularly good predictor in 2003 of how well a player would do in 2004.  

Hakes and Sauer claim teams paid more for players with walks in 2004 because they were enlightened by Moneyball.  However, this analysis shows that the teams would have paid more for those players *even if Moneyball had never happened* -- because those players were simply better players, even under the old, flawed methods of evaluation.

The spike, I would argue, is not because teams chose to pay more for Chevrolets in 2004 and 2005.  It's because they just happened to buy more Cadillacs those years.

-------

One loose end:   As I showed, there's a strong correlation between my "eye" column and the Hakes/Sauer "eye" column.  But there's no similar correlation for the other two columns.  Why not?

It's several factors combined.  One of them is probably the size of the effect.  For whatever reason, the values in the walk column jump around a lot more than the values in the other two columns.  In the Hakes/Sauer regression, the range is huge ... from 0.53 to 5.26, a factor of nearly 10.  The other two columns are much narrower.

When I rerun my regression, I'm effectively taking the Hakes/Sauer regression, swapping out the dependent variable, removing some randomness, adding other randomness, and leaving out some variables.  That shakes the sand castle around, and many of its features get evened out a bit.  Only in the "eye" column was the original pattern extreme enough to survive into the new situation.

More on this subject next post.
