Friday, June 01, 2012

Defending my "walk year" study

Over at Slate, Dan Engber talks about the hypothesized "walk year" effect, in which players put out extra effort when they're about to negotiate a new contract (or, correspondingly, put out less effort after signing a long-term deal).  He discusses various studies (including one of mine) that look at player statistics to try to see whether this happens.  Some of those papers found an effect, and some (like mine) did not.

I agree with Dan that the hardest thing about these kinds of studies is coming up with a way to estimate what the player "should have" done under neutral circumstances.  For instance, suppose a player OPSes .800 and .750 in the third-last and second-last year of his contract, but then hits .780 in his walk year.  How do you evaluate that?  Is it evidence that he's trying harder, since he improved from the year before?  Or is it evidence that he isn't showing much of an effect, since he hit .800 two years ago and manages only .780 now?

What you need is to have a way of benchmarking.  You need to say, for instance, "if a player hits .800 and .750 in his two previous years, he's expected to hit .770 this year.  So, if he hits more, that's confirmatory evidence of a walk year effect, and if he hits less, that's evidence against."

But how do you come up with that number, the .770?  That's critical to your conclusions.
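Just to make that logic concrete, here's a rough sketch in Python of what the comparison looks like once you *have* a benchmark.  The benchmark function below is only a placeholder -- a weighted average of the two previous seasons, with weights I made up for illustration -- because coming up with the real benchmark is the whole problem:

```python
# Sketch of the benchmark-and-compare logic.  The benchmark itself is a
# stand-in: a weighted average of the two prior seasons, with made-up weights.

def benchmark_ops(two_years_ago, last_year):
    """Placeholder estimate of what the player 'should' hit this year."""
    return 0.4 * two_years_ago + 0.6 * last_year  # the 0.4/0.6 weights are assumptions

def walk_year_evidence(players):
    """Average (actual minus expected) across walk-year players.
    Positive = evidence for a walk year effect, negative = evidence against."""
    diffs = [actual - benchmark_ops(y2, y1) for (y2, y1, actual) in players]
    return sum(diffs) / len(diffs)

# The player from the example: .800, then .750, then .780 in his walk year.
print(round(walk_year_evidence([(0.800, 0.750, 0.780)]), 3))  # 0.01 -- a shade above expected
```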

The various academic studies Dan cites come up with various methods.  Most of them do it by some kind of regression.  They try to predict performance by various combinations of variables: age, contract status, salary, experience, number of years with the same team ... stuff like that.

The problem with that approach, as Dan points out, is that you really have to get everything right if you're going to avoid confounding effects.  For instance, suppose your regression includes a variable for age, but you incorrectly assume that the relationship is linear (it actually rises to a peak in a player's 20s, then falls).  In that case, you're probably going to overestimate the youngest and oldest players, but underestimate players in their peak ages.  And so, if players in the last year of a contract tend to be those in their prime, you'll underestimate them, and therefore appear to find a walk year effect when you're really just seeing an artifact of age.
 
If you find an effect, you don't know if it's real, or if it's just that you're consistently biased in your expectations for the players.
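Here's a quick way to see how that kind of bias can manufacture an effect out of nothing.  This is just a toy simulation I put together -- it's not from any of the studies, and every number in it is made up -- but it shows a linear age term finding a "walk year effect" in data where, by construction, there isn't one:

```python
# Toy simulation: a regression with a *linear* age term finds a phantom
# walk-year effect in data that has none.  All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(22, 39, n).astype(float)

# True talent: peaks at 27, declines on either side.  No walk-year effect at all.
true_perf = 0.750 - 0.0015 * (age - 27) ** 2 + rng.normal(0, 0.050, n)

# Walk-year players skew toward their prime (ages 26-29) in this toy world.
walk_year = (rng.random(n) < np.where((age >= 26) & (age <= 29), 0.5, 0.1)).astype(float)

# Regression: intercept, linear age term, walk-year dummy.
X = np.column_stack([np.ones(n), age, walk_year])
coefs, *_ = np.linalg.lstsq(X, true_perf, rcond=None)

print("estimated 'walk year effect':", round(coefs[2], 4))
# Comes out clearly positive, purely an artifact of the misspecified age curve.
```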

----

How, then, do I defend my own study?  Why do I think it works better than the other studies Dan mentions?

There's one big reason.  Most of the other studies try to predict what the player should have done based on his past performance.  My study tries to predict based on his past *and future* performance.

That is: the other studies ask, what comes next in the series ".750, .800"?  My study adds the two years *after* the walk year, and asks, what's the missing number in the series ".750, .800, ???, .900, .950"?

Obviously, the second question is easier to answer than the first.  The first way, if the player hits .850 the next year, it looks like he must be trying harder.  The second way, it's obvious that .850 is exactly what you'd expect.

And it's not just the fact that you have twice as much data.  It's also that age matters a lot less now.  If you only look at the "before" years, you have to figure out whether the current year should be higher or lower based on age.  But if you look at "before" and "after", it doesn't matter as much, because the subsequent years tell you which way the player is headed.  Regardless of whether he's getting better or worse, the average of the four years should still be roughly what you'd expect in the middle.

The only time you'd have to significantly adjust for age is when a player is near his peak.  For instance -- and I'll switch to small numbers to make it simpler -- if a young guy goes 4, 6, ?, 4, 6, you'll probably guess the middle should be around 5.  And if an old player does the same thing, again you'll guess the middle should be around 5.

But if the "?" year is at the peak, when the player is, say, 27, maybe you want to bump that to 5.3 or 5.4, to reflect that most players are a bit better at 27 than in the average of the surrounding years.  Still, that's a much simpler age adjustment than trying to predict age-related changes for every player, at any age.
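In code, the method boils down to something like this.  It's a simplified sketch, not the actual formula from my study -- the equal weighting of the four surrounding seasons and the size of the peak bump are placeholder assumptions:

```python
# Simplified sketch of the before-and-after benchmark.  The equal weights and
# the peak-age bump of 0.3 are placeholder assumptions, not the study's formula.

def expected_walk_year(before2, before1, after1, after2, age):
    """Estimate the walk-year number from the two seasons on each side,
    with a small bump if the missing year falls at the peak ages."""
    baseline = (before2 + before1 + after1 + after2) / 4.0
    peak_bump = 0.3 if 26 <= age <= 28 else 0.0
    return baseline + peak_bump

# The small-number example from the text: 4, 6, ?, 4, 6.
print(expected_walk_year(4, 6, 4, 6, age=24))  # 5.0
print(expected_walk_year(4, 6, 4, 6, age=27))  # 5.3 -- the peak-age bump
```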

------

One more example, to maybe make things clearer.

1.  At the end of 1980, the Dow-Jones Industrial Average was about 825.  What should you have expected it to be at the end of 1981?

2.  At the end of 1980, the Dow-Jones Industrial Average was about 825.  At the end of 1982, it was about 1046.  What should you have expected it to be at the end of 1981?

Much easier to get an answer to the second question than the first, right?  Sure, for any given year, you might be off, but, overall, knowing the subsequent year will make you much, much closer, on average.
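If you want the arithmetic: the second question is basically a straight-line interpolation between the two known endpoints (using the figures as given above), versus having only the 1980 number to carry forward:

```python
# The Dow example as arithmetic: interpolate between the known endpoints,
# versus having only the earlier value to go on.
dow_1980 = 825.0   # figures as quoted above
dow_1982 = 1046.0

guess_from_past_only = dow_1980                   # question 1: all you know is 1980
guess_with_future = (dow_1980 + dow_1982) / 2.0   # question 2: split the difference

print(guess_from_past_only)  # 825.0
print(guess_with_future)     # 935.5
```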






3 Comments:

At Friday, June 01, 2012 2:16:00 PM, Anonymous mettle said...

I saw the mention of your article in that Slate post!
I haven't read your study, but how do you deal with selection bias?  Do you only take players that have played five years (N-2, N-1, N, N+1, N+2), or three years (N-1, N, N+1)?  And either way, how do you deal with the fact that players who play in N+1 had to have done decently in N to begin with to get the chance to play again?

 
At Friday, June 01, 2012 2:19:00 PM, Blogger Phil Birnbaum said...

Players who don't play year N are not included in the study.

Players who don't play year N+1 are given an estimate for year N based on their two previous years only. I don't think I adjusted those for age, so there's a bit of bias there.

 
At Friday, June 01, 2012 10:40:00 PM, Blogger Chris Phillips said...

I'm with you (players in their walk years don't perform significantly better), but for completely non-mathematical reasons. From being a fantasy baseball owner, it's my experience that pursuing players in their walk years is chasing after fool's gold. It's an urban myth that I have yet to see an example of. It may even be propaganda put out by owners to generate anti-player sentiment. Ooooh, look at the evil high-paid player. He only plays hard so that he can be overpaid for playing a game.

 
