Do NFL teams overvalue high draft picks?
Do NFL teams irrationally overvalue draft choices? A 2005 study by Cade Massey and Richard Thaler argues, emphatically, that they do.
The study is called "The Loser's Curse: Overconfidence vs. Market Efficiency in the National Football League Draft." (The title is a play on "The Winner's Curse," which refers to the tendency for the winner of an auction to have overpaid. It is also the title of an excellent book by Thaler that discusses ways in which markets appear to be inefficient.)
The study is quite readable, and you can safely ignore the more complex-looking math (as I did).

Massey and Thaler (M&T) start by identifying the market value of a draft choice. Their method is ingenious – they simply look at all cases where teams traded draft choices for other draft choices. They find that draft-choice values are very consistent; indeed, teams have internalized them, to the extent that they maintain charts of the picks' relative values. Every team's chart is almost identical, and so when M&T superimpose actual trades on their theoretical curve, the fit is almost perfect. That is, all thirty teams in the league have independently reached the same conclusions about what draft choices are worth – or at least act as if they have. It turns out, for instance, that teams value the first pick as much as the 10th and 11th picks combined, or as much as the last four picks of the first round put together.
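As a rough sketch of what such a curve looks like: the study fits a smooth exponential-decay curve to the trade data. The parameters below are *not* the study's estimates – they're illustrative values chosen only so that the curve reproduces the two rules of thumb just quoted.

```python
import math

# Illustrative trade-value curve of the exponential (Weibull-type) form
# the study fits. LAM and BETA are NOT the study's estimated parameters;
# they are hypothetical values picked to match the rules of thumb in the
# text (pick 1 = picks 10+11 = last four picks of the first round).
LAM, BETA = 0.147, 0.69

def pick_value(t: int) -> float:
    """Relative value of draft pick t, normalized so pick 1 = 1.0."""
    return math.exp(-LAM * (t - 1) ** BETA)

# Pick 1 is worth roughly picks 10 and 11 combined...
print(round(pick_value(10) + pick_value(11), 2))   # → 1.0
# ...or roughly the last four picks of the first round (27-30) combined.
print(round(sum(pick_value(t) for t in range(27, 31)), 2))   # → 0.94
```

The steepness of the curve is the whole story here: under a chart like this, the first pick is worth about twice the tenth pick.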
But M&T conclude that all thirty teams are wrong.
Here's what they did. They assigned each drafted player to one of five groups, based on his status for a given season: (1) not on the roster; (2) on the roster but did not start any games; (3) started between 1 and 8 games; (4) started more than 8 games but didn't make the Pro Bowl; and (5) made the Pro Bowl.
Then, they ran a regression on free agent salaries, to predict what a player in each group at each position should earn. Just for fun, here are the values for quarterbacks:
$0 ........... not on the roster
$1,039,870 ... on the roster but didn't start
$1,129,260 ... started between 1 and 8 games
$4,525,227 ... started more than 8 games
$9,208,248 ... made the Pro Bowl
Then, for each draft position, they computed the average free-agent value for the player, and compared it to the salary he was actually paid. So, a Pro Bowl quarterback draftee who made only $4 million would have earned the team a surplus of $5,208,248.
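That surplus bookkeeping is simple enough to sketch directly, using the quarterback category values quoted above (the $4 million salary is the hypothetical example from the text):

```python
# Predicted free-agent value by performance category for quarterbacks,
# as reported in the study (values in dollars per season).
QB_VALUE = {
    "not on roster": 0,
    "on roster, no starts": 1_039_870,
    "started 1-8 games": 1_129_260,
    "started more than 8 games": 4_525_227,
    "made the Pro Bowl": 9_208_248,
}

def surplus(category: str, actual_salary: float) -> float:
    """Surplus to the team: what the performance would cost on the
    free-agent market, minus what the drafted player was actually paid."""
    return QB_VALUE[category] - actual_salary

# A Pro Bowl quarterback paid only $4 million, as in the text:
print(surplus("made the Pro Bowl", 4_000_000))   # → 5208248
```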
As it turns out, for their first five seasons in the league (free agency begins in year six), drafted players produced an average surplus of about $470,000 per year. The surprise: you'd expect the early picks to be the most valuable, but they're not. The surplus peaks roughly between picks 25 and 75, at about $700,000, and it's lower for the first few picks. In fact, the number one pick in the draft produces a surplus of only about $500,000.
That's because there's a rookie salary scale that's based on draft position, and it's very steep – first picks make a lot more than, say, tenth picks. And so, although first picks turn out to be better players than later picks, they are also paid much more. The pay difference is higher than the ability difference, and so first picks don't turn out to be such big bargains after all.
And this is why M&T argue that teams are irrational. To get the first overall pick, teams are willing to give up a 27th pick, plus a 28th, plus a 29th, plus a 30th. Any one of those four later picks is worth more than the number one pick. Trading four more valuable picks for one less valuable pick, the authors say, is clearly irrational – the product of "non-regressive predictions, overconfidence, the winner's curse, and false consensus."
I'm somewhat convinced, but not completely.
My problem with the analysis is that the authors (admittedly) use "crude performance measures" in their salary regression. Their five performance categories are extremely rough. Specifically, the fourth category, starters who don't make the Pro Bowl, contains players of extremely different capabilities. If you treat them the same, then you are likely to find that (say) the 20th pick is not much different from the 30th pick – both will give you roughly the same shot as a regular. It may turn out that the 20th pick will give you a significantly *better* regular, but the M&T methodology can't distinguish the players unless one of them makes the Pro Bowl.
(For readers who (like me) don't know football players very well, consider a baseball analogy. Suppose AL shortstops Derek Jeter and Carlos Guillen go to the All-Star game. A study like this would then consider Miguel Tejada equal to Angel Berroa, since each started half their team's games, and neither was an All-Star. Of course, Tejada is really much, much better than Berroa.)
In fact, the study does note the difference in quality, but then ignores it. The salary regression includes a term for where the player was picked in the draft. M&T found that, all else being equal – including the player's category – players drafted early make more money than players drafted late. Part of that, no doubt, is that players drafted early can negotiate better first-year contracts. But, presumably, for years 2 through 5, players are paid on performance, so if early draftees out-earn late draftees in the same category, that does suggest that players drafted earlier are better.
But M&T don't use this factor. Why? Because the regression coefficient doesn't come out statistically significant. For their largest sample, the coefficient is only 1.7 standard errors from zero, short of the roughly 2 required for significance. And so, they ignore it entirely.
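To see how close that 1.7 figure is to the cutoff, here's the standard two-sided test under a normal approximation (the study's exact test statistic may differ, so treat this as illustrative):

```python
import math

# Two-sided p-value for a coefficient 1.7 standard errors from zero,
# using the normal approximation.
def p_value(t_stat: float) -> float:
    phi = 0.5 * (1 + math.erf(t_stat / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

print(round(p_value(1.7), 3))   # → 0.089, just short of the usual 0.05 cutoff
```

In other words, the coefficient misses significance, but not by a mile – which is the basis of the complaint that follows.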
This may be standard procedure for studies of this sort, but I don't agree with it. First, there's a very strong reason to believe that there is a positive relationship between the variables (draft choice and performance). Second, a significant positive relationship was found between draft choice and performance in another part of their study (draft choice vs. category). Third, it could be that the coefficient is strongly positive in a football sense (I can't tell from the study – they don't say what the draft variable is denominated in). Fourth, the coefficient was close to statistical significance. Fifth (and perhaps this is the same as the first point), ignoring the coefficient assumes that all non-Pro-Bowl starters are the same, which is not realistic. And, finally, and most importantly, using the coefficient, instead of rejecting it in favor of zero, might significantly affect the conclusions of the study.
What the authors have shown is that if you consider Miguel Tejada equal to Angel Berroa, draft choice doesn't matter much. That's true, but not all that relevant.
There's a second reason for skepticism, too. The authors' draft-choice trade curve is based on trades that actually happened. Most of those trades involve a team moving up only a few positions in the draft – maybe half a round. But a team won't make that kind of trade on speculation; it'll make it because there's a specific player it's interested in. It's quite possible that a first pick is worth four later picks only in those cases where the first pick is particularly good. By evaluating trades as exchanges of average picks for average picks, the authors might be missing that possibility.
It wouldn't be hard to check – just find all those trades, see the players that were actually drafted in the positions traded, and see how the actual surpluses worked out. It could be that there's no significant difference between random trades and real trades – but shouldn't you check to be sure?
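A minimal sketch of that bookkeeping, using the study's average per-year surplus figures quoted earlier (roughly $500,000 for the first pick, roughly $700,000 for picks in the 25–75 range) as stand-ins. The real check would substitute, for each trade, the surplus actually produced by the specific players drafted in the traded slots:

```python
# Average annual surplus by draft slot, using the approximate figures
# quoted from the study (first pick ~$500K; picks 25-75 ~$700K each).
# In the proposed check, these averages would be replaced by each
# trade's actual realized surpluses.
avg_surplus = {1: 500_000, 27: 700_000, 28: 700_000, 29: 700_000, 30: 700_000}

def trade_outcome(pick_up: int, picks_down: list[int]) -> float:
    """Surplus gained (positive) or lost (negative), per year,
    by the team trading up."""
    return avg_surplus[pick_up] - sum(avg_surplus[p] for p in picks_down)

# Trading picks 27-30 for pick 1, valued at the averages:
print(trade_outcome(1, [27, 28, 29, 30]))   # → -2300000, a loss for the
# team trading up -- which is exactly M&T's point about average picks.
```

The open question is whether trades that actually happened look like this average case, or whether the players taken with the acquired picks were good enough to flip the sign.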
M&T do give one real-life example. In the 2004 draft, the Giants had the fourth pick (expected to be Philip Rivers). They could trade up for the first pick (Eli Manning), or they could trade down for the seventh pick (Ben Roethlisberger). Which trade, if any, should they have made? According to the historical trends the authors found, they should have traded down – a seventh pick is actually worth more than a fourth, and the Giants would even have received an extra second-round pick as a bonus! But, in this specific case, the Giants would have to consider the relative talents of the actual three players involved. The authors assume that Manning, Rivers, and Roethlisberger are exactly as talented as the average first, fourth, and seventh picks. But not every draft is the same, not every team is equal in its player evaluations, and, most importantly, you can't assume that untraded draft choices are the same as traded draft choices.
So I'm not completely convinced by this study. But I'm not completely unconvinced either. I think there's enough evidence to show that high draft picks aren't all they're cracked up to be. But, because the authors' talent evaluations are so rough precisely where they're most important, I think there's a possibility their actual numbers may be off by quite a bit.
(Hat tip to The Wages of Wins blog.)
UPDATE, 2/18/2012: Link to .PDF updated, since original study moved