Tuesday, December 05, 2006

Do NFL teams overvalue high draft picks?

Do NFL teams irrationally overvalue draft choices? A 2005 study by Cade Massey and Richard Thaler argues, emphatically, that they do.

The study is called "The Loser's Curse: Overconfidence vs. Market Efficiency in the National Football League Draft." (The title is a play on "The Winner's Curse," which refers to the tendency for the winner of an auction to have overpaid. It is also the title of an excellent book by Thaler that discusses ways in which markets appear to be inefficient.)

The study is quite readable, and you can safely ignore the more complex-looking math (as I did).

Massey and Thaler (M&T) start by identifying the market value of a draft choice. Their method is ingenious – they just look at all cases where teams traded draft choices for other draft choices. They find that draft-choice values are very consistent; indeed, teams have internalized these rules, to the extent that they develop charts of their relative values. Each team has almost the same chart, and so when M&T superimpose actual trades on their theoretical curve, they fit almost perfectly. That is, every team in the league has independently reached the same conclusions about what draft choices are worth – or at least acts as if it has. It turns out, for instance, that teams think the first pick is worth the 10th and 11th picks combined, or the sum of the last four picks of the first round.
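(For readers who want to see the shape of that curve, here's a minimal Python sketch of a Weibull-style decay of the kind M&T fit to pick-for-pick trades. The lam and beta values are illustrative placeholders, not the paper's fitted estimates, so the comparisons only roughly match the chart values described above.)

import math

def relative_pick_value(pick, lam=0.15, beta=0.70):
    # Relative value of a draft pick, normalized so that pick 1 = 1.0.
    # Weibull-style decay; lam and beta are illustrative, NOT M&T's estimates.
    return math.exp(-lam * (pick - 1) ** beta)

print(relative_pick_value(1))                              # 1.0 by construction
print(relative_pick_value(10) + relative_pick_value(11))   # roughly the value of pick 1
print(sum(relative_pick_value(p) for p in range(27, 31)))  # picks 27-30 combined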

But M&T conclude that every one of those teams is wrong.

Here's what they did. They assigned each drafted player to one of five groups, based on his status for a given season: (1) not on the roster; (2) on the roster but did not start any games; (3) started between 1 and 8 games; (4) started more than 8 games but didn't make the Pro Bowl; and (5) made the Pro Bowl.

Then, they ran a regression on free agent salaries, to predict what a player in each group at each position should earn. Just for fun, here are the values for quarterbacks:

$0 ........... not on the roster
$1,039,870 ... on the roster but didn't start
$1,129,260 ... started between 1 and 8 games
$4,525,227 ... started more than 8 games
$9,208,248 ... made the Pro Bowl


Then, for each draft position, they computed the average free-agent value for the player, and compared it to the salary he was actually paid. So, a Pro Bowl quarterback draftee who made only $4 million would have earned the team a surplus of $5,208,248.
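In code, the surplus calculation is just the estimated performance value minus actual pay. Here's a minimal sketch using the quarterback figures above (the $4 million salary is this post's hypothetical, not a figure from the paper):

QB_VALUE = {
    "not on the roster": 0,
    "on the roster but didn't start": 1_039_870,
    "started between 1 and 8 games": 1_129_260,
    "started more than 8 games": 4_525_227,
    "made the Pro Bowl": 9_208_248,
}

def surplus(category, actual_salary):
    # Surplus to the team: what a free agent at that performance level
    # would cost, minus what the drafted player was actually paid.
    return QB_VALUE[category] - actual_salary

print(surplus("made the Pro Bowl", 4_000_000))   # 5208248, the $5,208,248 above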

As it turns out, for their first five seasons in the league (free agency begins in year six), drafted players produced an average surplus of about $470,000 per year. The surprise: you'd expect the early picks to produce the most surplus, but they don't. The surplus is highest roughly between picks 25 and 75 (about $700,000), and it's lower for the first few picks. In fact, the number one pick in the draft produces a surplus of only about $500,000.

That's because there's a rookie salary scale that's based on draft position, and it's very steep – first picks make a lot more than, say, tenth picks. And so, although first picks turn out to be better players than later picks, they are also paid much more. The pay difference is higher than the ability difference, and so first picks don't turn out to be such big bargains after all.

And this is why M&T argue that teams are irrational. To get a single first pick overall, teams are willing to give up a 27th pick, plus a 28th pick, plus a 29th pick, plus a 30th pick. Any one of those four later picks is worth more than the number one pick. To trade four more valuable picks for one less valuable pick, the authors say, is clearly irrational – caused by "non-regressive predictions, overconfidence, the winner's curse, and false consensus."
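Using the rounded surplus figures above (about $500,000 for the first pick, about $700,000 for a pick in the 25-75 range), the imbalance in that hypothetical trade is easy to see:

surplus_pick_1 = 500_000      # approximate surplus of the first overall pick (from above)
surplus_late_first = 700_000  # approximate surplus of a pick in the 25-75 range (from above)

package = 4 * surplus_late_first          # picks 27 through 30
print(package, "vs", surplus_pick_1)      # 2800000 vs 500000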

I'm somewhat convinced, but not completely.

My problem with the analysis is that the authors (by their own admission) use "crude performance measures" in their salary regression. Their five performance categories are extremely rough. Specifically, the fourth category, starters who don't make the Pro Bowl, contains players of extremely different abilities. If you treat them all the same, then you are likely to find that (say) the 20th pick is not much different from the 30th pick – both will give you roughly the same shot at a regular starter. It may turn out that the 20th pick will give you a significantly *better* regular, but the M&T methodology can't distinguish the two players unless one of them makes the Pro Bowl.

(For readers who (like me) don't know football players very well, consider a baseball analogy. Suppose AL shortstops Derek Jeter and Carlos Guillen go to the All-Star game. A study like this would then consider Miguel Tejada equal to Angel Berroa, since each started half their team's games, and neither was an All-Star. Of course, Tejada is really much, much better than Berroa.)

In fact, the study does note the difference in quality, but ignores it. The salary regression includes a term for where players were picked in the draft. M&T found that, all else being equal (including the player's category), players drafted early make more money than players drafted late. Part of that, no doubt, is that players drafted early can negotiate better first-year contracts. But, presumably, for years 2-5, players are paid on performance, so if early draftees make more than late draftees in the same category, that does suggest that players drafted earlier are better.

But M&T don't consider this factor. Why? Because the regression coefficient doesn't come out statistically significant. For their largest sample, the coefficient is only about 1.7 standard errors from zero, short of the 2 or so required for significance. And so, they ignore it entirely.

This may be standard procedure for studies of this sort, but I don't agree with it. First, there's a very strong reason to believe that there is a positive relationship between the variables (draft choice and performance). Second, a significant positive relationship was found between draft choice and performance in another part of their study (draft choice vs. category). Third, it could be that the coefficient is strongly positive in a football sense (I can't tell from the study -- they don't say what the draft variable is denominated in). Fourth, the coefficient was close to statistical significance. Fifth (and perhaps this is the same as the first point), ignoring the coefficient assumes that all non-Pro-Bowl starters are the same, which is not realistic. And, finally, and most importantly, using the coefficient, instead of rejecting it and using zero instead, might significantly affect the conclusions of the study.

What the authors have shown is that if you consider Miguel Tejada equal to Angel Berroa, draft choice doesn't matter much. That's true, but not all that relevant.

There's a second reason for skepticism, too. The authors' draft-choice trade curve is based on trades that actually happened. Most of those trades involve a team moving up only a few positions in the draft – maybe half a round. But a team won't make that kind of trade just on speculation; they'll make it because there's a specific player they're interested in. It's quite possible that a first pick is worth four later picks only in those cases where the first pick is particularly good. By evaluating trades as exchanges of average picks for average picks, the authors might be missing that possibility.

It wouldn't be hard to check – just find all those trades, see the players that were actually drafted in the positions traded, and see how the actual surpluses worked out. It could be that there's no significant difference between random trades and real trades – but shouldn't you check to be sure?
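That check could be scripted in a few lines once you had the list of trades and the realized surplus of the player actually taken at each pick. Here's a rough sketch; both data structures are hypothetical, and nothing like them appears in the paper:

def average_trade_gap(trades, surplus_by_pick):
    # trades: hypothetical list of (pick_received, [picks_given_up]) tuples
    # surplus_by_pick: maps a draft position to the surplus actually realized
    # by the player taken there
    gaps = []
    for pick_received, picks_given_up in trades:
        gained = surplus_by_pick[pick_received]
        given = sum(surplus_by_pick[p] for p in picks_given_up)
        gaps.append(gained - given)
    return sum(gaps) / len(gaps)

# Compare this average for real trades against the same calculation on
# randomly matched picks; a big difference would mean traded-for picks
# really are better than average picks in the same positions.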

M&T do give one real-life example. In the 2004 draft, the Giants had the fourth pick (expected to be Philip Rivers). They could trade up for the first pick (Eli Manning), or they could trade down for the seventh pick (Ben Roethlisberger). Which trade, if any, should they have made? According to the historical trends the authors found, they should have traded down – a seventh pick is actually worth more than a fourth, and the Giants would even have received an extra second-round pick as a bonus! But, in this specific case, the Giants would have to consider the relative talents of the three actual players involved. The authors assume that Manning, Rivers, and Roethlisberger are exactly as talented as the average first, fourth, and seventh picks. But not every draft is the same, not every team is equal in their player evaluations, and, most importantly, you can't assume that untraded draft choices are the same as traded draft choices.

So I'm not completely convinced by this study. But I'm not completely unconvinced either. I think there's enough evidence to show that high draft picks aren't all they're cracked up to be. But, because the authors' talent evaluations are so rough precisely where they're most important, I think there's a possibility their actual numbers may be off by quite a bit.

(Hat tip to The Wages of Wins blog.)

UPDATE, 2/18/2012: Link to .PDF updated, since original study moved




8 Comments:

At Wednesday, December 06, 2006 2:17:00 AM, Blogger Phil Birnbaum said...

As I wrote, I can't tell what the "draft value" variable is denominated in for the salary regression. But if it's a 0 to 1 scale, where 1 is the first draft choice and the rest are percentage of value, then the first choice is underrated (in terms of salary) by about 20% compared to later choices.

The first pick appears to have a "performance" of $3MM and a compensation of $2.5MM, for a surplus of $500,000. If you bump performance by 20%, you now get a performance of $3.6MM and compensation of $2.5MM, for a surplus of $1.1MM -- more than twice what it comes out to if you ignore that one variable.

 
At Wednesday, December 06, 2006 10:29:00 AM, Anonymous Anonymous said...

Interesting analysis. In addition to your point about the talent differences obscured by such large categories, I wonder if using the Pro Bowl to define elite performers introduces a possible bias. I don't know enough about the Pro Bowl selection process to know if this is a problem, but it seems possible that veterans have an advantage over younger players. That is, if a 2nd-year player and a veteran are equally talented, the latter may be more likely to make the Pro Bowl. If so, then the value of elite young players will be understated, which would tend to make high draft choices look bad (assuming they produce more elite performers).

 
At Wednesday, December 06, 2006 3:00:00 PM, Blogger Phil Birnbaum said...

Good call, that's quite possible.

From Table 2 of the study, here are the probabilities of making the Pro Bowl by year in the NFL. Probabilities here assume the player is actually on a roster ... derived by dividing column 5 of panel B by column 2 of panel B.

Year 1: .006
Year 2: .029
Year 3: .040
Year 4: .064
Year 5: .079
Year 6: .097
Year 7: .111
Year 8: .119
Year 9: .101
Year 10: .091
Year 11: .080
Year 12: .095

So an eighth-year player has three times the chance of making the Pro Bowl as a third-year player. Of course, the veterans could just be better players than the younger guys.

 
At Wednesday, December 06, 2006 10:48:00 PM, Anonymous Anonymous said...

The authors assume that Manning, Rivers, and Roethlisberger are exactly as talented as the average first, fourth, and seventh picks. But not every draft is the same, not every team is equal in their player evaluations

That's somewhat addressed in the first part of the paper where they discuss overconfidence and false consensus. Teams overestimate their ability to discriminate between the best player at a position and the next best one, and they overestimate the chance that the player will be picked early. They obviously don't have access to the teams' actual draft boards and player rankings to support these contentions directly, but I found the references to the psych experiments relevant and likely applicable to the NFL draft. I'd also be somewhat skeptical that teams can accurately distinguish between the quality and depth of available talent from one draft to the next except at a very rough level.

Although I'm also unconvinced by all the authors' conclusions, I also think there are some very important findings here. This paper should be a springboard for further refinements, as I understand the authors are working on.

I agree that the performance measures are somewhat crude, but unfortunately, unlike baseball, accurate individual metrics don't currently exist in football that can compare players across positions. Given the available data, I thought the groupings based on roster, starts, and Pro Bowls were reasonable. It's difficult to tell how much bias this brings over the long term. If they wanted to limit their study to QBs, or RBs, or WRs, they could obviously use better metrics, but then the sample size would be much smaller.

If nothing else, the simple finding that a player is only slightly more likely ("near chance") to outperform (with the aforementioned performance metrics caveats) the next player chosen at the same position should encourage teams to think of the draft more in tiers rather than absolute rankings of players. It's not clear whether there's great benefit to splitting hairs among similarly judged players.

 
At Wednesday, December 06, 2006 11:12:00 PM, Blogger Phil Birnbaum said...

Hi, Jim A.,

I generally agree with much of your comment.

With regard to teams overestimating their ability to discriminate between players, I also suspect that's somewhat true. My point, however, is that there might be a difference between cases where teams choose to trade for a specific pick and cases where they do not. Clearly, in the former case, there's a player they feel very strongly about. It could be that there's no difference, and the team is just experiencing the winner's curse, but the fact remains that those are the cases where the team had the most incentive to make a trade, and you should look at those cases specifically.

The authors did indeed find only small differences between successive players chosen at the same position (several draft picks apart). But again, there is no way to distinguish between non-Pro-Bowl starters, so I also take that finding with a grain of salt.

I agree with you that the problem is that there are few metrics for accurately evaluating players. One thing you could do is value them by salary in year 6 – that would encapsulate the teams' evaluation of the players, which is, in my view, better than using the crude categories the authors chose.

In 1984, Bill James did a study of baseball draft choices from 1965-1983. He found that, in terms of value, the first pick represented 6.6% of the total value of the first 50 picks, and that each subsequent pick declined in value from the previous one according to the formula

V(x) = V(x-1) * (x+5) / (x+6).

I think that's quite close to the market values for picks that Massey and Thaler found. One difference, of course, is that NFL teams have to pay the value of those picks, while MLB teams didn't (salaries were very low at the time). Another difference is that in baseball, it's much easier for teams to make more rational decisions than in football, because of the availability of reliable performance statistics. Those two factors would suggest that the authors are correct in their conclusions about teams not being rational.
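(A quick check that those two numbers are consistent: iterate the formula over the first 50 picks and look at the first pick's share of the total.)

values = [1.0]                            # V(1), arbitrary scale
for x in range(2, 51):                    # picks 2 through 50
    values.append(values[-1] * (x + 5) / (x + 6))

print(round(values[0] / sum(values), 3))  # ~0.066, i.e. about 6.6% of the total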

However, there's still what I consider the main flaw, ignoring evidence that prior picks are better than later picks (because the coefficient turned out not to be statistically significant). That undercuts one of the authors' most important conclusions. If you keep the coefficient, first picks are more valuable than tenth picks after all.

You write, "I also think there are some very important findings here."

I agree, and look forward to the refinements.

 
At Thursday, December 07, 2006 11:33:00 PM, Anonymous Anonymous said...

I like the idea of using players' salaries (or cap-charge compensation) in their free agency years as an estimate of performance.

By the way, unrestricted free agency actually begins in year 5. This is an error in the paper, though I don't know whether using year 6 instead would introduce much bias.

 
At Monday, February 26, 2007 1:37:00 PM, Blogger The Sage said...

I read this article and Moneyball. What I find interesting is that it has taken the NFL and the major sabermetric groups this long to catch on to what Bill Walsh was doing 20 years ago with the 49ers. Yes, the 49ers had plenty of money to pay players, and they spent it, especially on particular free agents. But Walsh never drafted like he had a lot of money to spend; he always drafted the guys who would produce the most, regardless. Whenever Walsh was involved in the draft, the 49ers produced stars from low draft picks. The only other teams that have come close are the Broncos (although not this year), the Patriots, and the Ravens, and those three teams have enjoyed much more success than most other teams in the NFL. The Broncos have run for more yards than any team since 1998, the Ravens have been extremely competitive with virtually no offense, and the Patriots have three rings.
High draft picks are not a total waste, but very few turn out high-yield production over the long term; you can count on one hand the number that have in the last 10 years. I personally think it's more emotional: teams want that impact player to turn things around, when few ever have.
I am very interested to see how the 49ers do this year, mainly because they have a lot of draft picks and money to spend. It'll either be a complete bust or the foundation for a lot of success over the coming years.

 
At Monday, March 05, 2007 11:53:00 AM, Anonymous Anonymous said...

The blog states:
"... players drafted early can negotiate better first-year contracts. But, presumably, for years 2-5, players are paid on performance, so if early draftees make more than late draftees in the same category, that does suggest that players drafted earlier are better."

This is not true. Virtually all contracts signed by draft picks are multi-year deals. The length has varied as the CBA has changed, but it is generally 4-6 years for first-day picks. Thus the draftee's contract price is determined ex ante, before his performance is known.

Generally, I tend to agree with the authors and also with Jim A's comments.

 
