Are early NFL draft picks no better than late draft picks? Part II
This is about the Dave Berri/Rob Simmons paper that concludes that QBs who are high draft choices aren't much better than QBs who are low draft choices. You probably want to read Part I, if you haven't already.
--------
If you want to look for a connection between draft choice and performance, wouldn't you just run a regression to predict performance from draft choice? The Berri/Simmons paper doesn't. The closest they come is the last of their many analyses, the one that starts at the end of page 47.
Here's what the authors do. First, they calculate an "expected" draft position, based on a QB's college stats, "combine stats" (height, body mass index, 40 yard dash time, Wonderlic score), and whether he went to a Division I-A school. That's based on another regression earlier in the paper. I'm not sure why they use that estimate -- it seems like it would make more sense to use the real draft position, since they actually have it, instead of their weaker (r-squared = 0.2) estimate.
In any case, they use that expected draft position, and run a regression to predict performance for every NFL season in which a QB had at least 100 plays. They also include terms for experience (a quadratic, to get a curve that rises, then falls).
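In other words, as I read it, each row of their regression is one qualifying QB season, and the model looks roughly like this:

performance = b0 + b1*(expected draft position) + b2*(experience) + b3*(experience squared) + error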
It turns out, in that regression, that the coefficient for draft position is not statistically significant.
And, so, Berri and Simmons conclude,
"Draft pick is not a significant predictor of NFL performance. ... Quarterbacks taken higher do not appear to perform any better."
I disagree. Two reasons.
1. Significance
As the authors point out, the coefficient for draft position wasn't nearly significant -- it was only 0.52 SD from zero.
But, it's of a reasonable size, and it goes in the right direction. If it turns out to be non-significant, isn't that just a sign that the authors didn't use enough data?
Suppose someone tells me that candy bars cost $1 at the local Kwik-E-Mart. I don't believe him. I hang out at the store for a couple of hours, and, for every sale, I mark down the number of candy bars bought, and the total sale.
I do a regression. The coefficient comes out to $0.856 more per bar, but it's only 0.5 SD from zero.
"Ha!" I tell my friend. "Look, that's not significantly different from zero! Therefore, you're wrong! Candy bars are free!"
That would be silly, wouldn't it? But that's what Berri and Simmons are doing.
Imagine two quarterbacks with five years' NFL experience. One was drafted 50th. The other was drafted 150th. How much different would you expect them to be in QB rating? If you don't know QB rating, think about it in terms of rankings. How much higher on the list would you expect the 50th choice to be, compared to the 150th? Remember, they both have 5 years' experience and they both had at least 100 plays that year.
Well, the coefficient would say the early one should be 1.9 points better. I calculate that to be about 15 percent of the standard deviation for full-time quarterbacks. It'll move you up in the rankings two or three positions.
Is that about what you thought? It's around what I would have thought. Actually, to be honest, maybe a bit lower. But well within the bounds of conventional wisdom.
So, if you do a study to disprove conventional wisdom, and your point estimate is actually close to conventional wisdom ... how can you say you've disproven it?
That's especially true because the confidence interval is so wide. If we go 2 SD out from the point estimate, in the direction of a larger effect, the effect of draft choice could be as high as -0.091 points per draft position. That means 100 draft positions would be worth 9.1 points. That's a huge difference between quarterbacks. Nine points would move you up at least 6 or 7 positions -- just because you were drafted earlier. It's almost 75 percent of a standard deviation.
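Here's the back-of-the-envelope arithmetic, as I reconstruct it, in case you want to check me. I'm assuming the point estimate is about -0.019 rating points per draft slot (that's the 1.9 points per 100 slots), and that "0.52 SD from zero" means the coefficient is 0.52 standard errors away from zero; the tiny difference from the -0.091 above is just rounding.

```python
# Back-of-the-envelope reconstruction (my assumptions, not the paper's output):
# a point estimate of about -0.019 rating points per draft slot,
# sitting 0.52 standard errors away from zero.
coef = -0.019
se = abs(coef) / 0.52       # implied standard error, roughly 0.037

low_end = coef - 2 * se     # about -0.092: roughly 9 rating points per 100 draft slots
high_end = coef + 2 * se    # about +0.054: later picks *better*, which nobody believes

print(round(se, 3), round(low_end, 3), round(high_end, 3))
```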
Basically, the confidence interval is so wide that it includes any plausible value ... and many implausible values too!
That regression doesn't disprove anything at all. It's a clear case of "absence of evidence is not evidence of absence."
2. Attrition
In Part I, I promised an argument that doesn't require the assumption that QBs who never play are worse than QBs who do. However, we can all agree, can't we, that if a QB plays, but then he doesn't play any more because it's obvious he's not good enough ... in *that* case, we can say he's worse than the others, right? I can't see Berri and Simmons claiming that Ryan Leaf would have been a star if only his coaches had given him more playing time.
If we agree on that, then I can show you that the regression doesn't work -- that the coefficient for draft choice doesn't accurately measure the differences.
Why not? Again, because of attrition. The worse players tend to drop out of the NFL earlier. That means they'll be underweighted in the regression (which has one row for each season). So, if those worse players tend to be later draft choices, as you'd expect, the regression would underestimate how bad those later choices are.
Here, let me give you a simple example.
Suppose you rate QBs from 1 to 5. And suppose the rating also happens to be the number of seasons the QB plays.
Let's say the first round gives you QBs of talent 5, 4, 4, and 3, which is an average of 4. The second round gives you 4, 2, 1 and 1, which averages 2.
Therefore, what we want is for the regression to give us a coefficient of 4 minus 2, which is 2. That would confirm that the first round is 2 better than the second round.
But it won't. Why not? Because of attrition.
-- The first year, everyone's playing. So, those years do in fact give us a difference of 2.
-- The second year, the two "1" guys are gone. The second round survivors are the "4" and "2", so their average is now 3. That means the difference between the rounds has dropped down to 1.
-- The third year, the "2" guy is gone, leaving the second round with only a "4". Both rounds now average 4, so they look equal!
-- The fourth year, the "3" drops out of the first round pool, so the difference becomes 0.33 in favor of the first round.
-- The fifth year ... there's nothing. Even though the first round still has a guy playing, the second round doesn't, so the results aren't affected.
So, see what happens? Only the first year difference is correct and unbiased. Then, because of attrition, the observed difference starts dropping.
Because of that, if you actually do the regression, you'll find that the coefficient comes out to 1.15, instead of 2.00. It's understated by almost half!
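If you want to check that number yourself, here's a quick sketch of that toy regression in Python. It's my reconstruction: one row per player-season, a dummy for first-round, and a simple linear experience control. That specification gives the 1.15, and adding an experience-squared term barely moves it.

```python
import numpy as np

# Toy data from the example above.  Rating = talent = number of seasons played.
first_round = [5, 4, 4, 3]    # averages 4
second_round = [4, 2, 1, 1]   # averages 2

# One row per player-season: (first-round dummy, experience, performance)
rows = [(flag, exp, rating)
        for flag, group in [(1, first_round), (0, second_round)]
        for rating in group
        for exp in range(1, rating + 1)]

X = np.array([[1.0, flag, exp] for flag, exp, _ in rows])   # intercept, round dummy, experience
y = np.array([float(rating) for _, _, rating in rows])

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(coefs[1], 2))   # round-dummy coefficient: about 1.15, not the true gap of 2.00
```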
This will almost always happen. Try it with some other numbers and assumptions if you like, but I think you'll find that the result will almost never be right. The exact error depends on the distribution and attrition rate.
Want a more extreme case? Suppose the first round is four 4s (average 4), and the second round is a 7 and three 1s (average 2.5). The first round "wins" the first year, but then the "1"s disappear, and the second round starts "winning" by a score of 7-4.
In truth, the first round players are 1.25 better than the second round. But if you do the Berri/Simmons regression, the coefficient comes out negative, saying that the first round is actually 0.861 *worse*!
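Again, a sketch you can run yourself -- same setup as above, one row per player-season, a first-round dummy, and a linear experience control. That's the specification that produces the -0.861; the coefficient comes out negative with a quadratic experience term, too.

```python
import numpy as np

# Extreme case: first round is four 4s (average 4);
# second round is a 7 and three 1s (average 2.5).
first_round = [4, 4, 4, 4]
second_round = [7, 1, 1, 1]

# One row per player-season: (first-round dummy, experience, performance)
rows = [(flag, exp, rating)
        for flag, group in [(1, first_round), (0, second_round)]
        for rating in group
        for exp in range(1, rating + 1)]

X = np.array([[1.0, flag, exp] for flag, exp, _ in rows])   # intercept, round dummy, experience
y = np.array([float(rating) for _, _, rating in rows])

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(coefs[1], 3))   # about -0.861: the first round looks *worse*
```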
So, basically, this regression doesn't really measure what we're trying to measure. The number that comes out isn't very meaningful.
------
Choose whichever of these two arguments you like ... or both.
I'll revisit some of the paper's other analyses in a future post, if anyone's still interested.
------
UPDATE: Part III is here.
Labels: Berri, draft, football, freakonomics, NFL