Is NFL defense mostly luck?
My last post linked to a study by Brian Burke that showed the Patriots scored a touchdown twice as often as the Giants, but that the Patriots' defense *prevented* a touchdown only slightly more often than the Giants'.
I wondered whether this meant that defense didn't vary between teams as much as offense does, and commenter "w. jason" confirmed that.
Now there's another confirmation, from this study at pro-football-reference.com (by Doug Drinen?). Near the end, Drinen found that the year-to-year correlation of NFL teams' defensive stats is ... *negative*:
-.10 Turnovers forced
I assume it's just random chance that these numbers are negative, unless you think there's some reason teams that are above-average this year should be below-average next year. If you had enough data, you'd probably find a positive, but small, correlation.
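To make the "random chance" point concrete, here's a quick sketch of what a year-to-year correlation looks like when a stat is pure luck. The team counts below are made up for illustration (they are not Drinen's data); the point is just that two independent draws for the same 32 teams will produce a correlation that wanders around zero, sometimes landing slightly negative:

```python
# Sketch: year-to-year correlation of a team stat, with made-up numbers.
# If the stat were pure luck, year 2 is independent of year 1 and the
# correlation hovers near zero (and can easily come out negative).
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# 32 hypothetical teams, e.g. turnovers forced in consecutive seasons.
year1 = [random.gauss(25, 5) for _ in range(32)]
year2 = [random.gauss(25, 5) for _ in range(32)]
print(round(pearson(year1, year2), 3))
```

With only 32 teams, a single-season correlation of -.10 is well within the noise you'd get from flipping coins.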
Does this mean that defense doesn't matter much? Is any player pretty much as good as any other on your defensive line?
That's possible. It's also possible, for instance, that a defense is only as good as its weakest link, and it's your *worst* players that make the difference, not your best. Or, it could be that teams can't tell the good defenders from the bad, and wind up with a random assortment.
In any case, shouldn't this imply that you shouldn't pay a premium for defenders? I assume teams pay more for their offensive squad than for their defensive squad, but probably not by as much as these results say they should. I'll check it out if I have a chance.
That Drinen link, by the way, came from today's "The Numbers Guy" article by Carl Bialik. He notes that the outcomes of sporting events are hard to predict. An organization called "Accuscore" correctly predicts the winner of only 63% of NFL games, 57% of baseball games, and 68% of basketball games. (Hockey gets screwed again – not even mentioned.) Bialik attributes the difference to the amount of information known about the teams, but I think it's actually that basketball is intrinsically less random than baseball – as I have argued here and elsewhere.
Bialik also debunks one of the naive arguments against sabermetrics: that since statistical analysis thought the Giants should have lost all three games, and was wrong three times, the analysis must be wrong. I've seen that argument a couple of times lately, and it goes beyond silly. Even the best analysis only gives you a probability estimate, and even low-probability events happen sometimes.
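A quick sketch of why "wrong three times" proves nothing. The 30% per-game upset chance below is a number I made up for illustration, not the analysts' actual estimate; even so, a three-game sweep by the underdog comes up a few percent of the time from a perfectly calibrated model:

```python
# Sketch: a "wrong three times" outcome is expected sometimes even from
# a perfectly calibrated model.  The 30% upset chance is hypothetical.
import random

random.seed(42)
p_upset = 0.30                     # assumed per-game win probability for the underdog
chance_all_three = p_upset ** 3    # 0.3 * 0.3 * 0.3 = 0.027
print(f"P(underdog wins all three) = {chance_all_three:.3f}")

# Simulate many three-game series: a few percent go 3-0 to the underdog.
trials = 100_000
sweeps = sum(all(random.random() < p_upset for _ in range(3)) for _ in range(trials))
print(f"Simulated frequency: {sweeps / trials:.3f}")
```

So a 3-0 upset run happens in roughly one series out of every 37 – unlikely in any one case, but not evidence the probabilities were wrong.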
And one last interesting tidbit: For NFL games, Accuscore has 54% accuracy against the spread. That seems pretty impressive to me.
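How impressive 54% is depends on the sample size, which the article doesn't give; here's a back-of-the-envelope check assuming one 256-game regular season (my assumption). It asks how often a pure coin-flipper would hit 54% or better:

```python
# Sketch: how surprising is 54% against the spread?  The sample size
# (one 256-game season) is my assumption, not a figure from the article.
from math import comb

n = 256                      # hypothetical number of games picked
k = round(0.54 * n)          # 138 correct picks
# Probability of k or more correct by pure coin-flipping (binomial tail).
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"{k}/{n} correct; P(>= {k} by chance) = {p_tail:.3f}")
```

Under that assumption, chance alone beats 54% a bit more than a tenth of the time, so one season isn't conclusive; sustained over several seasons, though, 54% against the spread would indeed be impressive.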