Tuesday, March 19, 2013

NFL coaching decisions cost 0.73 wins per team

By making bad decisions on fourth down, NFL coaches are sacrificing almost three-quarters of a win per season.  That's from Matt Meiselman, who crunched some numbers with Brian Burke and posted them on Brian's site.  

In 2012, the Cleveland Browns were the "worst", sacrificing a probabilistic 1.02 wins by making 42 "wrong" decisions.  The Packers were the least "worst", giving up only around half an expected win.  I would have expected New England to represent well in this measure, since Bill Belichick has often been touted as a sabermetrically-savvy coach, but the Patriots were only a bit better than average, at 0.6.

Those numbers are based on expectations for an average team.  It's quite likely that they overstate the cost, if the probabilities vary a lot based on quality of team.  My suspicion is that the quality effect is pretty small, because the spread of "wrongness" is so narrow.  In fact, the spread suggests to me that coaches are generally following the same "book" of conventional wisdom, with individual differences being pretty minor.  

The article implies that the losses are due to coaches generally being risk-averse, but doesn't give the numbers.  Is *every* bad decision caused by playing it too safe?  95 percent?  50 percent?  I don't know the answer.  My gut says ... I dunno, I'll guess 92 percent of cases are when the coach should have gone for it and didn't, instead of when he shouldn't have and did.  Matt/Brian, if you're reading this, am I close?  


I'm shocked at how high the numbers are.  Losing 0.73 wins is huge, considering that the difference between a playoff team and an average team is only, what, two games out of sixteen?  

I'd bet that's by far the biggest in-game coaching factor in any major sport (leaving out the decision of who plays).  In baseball, it's the equivalent of 4.6 games per 162, which is about the same percentage of distance to the playoffs.  But I can't see that MLB managers would have anything near that much influence.


At the Sloan convention, there was a lot of talk about how analytics people can increase their influence ... like, what to do or say to get coaches and management to listen to us numbers geeks.  

But, in this case, I think there's an easier path.  Any time there's a fourth-down decision, the TV broadcast could put the probabilities on the screen.  Like, for instance, "teams that go for it should be expected to win 48% of the time, while teams that punt should win only 30% of the time."  That's simple enough for viewers to understand ... which means, fans will be second-guessing the coach based on the numbers, rather than random feelings.  It would still be fun to discuss ... the ESPN guys could argue about why the percentages don't apply in this particular case, because the offense is poor, or the defense has momentum, or whatever.
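Under the hood, those on-screen percentages come from a simple expected-value calculation: weight the win probability after a successful conversion against the win probability after a failure, then compare to punting. Here's a minimal sketch in Python -- every number and name below is made up for illustration, not taken from Burke's actual model:

```python
# Hypothetical probabilities for illustration only -- not Brian Burke's model.
def wp_go_for_it(p_convert, wp_if_convert, wp_if_fail):
    """Win probability of going for it: conversion-weighted average."""
    return p_convert * wp_if_convert + (1 - p_convert) * wp_if_fail

# Example: a 4th-and-short near midfield, with invented inputs.
wp_go = wp_go_for_it(p_convert=0.60, wp_if_convert=0.55, wp_if_fail=0.38)
wp_punt = 0.45  # invented win probability after a punt

print(f"Go for it: {wp_go:.2f}, punt: {wp_punt:.2f}")
print(f"Expected cost of punting: {wp_go - wp_punt:.3f} wins")
```

The broadcast graphic would just show the two bottom-line numbers; the arithmetic behind them is no harder than this.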

In any case, it would change the nature of the second-guessing.  Right now, a coach may attract 1 pound of criticism when he plays it by the book, and 5 pounds when he goes for it.  With the probabilities on the screen, maybe the 1:5 ratio will immediately change to 1:3 or something, and then, over time, as the stats gain acceptance, all the way to 1:1.  Then, you've reached the tipping point where the coaches' incentives change.  Now, they take more flak, and sacrifice more job security, when they *don't* go by the percentages.  It wouldn't take long, I suspect, for things to change after that.



At Tuesday, March 19, 2013 11:34:00 PM, Blogger Danny Page said...

I would be quite pleased if they came up with a "% to Win" meter and put it on TV. I almost imagine it being as revolutionary as (if not more than) the Hole Cam of TV poker back in 2003.

At Friday, March 22, 2013 10:20:00 AM, Anonymous Guy said...

I think NFL coaches are too conservative on 4th down. But, a couple of points about this analysis. One is that zero credit is awarded when a coach correctly goes for it. This is simply the sum of the projected loss in WP when the coach gets it "wrong" (according to Burke's expectations) -- so the range of scores begins at zero. That's not wrong, per se, but let's recognize that the standard here is perfection.

More importantly, this is not an actual analysis of outcomes. We don't know if teams actually lost win expectancy at the conclusion of these plays. This analysis simply assumes that Burke's estimates of future probabilities -- including 4th down conversion rates -- are correct. If coaches know something specific that is missing here, about either their own likelihood of converting, or this opponent's likelihood of scoring from this position, at this time in this game -- we aren't measuring that. Or if forcing the opponent's offensive unit to run more plays from worse field position impacts their success in scoring later in the game (or if resting my offensive unit impacts their future performance), we also won't measure that. At some point I'd love to see an empirical assessment of how going for it and punting are actually impacting WP.

At Saturday, March 23, 2013 12:19:00 AM, Blogger Phil Birnbaum said...


1. Agreed, we're measuring against perfection. But, again, I bet in-game decisions in baseball, measured against perfection, are nowhere near as close in magnitude. The average "wrong" decision is about .02 of a win, which is the equivalent of 0.2 runs in baseball. That's huge. That's like pinch-hitting a replacement-level player for Albert Pujols in an average-leverage situation.

2. Agreed, too, that it depends on circumstances. But how much? Yes, you might not want to go for it on fourth down against a good defense with a poor offense. But the difference is minimal on one play. Even when teams are mismatched, it takes an entire game's worth of plays for the better team to have an 80% (say) chance of winning. It gains .300 wins over (maybe?) 100 plays, or .003 wins per play. So, no team is so good that the success probability on a single play varies that much from the average.

But, yeah, the probabilities could be wrong for winning the game. So, if the stats say you have a 40% chance if you go for it but a 20% chance if you punt, maybe those probabilities are actually 30% and 12% or something, if you're a bad team, and vice-versa if you're a good team. But it still seems to me that it shouldn't make a big difference in most cases.
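To see why, run the example above as arithmetic. Even if being a bad team shrinks the two options by different amounts, the ordering -- and most of the gap -- survives. (All numbers are illustrative, echoing the 40/20 vs. 30/12 example.)

```python
# Made-up win probabilities: a league-average team vs. a weak team whose
# chances shrink unevenly across the two options.
avg      = {"go": 0.40, "punt": 0.20}
bad_team = {"go": 0.30, "punt": 0.12}

for label, wp in [("average", avg), ("bad", bad_team)]:
    gap = wp["go"] - wp["punt"]
    print(f"{label}: go {wp['go']:.2f} vs punt {wp['punt']:.2f} -> gap {gap:+.2f}")

# Going for it stays the better choice in both cases; the gap only
# shrinks from .20 to .18 despite a large quality adjustment.
```

The point isn't the specific numbers, just that a quality adjustment has to be enormous before it flips the sign of the decision.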

At Wednesday, April 03, 2013 8:40:00 PM, Blogger Geoff Buchan said...

I can accept that sub-optimal play calling costs teams wins in a particular circumstance, but in aggregate, each game still has one winner and one loser (except for the extremely rare NFL tie - and we had one last season!).

So the advantage of optimizing strategy in any area is temporary, if you assume that eventually other teams will do the same.

Of course, I'm probably sounding like an efficient markets theorist - in theory market participants are rational and profit maximizing. In practice some will understand probabilities better than others, and those with a relative edge will be a little more likely to win, all other things being equal.

But also over time success is emulated, so just as MLB teams started to value OBP more after the Moneyball-era Athletics had success, as some NFL coaches improve 4th down play calling and start to gain an edge because of it, others will start to copy that behavior, reducing the edge.

