How good are sports pundits' predictions?
According to this Freakonomics post, the experts who make NFL predictions aren't very good at it. Freakonomist Hayes Davenport checked the past three years' worth of predictions from USA Today, Sports Illustrated, and ESPN. He found that the prognosticators correctly picked only 36% of the NFL division winners.
That doesn't seem that great. Picking randomly would get you 25%. And, as the post points out,
"if the pickers were allowed to rule out one team from every division and then choose at random, they’d pick winners 33% of the time. So if you consider that most NFL divisions include at least one team with no hope of finishing first (this year’s Bengals, Chiefs, Dolphins, Panthers, Broncos, Vikings, and Manning-less Colts, for example), the pickers only need a minimum of NFL knowledge before essentially guessing in the dark."
Well, it sounds right, but you have to look deeper.
Winning depends on two things: talent and luck. Since luck is, by definition, random, the only thing you can do when predicting a winner is pick the team with the most talent. And, despite the 36% figure, there's no evidence that the pundits misjudged the talent. Because, just by luck, sometimes the best team won't win, and that's unpredictable.
Two days ago, the Buffalo Bills upset the New England Patriots, despite being 7:2 underdogs (I'm estimating 7:2 based on the 9-point spread). What percentage of experts would you expect to have got that right? Your answer should be zero percent. Nobody with any knowledge of football should have thought the Bills had a better than 50 percent chance of winning. On the off-chance that you DO find someone who picked the Bills to win, he's probably a crappy predictor -- maybe he just flips a coin all the time.
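(Where does 7:2 come from? A common back-of-envelope model -- my assumption here, not anything official -- treats the final margin as roughly normal around the point spread, with a standard deviation somewhere in the 12-to-14-point range for NFL games. A 9-point favorite then loses outright a bit less than a quarter of the time:)

```python
from math import erf, sqrt

def underdog_win_prob(spread, sd):
    """P(underdog wins outright), modeling the favorite's margin of
    victory as Normal(spread, sd). Negative margin = upset."""
    z = -spread / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

for sd in (12.0, 13.0, 14.0):
    print(f"sd = {sd}: Bills win {underdog_win_prob(9, sd):.1%} of the time")
# Roughly 23-26% -- in the neighborhood of 7:2 (about 22%).
```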
In the cases where the underdogs wind up winning a game, or finishing first in their division, the truth is the opposite of what Freakonomics implies. In those cases, the higher the percentage of correct predictions, the WORSE the pundits.
So what does that 36% figure actually tell you? By itself, absolutely nothing. You have no idea, looking at that bare number, how good the pundits are. It depends. If it was *all* bad teams that won, then even a number as high as 36% means the experts chose badly -- 36% of their picks were bad teams that just happened to turn out OK. If it was all good teams that won, a number as low as 36% means the experts were wrong a lot -- they must have picked bad teams the other 64% of the time. And, if it was exactly 36% of the best teams that won, but those weren't the cases where the experts were right, then, again, the experts were wrong a lot.
But, if it was exactly 36% of the best teams that won, and those are exactly the cases where the experts were right ... then the experts are perfect predictors.
So you can't just look at a number. 36% may be bad, but it might be awesome. It depends what actually happened.
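(To make that concrete, here's a toy model of my own construction: in each four-team division, the best team wins with probability p, otherwise one of the other three wins at random; the expert identifies the best team with probability q, otherwise picks one of the other three at random:)

```python
def expected_hit_rate(p_best_wins, p_expert_picks_best):
    """Expected success rate in a four-team division. The expert scores
    when they pick the best team and it wins, or when they pick one of
    the other three and that exact team happens to win."""
    p, q = p_best_wins, p_expert_picks_best
    return q * p + (1 - q) * (1 - p) / 3

print(expected_hit_rate(0.36, 1.00))  # 0.360 -- a perfect talent-spotter
print(expected_hit_rate(0.70, 0.43))  # 0.358 -- a much worse one, same score
```

Same 36%, wildly different skill -- which is exactly the point.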
-------
However: while this logic applies to picking outright winners of games or divisions, it doesn't apply to picking against the spread. Why not? Because, against the spread, the presumption is that the odds are close to 50/50.
In the Patriots/Bills game, the odds were roughly 78/22. Some experts might have pegged the Bills as having a 25% chance of winning, while others may have estimated only 20%. Still, both pundits would obviously still have bet on the Patriots. The fact that New England ended up losing isn't really relevant.
But against the +9 spread, you might have a reasonable difference of opinion. One expert might figure the true spread should be +8.5, and another might figure +9.5. So the first guy takes the Bills, and the second takes the Patriots.
On bets that are approximately 50/50, reasonable experts can disagree. On bets that are 78/22, they cannot. So, when it's 50/50, a higher percentage could, in fact, mean better predictions.
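(A quick illustration of the difference, using the numbers from the example above:)

```python
# Against the spread, a one-point disagreement flips the pick ...
posted_spread = 9.0  # Patriots favored by 9
for name, fair_spread in [("Expert A", 8.5), ("Expert B", 9.5)]:
    side = "Bills +9" if fair_spread < posted_spread else "Patriots -9"
    print(f"{name} (fair spread {fair_spread}): takes {side}")

# ... but a five-point disagreement in win probability flips nothing.
for name, p_bills in [("Expert A", 0.20), ("Expert B", 0.25)]:
    side = "Bills" if p_bills > 0.5 else "Patriots"
    print(f"{name} (Bills at {p_bills:.0%}): picks {side} outright")
```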
Still, there's lots of luck there too. If a pundit predicts all 256 games in a season, the (binomial) standard deviation of his success rate will be a little over 3 percentage points. So one predictor in 20 will be over 56%, or under 44%, just by luck.
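(That standard deviation is just the usual binomial formula -- a sketch, assuming every game is an independent 50/50 pick:)

```python
from math import sqrt

n, p = 256, 0.5             # games per season, chance of a correct pick
sd = sqrt(p * (1 - p) / n)  # SD of the season-long success rate
print(f"SD = {sd:.4f}")     # 0.0312 -- a little over 3 percentage points
print(f"2-SD band: {p - 2*sd:.1%} to {p + 2*sd:.1%}")  # 43.8% to 56.2%
```

About one season in 20 lands outside that band by luck alone.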
That means it's still hard to figure out who's a "better" expert and who's not.
--------
The post goes on to criticize the pickers for risk aversion. Why? Because, it seems, the experts tended to pick the same teams that won last year.
Um ... why is that risk aversion? It stands to reason that the teams that won before are more likely to still be pretty good, so they're probably reasonable picks. But, the author says,
"Over the last fifteen seasons, the NFL has averaged exactly six new teams in the playoffs every year, meaning that half of the playoff picture is completely different from the year before. ... Given that information, a savvy picker relying on statistical precedent would choose six new teams when predicting the playoffs."
That doesn't follow at all! Just because I know six favorites will lose doesn't mean I should pick six underdogs! That would be very, very silly.
It's like predicting whether John Doe will win the lottery this week. The odds say no, and that's the way I should bet. And it's the same for Jane Smith, or Bob Jones. If there are a million people in the lottery, I should pick them all to lose. I'll be right 999,999 times, and wrong once.
But, according to Freakonomics, I should arbitrarily pick one person to win! That's silly ... if I do that, I'll almost certainly be right only 999,998 times!
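(Here's the arithmetic spelled out, in my stylized setup of one winner drawn uniformly from a million entrants:)

```python
N = 1_000_000  # entrants; exactly one wins

# Strategy 1: predict that everyone loses.
# You're wrong about the winner and nobody else.
correct_all_lose = N - 1                            # 999,999, guaranteed

# Strategy 2: arbitrarily predict one person to win.
# With probability 1/N you nail everything; otherwise you're wrong
# twice (about your pick AND about the actual winner).
expected_one_winner = (1 / N) * N + (1 - 1 / N) * (N - 2)

print(correct_all_lose)      # 999999
print(expected_one_winner)   # 999998.000002
```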
It's not exactly the same, but this logic reminds me of the Jeff Bagwell prediction controversy. In the fall of 1990, Bill James produced forecasts of players' batting lines for 1991, and it turned out that Bagwell wound up with the highest prediction for batting average. It was said that Bill James predicted Jeff Bagwell to win the batting title.
But, obviously, he did not. At best, and with certain assumptions, you might be able to say that James had Bagwell with the *best chance* of winning the batting title. But that's different from predicting outright that he'd win it.
Back to the lottery example ... if I notice that John Doe bought two lottery tickets, but the other 999,999 people only bought one, I would be correct in saying that Doe has the best chance to win. That doesn't mean I'm *predicting* him to win. He still only has a 1 in 500,000 chance.
--------
Finally, in their introduction to their post, Levitt and Dubner say,
" ... humans love to predict the future, but are generally terrible at it."
I disagree, especially in sports. Yes, very few people can beat the point spread year after year. But that doesn't show that the experts don't know what they're doing. It shows that they DO! Because, after all, it's humans that set the Vegas line, the one that's so hard to beat!
I'd argue 180 degrees the opposite. In sports, humans, working together, have become SO GOOD at predicting the future, that nobody can add enough ingenuity to regularly beat the consensus prediction!