Tuesday, September 27, 2011

How good are sports pundits' predictions?

According to this Freakonomics post, the experts who make NFL predictions aren't very good at it. Freakonomist Hayes Davenport checked the past three years' worth of predictions from USA Today, Sports Illustrated, and ESPN. He found that the prognosticators correctly picked only 36% of the NFL division winners.

That doesn't seem that great. Picking randomly would get you 25%. And, as the post points out,

"if the pickers were allowed to rule out one team from every division and then choose at random, they’d pick winners 33% of the time. So if you consider that most NFL divisions include at least one team with no hope of finishing first (this year’s Bengals, Chiefs, Dolphins, Panthers, Broncos, Vikings, and Manning-less Colts, for example), the pickers only need a minimum of NFL knowledge before essentially guessing in the dark."


Well, it sounds right, but you have to look deeper.

Winning depends on two things: talent, and luck. Since luck is, by definition, random, when you predict a winner, the only thing you can do is pick the team with the most talent. And, despite the 36% figure, there's no evidence that the pundits misjudged the talent. Because, just by luck, sometimes the best team won't win, and that's unpredictable.
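
To make the talent-versus-luck point concrete, here's a minimal simulation sketch. It's my own illustration, not anything from the Freakonomics post, and the 40 percent figure is just an assumption for argument's sake: suppose the most talented team in a division wins it only 40 percent of the time. Then even a "perfect" expert, who identifies the most talented team every single time, will get only about 40 percent of his division picks right.

```python
import random

random.seed(1)

def perfect_expert_hit_rate(picks_per_trial=24, p_best_wins=0.40, trials=100_000):
    """Hit rate of an expert who always picks the truly most talented team.

    picks_per_trial: e.g. 8 divisions x 3 seasons, as in the Freakonomics sample.
    p_best_wins: assumed probability that the most talented team actually wins
                 its division (an illustrative guess, not an estimated figure).
    """
    hits = total = 0
    for _ in range(trials):
        for _ in range(picks_per_trial):
            total += 1
            if random.random() < p_best_wins:  # luck decides whether talent prevails
                hits += 1
    return hits / total

print(f"{perfect_expert_hit_rate():.1%}")  # about 40%, despite perfect talent judgment
```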

Two days ago, the Buffalo Bills upset the New England Patriots, despite being 7:2 underdogs (I'm estimating 7:2 based on the 9-point spread). What percentage of experts would you expect to have got that right? Your answer should be zero percent. Nobody with any knowledge of football should have thought the Bills had a better than 50 percent chance of winning. On the off-chance that you DO find someone who picked the Bills to win, he's probably a crappy predictor -- maybe he just flips a coin all the time.
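
For the curious, here's roughly how a point spread can be turned into a win probability, the way I eyeballed the 7:2. This is a sketch, not an official formula: it assumes the favorite's final margin of victory is roughly normally distributed around the spread, with a standard deviation somewhere around 13.5 points (a common rule-of-thumb figure, and an assumption on my part).

```python
from math import erf, sqrt

def underdog_win_prob(spread, margin_sd=13.5):
    """P(underdog wins outright), assuming the favorite's margin ~ Normal(spread, margin_sd).

    The normal model and the 13.5-point SD are rough assumptions, not league data.
    """
    z = (0 - spread) / margin_sd          # underdog wins when the margin falls below zero
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

print(f"{underdog_win_prob(9):.0%}")  # roughly 25%, in the same ballpark as 7:2 against
```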

In cases where the underdogs wind up winning a game, or finishing first in their division, the truth is the opposite of what Freakonomics implies. For those cases, the higher the percentage of correct predictions, the WORSE the pundits.

So what does that 36% figure actually tell you? By itself, absolutely nothing. You have no idea, looking at that bare number, how good the pundits are. It depends. If it was *all* bad teams that won, a number as high as 36% means the experts are wrong a lot -- at least 36% of their picks must have been bad teams that happened to come through. If it was all good teams that won, a number as low as 36% also means the experts are wrong a lot -- they must have picked bad teams the other 64% of the time. And, if exactly 36% of the best teams won, but those aren't the cases where the experts were right, then, again, the experts are wrong a lot.

But, if it was exactly 36% of the best teams that won, and those are exactly the cases where the experts were right ... then the experts are perfect predictors.

So you can't just look at a number. 36% may be bad, but it might be awesome. It depends what actually happened.

-------

However: while this logic applies to picking outright winners of games or divisions, it doesn't apply to picking against the spread. Why not? Because, against the spread, the presumption is that the odds are close to 50/50.

In the Patriots/Bills game, the odds were roughly 78/22. Some experts might have pegged the Bills as having a 25% chance of winning, while others may have estimated only 20%. Either way, both pundits would obviously still have bet on the Patriots. The fact that New England ended up losing isn't really relevant.

But against the +9 spread, you might have a reasonable difference of opinion. One expert might figure the true spread should be +8.5, and another might figure +9.5. So the first guy takes the Bills, and the second takes the Patriots.

On bets that are approximately 50/50, reasonable experts can disagree. On bets that are 78/22, they cannot. So, when it's 50/50, a higher percentage could, in fact, mean better predictions.

Still, there's lots of luck there too. If a pundit predicts all 256 games in a season, the (binomial) standard deviation of his success rate will be a little over 3 percentage points. So one predictor in 20 will be over 56%, or under 44%, just by luck.
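
Here's the arithmetic behind that, as a quick sketch; the 256-game season and the 50/50 baseline are the same assumptions as in the paragraph above.

```python
from math import sqrt

n, p = 256, 0.5              # games picked in a season; chance a 50/50 pick is right
sd = sqrt(p * (1 - p) / n)   # binomial standard deviation of the success *rate*

print(f"SD of success rate: {sd:.1%}")                    # about 3.1 percentage points
print(f"2-SD range: {p - 2*sd:.1%} to {p + 2*sd:.1%}")    # roughly 43.8% to 56.3%
# About one predictor in twenty lands outside that range on luck alone.
```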

That means it's still hard to figure out who's a "better" expert and who's not.

--------

The post goes on to criticize the pickers for risk aversion. Why? Because, it seems, the experts tended to pick the same teams that won last year.

Um ... why is that risk aversion? It stands to reason that the teams that won before are more likely to still be pretty good, so they're probably reasonable picks. But, the author says,

"Over the last fifteen seasons, the NFL has averaged exactly six new teams in the playoffs every year, meaning that half of the playoff picture is completely different from the year before. ... Given that information, a savvy picker relying on statistical precedent would choose six new teams when predicting the playoffs."


That doesn't follow at all! Just because I know six favorites will lose doesn't mean I should pick six underdogs! That would be very, very silly.

It's like predicting whether John Doe will win the lottery this week. The odds say no, and that's the way I should bet. And it's the same for Jane Smith, or Bob Jones. If there are a million people in the lottery, I should pick them all to lose. I'll be right 999,999 times, and wrong once.

But, according to Freakonomics, I should arbitrarily pick one person to win! But that's silly ... if I do that, I'll almost certainly be right only 999,998 times!
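
If you want to see the arithmetic, here's a tiny sketch of the expected number of correct calls under each strategy, with a million entrants and exactly one winner (my own illustration).

```python
N = 1_000_000  # entrants, exactly one of whom wins

# Strategy A: pick everyone to lose. You're wrong only about the actual winner.
expected_correct_A = N - 1  # 999,999, with certainty

# Strategy B: pick one arbitrary person to win, everyone else to lose.
# With prob 1/N your pick wins (all N calls are right); with prob (N-1)/N
# they lose, and you're wrong about them AND about the real winner.
expected_correct_B = (1 / N) * N + ((N - 1) / N) * (N - 2)

print(expected_correct_A)            # 999999
print(round(expected_correct_B, 3))  # ~999998.0 -- about one fewer correct call
```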

It's not exactly the same, but this logic reminds me of the Jeff Bagwell prediction controversy. In the fall of 1990, Bill James produced forecasts of players' batting lines for 1991, and it turned out that Bagwell wound up with the highest prediction for batting average. It was said that Bill James predicted Jeff Bagwell to win the batting title.

But, obviously, he did not. At best, and with certain assumptions, you might be able to say that James had Bagwell with the *best chance* of winning the batting title. But that's different from predicting outright that he'd win it.

Back to the lottery example ... if I notice that John Doe bought two lottery tickets, but the other 999,999 people only bought one, I would be correct in saying that Doe has the best chance to win. That doesn't mean I'm *predicting* him to win. He still only has a 1 in 500,000 chance.

--------

Finally, in the introduction to their post, Levitt and Dubner say,

" ... humans love to predict the future, but are generally terrible at it."


I disagree, especially in sports. Yes, very few people can beat the point spread year after year. But that doesn't show that the experts don't know what they're doing. It shows that they DO! Because, after all, it's humans that set the Vegas line, the one that's so hard to beat!

I'd argue 180 degrees the opposite. In sports, humans, working together, have become SO GOOD at predicting the future, that nobody can add enough ingenuity to regularly beat the consensus prediction!




9 Comments:

At Tuesday, September 27, 2011 2:48:00 PM, Blogger Matt Crawford said...

I hear that exact same logic all the time with NCAA Basketball pools -- since every year there are some upsets, therefore you should pick some upsets. Completely wrong, as you said.

 
At Tuesday, September 27, 2011 6:43:00 PM, Blogger p e t e said...

He's also only talking about division winners. So some of those incorrect picks may have tied for the division lead and lost on tiebreakers. Others may have lost the division by a game or two and still made the playoffs.

And divisions where the winner wins less than 12 games are harder to pick as well.

I think you would find that the experts' picks are in the top two (or in the playoffs, or separated from the division winner by two wins or less) at a rate much higher than random chance.

 
At Tuesday, September 27, 2011 7:53:00 PM, Blogger Cyril Morong said...

Not sure if this is relevant or if it makes sense. If 25% is the amount you expect to get right from random chance and you predict 36% correctly, it seems that you were 44% better than random chance (11/25 = .44). That seems pretty good. Maybe such a calculation on an issue like this is meaningless. I really don't know.

If you are flipping a coin and someone predicts heads or tails and they are 44% better than random guessing, that might be a little scary.

 
At Wednesday, September 28, 2011 1:09:00 AM, Blogger j holz said...

Things like this really make me wonder whether any of their regressions on social research are mathematically sound.

I especially like the part about completely ruling out one team per division just to make the numbers prettier. The 2008 Dolphins and Titans won their divisions as enormous longshots, but apparently they don't count. Neither do last year's Seahawks or Chiefs, or this year's Redskins or Bills.

 
At Thursday, September 29, 2011 10:46:00 PM, Anonymous Paul Thomas said...

Matt-- the reason to pick a (small) number of upsets in your March Madness pool (setting aside the situations where lower-seeded teams should be favored, which does happen with 5-ish games a tournament) is not to increase your overall score, it's to increase the VARIANCE in your possible score, so that you're more likely to achieve a score high enough to finish in first place.

In other words, you're more likely to score fewer points, but you're also more likely to win.

Of course, the way most sites do their pools (with points doubling every round, which is ludicrous), the first two rounds are utterly irrelevant to the winner of the thing, so none of this matters anyway... might as well have some fun with it.

But even if you're in a serious pool with a sane scoring system, calling SOME upsets correctly is a must if you've got a decent number of people in the pool.

 
At Saturday, October 01, 2011 5:26:00 PM, Anonymous Anonymous said...

In NCAA pools, score maximization is not the same as win probability maximization. You have to account for how many other people have the same champion pick as you - these are effectively the people you're really competing against. There are some papers about this which are quite interesting.

 
At Friday, October 07, 2011 12:11:00 AM, Anonymous Heikoop said...

I wouldn't assert that someone who picked the Bills was a poor prognosticator. That is, the odds of the Bills beating the Pats before the game were, as you stated, about 25%. If the Bills were facing the Pats again the next weekend (after beating the Pats), would the odds increase? Certainly; after all, they had just defeated the Pats the previous weekend.

Now, the reason the odds increased isn't strictly based on the fact that the Bills won against the Pats, rather it is that the 'experts' saw something that made the Bills beating the Pats possible to occur again.

Thus, the prognosticator could have seen something in previous weeks that made them choose the Bills, not a flip of the coin.

 
At Friday, October 07, 2011 3:33:00 AM, Anonymous Anonymous said...

One bad assumption worth noting here. Lines are not set primarily based on the probability of winning, but by the House trying to equalize the money wagered on each side of the line. That's how they make money. In the particular example of the Pats game, their offense had looked spectacular, which had hidden from the casual fan, and most likely from most bettors, the fact that their defense had been almost equally porous. This inflated the line greatly and made a Bills pick not outlandish.

 
At Tuesday, January 31, 2012 5:45:00 PM, Anonymous Anonymous said...

i once followed a betting system and lost money. not fun at all!

 
