## Sunday, January 04, 2009

### The home underdog effect isn't holding up

A year ago, I posted about a Steven Levitt study showing that NFL bettors don't seem to like home underdogs. In the online tournament Levitt studied, it turned out that when the home team is the dog, only 31.8% of bettors backed it – the other 68.2% bet on the road favorite.

And, as it turned out, the home underdogs would have been good bets; they beat the spread almost 58% of the time. Levitt hypothesized that because bookies know their customers prefer road favorites, they shade the point spread in favor of the home team, in order to win more of those bets.

But in a post last week on Freakonomics, Levitt revisited the home underdog effect, and found that, over the past two seasons, it no longer held. In 2007, home underdogs were 44-45-1 against the spread; in 2008, they were 32-45-2.

Why the change? Levitt thinks it's just luck, and not a change in the way bookmakers set their lines.

Thinking about the issue again, I wonder if the original finding, that 58% of home underdogs beat the spread, might itself have been just luck. In Levitt's original paper, there were 2,286 bets on home underdogs. But that's the number of bets, not the number of games. There were only 85 NFL games studied in total, which means there were probably fewer than 40 home underdogs to bet on. 58% of 40 games is only 23-17, which is really not that big a deal, is it?
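To put a rough number on how unremarkable a 23-17 record is, here's a quick binomial calculation. (The 40-game count is my back-of-envelope estimate from the text above, not a figure from the paper.)

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If home dogs went 23-17 against the spread (about 58% of ~40 games),
# how often would a fair coin do at least that well?
p = binom_tail(40, 23)
print(round(p, 3))
```

The answer is on the order of one in five, which is nowhere near conventional statistical significance.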

When Levitt looked at 21 seasons' worth of NFL betting lines, instead of just those 85 games, home underdogs were still a good bet, beating the spread 53.3% of the time. That's significant, but not as impressive as 58%. Assuming 53.3% is the "real" figure, that means that in 2007-8, home dogs should have gone 90-79. They actually went 76-90-3, or 13 games below what they "should" have. That's almost exactly two standard deviations.
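The back-of-envelope arithmetic in that paragraph looks like this (pushes are excluded from the decision count, which is the usual convention):

```python
import math

# 2007-08 home underdogs: 76-90-3 against the spread
wins, losses, pushes = 76, 90, 3
decisions = wins + losses            # 166 games with a decision

p = 0.533                            # long-run cover rate from 21 seasons
expected = p * decisions             # expected wins at the "real" rate
sd = math.sqrt(decisions * p * (1 - p))  # binomial standard deviation

z = (expected - wins) / sd
print(round(expected - wins, 1), round(z, 2))  # → 12.5 1.94
```

So the shortfall is about 12 or 13 wins, and the z-score comes out just under 2, matching the "almost exactly two standard deviations" in the text.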

If you don't assume 53.3% is the "real" number, and instead you test, statistically, for the difference between the two means (53.3% earlier, and 45.9% later), you wind up with a bit less than two standard deviations. I'd agree with Levitt that it's probably just luck.
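For completeness, here's the shape of that two-proportion comparison in code. The post doesn't say how many home-dog games were in the 21-season sample, so the 1,000 below is a placeholder I made up (roughly 48 per season); the point is the form of the test, not the exact z-score.

```python
import math

# Earlier period: 53.3% over 21 seasons; n1 = 1,000 is a PLACEHOLDER, not from the source
n1, p1 = 1000, 0.533
# 2007-08: 76 covers in 166 decisions
n2, p2 = 166, 76 / 166

# Pooled two-proportion z-test
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(round(z, 2))
```

With a plausible sample size plugged in, z lands a bit under 2, consistent with the "probably just luck" conclusion.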


At Sunday, January 04, 2009 9:12:00 PM,  Brian Burke said...

I think it has mostly to do with weather, such as when dome teams have to play in cold weather.

Betting markets might factor in home field advantage as a one-size-fits-all 3 points or so. Much of the advantage comes when the Vikings play in Chicago, or the Colts play in New England. This may be what causes the observed inefficiency.

Undoubtedly, luck has a lot to do with it, but much of it is also due to how many adverse weather match-ups there are. If lots of the dome team vs. northern climate games come in the early months of the season, the home underdog effect won't be as prominent.

I haven't counted how many such games there were this year and compared that to other years. But now that I think about it, I don't recall very many.

Here is a study on the phenomenon.

Here are a couple of relevant posts of mine about it. (The second graph in the second article is most pertinent.)

At Sunday, January 04, 2009 11:37:00 PM,  Anonymous said...

I certainly don't find any phenomenon like that against the spread in college football (see this db: http://www.qdsanalytics.com/2/fb.asp), but I may be missing the question somehow.