NFL home underdogs win late in the season
A while ago, Steven Levitt pointed out that betting on NFL home underdogs is a winning strategy. In the sample he covered, home dogs won more than 53% of the time.
In the comments to my last post on the subject, Brian Burke pointed me to an academic study that found the effect is almost completely caused by late season effects.
That paper is called "The Late-Season Bias: Explaining the NFL's Home Underdog Effect." It's by Richard Borghesi of Texas State University.
In a series of tables, Borghesi shows that in weeks 15-18 of the NFL season, home teams tend to overperform relative to the spread. First, the raw numbers. In weeks 1-14, home teams were favored by an average of 2.57 points, but actually won by only 2.48 points. In the later weeks, the spread dropped slightly, to 2.40 points, but the home teams won by 4.46 points. That is, in the 704 late-season games in the sample (1981-2000), the home team beat the spread by an average of 2.06 points.
The effect was even more extreme in the 188 playoff games, where the home team beat the 5.75 point spread by 2.85 points.
Here it is in an easier-to-read font:
Weeks 1-14: visiting team beat spread by 0.09 points
Weeks 15-18: home team beat spread by 2.06 points
Playoffs: home team beat spread by 2.85 points
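The arithmetic behind those three lines is just the gap between the average spread and the average actual margin, from the home team's perspective. Here's a minimal Python sketch (the averages are Borghesi's; the function name is mine):

```python
def cover_margin(avg_spread, avg_margin):
    """Average points by which the home team beat the spread.

    avg_spread: average points the home team was favored by
    avg_margin: average points the home team actually won by
    Positive result: home team beat the spread; negative: visitors did.
    """
    return avg_margin - avg_spread

# Weeks 1-14: favored by 2.57, won by 2.48
print(round(cover_margin(2.57, 2.48), 2))  # -0.09 (visitors beat the spread)
# Weeks 15-18: favored by 2.40, won by 4.46
print(round(cover_margin(2.40, 4.46), 2))  # 2.06
# Playoffs: favored by 5.75, won by 8.60
print(round(cover_margin(5.75, 8.60), 2))  # 2.85
```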
If you consider only home underdogs, the effect persists. This is just the home dogs, from Borghesi's Table 3:
Weeks 1-14: home underdogs beat spread by 0.29 points
Weeks 15-18: home underdogs beat spread by 3.13 points
Playoffs: home underdogs beat spread by 11.33 points
The playoff effect is a huge 11.33 points, but there were only 18 games in the sample.
In Table 4, Borghesi shows that the effect is getting stronger over time. Here are the "late home underdogs" by era:
1981-1985: late home underdogs beat spread by 1.28 points
1986-1990: late home underdogs beat spread by 2.95 points
1991-1995: late home underdogs beat spread by 2.42 points
1996-2000: late home underdogs beat spread by 6.92 points
Why is the effect confined to the later weeks of the season? Could it be the weather? In Table 5, Borghesi defines "cold weather advantage" as a game in which a northern team hosts a southern team. The home advantage is greatest in the cold months:
Aug-Sept: cold weather teams failed to cover spread, by 1.36 points
October: cold weather teams beat spread by 0.84 points
November: cold weather teams beat spread by 1.49 points
Dec-Jan: cold weather teams beat spread by 1.93 points
Remember, none of these tables tell us anything about the relative skills of the teams – just how they did against the spread. Since the spread could (and probably does) adjust somewhat for these effects, we don't actually know whether warm weather teams play worse in the cold. It's possible that, say, the Dolphins actually play a touchdown *better* in Green Bay in December, but the spread corrects by 10 points, making the Packers three points better than the spread. Admittedly that's unlikely, but the point is that we can only draw conclusions about the betting line, and not about the teams themselves.
How much of the "home underdog" effect is caused by the weather? To find out, you'd have to take all the home underdog games, subtract out the weather-related ones, and see what's left. But I don't think Borghesi gives us enough data to do that.
Borghesi then posits a few simple betting rules, and figures out how you'd have done if you'd followed them:
.4905 -- bet on early-season home teams
.5144 -- bet on early-season home underdogs
.5422 -- bet on late-season home teams
.6000 -- bet on late-season home underdogs
.6071 -- bet on late-season home 2+ (points) underdogs
.5789 -- bet on late-season home 8+ (points) underdogs
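For context on those win rates: at the standard -110 line (risk $110 to win $100), the break-even rate is 110/210, or about .524. A quick check against that threshold (my assumption of the standard vig, not anything stated in the paper):

```python
def breakeven_win_rate(risk=110, win=100):
    """Win rate needed to break even, given the bookmaker's vig."""
    return risk / (risk + win)

be = breakeven_win_rate()  # about .5238 at the standard -110 line
rules = {
    "early-season home teams": 0.4905,
    "early-season home underdogs": 0.5144,
    "late-season home teams": 0.5422,
    "late-season home underdogs": 0.6000,
}
for rule, rate in rules.items():
    # Only the late-season rules clear the break-even threshold
    print(rule, "profitable" if rate > be else "unprofitable")
```

So even the .5144 early-season home underdog rule loses money after the vig; the late-season rules are the only profitable ones.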
And, finally, Borghesi runs a couple of regressions. The first one tries to predict the winner based on which team is at home, and the spread. Its predictions are 55% accurate early in the season, and 57% accurate late in the season. However, the regression is run separately for each year, which means the results can take advantage of pretending to "understand" random year-to-year fluctuations. When the same regression formulas are used to predict *next year's* winners, the results drop back to 50%.
However: Borghesi finds that if you run the regression every week, based only on the last month's games, you can predict next week's winners late in the season at a 53% clip. But that's only a 361-317 record, which is less than 2 SDs from .500.
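The "less than 2 SDs" claim checks out, assuming a simple binomial model with a true win rate of .500 (my sketch, not the paper's calculation):

```python
import math

wins, losses = 361, 317
n = wins + losses                # 678 games
expected = n * 0.5               # 339 wins under the null
sd = math.sqrt(n * 0.5 * 0.5)    # binomial SD, about 13.0 wins
z = (wins - expected) / sd       # about 1.69, short of 2 SDs
print(round(wins / n, 3), round(z, 2))
```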
One last regression in the paper tries to retrospectively predict winners based not only on the spread and the home team, but various performance statistics of the two teams. This gives 67% accuracy, but so what? Again, the regression might just be picking up randomness and spurious correlations.
(Why are the regressions there? Maybe (he said, mischievously) you can't get a paper published just by counting and tabulating things, even if you answer the question better than by more involved methods.)
If you ignore the regressions, the paper does present very good evidence that the entire "home underdog" effect happens late in the season, and that weather is at least partially responsible for it.