Do fans care about "uncertainty of outcome"?
One of the arguments in "The Wages of Wins" is that fan interest in a game is proportional to the amount of "uncertainty of outcome" in the game. For instance, two closely-matched teams playing each other should draw higher interest than the Yankees against the Royals.
The TWOW authors illustrate this theory with the example of ESPN Classic, the channel that broadcasts reruns of historic games. Even though these are some of the best games in sports history, the authors argue, ratings are very, very low. According to the authors, that's because viewers value uncertainty, and everyone knows who's going to win the rerun.
I can't fully agree. I don't think the reason few people watch the reruns is just because they know who's going to win – it's because they also know the score, and the sequence of events, and the plot. It's also because they may have seen the game before. It's also that live games have a more "participatory" feel than taped ones. And it's also because live games are "news," while old ones aren't. (Would you watch a game tape-delayed by six hours, even if you had no idea what happened? I wouldn't.)
You can understand that fans don't want to watch an old game without necessarily believing that they care, for a live game, whether a team is a 55% favorite or a 60% favorite.
So I decided to look up the three studies the authors cited. I only found one of them online (via my public library), and I don't find it at all convincing.
It's from the Fall 1992 issue (36.2) of the journal "American Economist." It's by Glenn Knowles, Keith Sherony, and Mike Haupert, and it's called "The Demand for Major League Baseball: A Test of the Uncertainty of Outcome Hypothesis." (I couldn't find a free copy online, not even an abstract.)
Knowles, Sherony and Haupert take every game in the 1988 National League. They run a regression to predict attendance based on these variables:
Games Behind (sum of GB for both teams)
Whether the game is a weekend game
Whether the game is a night game
Population of home city
Unemployment rate of home city
Per Capita Income of home city
Distance between the teams' two cities
The game's betting line, entered as a quadratic.
Their findings: the two income variables are not significant. All the other variables are significant in the expected direction (for instance, the bigger the city, the higher the attendance).
And, most important for this study, the authors find that attendance increases up to the point where the home team has a .600 chance of winning; then it declines.
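Mechanically, that ".600" figure is just the vertex of the fitted quadratic: if attendance is modeled as b0 + b1·p + b2·p² in the home team's win probability p, and b2 comes out negative, attendance peaks at p = -b1/(2·b2). Here's a sketch of how such a specification behaves, using simulated data and invented coefficients (the paper's actual estimates aren't reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated stand-ins for the paper's variables (all coefficients invented)
p = rng.uniform(0.3, 0.8, n)        # home team's win probability, from the betting line
gb = rng.uniform(0, 30, n)          # sum of both teams' games behind
weekend = rng.integers(0, 2, n)     # weekend-game dummy
night = rng.integers(0, 2, n)       # night-game dummy
pop = rng.uniform(1, 10, n)         # home-city population, millions

# "True" attendance (thousands), built to peak at p = 12/(2*10) = .600
y = (5 + 12*p - 10*p**2 - 0.05*gb + 0.3*weekend
     + 0.2*night + 0.5*pop + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), p, p**2, gb, weekend, night, pop])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b1, b2 = beta[1], beta[2]
print(round(-b1 / (2 * b2), 2))     # recovered peak win probability, close to .60
```

The point is purely mechanical: a positive coefficient on p plus a negative one on p² is what produces an interior "optimal" win probability. The regression itself can't say *why* attendance falls off past it.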
The authors conclude:
"The uncertainty of the game's outcome [is] indeed a significant determinant of attendance. In addition it was shown that attendance is maximized when the home team is slightly favored. ... These results indicate that competitive balance is important in MLB, and that league-wide cooperation in establishing it will increase attendance ..."
But there are obvious problems with the regression.
-- the assumption that attendance should depend on the sum of the two teams' GB is not reasonable, since, for the most part, paying customers are fans of only the home team. If the home team is one game out of first place, and the visiting team is 20 games out, you'd expect attendance to be high – the other way around, you'd expect it to be low.
-- GB has different meaning early in the season vs. late in the season, but the regression treats them the same.
-- why should attendance be based on the distance between the two cities? Do you really expect more Yankee fans to come out to see the Twins than the Mariners because Minnesota is only half as far away? An indicator variable for "traditional rivalry" or something would have been more appropriate.
-- while attendance does correlate with city size, the relationship is not perfect. Some large cities, it is argued, don't have as much per-capita fan support for cultural reasons – I've heard Philadelphia described this way. If, in 1988, baseball hotbed teams tended to be close to .500, while less fanatical cities tended to be at the extremes, this could cause the observed effect without anything to do with in-game parity.
-- in 1988, the teams with the best records happened to be the ones with the highest populations – New York and Los Angeles. Suppose that attendance isn't exactly proportional to population, but drops off after a few million people in the Metro area (which kind of makes sense – once a game is sold out, it's sold out, no matter how many people want to attend). Suppose that, for this reason (or any other), the regression overestimates the Mets' expected attendance. Then, the regression might falsely attribute the difference to the fact that the Mets are so good that they play too many "unbalanced" games.
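The sellout point in that argument is easy to demonstrate: if demand rises with population but attendance is capped by stadium capacity, a straight-line fit of attendance on population will overshoot the biggest cities, and the regression has to blame something else for the shortfall. A toy illustration (all numbers invented):

```python
import numpy as np

pop = np.linspace(1, 10, 50)          # city population, millions (invented)
demand = 0.8 * pop                    # demand grows with population
att = np.minimum(demand, 4.0)         # ...but attendance is capped by capacity

slope, intercept = np.polyfit(pop, att, 1)   # naive linear fit of attendance on pop
fitted_biggest = slope * pop[-1] + intercept
print(fitted_biggest > att[-1])       # True: biggest city looks like it "underperforms"
```

The big-city shortfall is entirely an artifact of the cap, but a regression that doesn't model the cap will attribute it to whatever else distinguishes big-city teams – such as, in 1988, playing lots of lopsided games.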
It's possible that despite these problems, the results still hold. To find out, you'd have to fix some of these issues and see if the ".600" result changes. (My intuition is that it would indeed change.)
And there's another argument. Conventional wisdom is that fans want to see their team win, but, also, they come out to see good opposition, and noted superstar players on the opposing team. If both those are true, what should happen?
First, more fans should come out when the home team has a higher chance of winning the game. And, second, fewer fans should come out when the other team doesn’t have any glamour.
This would explain the ".600" result perfectly. Attendance increases as the team improves from .500 – but, when the opposition is a bad team (and therefore the chance of winning increases past .600), some fans decide to stay home.
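That stay-home story is enough to produce a ".600" peak with no taste for uncertainty at all. Here's a deliberately simple toy model (all functional forms and numbers invented): interest in winning rises steadily with the home team's win probability, while the opponent's glamour is roughly constant until the opponent gets bad enough, then drops off fast.

```python
import numpy as np

p = np.arange(0.40, 0.76, 0.01)               # home team's win probability

win_appeal = 10 * p                           # rises steadily with p
glamour = 8 * np.minimum(1.0, (1 - p) / 0.4)  # flat until the opponent is weak (p > .6)
attendance = win_appeal + glamour

best_p = p[np.argmax(attendance)]
print(round(best_p, 2))                       # attendance peaks at p = 0.6
```

Every fan in this model strictly prefers a higher chance of winning; the downturn past .600 comes entirely from the glamour term. A regression that sees only the quadratic shape can't tell the two stories apart.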
Under that theory, it may *appear* that fans care about uncertainty of outcome, but what they really care about is watching their good team play another decent team.
And that latter theory seems a lot more plausible to me.