Thursday, April 21, 2016

Noll-Scully doesn't measure anything real

The most-used measure of competitive balance in sports is the "Noll-Scully" measure. To calculate it, you figure the standard deviation (SD) of the winning percentage of all the teams in the league. Then, you divide by what the SD would be if all teams were of equal talent, and the results were all due to luck.

The bigger the number, the less parity in the league.

For a typical, recent baseball season, you'll find the SD of team winning percentage is around .068 (that's 11 wins out of 162 games). By the binomial approximation to normal, the SD due to luck is .039 (6.4 out of 162). So, the Noll-Scully measure works out to .068/.039, which is around 1.74.

In other words: the spread of team winning percentage in baseball is 1.74 times as high as if every team were equal in talent.
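If you want to play with the numbers yourself, here's a minimal sketch of that calculation in Python. The function name and the .068 figure are just the example above, nothing official:

```python
import math

def noll_scully(observed_sd, games):
    # SD of winning percentage if every team were a coin-flip .500 team
    luck_sd = 0.5 / math.sqrt(games)
    return observed_sd / luck_sd

# the MLB example: observed SD of .068 over a 162-game season
print(noll_scully(0.068, 162))   # about 1.73
```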

------

Both "The Wages of Wins" (TWOW), and a paper on competitive balance I just read recently (which I hope to post about soon), independently use Noll-Scully to compare different sports. Well, not just them, but a lot of academic research on the subject.

The Wages of Wins (page 70 of the second edition) runs this chart:

2.84 NBA
1.81 AL (MLB)
1.67 NL (MLB)
1.71 NHL
1.48 NFL

The authors follow up by speculating on why the NBA's figure is so high, why the league is so unbalanced. They discuss their "short supply of tall people" hypothesis, as well as other issues.

But one thing they don't talk about is the length of the season. In fact, their book (and almost every other academic paper I've seen on the subject) claims that Noll-Scully controls for season length. 

Their logic goes something like this: (a) The Noll-Scully measure is actually a multiple of the theoretical SD of luck. (b) That theoretical SD *does* depend on season length. (c) Therefore, you're comparing the league to what it would be with the same season length, which means you're controlling for it.

But ... that's not right. Yes, dividing by the theoretical SD *does* control for season length, but not completely.

------

Let's go back to the MLB case. We had

.068 observed SD
.039 theoretical luck SD
-------------------------
1.74 Noll-Scully ratio

Using the fact that SDs follow a pythagorean relationship, it follows that

observed SD squared = theoretical luck SD squared + talent SD squared

So

.068 squared = .039 luck squared + talent squared

Solving, we find that the SD of talent = .056. Let's write that this way:

.039 theoretical luck SD
.056 talent SD
------------------------
.068 observed SD
---------------------------------------
1.74 Noll-Scully (.068 divided by .039)
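In code, the same pythagorean step looks like this (again just a sketch, using the same illustrative numbers):

```python
import math

def talent_sd(observed_sd, games):
    luck_sd = 0.5 / math.sqrt(games)   # binomial luck SD
    return math.sqrt(observed_sd ** 2 - luck_sd ** 2)

print(talent_sd(0.068, 162))   # about .056
```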

Now, a hypothetical. Suppose MLB had decided to play a season four times as long: 648 games instead of 162. If that happened, the theoretical luck SD would drop in half (we'd divide by the square root of 4). So, the luck SD would be .020. 

The talent SD would remain constant at .056. The new observed SD would be the square root of (.020 squared plus .056 squared), which works out to .059:

.020 theoretical luck SD
.056 talent SD
-------------------------
.059 observed SD
---------------------------------------
2.95 Noll-Scully (.059 divided by .020)

Under this scenario, the Noll-Scully increases from 1.74 to 2.95. But nothing has changed about the game of baseball, or the short supply of home run hitters, or the relative stinginess of owners, or the populations of the cities where the teams play. All that changed was the season length.
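Here's that hypothetical as a quick script. It holds the talent SD fixed at .056 and changes only the number of games; because it carries unrounded intermediate values, the long-season ratio comes out a shade above the 2.95 you get from the rounded .059 and .020:

```python
import math

talent = 0.056   # SD of talent, held fixed

for games in (162, 648):
    luck = 0.5 / math.sqrt(games)
    observed = math.sqrt(talent ** 2 + luck ** 2)
    print(games, round(observed, 3), round(observed / luck, 2))

# 162 games: observed ~.068, Noll-Scully ~1.74
# 648 games: observed ~.059, Noll-Scully ~3.0
```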

--------

My only point here, for now, is that Noll-Scully does NOT properly control for season length. Any discussion of why one sport has a higher Noll-Scully than another *must* include a consideration of the length of the season. Generally, the longer the season, the higher the Noll-Scully. (Try a Noll-Scully calculation early in the season, like today, and you'll get a very low number. That's because after only 15 games, luck is huge, so talent is small compared to luck.)

It's not like there's no alternative. We just showed one! Instead of Noll-Scully, why not just calculate the "talent SD" as above? That estimate *is* independent of season length, and it's still a measure of what academic authors are looking for. 

Tango did this in a famous post in 2006. He got

.060 MLB
.058 NHL
.134 NBA

If you repeat Tango's logic for different season lengths, you'll get the same numbers.  Well, you'll get different results because of random variation ... but they should average somewhere close to those figures.

---------

Now, you could argue ... well, sometimes you *do* want to control for season length. Perhaps one of the reasons the best teams dominate the standings is because NBA management wanted it that way ... so they chose a longer, 82-game season, in order to create some space between the Warriors and the other teams. Furthermore, maybe the NFL deliberately chose 16 games partly to give the weaker teams a chance.

Sure, that's fine. But you don't want to use Noll-Scully there either, because Noll-Scully still *partially* adjusts for season length, by using "luck multiple" as its unit. Either you want to consider season length, or you don't, right? Why would you only *partially* want to adjust for season length? And why that particular part?

If you want to consider season length, just use the actual SD of the standings. If you don't, then use the estimated SD of talent, from the pythagorean calculation. 

Either way, Noll-Scully doesn't measure anything anybody really wants.






Wednesday, April 08, 2015

How much has parity increased in MLB?

The MLB standings were tighter than usual in 2014. No team won, or lost, more than 98 games. That's only happened a couple of times in baseball history.

You can measure the spread in the standings by calculating the standard deviation (SD) of team wins. Normally, it's around 11. Two years ago, it was 12.0. Last year, it was down substantially, to 9.4.

Historically, 9.4 is not an unprecedented low. In 1984, the SD was 9.0; that's the most parity of any season since the sixties. More recently, the 2007 season came in at 9.3, with a team range of 96 wins to 96 losses.

But this time, people are noticing. A couple of weeks ago, Ben Lindbergh showed that this year's preseason forecasts have been more conservative than usual, suggesting that the pundits think last year's compressed standings reflect increased parity of talent. They've also noted another anomaly: in 2014, payroll seemed to be less important than usual in predicting team success. These days, the rich teams don't seem to be spending as much, and the others seem to be spending a little more.

So, have we actually entered into an era of higher parity, where we should learn to expect tighter playoff races, more competitive balance, and fewer 100-win and 100-loss teams? 

My best guess is ... maybe just a little bit. I don't think the instant, single-season drop from 12 games to 9.4 games could possibly be the result of real changes. I think it was mostly luck. 

-----

Here's the usual statistical theory. You can break down the observed spread in the standings into talent and luck, like this: 

SD(observed)^2 = SD(talent)^2 + SD(luck)^2

Statistical theory tells us that SD(luck) equals 6.4 games, for a 162-game season. With SD(observed) equal to 12.0 for 2013, and 9.4 for 2014, we can solve the equation twice, and get

2013: 10.2 games estimated SD of talent 
2014:  7.0 games estimated SD of talent

That's a huge one-season drop, from 10.2 to 7.0 ... too big, I think, to really happen in a single offseason. 
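For anyone who wants to check those two estimates, here's the arithmetic as a quick sketch (depending on rounding, the 2014 figure comes out at 6.9 or 7.0):

```python
import math

luck_sd = math.sqrt(162 * 0.5 * 0.5)   # about 6.4 wins of binomial luck over 162 games

for year, observed in (("2013", 12.0), ("2014", 9.4)):
    talent = math.sqrt(observed ** 2 - luck_sd ** 2)
    print(year, round(talent, 1))      # 2013: ~10.2   2014: ~6.9
```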

Being generous, suppose that between the 2013 and 2014 seasons, teams changed a third of their personnel. That's a very large amount of turnover. Would even that be enough to cause the drop?

Nope. At least, not if that one-third of redistributed "talent wealth" was spread equally among teams. In that case, the SD of the "new" one-third of talent would be zero. But the SD of the remaining two-thirds of team talent would be 8.3 (the 2013 figure of 10.2, multiplied by the square root of 2/3).

That 8.3 is still higher than our 7.0 estimate for 2014! So, for the SD of talent to drop that much, we'd need the one-third of talent to be redistributed, not equally, but preferentially to the bad teams. 

Is that plausible? To how large an extent would that need to happen?

We have a situation like this: 

2014 talent = original two thirds of 2013 talent 
            + new one third of 2013 talent 
            + redistribution favoring the worse teams

Statistical theory says the relationship between the SDs is this:

SD(2014 talent)^2 = 

SD(2013 talent teams kept)^2 +
SD(2013 talent teams gained)^2 + 
SD(2013 talent teams kept) * SD(2013 talent teams gained) * correlation between kept and gained * 2

It's the same equation as before, but with an extra term (the correlation term at the end). That term shows up because we're allowing a non-zero correlation between talent kept and talent gained -- that the more "talent kept," the less your "talent gained". When we just did "talent" and "luck", we were assuming there was no correlation, so we didn't need that extra part. (We could have left it in, but it would have worked out to zero anyway.)

The equation is easy to fill in. We saw that SD(2014 talent) was estimated at 7.0. We saw that SD(talent teams kept) was 8.3. And we can estimate that SD(talent teams gained) is 10.2 times the square root of 1/3, which is 5.9.

If you solve, you get 

Correlation between kept and gained = -0.57

That's a very strong correlation we need, in order for this to work out. The -0.57 means that, on average, if a team's "kept" players were, say, 5th best in MLB (that is, 10 teams above average), its "gained" players must have been 9th worst in MLB (5.7 teams below average). 

That's not just the good teams getting worse by signing players who aren't as good as the above-average players they lost -- it's the good teams signing players who are legitimately mediocre. And, vice-versa: at -0.57, the worst teams in baseball would have had to replace one-third of their lineup with new players collectively as good as those typically found on a 90-win team.
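If you want to reproduce that -0.57, here's the algebra as a sketch, carrying unrounded intermediate values (with the rounded 8.3 and 5.9 you get about -0.56 instead):

```python
import math

luck = 6.4
talent_2013 = math.sqrt(12.0 ** 2 - luck ** 2)   # ~10.2
talent_2014 = math.sqrt(9.4 ** 2 - luck ** 2)    # ~6.9 (the "7.0" above)

kept = talent_2013 * math.sqrt(2 / 3)    # two-thirds of 2013 talent kept
gained = talent_2013 * math.sqrt(1 / 3)  # one-third redistributed

# solve: talent_2014^2 = kept^2 + gained^2 + 2 * kept * gained * r
r = (talent_2014 ** 2 - kept ** 2 - gained ** 2) / (2 * kept * gained)
print(round(r, 2))   # about -0.57
```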

Did that actually happen? It's possible ... but it's something you'd have been able to see at the time. I think we can say that nobody noticed -- going into last season, it didn't seem like any of the mainstream projections were more conservative than normal. (Well, with the exception of FanGraphs. Maybe they saw some of this actually happen? Or maybe they just used a conservative methodology.)

One thing that WAS noticed before 2014 is that the 51-111 Houston Astros had improved substantially. So that's at least something.

And, for what it's worth: the probability of randomly getting a correlation coefficient as extreme as 0.57, in either direction, is 0.001 -- that is, one in a thousand. On that basis, I think we can reject the hypothesis that team talent grew tighter just randomly.

(Technical note: all these calculations have assumed that every team lost *exactly* one-third of its talent, and that those one-thirds were kept whole and distributed to other teams. If you were to use more realistic assumptions, the chances would improve a little bit. I'm not going to bother, because, as we'll see, there are other possibilities that are more likely anyway.)

------

So, if it isn't the case that the spread in talent narrowed ... what else could it be? 

Here's one possibility: instead of SD(talent) dropping in 2014, SD(luck) dropped. We were holding binomial luck constant at 6.4 games, but that's just the average. It varies randomly from season to season, perhaps substantially. 

It's even possible -- though only infinitesimally likely -- that, in 2014, every team played exactly to its talent, and SD(luck) was actually zero!

Except that ... again, that wouldn't have been enough. Even with zero luck, and talent of 10.2, we would have observed an SD of 10.2. But we didn't. We observed only 9.4. 

So, maybe we have another "poor get richer" story, where, in 2014, the bad teams happened to have good luck, and the good teams happened to have bad luck.

We can check that, in part, by looking at the 2014 Pythagorean projections. Did the bad teams beat Pythagoras more than the good teams did?

Not really. Well, there is one obvious case -- Oakland. The 2014 A's were the best team in MLB in run differential, but won only 88 games instead of 99 because of 11 games of bad "cluster luck". 

Eleven games is unusually large. But, the rest of the top half of MLB had a combined eighteen games of *good* luck, which seems like it would roughly cancel things out.

Still ... Pythagoras is only a part of overall luck, so there's still lots of opportunity for the "good teams having bad luck" to have manifested itself. 

-----

Let's do what we did before, and see what the correlation would have to be between talent and luck, in order to get the SD down to 9.4. The relationship, again, is:

SD(2014 standings)^2 = 

SD(2014 talent)^2 + 
SD(2014 luck)^2   + 
SD(2014 talent) * SD(2014 luck) * correlation between 2014 talent and 2014 luck * 2 

Leaving SD(2014 talent) at the 2013 estimate of 10.2, and leaving SD(2014 luck) at 6.4, we get

Correlation between 2014 talent and luck = -0.39

The chance of a correlation that big (either direction) happening just by random luck is 0.033 -- about 1 in 30. That seems like a big enough chance that it's plausible that's what actually happened. 

Sure, 1 in 30 seems low, and is statistically significantly unlikely in the classical "1 in 20" sense. But that doesn't matter. We're being Bayesian here. We know something unlikely happened, and so the reason it happened is probably also something unlikely. And the 1/30 estimate for "bad teams randomly got lucky" is a lot more plausible than the 1/1000 estimate for "bad teams randomly got good talent."  It might also be more plausible than "bad teams deliberately got good talent," considering that nobody noticed any unusual changes in talent at the time.

------

Having got this far, I have to backtrack and point out that these odds and correlations are actually too extreme. We've been assuming that all the unusualness happened after 2013 -- either in the offseason, or in the 2014 season. But 2013 might have also been lucky/unlucky itself, in the opposite direction.

Actually, it probably was. As I said, the historical SD of actual team wins is around 11. In the 2013 season, it was 12. We would have done better by comparing the "too equal" 2014 to the historical norm, rather than to a "too unequal" 2013. 

For instance, we toyed with the idea that there was less luck than normal in 2014. Maybe there was also more luck than normal in 2013. 

Instead of 6.4 both years, what if SD(luck) had actually been 8.4 in 2013, and 4.4 in 2014?

In that case, our estimates would be

SD(2013 talent) = 8.6
SD(2014 talent) = 7.6

That would be just a change of 1.0 wins in competitive balance, much more plausible than our previous estimate of a 3.2 win swing (10.2 to 7.0).

------

Still: no matter which of all these assumptions and calculations you decide you like, it seems like most of the difference must be luck. It might be luck in terms of the bad teams winding up with the good players for random reasons, or it might be that 2013 had the good teams having good luck, or it might be that 2014 had the good teams having bad luck.

Whichever kind of luck it is, you should expect a bounceback to historical norms -- a regression to the mean -- in 2015. 

The only way you can argue for 2015 being like 2014 is if you think the entire move from historical norms was due to changes in the distribution of talent between teams, due to economic forces rather than temporary random ones. 

But, again, if that's the case, show us! Personnel changes between 2013 and 2014 are public information. If they did favor the bad teams, show us the evidence, with estimates. I mean that seriously ... I haven't checked at all, and it's possible that it's obvious, in retrospect, that something real was going on.

------

Here's one piece of evidence that might be relevant -- betting odds. In 2014, the SD of Bovada's "over/under" team predictions was 7.16. This year, it's only 6.03.*

(* Bovada's talent spread is tighter than what we expect the true distribution to be, because some of team talent is as yet unpredictable -- injuries, trades, prospects, etc.)

Some of that might be a reaction to bettor expectations, but probably not much. I'm comfortable assuming that Bovada thinks the talent distribution has compressed by around one win.*

Maybe, then, we should expect a talent SD of 8.0 wins, rather than the historical norm of 9.0. That's more reasonable than expecting the 2013 value of 10.2, or the 2014 value of 7.0. 

If the SD of talent is 8, and the SD of luck is 6.4 as usual, that means we should expect the SD of this year's standings to be 10.2. That seems reasonable. 

------

Anyway, this is all kind of confusing. Let me try to summarize everything more understandably.

------

The distribution of team wins was much tighter in 2014 than in 2013. As I see it, there are six different factors that could have contributed to the increase in standings parity:

-- 1. Player movement from 2013 to 2014 brought better players to the worse teams (to a larger extent than normal), due to changes in the economics of MLB.

-- 2. Player movement from 2013 to 2014 brought better players to the worse teams (to a larger extent than normal), due to "random" reasons -- for instance, prospects maturing and improving faster for the worse teams.

-- 3. There was more randomness than usual in 2013, which caused us to overestimate disparities in team talent.

-- 4. There was less randomness than usual in 2014, which caused us to underestimate disparities in team talent.

-- 5. Randomness in 2013 favored the better teams, which caused us to overestimate disparities in team talent.

-- 6. Randomness in 2014 favored the worse teams, which caused us to underestimate disparities in team talent.

Of these six possibilities, only #1 would suggest that the increase in parity is real, and should be expected to repeat in 2015. 

#3 through to #6 suggest that 2013 was a random aberration, and would suggest that 2015 would be more like the historical norm (SD of 11 games) rather than like 2013 (SD of 12 games). 

Finally, #2 is a hybrid -- a one-time random "shock to the system," but with hangover effects into future seasons. If, for instance, the bad teams just happened to have great prospects arrive in 2014, those players will continue to perform well into 2015 and beyond. Eventually, the economics of the game will push everything back to equilibrium, but that won't happen immediately, so much of the 2014 increase in parity could remain.

------

Here's my "gut" breakdown of the contribution each of those six factors:

25% -- #1, changes in talent for economic reasons
 5% -- #2, random changes in talent
10% -- #3, "too much" luck in 2013
20% -- #4, "too little" luck in 2014
10% -- #5, luck favoring the good teams in 2013
30% -- #6, luck favoring the bad teams in 2014

Caveats: (1) This is just my gut; (2) the percentages don't have any actual meaning; and (3) I could easily be wrong.

If you don't care about the reasons, just the bottom line, that breakdown won't mean anything to you. 

As I said, my gut for the bottom line is that it seems reasonable to expect 2015 to end with a standings SD of 10.2 wins ... based on the changes in the Vegas odds.

But ... if there were an over/under on that 10.2, my gut would say to take the over. Even after all these arguments -- which I do think make sense -- I still have this nagging worry that I might just be getting fooled by randomness.




Sunday, September 19, 2010

Does MLB payroll matter less than it used to?

In MLB this year, team payroll barely matters. Money is less important in 2010 than in any season since 1994.

That's according to an article a few days ago in the Wall Street Journal. Unfortunately, I don't think that's right ... or at least, I can't reproduce the result.

The article says,

"According to estimated payroll figures released throughout the season, the correlation beteween a team's player payroll and its winning percentage is 0.14, a number that makes the relationship almost statistically irrelevant. That figure is 67 percent below last year's mark and is easily the lowest since the strike."


However, when I run the same correlation, I get a correlation of .36. Where does the .14 come from? My best guess is that it's the r-squared, since .36 squared equals about .13.

It's also possible that the author of the article used different salary data than what I used -- mine is USA Today data from the beginning of the season. But could that data be so wrong as to turn a .14 into a .36? I doubt it, especially considering USA Today has numbers very similar to Baseball Reference.

The 2010 numbers are roughly in line with what I get for 2008 and 2009:

2010: correlation of .36
2009: correlation of .48
2008: correlation of .29

It seems to me that 2010 is fairly normal. BTW, because the 2010 season isn't over yet, it's actually a bit lower now than it will likely wind up -- but only by a point or two.

The actual measures of payroll vs. wins, obtained from the regressions, are also similar:

2010: $ 8.9 MM in payroll associated with one win
2009: $ 6.2 MM in payroll associated with one win
2008: $12.6 MM in payroll associated with one win

These differences might look large, but they're really not, because of the wide confidence intervals associated with the estimates. For instance, the 2009 estimate has a 95% confidence interval of anywhere between $3.6 MM and $20.4 MM per win. The 2008 estimate isn't even statistically significantly different from zero, with a p-value of .085.

---

Also, these results don't mean that a free-agent win actually costs this much ... other studies have shown the correct number is about $4.5 million per win. These numbers are higher because they look at team payrolls overall, and there are ways to get wins other than free agents. Therefore, the connection between salary and wins is looser overall than if everyone were a free agent.

For instance, suppose team A has $50 MM worth of arbs and slaves good enough for 80 wins, while team B has $45 MM worth of arbs and slaves only good for 75 wins.

Team A buys 5 free-agent wins for $25 MM, bringing it to 85 wins. Team B buys 15 free-agent wins for $75 MM, bringing it to 90 wins. Overall, team B spent $45 MM more than team A, but has only 5 more wins to show for it. The regression shows that wins are associated with $9 MM in spending, when in reality free-agent wins cost only $5 MM each.
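That arithmetic, spelled out in code (these are just the made-up numbers from the paragraph above, in $MM):

```python
team_a_payroll, team_a_wins = 50 + 25, 80 + 5    # arbs/slaves plus free agents
team_b_payroll, team_b_wins = 45 + 75, 75 + 15

dollars_per_win = (team_b_payroll - team_a_payroll) / (team_b_wins - team_a_wins)
print(dollars_per_win)   # 9.0 -- $9MM per marginal win, even though free-agent wins cost $5MM each
```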

There are probably other scenarios that would give you a similar result.

---

The article says that back in 1998, the correlation between payroll and wins was a huge .71. Yup ... I ran a regression, and got .76 (the difference is probably because I used a different data source than the WSJ). The 1998 list is scary. Of the top 15 teams by payroll, only two finished below .500. And of the bottom 15 teams, only one finished above .500. In 1998, payroll was pretty close to destiny.

But that season may have been an anomaly. The WSJ article has a little graph of the trend, and 1998 was chosen for a mention because it's the highest point on the curve. Still, there's an obvious decline in correlation that takes place around 1999-2000. What might have caused that?

As I've mentioned before, a change in correlation doesn't necessarily mean that there's a real change in the relationship between the variables. If teams suddenly decide to all spend similar amounts, that will cause an apparent drop in correlation even if money is just as important as ever. (For instance, if you drop the seven highest-spending teams from the 2010 regression, as well as the seven lowest-spending teams, the correlation drops from .36 all the way down to .10, but the "dollars per win" value stays roughly the same, moving only from $9MM to $12MM.)
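You can see that effect in a simulation. This sketch invents a league where payroll "truly" buys wins at $9 million each, with a fixed amount of luck on top, and then compares the full 30-team regression to one with the seven highest and seven lowest spenders dropped. Every number here is made up for illustration; nothing is fit to real data, and the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
full_r, mid_r, full_slope, mid_slope = [], [], [], []

for _ in range(2000):
    payroll = rng.normal(90, 35, size=30)                        # hypothetical payrolls, $MM
    wins = 81 + (payroll - 90) / 9 + rng.normal(0, 6, size=30)   # built-in $9MM per win, plus luck

    mid = np.argsort(payroll)[7:-7]                              # drop the 7 highest and 7 lowest spenders

    full_r.append(np.corrcoef(payroll, wins)[0, 1])
    mid_r.append(np.corrcoef(payroll[mid], wins[mid])[0, 1])
    full_slope.append(np.polyfit(payroll, wins, 1)[0])           # wins per $MM
    mid_slope.append(np.polyfit(payroll[mid], wins[mid], 1)[0])

print(np.mean(full_r), np.mean(full_slope))   # correlation around .5, slope around 0.11 wins per $MM
print(np.mean(mid_r), np.mean(mid_slope))     # correlation drops a lot, slope stays around 0.11
```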

But there hasn't really been any payroll compression. In 1998, the SD of payroll was 43 percent of the mean. In 2008, 2009, and 2010, it was 44 percent, 38 percent, and 42 percent, respectively.

So what else could it be?

Was there a change in the labor agreement around then that somehow created more slaves and arbs? That would do it, because, the easier it is for poorer teams to keep cheap players, the easier it is for them to compete with a low payroll.

Or, are slaves and arbs better players now than they were then? Joey Votto will earn only $500,000 this year ... if there are more Vottos than there used to be, scattered around the league, that would weaken the link between payroll and success.

Or, maybe with the crackdown on PEDs, older players are retiring earlier, and so slaves and arbs are getting more playing time? I like this theory, but it doesn't really explain the low correlations during the steroid years of 2000-2003.

Any other ideas?




Monday, August 30, 2010

Why are the Yankees willing to give up so much profit?

Last week, someone leaked the financial statements of several Major League Baseball teams. It turned out that two of the worst teams in baseball, the Pirates and Marlins, regularly turned a profit. They did that, in part, by pocketing MLB's revenue sharing payments, and simultaneously keeping their payroll very low.

Actually, the leaked statements didn't tell us a whole lot that we didn't already know. Every year, Forbes magazine comes out with their estimates of baseball teams' financials, and every year there is some criticism that the Forbes estimates are inaccurate. But if you compare the Forbes revenue figures to the teams' numbers on the leaked statements, you'll find that Forbes is pretty close -- in a couple of cases, they're right on. Forbes' *profit* numbers, as opposed to revenue numbers, aren't quite as accurate, but that's to be expected: a 5% discrepancy in revenues can easily lead to a 100% difference in profits.

(Forbes 2010 numbers are here; for other years, Google "forbes mlb 2009" (or whatever year you're looking for)).

Previously, I argued that the small-market teams will never be able to compete with teams like the Yankees and Red Sox, simply because their revenue base is too small. A win on the free agent market costs somewhere around $5 million (Tango uses $4.4 million, which may be more accurate), but brings in a lot less than $5 million in additional revenues for the Pirates or Marlins. And so, they maximize their profit by keeping their payroll down. Long term, teams like those aren't able to compete.

One obvious solution to this problem of long-term competitive imbalance would be to share all revenues equally, and force each team to spend roughly the same amount on payroll. However, that solution has its own problem -- the Yankees have double the revenues of most of the other teams in both leagues. Why should they suddenly be willing to share?

That is: suppose you bought the Yankees for $1.5 billion, thinking you'll pull in $400 million in revenues and make $50 million in profit (numbers made up). If MLB suddenly decides all revenue has to be shared, then, suddenly, you're pulling in only $200 million in revenues. You have to stop signing free agents, and, after everything shakes out, you now have only $25 million in profit.

Since businesses are valued on profits, and your profits are permanently cut in half, your $1.5 billion investment is now worth only $750 million. No matter how rich you are, you won't want to take a bath of $750 million even if it does make baseball better. Seven hundred and fifty million dollars is just too much money.

Or is it?

The thing is, revenue sharing has been around in MLB for almost a decade now. It's not full sharing, which is what I described in the above example, where every team winds up the same. It's just partial sharing. Every team contributes 31% of its revenues to a common pool, which then gets split among all 30 teams. Effectively, if you're above average, you lose 30% of the amount by which you're above average (you pay 31%, but get 1% back as your 1/30 share). If you're below average, you gain 30% of the amount by which you're below average.

That's still a lot of money, then, that the Yankees are losing. In 2009, according to Forbes, the Yankees had revenues of $441 million, as compared to the league average of (I'm estimating just by eyeing the chart) about $200 million. However, I think that $441 million is *after* revenue sharing payments. (Why do I think that? Because in the cases of the actual leaked statements, the Forbes estimates are significantly closer to the revenue statements *after* adjusting for revenue sharing.)

So, the Yankees probably had about $549 million in gross revenues; they then paid $170 million into the pool, and received $62 million back, leaving $441 million.

That is: revenue sharing cost the Yankees $108 million in cash last year.
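Here's that arithmetic as a small function, using the rough figures above (the 31% rate is the real one; the $549 million gross and $200 million league average are just my eyeball estimates):

```python
def net_sharing_cost(gross_revenue, league_average_gross, rate=0.31):
    # pay `rate` of your own gross; get back 1/30 of the pool,
    # which works out to `rate` times the league-average gross
    return rate * gross_revenue - rate * league_average_gross

print(net_sharing_cost(549, 200))   # about 108 ($MM)
```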

That's still very large. Forbes values the typical large-market team at 2.5 to 3 times revenues (the Yankees are above 3, almost 4). Assume the Yankees' market price really is 3x their annual revenues. (I'm not convinced -- I prefer a valuation based on profit -- but never mind). That means that, if they continue to consent to the MLB revenue sharing plan, and if they continue to spend the way they do, the Yankees have effectively agreed to hand $324 million to the other 29 clubs.

Looked at another way: according to Forbes, the Yankees lost money every year from 2003 to 2008:

2009: $24.9 MM profit
2008: $ 3.7 MM loss
2007: $47.3 MM loss
2006: $25.2 MM loss
2005: $50.0 MM loss
2004: $37.1 MM loss
2003: $26.3 MM loss

If you added back in the Yankees' revenue sharing payments for those years, that would probably turn every loss back into a profit. And, in 2009 alone, without revenue sharing, the Yankees would have made *five times* as much money -- $133 million instead of just $25 million.

So what's going on? Some possibilities:

1. George Steinbrenner is so rich that he doesn't mind subsidizing the other teams. He was old enough when revenue sharing came into being that he knew he'd never spend his wealth before he died. And so, he figured, whatever is best for baseball was fine with him, regardless of cost, so long as he could keep winning.

2. The Yankees believed that, without revenue sharing, competitive balance would be so bad, and the Yankees so much better than the other teams, that the fans would stay away and the Yankees would wind up being worse off. There may be something to that: according to Forbes, the value of the Yankees doubled since 2001, while the other teams increased in value by only maybe 50% (eyeballing again). So maybe revenue sharing is bad for the Yankees' bottom line in the short term, but better in the long term.

That is: maybe by creating a league where teams like Tampa Bay might be able to compete once or twice every 20 years, fan interest rises to the point where the Yankees' investment in revenue sharing pays for itself, by increasing interest not just in the Yankees, but in baseball in general -- resulting in more revenue from the website, a bigger TV contract, and so on.

3. Maybe the Yankees (and Forbes) know that the operating losses are temporary, caused by Steinbrenner's desire to win at any cost. Maybe they figure, correctly, that they can cut down their free agent spending whenever they want, and start making significant profits.

That's in keeping with standard models of business valuation: you assess the value of a business on its *future* earnings potential, not its past.


4. Maybe revenue sharing drops the price of free agents enough that, in combination with some of the other factors, it makes revenue sharing profitable.

Suppose a free agent will increase the Yankees' revenue by $10 million. Then, if the guy costs less than $10 million, the Yankees will sign him. But, with revenue sharing, the Yankees only get to keep $7 million of the $10 million. And so, they're only willing to sign the player if he costs $7 million or less.

With the 31% revenue "tax", every team is in the same position. That depresses demand for free agents, which keeps prices down. And since the Yankees are the largest consumers of free agents, they get the largest benefit from the price decrease. The question: is the effect enough to pay for itself? I bet the answer is no -- it helps, not enough to make up for the tax. But that's just a gut feeling. Any economists out there able to estimate the size of the effect?

-----

My guess is that it's partly desire to win at any cost, and partly rational economic calculation. That is, part of it is Steinbrenner's willingness to spend part of his fortune on fame. The other part is numbers 2, 3, and 4: the Yankees are doing what they have to do to make MLB attractive to fans in general, and are willing to lose money temporarily to be winners. I'm not so willing to believe #1, that George Steinbrenner is willing to give away half the value of his team just like that.

If that's correct, my prediction is that, eventually, when the new owners of the Yankees decide they're not as willing to spend their entire profit, and more, to make the playoffs every year, they'll cut down on their player spending. Instead of winding up in a class by themselves, with a payroll 50% higher than the average of the next seven highest-spending teams, they'll join those other teams, and wind up looking more like the Red Sox and Cubs. They'll still be highly successful, but not so much so that they make a mockery of the rest of the league.

Or not. There could be other, better explanations for what's going on. Any other ideas?




Thursday, February 04, 2010

Does it matter that the Yankees keep buying pennants?

As most baseball fans are aware, the New York Yankees have been spending more money on payroll than any other team in the major leagues, by a long shot. In 2009, for instance, the Yanks spent $201 million, about two-and-a-half times the average, and $76 million more than the next highest team (the Mets).

And so, as you would expect, the lavish-spending Yankees have been very successful. The Yankees made the post-season every year but one since 1995. That's 14 out of 15.

In an excellent post in November, Joe Posnanski wondered why fans are willing to put up with this. He gave two reasons:

1. In baseball, unlike football and basketball, a truly dominant team still wins only about 60% of its games. This tends to hide the extent of the dominance:

"I would bet if the Indianapolis Colts played the Cleveland Browns 100 times, and the Colts were motivated, they would probably 95 of them — maybe even more than that. But if the New York Yankees played the Kansas City Royals 100 times, and the Yankees were motivated, I suspect the Royals would still win 25 or 30 times. That’s baseball.

"So you have this sport that tends to equalize teams. That helps blur the dominance of the Yankees. If the New England Patriots were allowed to spend $50 million more on players than any other team, they would go 15-1 or 16-0 every single year. And people would not stand for it. But in baseball, a great and dominant team might only win 95 out of 160, and it doesn’t seem so bad."


And, given that the Yankees should only be expected to win 97 games or so, there will likely be other teams that come close to them, so it winds up looking like the Yankees are one of many quality teams. Of course (and now this is me, not Posnanski), the Yankees are expected to do it every year, whereas whatever team challenges them is probably just a random team that got lucky. But you can't tell that just by watching, so the Yankees don't look all that special in any given season.

2. Under the new, post-1995 playoff system, a team has to win three rounds to win the World Series. But in a short series, anything can happen, and the better team will lose with pretty high frequency.

A team with a 60% chance of winning each game will only win a best-of-five series about 68 percent of the time, and a best-of-seven series 71 percent of the time. (If I've got the numbers right.) So the chance of winning three consecutive rounds, and the World Series, is .68 * .71 * .71, which is about 34 percent.
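Those series numbers are easy to verify. Here's a sketch; the formula just counts the ways to win the clinching game after 0, 1, 2, ... losses:

```python
from math import comb

def series_win_prob(p, best_of):
    wins_needed = best_of // 2 + 1
    q = 1 - p
    # win the final, clinching game after exactly `losses` losses
    return sum(comb(wins_needed - 1 + losses, losses) * p ** wins_needed * q ** losses
               for losses in range(wins_needed))

print(series_win_prob(0.6, 5))   # ~0.68
print(series_win_prob(0.6, 7))   # ~0.71
print(series_win_prob(0.6, 5) * series_win_prob(0.6, 7) ** 2)   # ~0.34
```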

So even if the Yankees are 60% favorites every game of the post-season -- the equivalent of 97-65 against three of the best other teams in baseball -- they'll win the World Series only about one year out of three. Posnanski:

"And in that way the expanded playoffs have been genius for baseball — not only because they are milking television for every dime but because the short series have been baseball’s one Yankee-proofing defense against the ludicrous unfairness of the New York Yankees. ... They are the best team with the best players every year — that sort of big money virtually guarantees it.

"So, you create a system where the best team doesn’t always win. In fact, you create a system where the best team often doesn’t win. For years the Yankees didn’t win. They lost to Florida. They lost Anaheim. They blew a 3-0 series lead against Boston. They lost to Anaheim again and Detroit and Cleveland — and how could you say that baseball is unfair? Look, the Yankees can’t win the World Series! See? Sure they spend $50 million more than any other team and $100 million more than most. But they haven’t won the World Series! Doesn’t that make you feel better?"


------

Last week, at the Sports Economist blog, Brian Goff agreed and disagreed with Posnanski's analysis. His agreement was that Posnanski got it right in terms of understanding why MLB did what it did with the expanded playoffs. His disagreement was that, while Posnanski thinks it's a bad thing for the fans, Goff thinks it's a *good* thing.

Why? Because Yankee-haters get a lot of satisfaction out of seeing the Yankees lose. And so MLB's strategy is win-win. Yankee fans get to see their team in contention every year, which creates a lot more revenue for the league and utility for fans (since the Yankees have the largest fan base in MLB). And then, Yankee-haters get to see their least-favorite team defeated two years out of three, which makes *them* feel good and open their wallets. MLB deliberately designed the system this way to squeeze more money out of its fans.

That may be true, but I'm not so sure the strategy is still in baseball's long-term interest. The sports economists I've read note that fans spend more money when their team is successful, and, from that, they conclude that it maximizes profit for the league to ensure the cities with the most fans win the most often.

I'm not convinced. That may work in the short run, when the fans still have memories of when payrolls were more even, and playoff berths were earned more by other means than money. But what happens longer term, when the Yankees make the playoffs for 28 of the next 30 years, and it becomes more and more obvious that the Pirates and Royals will seldom (if ever) be able to compete? And what happens when even Yankees fans start to get uncomfortable noticing that there's a lot less to be proud of when your management is just buying all the best players, and a playoff berth is just being purchased every year?

Maybe it's just me, that it's my personal taste that I'd rather all teams have an equal payroll, and that success on the field be "bought" with intelligence, strategy, and luck, rather than money. I've been a fan of the Toronto Maple Leafs all my life, but if the Leafs finally won the Stanley Cup again, but by spending three times as much as any other team ... well, I don't think I'd really care that much. And I'm sure there are many more like me. And so I wonder if a "we make more money when we rig the system so the Yankees win more often" strategy might backfire.

If you asked me a few months ago, I'd say for sure it would backfire, and fans would never put up with years and years of the Yankees buying pennants. But, after reading "Soccernomics," I'm not so sure. What I learned (pp. 48-49) was that, in the English Premier and Championship Leagues, there is a huge tendency to purchase wins. From 1998 to 2007, Manchester United had three times the average team payroll, and finished second, on average. That's second out of 58 teams, not second out of five teams in the AL East. Moreover, that's not second one year and then tenth the next -- it's an *average* of second, over ten years. They finished first five times, second twice, and third three times.

And they weren't even the highest-spending club ... that was Chelsea, who spent 3.5 times as much as the league mean, and had finished third on average.

The flip side of Man U is a club called "Brighton & Hove Albion," which spent 1/7 the average payroll (and finished 42nd, on average). So, in English soccer, you have the biggest team spending 23 times as much as the smallest team. Compare that to MLB, where the ratio was only 6 times for 2009, and is probably a lot smaller than that when you average out 10 seasons.

Moreover: in baseball, the Yankees stand alone in payroll: last year, they spent almost 50% more than the second-highest paid Mets. In English soccer, there were four teams at double the average (compared to one in MLB), and 13 teams at less than a quarter of the average (compared to none in MLB). And, again, these are ten-year trends in soccer, compared to a single year in baseball, which makes them even more shocking.

(One disclaimer: the soccer teams are, technically, divided into two leagues: the (first-tier) Premier League, and the (second-tier) Championship League. You'd expect that teams in the lower league would pay less. However, every season, as I understand it, the three best teams in the second tier swap places with the three worst teams in the first tier. So, theoretically, even the lowest paid second-tier team has a hope of being the overall champion two years from now. In that sense, it's really one league.)

But, despite the payroll and standings disparities, Man Utd still has a rabid fan base, and, as a result, the club is valued at $1.87 billion, even more than Forbes' appraisal of the Yankees at $1.5 billion.

So, what I'm thinking is: if British soccer fans can tolerate huge pay differences, and accept the fact that it's almost always going to be one of the richest teams that win ... well, maybe baseball fans can accept that too, especially since it's on a much smaller scale. Maybe the New York Yankees can become baseball's Manchester United, the Red Sox can become Chelsea, and fans of the Marlins and Padres can hope to fluke into the postseason and engineer an upset.

Major League Baseball might very well lose me as a fan if they do that, but if they can make it up in revenues from everyone else, who am I to say they're wrong?




Monday, October 05, 2009

Stacey Brook on salary caps and competitive balance

You'd think that when a sport introduces a salary cap, it would lead to greater competitive balance in the league. That would make sense; with a cap, you won't have teams like the Yankees, who spend two-and-a-half times as much on players as the average team, and about five times as much as the Marlins. If you forced the Yankees to spend only the league average, they would have to get rid of many of their expensive star players, and they'd win fewer games.

In theory, if every team had to spend the same amount, they'd all start the year with equal expectations. I say "in theory" because, in practice, different teams would have different philosophies, some of which might work better than others. Certain teams might spend more on scouting, wind up drafting better, and win more games with the same payroll (at least until the draftees reach free agency). But, generally, you'd expect more balance among teams.

It seems that Stacey Brook, co-author of "The Wages of Wins," doesn't think that's true. He thinks that the salary cap (and floor) the NHL instituted in 2005 has had no effect on competitive balance.

Here are Brook's "Noll-Scully" measures of competitive balance for the last few years of the NHL (lower numbers = more balance):

2000-01 1.858
2001-02 1.581
2002-03 1.592
2003-04 1.633
-----------------
salary cap begins

-----------------
2005-06 1.637
2006-07 1.600
2007-08 1.037
2008-09 1.369

It does seem, Brook acknowledges, that competitive balance has improved the last couple of years. But, he says, that's part of a trend that's been going on for a long time. For one thing, there was virtually no change in the Noll-Scully the first two years after the cap. For another, balance has been improving since at least the 1970s:

1970s 2.557
1980s 1.969
1990s 1.796
2000s 1.538

Since competitive balance has been increasing even through most of hockey history that had no salary cap, he argues, it's just a continuation of the trend, and the salary cap doesn't have anything to do with the recent decline. He writes,


"As we argue in The Wages of Wins, and detail in our paper - The Short Supply of Tall People - competitive balance is declining not because of changes in league institutional rules - such as payroll caps - but rather due to the increasing pool of talent to play sports, such as hockey."


But that doesn't make logical sense. Sure, there's already a decreasing trend, for whatever reason, but that doesn't mean a change to the rules can't contribute to the trend. Does having the ability to send text messages lead to people using their phone more? Of course it does! But if you apply the same argument, you get something like, "well, cell phones were becoming more and more popular even before text messaging, so text messaging can't have anything to do with it." That's not right.

And, indeed, it contradicts their own findings in "The Wages of Wins" itself. The authors found an r-squared of .16 between salary and performance in MLB, which means that if you were to flatten out salaries, so that each team paid an equal amount, you would reduce the variance of wins by 16%. So, absent any compensating factors, "The Wages of Wins" argues that a salary cap MUST reduce the Noll-Scully measure!

----

By the way, take a look at the value of 1.037 for 2007-08. That's really, really low; the lowest you can expect Noll-Scully to be is 1.000, and that's when every team is of exactly equal talent. A value so close to 1 suggests a combination of (a) the league being really balanced that year, and (b) teams, by luck, playing closer to .500 than their talent suggested.

If you look at the standings, you see the usual suspects at the top of the conferences, so it doesn't really seem like all the teams were equal that year. Could it be that Brook used a formula for Noll-Scully that didn't consider the extra point for an overtime loss?
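As a rough check of how low 1.037 is: if you treat each of the 82 games as a simple win/loss coin flip (ignoring the overtime/loser point, which is exactly the complication I'm wondering about), a Noll-Scully of 1.037 implies a talent spread of only about one win:

```python
import math

luck_sd = 0.5 / math.sqrt(82)   # ~.055 of winning percentage, for a coin-flip team
noll_scully = 1.037

# observed^2 = talent^2 + luck^2, and Noll-Scully = observed / luck
talent_sd = luck_sd * math.sqrt(noll_scully ** 2 - 1)
print(round(talent_sd, 3), round(talent_sd * 82, 1))   # ~.015, or about 1.2 wins over 82 games
```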

----

But what about Brook's (and Berri's) argument that balance has increased because players' skills are becoming more equal? Well, sure, that's been part of it, no question. But effects often have more than one cause. You may be earning more money because you're working overtime, but that doesn't mean winning the office hockey pool won't *also* make you richer. Whatever was causing the levelling of team talent before might still be there ... but, now, there's an additional effect, the salary cap effect.

Now, maybe I'm not interpreting Brook's argument correctly. Maybe he's thinking that the salary cap does contribute to balance, but so much less than the other effect (players getting more equally talented) that it's not worth considering. But I think it's the other way around. With a salary cap, it doesn't matter much how the players' talent is distributed.

Suppose players vary a lot in talent, 100 players equally spaced from 0 to 100, with an average of 50. A team that has lots of money might buy players with an average of 70, and a team owned by Harold Ballard might buy players with an average of 30. Big difference.

Now, suppose the talent pool gets bigger, and competition gets tougher, and now the players are all spaced between 40 and 60. Now, no matter how much you want to spend, you can't get above 60. And no matter how cheap you are, you can't get below 40. But the league average is still 50.

So, yes, Brook is correct, a narrower range of talent leads to more competitive balance.

But, now, suppose that every team has a salary cap and a floor: they all have to spend exactly the same amount of money. Now, it doesn't matter how the talent is distributed: assuming every team is equally good at evaluating players, they'll all sign a team with an average of 50. Even if the distribution of talent is like it was in the 1970s, with lots of spread, it doesn't matter -- because even if there are lots of players in the 90s and 100s, no team can afford to sign more than one or two. The more talented the player, the more likely a team who signs him will have to sign *less* talented players to stay within the cap.

Even if you have the Babe Ruth of hockey, a player who's (say) a 500 when the other players top out at 100, it won't matter, because the teams will bid up the price of his services until they pay him what he's worth. The team who gets him will have less money to spend on other players, and it all evens out in the end.

What's happening is this: in the past decades, competitive balance decreased steadily for many reasons, including the increase of the talent pool that Brook cites. But, now, with a salary cap and floor, most of that stuff doesn't matter much any more!

It matters a bit, because not everyone is a free agent. The distribution of talent does matter for draft choices, because the top draft choice doesn't cost that much more than the others (but can be a whole lot better, as in Sidney Crosby).

---

Of course, NHL hockey teams are more than collections of free agents priced at market value, so we shouldn't expect competitive balance to be perfectly level. There are some factors that might cause the Noll-Scully to actually rise a bit from the theoretical bottom created by the salary cap.

For instance: the first draft choice goes to a team near the bottom of the standings. Back in the days of less competitive balance, that went to a team that was probably legitimately awful. Now, with teams closer in talent, it could go to a team that was just unlucky. If the team that gets the next Sidney Crosby is an average team, rather than a bad team, that won't reduce competitive balance the way it used to.

Also, scouting: an investment in scouting now pays off more than it used to. Before, if you were a low-spending team, maybe a better draft choice might move you from .400 to .450. Now, if all teams are medium-spending, maybe it'll move you from .500 to .550, and give you a legitimate shot at the Stanley Cup. So more teams should be willing to spend the money to improve their drafting. And so, the rich teams could "buy" better players, not by spending to pay them, but by spending to identify them better.

And there are probably other ways to get around the cap: didn't companies introduce employee health plans to get around wage controls in World War II? If a superstar free agent has knee problems, and I wanted to sign that player, I'd offer to hire the best knee doctor in the business and keep him on staff. Whatever he costs, it's not going to count against my cap. That may not actually be practical, but I'm sure rich teams will figure out ways to buy better teams, one way or another.

My point is not to say that these factors will push inequality back to where it was when teams could sign all the free agents they were willing to pay for, just that there may be other theoretical reasons that Noll-Scully may bounce back up a little bit. I think all those factors will be minor, and as long as the salary cap and floor stay within roughly the same range of each other, we'll continue to see a balanced league, regardless of how the talent pool changes.



Hat tip: The Wages of Wins




Tuesday, April 22, 2008

Regular-season performance and playoff success

Alan Reifman, author of the "Hot Hand" blog, comments on NHL and NBA performance in a recent New York Times article.

He tells us that when you're trying to predict playoff success, regular season performance is a much better indicator in the NBA than in the NHL. Reifman ran correlations between regular season points and playoff rounds won. Here's the NHL:

2007 NHL Playoffs: r = .50
2006 NHL Playoffs: r = .04 (.22 excluding Detroit)
2004 NHL Playoffs: r = .31
2003 NHL Playoffs: r = .33
2002 NHL Playoffs: r = .50


And the NBA:

2007 NBA Playoffs: r = .33 (.58 excluding Dallas)
2006 NBA Playoffs: r = .67
2005 NBA Playoffs: r = .71


All this is as expected: NBA games are more predictable in the sense that the better team wins more of them. If NBA games were shorter, or had fewer possessions, the numbers would be closer.

Also affecting these correlations is the relative strengths of teams in playoff matchups, but my impression is that these are roughly equal between the two leagues.


Thursday, January 31, 2008

Is NFL defense mostly luck?

My last post linked to a study by Brian Burke that showed the Patriots scored a touchdown twice as often as the Giants, but that the Patriots' defense *prevented* a touchdown only slightly more often than the Giants'.

I wondered whether this meant that defense didn't vary between teams as much as offense does, and commenter "w. jason" confirmed that.

Now there's another confirmation, from this study at pro-football-reference.com (by Doug Drinen?). Near the end, Drinen found that the year-to-year correlation of NFL teams' defensive stats is ... *negative*:

-.10 Turnovers forced
-.11 Touchdowns

I assume it's just random chance that these numbers are negative, unless you think there's some reason teams that are above-average this year should be below-average next year. If you had enough data, you'd probably find a positive, but small, correlation.

Does this mean that defense doesn't matter much? Is any player pretty much as good as any other on your defensive line?

That's possible. It's also possible, for instance, that a defense is only as good as its weakest link, and it's your *worst* players that make the difference, not your best. Or, it could be that teams can't tell the good defenders from the bad, and wind up with a random assortment.

In any case, shouldn’t this imply that you shouldn't pay a premium for defenders? I assume teams pay more for their offensive squad than their defensive, but probably not as much as these results say they should. I'll check it out if I have a chance.

That Drinen link, by the way, came from today's "The Numbers Guy" article by Carl Bialik. He notes that the outcomes of sporting events are hard to predict. An organization called "Accuscore" is able to predict only 63% of NFL games, 57% of baseball games, and 68% of basketball games. (Hockey gets screwed again – not even mentioned.) Bialik attributes the difference to the amount of information known about the teams, but I think it's actually that basketball is intrinsically less random than baseball – as I have argued here and elsewhere.

Bialik also debunks one of the naive arguments against sabermetrics: that since statistical analysis thought the Giants should have lost all three games, and was wrong three times, the analysis must be wrong. I've seen that argument a couple of times lately, and it goes beyond silly. Even the best analysis only gives you a probability estimate, and even low-probability events happen sometimes.

And one last interesting tidbit: For NFL games, Accuscore has 54% accuracy against the spread. That seems pretty impressive to me.



Tuesday, November 06, 2007

"Homegrown players" -- a viable strategy?

In 2007, the four teams in the league championship series had 187 of their wins (as measured by Win Shares) contributed by players "homegrown" by the respective team. That's up 68% from last year, and 43% more than the "recent average."

These numbers come from a recent Wall Street Journal article by Russell Adams, "Baseball Promotes From Within."

Adams doesn't make an explicit argument, but the implication is that teams are focusing on player development, rather than on signing free agents (who are getting very expensive), and that the strategy is working.

I'd argue that the strategy is not so much to concentrate on homegrown players, but perhaps to concentrate on *cheaper* players. After all, if you have a star in his "slave" years earning only $380,000, it doesn't matter whether he came from your farm system or someone else's. Either way, he's going to help you win equally.

The Indians, Diamondbacks and Rockies were all in the bottom eight payrolls for 2007. They won because their low-priced players performed well, not necessarily because their homegrown players performed well.

However, there is an argument that homegrown players are a better investment:


"Executives say promoting your own players makes sense not only because they are familiar, but because everyone in the organization knows how they've been trained. Instructors in the Phillies' farm system, for instance, follow a manual that describes the "Phillies' way" of doing everything from warming up a pitcher's arm to defending a bunt. Promoting from within is "a safer way to go," says the team's assistant general manager Mike Arbuckle."
Even if you don't accept that the "Phillies' way" is better than the "Brewers' way" or the "White Sox' way," it's still possible that bringing the player up yourself can benefit the team. You'd expect that the team that knows the player best would be the best judge of his major-league expectation. By watching the player carefully, perhaps the Phillies can avoid the mistake of bringing a player up too early. But if they got the guy in trade from the Astros, they might not know enough to make a proper judgment. (I don't know of any evidence either way.)

But still, there's nothing to stop other teams, even free-spending ones, from also developing homegrown players. Even high-spending teams have a budget. If the Red Sox, for instance, find a gem in their minor-league system, they can trade away the expensive free agent at his position, and use the money for someone else.

For teams with little money to spend on salaries, there is an obvious strategy, one that's also used in Rotisserie. You trade your expensive players for young minor-league talent. Eventually the acquired players are ready for the big leagues, and you get three years of free service out of them (and a couple of still reasonably-priced arbitration years). If that's what these teams are doing, then, again, it's not the "homegrown" factor at work – it's the "cheap" factor.


Thursday, November 01, 2007

Playoff "closeout" games

At "The Sports Economist," Brian Goff argues that "hardly any game matters" in the NBA because:


Only 30% of playoff games were "closeout" games where a team could win or lose the series

Only 25% were games where both teams were 2 or fewer wins away from winning the series

Only 17% of 7-game first-round series and NBA finals met the "2 or fewer wins from winning" situation.

On the other hand, Goff argues, in the NCAA and NFL, every game meets the first two conditions. So maybe, he says, the NBA could go to best-of-3 for the first round, or something.

My take:

1. Just because a game can't close out a series doesn't mean it's not important. In truth, if you go by "series win probability added," the 0-0 game is much more important than the 3-0 game (a rough calculation is sketched after this list).

2. Why assume that fans care so much about series-ending or near-series-ending games? Maybe they like a mix of close series and blowout series.

3. Might fans not be upset when their 60-22 team loses to a 41-41 team in the first round? That'll happen fairly often, and more often still with a shorter series (the sketch after this list puts rough numbers on it). A best-of-three does indeed make the series less predictable – but, it seems to me, at the cost of fairness.

4. The first round has only half the number of "both teams within two wins" games, which is almost certainly because the teams are mismatched. Instead of tinkering with the series length, why not just allow fewer teams in the playoffs?

5. The maximum possible proportion of "closeout" games comes when team A wins the first three games, and team B wins the next three. When that happens, 4/7 of the games are possible closeouts. If you want that, just change the rules so that the home team wins 99.9% of games. Make sure that team A is at home for the first three games, and team B the next three. Then sit back and enjoy! :)
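(Here's the rough sketch behind points 1 and 3. It treats the favorite's per-game win probability as fixed and ignores home court, and the 70% figure for the 60-22 vs. 41-41 matchup is just an assumption to make the point.)

```python
# Series win probabilities for a best-of-N series, computed by recursion.
# Per-game win probabilities are assumed constant; home court is ignored.
from functools import lru_cache

@lru_cache(maxsize=None)
def series_prob(p, need, opp_need):
    """Chance the favorite wins the series, needing `need` more wins
    while the other team needs `opp_need`, with per-game probability p."""
    if need == 0:
        return 1.0
    if opp_need == 0:
        return 0.0
    return (p * series_prob(p, need - 1, opp_need)
            + (1 - p) * series_prob(p, need, opp_need - 1))

# Point 1: how much does one game swing the series, for two even teams?
p = 0.5
swing_at_0_0 = series_prob(p, 3, 4) - series_prob(p, 4, 3)   # game 1 of a 0-0 series
swing_at_3_0 = series_prob(p, 0, 4) - series_prob(p, 1, 3)   # game 4, leader up 3-0
print(f"series-probability swing at 0-0: {swing_at_0_0:.3f}")   # about .31
print(f"series-probability swing at 3-0: {swing_at_3_0:.3f}")   # about .13

# Point 3: how often does the weaker team steal the series? Assume the
# 60-22 team beats the 41-41 team 70% of the time per game.
p = 0.7
print(f"underdog wins best-of-7: {1 - series_prob(p, 4, 4):.3f}")   # about .13
print(f"underdog wins best-of-3: {1 - series_prob(p, 2, 2):.3f}")   # about .22
```

Under those assumptions, game one of a tied series swings the outcome about two-and-a-half times as much as game four of a 3-0 series, and cutting the first round to best-of-three raises the weaker team's chance of advancing from roughly one in eight to better than one in five.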



Wednesday, October 17, 2007

Bill James on competitive balance research

Ten days ago, Bill James wrote an article for the Boston Globe on what he thought would be the next big things in sports research. Bill's main answer: competitive balance. Is it bad for basketball that the best teams win games, divisions, and championships more often than they do in other sports? How can the games be made more competitive? That, according to Bill, is where research is going in the next generation.

Economists, who have been looking at these issues for a while now, were a little upset that Bill didn't seem to know about them. Dave Berri wrote,


" ... although James believes he is presenting “new” questions, much of his column focuses upon issues that are “old hat” to economists who have studied sports over the past few decades."

At The Sports Economist, Skip Sauer wrote,

"But while the answers are elusive, the study of competitive balance is not "virgin territory." Anyone answering the call of Bill James (and perhaps Bill himself) might profit from using google scholar, a fabulous little tool. The result it delivers is not consistent with the notion that this is 'virgin territory' ... "

And Sauer then gives us an actual screen print from Google, showing some 3300 academic papers containing the words "competitive balance."

Over at "The Book",
Tangotiger argued that


"Bill James is a self-confessed non-follower of research, publicly stating that he doesn't keep up. He really shouldn't then be commenting on what has or has not been studied, since most would assume that he keeps up with the field."

---

I have to agree with these comments. Bill James did indeed err in implying that competitive balance is virgin territory for sports research. It most clearly is not. However, I disagree with some of the other comments from the sports economists, the ones that argue that the research has come up with strong answers to Bill's questions.

To summarize Bill's comments on basketball, he argues that the NBA has a competitive balance problem because

(a) players don’t try hard in the regular season, knowing that the better team wins so often that one play probably won't make a big difference;

(b) the playoff picture is decided in December, which reduces fan interest in the second half of the season;

(c) with playoff series as long as they are, the better team is favored so overwhelmingly, and upsets are so rare, that the sport becomes less interesting to the fans; and

(d) it is possible to correct these problems by changing the rules of the game and season.

At "The Wages of Wins," David Berri acknowledges that competitive balance in basketball is lower than in other sports. But he questions whether or not fans care. His reasons?

(a) attendance and revenues in the NBA are way up in the past ten years or so.

And that's it. Berri does mention that the issue of competitive balance and attendance has been studied – and he gives a few references – but he doesn't mention any of the results. He doesn't address Bill's discussion of whether you can increase competitive balance by changing the rules of the game. (Indeed, he has previously implied that such a thing is impossible, arguing that basketball's imbalance is caused by "the short supply of tall people," rather than by the fact that basketball has so many possessions, each with a high chance of scoring.)
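(To illustrate that last point, here's a toy simulation. The per-possession numbers are invented – this isn't a model of actual NBA or MLB scoring – but it shows the mechanism: when a game is made up of many scoring chances, each with a decent success rate, a small per-chance edge compounds into a big per-game edge.)

```python
# Toy simulation: two teams get the same number of scoring chances per
# game; the better team converts 52% of its chances, the worse team 48%
# (invented numbers). More chances per game -> the better team wins
# more often, i.e., the sport looks less "random".
import numpy as np

def better_team_win_pct(chances, p_better=0.52, p_worse=0.48,
                        n_games=100_000, seed=1):
    rng = np.random.default_rng(seed)
    score_a = rng.binomial(chances, p_better, n_games)
    score_b = rng.binomial(chances, p_worse, n_games)
    ties_as_half = 0.5 * np.mean(score_a == score_b)   # split ties evenly
    return np.mean(score_a > score_b) + ties_as_half

print("10 chances per game :", better_team_win_pct(10))    # roughly .57
print("200 chances per game:", better_team_win_pct(200))   # roughly .79
```

The same small talent gap shows up far more reliably in the high-possession game, which is the sense in which the structure of basketball – and not just its talent distribution – contributes to the imbalance.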

At Skip Sauer's blog, the argument is similar. Indeed, Sauer quotes Berri in noting that "The NBA does not ... have a problem with attendance." To his credit, though, Sauer notes explicitly that we don’t actually know much about fans' demand for balance; and, again to his credit, he does quote "some well-known results:"


"One tentative conclusion from people who have been thinking about this issue for some time (i.e. most of us), is that while competitive balance is clearly essential in some degree, the payoff function around the optimum may be really flat. The two most successful leagues in the world, the NFL and EPL, have vastly different degrees of balance, suggesting other factors are likely much more important in generating fan interest ... "

And that's fair enough. But that doesn't answer Bill's question, which was, "what level of competitive balance is best for the league?" Sauer is answering a completely different question: "how important is competitive balance compared to other factors?"

And, with respect to Sauer and Berri, the fact that NBA attendance is increasing doesn't mean that competitive balance is unimportant. McDonald's is doing well even though Big Macs cost more than they did ten years ago. Should we infer that customers don’t care about price? People are buying relatively fewer Buicks than in the 60s, even though Buicks are much better cars than they used to be. Does that mean buyers don't care about quality?

The logic here just doesn't make sense to me, especially coming from economists. Isn't it possible that attendance in the NBA might be even stronger if you tweak the game a little more to the fans' liking? Isn't it a bit naive to look at increasing attendance and blindly conclude that the current level of balance must be exactly what the fans want?

I agree with Bill that we don’t know how fans react to different levels of balance, in the short or long term. On the one hand, I can see how some fans don’t like it when the better team is almost certain to win. On the other hand, I kind of enjoyed it last Monday when the Cowboys were 6:1 favorites over the Bills and almost lost. And I also enjoy an occasional blowout. It's reasonable to assume that some fans prefer imbalance, while some prefer balance, isn't it?

So which set of fans is more important? Should leagues cater to one set over the other? Should they try to balance the two somehow?

Is there any study that tries to figure out the optimum? There might be, but I haven't seen it. Indeed, I have seen studies that beg the question by assuming that the more uncertain the outcome, the more the fans like it. That doesn't make sense to me.

That's "within game" competitive balance. What about competitive balance within a season? Or competitive balance over a number of seasons?

MLB attendance and revenues are way up. But, from my standpoint, I'm less interested in baseball than I used to be. There are many reasons for this – one is the fact that baseball cards are so expensive that I don't know the players anymore. But another is that team spending is so unbalanced that I feel like I'm watching payrolls more than players. This year, the Yankees had an amazing second half and made the playoffs for the Nth consecutive year. Am I impressed? Well, no – they spend so much more than any other team that *of course* they keep making the playoffs. If I were a Yankee fan, how could I have pride in my team, knowing that they win through the pocketbook?

In the past, when my team won, I could be proud of them for drafting well, or judging talent, or making good trades, or even just putting it all together and having a good year. These are, perhaps, weak reasons for feeling pride in the accomplishments of a bunch of strangers, and maybe I'm not typical. Maybe most fans can feel just as much pride that their ownership is successful enough in their shipbuilding business that they can spend some big bucks on their team. To each his own.

But which type of fan is dominant? And will that change over time? Right now, MLB might be making lots of money because the best teams happen to be in the biggest cities. But over the long term, will that get boring? Will Yankee fans be more likely to lose interest when it sinks in more and more that their team's success is just being bought? Will other cities lose interest for the same reason?

Will a salary cap make fans more loyal, if they know that money is taken out of the equation? It would to me. I've been waiting 40 years for my Leafs to win the Stanley Cup – but if they were to have bought one, by spending three times as much on salaries as any other team, I would probably have been too disgusted to celebrate. It seems to me that to take pride in a team, you need them to have won a fair fight. Now, as I said, it could be that other fans aren't like me – but isn't this something that someone should be studying?

Taking this a bit further, is it just coincidence that the cities with the most rabid fans appear to be the ones with a history of failure? Will interest in the Red Sox start to drop once the World Series becomes a distant memory? Is it possible that it's actually in the long-term financial interest of the Cubs and Leafs to *not* win a championship, and milk their fans' longing for another few decades? If so, perhaps competitive balance in *winning seasons* is a good thing, but competitive balance in *championships* isn't. Who knows?

I haven't read all the studies or blog posts that Berri and Sauer listed on the topic of competitive balance (here are some of Sauer's). But the ones I *have* seen don't really address the more complex questions. Some of them concentrate on the math – which sports have more competitive balance, and which less? Some of them discuss what effect certain changes – say, a luxury tax – would have on the level of balance. Some of them do regressions on balance vs. attendance. All reasonable issues, but none that put much of a dent in Bill's question – "what makes a league succeed?"

"The issue of what is good for leagues is virgin territory," Bill wrote. That's not correct – the issue is out there, and sports economists have produced a significant literature on the topic. But what have we learned from that literature?


I would argue that Bill is almost correct: the *questions* are not virgin territory, but the *answers* are.

