Friday, April 30, 2010

Why are Yankees/Red Sox games so slow?

There's been lots of talk lately about how Yankees and Red Sox games take too long and move too slow.

Part of the "take too long" part is that those games tend to have lots of plate appearances and pitches. But another part is just that Red Sox and Yankee players tend to play slowly -- Derek Jeter appearing to be the worst of many offenders.

I figured that out with a study that's basically a large regression (thanks to WSJ's Carl Bialik for requesting it; Carl wrote about it earlier this week).

Here's what I did. I took every game from 2000 to 2009, and tried to predict game time from a bunch of different factors -- the number of pitches in the game, the number of innings, how many of the last few innings were close, how many steal attempts there were (to try to account for pickoff throws), the attendance, how many relievers were used, how many plate appearances there were, and how many runs were scored.

The coefficients were mostly as you'd expect -- every extra pitch added about 23.3 seconds. Every half-inning that was close (as opposed to a blowout) added 47 seconds. Every reliever added 2:13 (probably the time to warm up the pitcher, if the change was mid-inning). And so on.
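
If you want to try something like this yourself, here's a minimal sketch in Python, using pandas and statsmodels. The file name and the column names are just placeholders for whatever you'd build out of the Retrosheet game logs -- they're not from anything I actually ran.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per game, with the predictors described above.
games = pd.read_csv("games.csv")

# Predict game length (in minutes) from the game-level factors. C(season)
# and C(month) add one dummy variable per season and per month.
model = smf.ols(
    "game_minutes ~ pitches + half_innings + close_half_innings"
    " + steal_attempts + attendance + relievers + plate_appearances"
    " + runs + C(season) + C(month)",
    data=games,
).fit()

# Coefficients are in minutes; multiply by 60 to read them in seconds.
print(model.params * 60)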

I also adjusted for the season in which the games took place. The results surprised me a bit; I hadn't realized that, all else being equal, games were almost four minutes slower in 2000-2001 than they were in 2009. But 2009 was still the slowest year for game times since 2003. The fastest games of the decade came in 2004, when they were 4:54 faster than 2009, all else being equal.

I checked months, too, and April and September are the fastest. Summer games are about two minutes longer than April games. Maybe everyone hurries a bit more to get out of the cold?

Then, the fun part: for 1,105 different players, I assigned each a dummy variable representing whether or not he was in the starting lineup that day (I was using Retrosheet game logs, so the starting lineup was all I could get without digging into play-by-play data). I included any player who had at least one season of 250 AB between 2000 and 2009, or, for pitchers, at least one season of 25 games started.
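
Here's what those player dummies look like in code. The game IDs, player names, and lineups below are made up for illustration; the real thing has one row per game and one 0/1 column for each of the 1,105 players.

import pandas as pd

# Hypothetical: each game ID mapped to its starters (both teams).
lineups = {
    "game1": ["jeter", "posada", "sabathia"],
    "game2": ["pedroia", "ortiz", "beckett"],
}

# One 0/1 column per player who ever appears in a starting lineup.
rows = [{"game_id": g, **{p: 1 for p in starters}}
        for g, starters in lineups.items()]
dummies = pd.DataFrame(rows).set_index("game_id").fillna(0).astype(int)
print(dummies)

# These columns get appended to the game-level predictors, and the regression
# coefficient on each player's column is his "slowness" factor.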

Finally, I calculated one factor separately for every team, after adjusting for the players in the starting lineup. Those weren't that interesting. I think most of what they represent is how fast the *other* players are -- spot starters, relief pitchers, September callups who never got to 250 AB.

However: Boston and the Yankees were still among the slowest -- their games were a minute or two longer than the average team's, even after adjusting for individual players. I wonder if that means there's something else going on: maybe the Yankees and Red Sox announcers are really slow, and the batters have to wait longer to get started? More research is required there.

Anyway, after doing all that, I got an estimate of the effect of every player separately -- that is, how much longer or shorter games were with him in the starting lineup, compared to the same game with a player who wasn't among the 1,105 in the study. It turns out that those missing players and relievers are faster than the regulars and starters, by about 26 seconds a batter, 11 seconds a catcher, and 19 seconds a pitcher. Because of that, I adjusted every regular by subtracting the average of his group, since it makes more sense to compare him to the other 1,104 regulars than to the September callups and relievers.

So the final result gives a kind of "with or without you" factor for every player. For instance, take Derek Jeter, who turned out to have the second-highest "delay of game" factor among batters (excluding catchers). All else being equal, the regression tells us that a game with Jeter in the starting lineup was 3 minutes and 30 seconds longer than the exact same game without him.

3:30 seems like a lot to me. How much is Jeter really involved in the play? Maybe 4 or 5 plate appearances a game, which is 15 or 20 pitches? That works out to between 10 and 15 seconds a pitch.

Does Derek Jeter really take an extra ten seconds between pitches, compared to the average batter? I haven't watched him bat that closely, but maybe you guys can let me know if that's a reasonable estimate. There is some randomness involved in the regression, and, since Jeter was the second highest in the study, you might want to regress his number to the mean a little bit. But still -- his game factor of 3:30 was 3.6 standard deviations from zero, so it's almost certain that he's pretty slow.

I suppose that theoretically, it could also be his defense ... when he catches a pop up, does he do a little 30 second dance before throwing the ball? Doesn't seem likely to me. Baserunning seems like a better candidate ... maybe Jeter draws a lot of throws when he's on first base, but doesn't steal much (steal attempts appear in the regression). I should have had the regression account for pickoff throws, but I didn't think of it until now.

As in any regression, there could be something else going on. It could be that there's something special about the games that Jeter misses that make them a lot faster than otherwise. For instance: when I did my first pass at this, I found that Jeter was slow by almost six minutes, rather than three-and-a-half. Why? Because my first pass didn't control for season. And about half the games Jeter missed in the decade came in 2003, when games were about four minutes shorter than normal. So the games he missed were faster than the games he played for reasons other than his slowness.

So, if you want to take a guess at other reasons Jeter's number might be too high, you need something like that. Something that makes games faster, that would be disproportionately applicable to games that Jeter missed. And that "something" has to be something not controlled for in the study -- something other than year, month, attendance, other players in the lineup, and so on.

I couldn't come up with anything, but that doesn't mean that you won't.

-----------

Anyway, let me show you the top ten fast and slow players, and you can decide for yourself if the results seem reasonable. Here are the slow batters. Minutes are in decimals (4.5 equals four minutes thirty seconds) because I'm too lazy to convert:

+4.31 Denard Span
+3.51 Derek Jeter
+3.28 Miguel Tejada
+2.87 Rickie Weeks
+2.72 Albert Belle
+2.45 Dustin Pedroia
+2.43 Dante Bichette
+2.25 Greg Dobbs
+2.22 David Segui
+2.21 Reggie Abercrombie

And here are the fast batters:

-3.41 Chris Getz
-3.12 Kevin Jordan
-2.97 Nick Markakis
-2.79 Jake Fox
-2.74 Mark Ellis
-2.62 Will Venable
-2.49 Jose Lopez
-2.29 Mark McGwire
-2.23 Warren Morris
-2.19 Chris Davis

I did catchers separately, because you can't know how much of their speed is due to their batting, and how much is due to their catching. If a guy catches 140 pitches a game, and takes an extra half-second to throw each one back to the pitcher, that's more than an extra minute he's adding to the game. So you'd expect the catchers to have more extreme numbers than the other batters, and they do. Here are the slow catchers:

+6.01 Gary Bennett
+5.63 Benito Santiago
+4.73 Einar Diaz
+4.43 Tom Wilson
+4.12 Ryan Hanigan
+3.76 Doug Mirabelli
+3.05 Javier Valentin
+2.87 Eliezer Alfonzo
+2.44 Kelly Shoppach
+2.40 Mike Piazza

And the fast catchers:

-4.67 Eddie Perez
-4.09 Josh Bard
-3.88 Omir Santos
-3.88 Chris Coste
-3.58 Jeff Clement
-3.35 Ken Huckaby
-3.33 Charles Johnson
-2.97 John Flaherty
-2.97 Tom Lampkin
-2.61 Ben Davis

Finally, starting pitchers. These guys have a huge impact on game time ... I guess they vary a lot in how much time they take to get ready for the next pitch. Slow pitchers:

+7.32 Gil Heredia
+7.14 Steve Trachsel
+6.98 Matt Garza
+5.49 Armando Reynoso
+5.32 Jason Johnson
+5.22 Kevin Appier
+5.04 Chien-Ming Wang
+4.98 Ross Ohlendorf
+4.88 Edinson Volquez
+4.86 Elmer Dessens

And the fast pitchers. It's ironic that the guy who throws the slowest actually pitches the fastest. (Well, maybe not *that* ironic, but certainly more ironic than rain on your wedding day.)

-7.71 Tim Wakefield
-7.36 Kevin Tapani
-6.69 Glendon Rusch
-5.88 Steve Sparks
-5.39 Kirk Rueter
-5.13 James Baldwin
-5.08 Joe Blanton
-5.02 Ben Sheets
-4.99 Mark Buehrle
-4.98 Matt Morris

Tim Lincecum is 16th fastest, by the way, at -4.24.

------------

Now that we know the slow and fast players, we can do teams by adding up all the players. I'll just do a version of the Yankees and Red Sox, to see if those guys really do slow down the game. Here are the starting lineups from the Red Sox/Yankees game of April 4, 2010:

+3.5 Derek Jeter
-0.8 Nick Johnson
-0.7 Mark Teixeira
+0.9 Alex Rodriguez
+1.9 Robinson Cano
+1.3 Jorge Posada
-0.9 Curtis Granderson
+0.0 Nick Swisher
+1.1 Brett Gardner
-0.4 CC Sabathia
-----------------------
+5.9 Yankees total

-0.2 Jacoby Ellsbury
+2.5 Dustin Pedroia
+1.2 Victor Martinez
+0.6 Kevin Youkilis
+0.4 David Ortiz
+0.3 Adrian Beltre
-0.6 J.D. Drew
+0.2 Mike Cameron
+1.2 Marco Scutaro
+3.6 Josh Beckett
----------------------
+9.1 Red Sox total

So, our estimate is that the game took 15 minutes longer than it would have if average teams had been playing, instead of Boston and New York. That seems like a lot to me.

As it turns out, it was a 9-7 slugfest that went 3:46.

----------

On August 18, 2006, the first game of a doubleheader, the Red Sox beat the visiting Yankees 10-4, in 3 hours and 55 minutes. The starting lineups for those two teams featured players who would be expected to be 24.4 minutes slower than average. That was the slowest-playered game in the decade; of the 20 men in the combined starting lineups, 18 of them were slow. Only Jason Giambi and Craig Wilson were faster than average, by a combined 50 seconds.

The game with the fastest players last decade took place on April 16, 2008. Seattle beat Oakland 4-2. The regression predicted that the game should have taken 20.2 minutes less than normal. It was indeed a very fast game, at 2:09, but, of course, that's partly because it didn't turn out to be much of a slugfest.

---------

Keep in mind that these estimates for individual players really aren't all that precise. The standard error of a typical player is between half a minute and a minute. When Kevin Youkilis comes in at +0.6 minutes slower than average, but his standard error is also 0.6, there's a decent chance (about 1 in 6) that he's actually *faster* than average.
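
If you want to check that "1 in 6" figure, it's just the normal curve. In Python, with Youkilis's estimate and standard error plugged in:

from scipy.stats import norm

estimate, se = 0.6, 0.6   # +0.6 minutes, standard error 0.6
print(norm.cdf(0, loc=estimate, scale=se))   # P(truly faster than average): about 0.16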

You're on more solid ground when you assume that an extreme player (like Jeter or Markakis) is fast or slow, or when you add a bunch of players together.

--------

I've put the data up on my website, in an Excel spreadsheet. It contains two worksheets: one that gives you slowness estimates for all the players, and another that's the full regression results. I'll annotate that one later so it's easier to understand, and I'll come back and update this post.

I might also rerun this for the 1980s ... if only to see just how slow Mike Hargrove actually was.



Wednesday, April 28, 2010

Why teams pay equal prices for free agents, Part II

In the last couple of posts, I argued that every team should pay the same amount for a free-agent win, and looked at how teams decide how many wins to buy. In those posts, I made some simplifying assumptions about how free agents are purchased. Now, let me come back to those points and argue that they don't matter much.


1. Slaves and Arbs

Previously, I explicitly assumed that every player was a free agent. But, of course, that's not true in real life. Many players are "slaves" (where the team can set their salary) or "arbs" (arbitration-eligible players who earn more than slaves, but less than free agents). Does the existence of slaves and arbs change the price of free agent wins?

I don't think so. Look at it this way: suppose team X wants to wind up at a talent level of 42 WAR (about 89 wins). Now, suppose they already have 20 WAR in slaves and arbs. Will that change the amount they're willing to pay for free agents?

No, I don't think it will. It's true that they only have to buy 22 WAR on the market, rather than the full 42. But, at the margin, the revenue value of each of those WAR is exactly the same as before. If the 89th win nets them $4MM in revenue, it nets them $4MM in revenue, regardless of where the other 88 wins come from.

But won't they be willing to spend more on a win if they have more money to spend? Don't they have a budget?

Again, I don't think a budget enters into it. If a team can make $5 million from a win, and they can get it for $4 million, they're going to find a way to buy it, whether they have the money or have to borrow it. And no matter how much money they have burning a hole in their pocket, they're not going to buy a win for more than it's worth to them and deliberately lose money.

On the free agent market, having 20 WAR in slaves is worth up to $80 million. Having 20 WAR in slaves, for purposes of team strategy, is exactly the same as having 0 WAR and $80 million in cash.

Here's an analogy. Every week, you buy 10 gallons of gas for your car, at $3 a gallon. One day, your local gas station tells you, "you're such a loyal customer, we're going to give you one gallon a week at the 'slave' price of $1, and another gallon at the 'arb' price of $2."

Does that change the price of the other 8 "free agent" gallons you buy? No, it doesn't. It saves you $3 a week, for which you're grateful, but you're still going to buy the other eight gallons at $3 each. The fact that your first two gallons were cheap shouldn't affect how many gallons you buy, any more than saving $3 on broccoli at the supermarket would.

Having said all that, I should add that there's one exception: if a team has more wins from slaves/arbs than it wants in total. Suppose that the Pirates were planning on buying free agents only up to 68 wins, because the 69th win would cost $4 million but only bring in $3.9 million. But when they look at their roster, they realize that they have 73 wins in slaves without having to pick up any free agents at all.

In that case, we should expect the Pirates to sell some of their players: they have five wins that other teams value at $4 million, but that the Pirates value at less than $4 million. They can make more profit by selling off a couple of players.

But maybe they can't do that, for whatever reason (maybe the fans would revolt), and they're stuck with the extra wins. What that means is that now, with five wins "wasted" in Pittsburgh, there are five fewer free-agent wins available on the market. But demand from the other teams hasn't changed. And so, the price of free agent wins will rise a little bit, in order to price five wins out of the market for the other 29 teams.

Does that actually happen, that teams wind up with slave/arb wins that they'd like to get rid of but can't? I don't know, but, even if it does happen, I think the effect would be quite small.


2. Valuation

Another thing I assumed is that teams can evaluate talent perfectly. If free agent X is a 4.3 WAR player, all teams know he's a 4.3 WAR player.

Obviously, that's not true ... teams will often sign players to contracts that don't appear to make sense to others. The obvious explanation is that the signing team has a higher expectation of the player's future performance.

There's a principle called "The Winner's Curse," which suggests that a party that wins an auction will have tended to overpay. Suppose a free agent comes along, and all 30 teams try to figure out what he's worth. The average estimate is 3.0 WAR, which happens to be exactly right. But not all teams come in at 3.0 WAR. Some teams are too low, and estimate 2.5. Some teams are too high, and estimate 3.5. One team is the highest, and estimates 3.6. The team that guessed 3.6, the one that overestimated the worst, is obviously most likely to be the team that winds up signing the player, because they're the ones who will offer the most money. And so, unless teams take into account that they might be "cursed," and lower their estimates accordingly, they all will overpay.
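
Here's a quick simulation of the Winner's Curse, just to show the size of the effect under some made-up assumptions: a true value of 3.0 WAR, 30 teams, and estimates that are off by a standard deviation of 0.4 WAR.

import numpy as np

rng = np.random.default_rng(0)
true_war, teams, sd = 3.0, 30, 0.4   # made-up numbers

# For each of many free agents, every team makes a noisy estimate of his
# value, and the team with the highest estimate signs him.
winning_estimates = []
for _ in range(10_000):
    estimates = true_war + rng.normal(0, sd, teams)
    winning_estimates.append(estimates.max())

# The winning team's estimate averages roughly 3.8 WAR for a 3.0 WAR player.
print(np.mean(winning_estimates))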

That might very well be happening. If it is, it will manifest itself in higher prices per win. A team will want to pay $3 million a win, and so it buys what it thinks is a 2.0 WAR player for $6 million. But because of the curse, it turns out he's only a 1.5 WAR player. And so, we observe that the team paid $4MM, and we conclude that's what it thought the win was worth.

So if all teams are doing that, it would be hard to tell whether teams were really valuing wins at $4MM, or whether they were valuing wins at $3MM and just screwing up. We'd maybe have to look at the details of their business, to find out that wins really *are* only worth $3MM, and then maybe we could guess that the Winner's Curse accounts for the remaining million. But unless we work for their accounting firm, how can we know?


3. Hometown Players

Maybe some players are more valuable to only a certain team, because of some connection with the city. Suppose Joe Mauer is a 6 WAR player, so his wins are worth $24 million, but with the Twins he's worth an extra $2 million because he's a Minnesota boy and the fans love him so much.

What happens? Well, when free agent time rolls around, 29 teams bid his price up to $24 million. Then Minnesota bids $24 million and one dollar. The Twins win the auction, and make an extra $2 million minus 1 dollar in profits that year.

Well, not really ... Mauer's agent also knows about the extra $2 million, and negotiations ensue. The Twins and Mauer finally settle on an amount between $24 million and $26 million, and each of them makes more money than if Mauer had gone elsewhere.

But that doesn't affect the price of free agents in general. At least, not unless Mauer affects the Twins' revenue curve somehow. I suppose it's possible ... it might be that fans are less likely to care about wins if Mauer is on the team ... and so instead of the 88th win being worth $4 million, it's worth only $3.95 million. And so the Twins buy only up to 87 wins instead of 88, which pushes the value of a win down a little bit.

It's possible, I suppose, but I just made it up and there's no real reason to believe it's true, much less that it's significant.


4. The Yankee Effect

The Yankees spend two-and-a-half times as much on salaries as the average team. How much does that push up the price of free agents?

Well, suppose the Yankees have been buying 15 wins more than average: 49 WAR instead of 34. And suppose they suddenly decide to stop, and drop back down to average. The supply of free agents goes up. How much does their value drop?

Well, the other 29 teams had been buying 951 WAR (1000 minus the Yankees' 49). Now they're going to buy 966. That's about a 1.5% increase. How much does the price of free agents have to drop in order to increase the quantity purchased by 1.5%?

An obvious estimate is that for the quantity to rise by 1.5 percent, the price would have to have dropped by about 1.5 percent. (That is, a demand elasticity of 1, as economists would say.) That's not much: a win goes from $4 million to $3,940,000. You wouldn't even notice that.

But who says the elasticity is exactly 1? It's a reasonable guess -- it's necessarily true that if everything in the world dropped in price by 1.5%, we'd be able to buy 1.5% more stuff, on average. But instead of buying 1.5% more apples and 1.5% more oranges, we might choose to buy 2% more apples and 1% more oranges. So if you want the elasticity of wins, you have to figure it out empirically. And I don't think there's the data to do that.

But, suppose we *did* have the data. Specifically, suppose that we knew for sure that the curves I drew in the previous post are accurate. Here's the one for the "average" team:



Now, at $4MM, we see that the average team might have bought 96 wins. From the graph, the 97th win is worth $3.5 million. So the team would buy an extra win if the price dropped to $3.5 million.

But wait! Teams don't have to buy even numbers of wins: they also come in fractions. It looks like if the team bought half an extra win, that half a win would be worth about $3.9 million per win. So for this team to go from 96 wins to 96.5 wins, the price would only have to drop from $4MM to $3.9MM.

Suppose all other 29 teams are exactly average. Then, a price drop to $3.9 million would get every one of the 29 teams to buy an extra half win. That's 14.5 wins -- almost exactly the 15 wins the Yankees stopped buying!

Now, more realistically, suppose all other 29 teams are *not* exactly average. If we drew the curves for all 29 teams separately, and did the same kind of analysis, I suspect the result wouldn't be much different. I suspect the market-clearing price, the price which would induce teams to buy an average of half an extra win each, would be not that much lower than $4 million. Whether it would be $3.94 million or $3.83 million I don't know, but I'm pretty sure it wouldn't be anywhere near as low as $3.5 million.

I guess my summary is: as successful as the Yankees are, they're only buying an extra 1.5% of the total talent out there, so their effect on prices isn't as big as you might think.



Saturday, April 24, 2010

The marginal value of a win in baseball

Last post, I went through the logic that suggests that teams should wind up paying the same amount per win, regardless of whether it's a big-market (rich) or small-market (poor) team.

That method started off with a guess at how a team's revenue increases with its wins. That was "Chart 1" in the previous post. The actual numbers I used in the post, however, were not realistic for baseball. They assumed that each win increases a team's revenue -- which *is* reasonable -- but that the increase drops the more wins a team achieves. So even though the team will make more money when it wins 85 games than when it wins 67, the 86th win is worth less than the 68th win.

That simplification was to make the explanation easier (and that explanation is correct, even though the revenue curve is unrealistic). This post, I'm going to try to fix the revenue curve somewhat, and argue for what a team's "real" curve might look like.

(Unlike the other post, I'm going to talk in terms of what a single win is worth, instead of what the total revenue is at a certain win level. So, for instance, instead of saying that a team's revenue might be $100MM at 20 WAR and $107MM at 21 WAR, I'm just going to suggest that the 21st WAR is worth $7MM -- which is pretty much the same thing, just said in a different way. I'm also going to talk about actual win totals instead of wins above replacement, so instead of talking about how much the 21st WAR is going to be worth, I'll just talk about what the 68th win is going to be worth.)

As I said, the model I used, which assumes each additional win is worth less than each previous win, obviously doesn't hold. And the most obvious reason for that is the post-season. Around 85 wins is where a team sees its playoff chances take a big leap forward, and that's where fan interest increases accordingly. Beyond about 100 wins, though, it would seem reasonable to assume that wins aren't worth that much. You wouldn't expect a team to make a lot more money when it wins 110 games than when it wins only 105 games. Because, after all, you might assume that at 105 wins, fans are as much into the team as they're going to be, and the extra five wins at that point aren't really going to change how the fans perceive the season to be going (very, very well).

So there should be a bump in the value of a win in the playoff range. According to Nate Silver, in chapter 5.2 of "Baseball Between the Numbers," the 71st win is worth less than $1 million in revenue, the 105th win is worth less than $1 million in revenue, but the 90th win is worth $4.5 million. Here's a copy of Silver's graph of how much each additional win is worth, stolen from a post at the Baseball Prospectus website:



There are two quick adjustments we need to make to this chart.

First, there's inflation to consider, since Silver created his graph before the 2006 season. I'm just going to bump all the dollar values up a bit.

Second, and more importantly, Silver's graph tells us how much an *actual* win is worth. But, before the season starts, a team can't know how many wins it will achieve with that kind of precision. Even if it's perfectly omniscient about how much *talent* its team has, there's still a standard deviation of about 6 wins between talent and achievement. A team that's created to be perfectly average in every respect should go 81-81 -- but, just by random chance, it will win fewer than 75 games about one time in six, and it'll win more than 87 games one time in six.
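
Those "one time in six" figures come straight from the binomial distribution. A sketch, assuming a .500-talent team plays 162 independent coin-flip games:

from scipy.stats import binom

games, p = 162, 0.5
print((games * p * (1 - p)) ** 0.5)   # SD of wins due to luck alone: about 6.4
print(binom.cdf(74, games, p))        # P(fewer than 75 wins): roughly one in six
print(1 - binom.cdf(87, games, p))    # P(more than 87 wins): roughly one in six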

And so, we have to adjust Silver's graph so that the hump is wider and shorter, in keeping with the reality that luck causes wins to be more widely dispersed.

I'll make those changes later, because, first, I want to explain that there's still one big problem with this chart: in economic terms, it doesn't make sense. It doesn't lead to teams acting in ways that we actually observe in the real world. That doesn't mean that Silver is wrong -- it could be that teams just don't try to maximize profits as I think they do. But, as I'll argue, one change to the graph might make it work out.

First, let me explain what doesn't make sense.

Suppose a team has 60 wins. Looking over at Nate's chart, the 61st win (to an average team) is worth another $750,000.

Now, we know from real life that a single win costs more on the free-agent market than that, even after adjusting for inflation. So a team that buys enough wins to get to exactly 61 wins will make less money (or lose more) than a team that stops at 60 wins.

If we assume a win costs $2 million or so (in Silver's 2005 dollars), then the same is true for the 62nd win, the 63rd win, and so on, all the way up to about 85 wins -- the win costs more than the revenue it generates. So no team should aim for finishing with between 61 and 85 wins. If they did, they could always have made more profit by, instead, aiming for only 60 wins and saving their free-agent dollars.

What about 86? Well, the 86th win costs the same as the 85th win, but results in more marginal revenue than the 85th win ($2.2 million for the 86th win, compared to $1.8 million for the 85th win, according to the graph). So number 86 is always a better bargain than number 85. So if a team is willing to buy win 85, it should also be willing to buy win 86. What about 87? Same argument: the 87th costs the same as the 86th, but is worth more. And so on, up to the peak of 90 wins.

Now, the 91st win is worth less than the 90th win, at a bit over $4 million, but still only costs $2 million. So the average team should keep buying. The 92nd, 93rd, and 94th wins are also worth more than their cost of $2 million. But, the 95th win costs $2 million, but returns only about $1.7 million, so the team will stop buying there.

See what all this means? It means that an average team should NEVER stop at any point between 60 wins and 94 wins. And, I think, if Silver's curve went left of the 60-win mark, it would still be that horizontal line. So the conclusion is that an average team maximizes its profits by buying exactly 94 wins. (More accurately, since we assumed replacement level is 47.7 wins, the average team should buy 46.3 free-agent wins.)
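
Here's a little sketch that makes the "stop at the bottom or push past the hump" logic concrete. The curve in the code is a made-up stand-in for a Silver-style marginal revenue curve -- a low flat value for wins in the 60s and 70s, plus a hump peaking near 90 -- so don't take the specific numbers seriously; the point is the shape of the answer.

import math

def marginal_revenue(win, hump_height):
    # $MM of extra revenue from this particular win (made-up curve).
    return 0.75 + hump_height * math.exp(-((win - 90) / 5.0) ** 2)

def profit_by_stopping_point(price, hump_height, start=60, end=105):
    # Cumulative profit of buying every win from `start` up to each total.
    profit, profits = 0.0, {start: 0.0}
    for win in range(start + 1, end + 1):
        profit += marginal_revenue(win, hump_height) - price
        profits[win] = profit
    return profits

for hump_height in (3.0, 6.0):
    profits = profit_by_stopping_point(price=2.0, hump_height=hump_height)
    print(hump_height, max(profits, key=profits.get))   # 60, then 96

# Either the hump isn't worth the money-losing wins in the 70s (stop at the
# bottom), or it is (buy all the way past the peak) -- never stop in between.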

But, wait! We can repeat the logic for a smaller-market team. We don't have Silver's curve for that smaller-market team, but we can guess that, although the curve won't be as high, it will probably be the same shape. And, therefore, the same logic should convince you that the breakeven point for the small-market team has to be somewhere between 90 wins and 95 wins. Just go through exactly the same logic. The 61st win is still a money-loser for the small-market team -- it costs $2 million but earns it less than $750,000. The same for the 62nd win, 63rd, and so on. Furthermore, from the shape of the curve, we can tell that if a win becomes profitable anywhere between 61 and 90, it is profitable all the way up to 90. So if the team is willing to buy the 61st win, it has to be willing to buy all wins up to 90.

So, in summary, every team in MLB should either stop before 60 wins, or buy at least 90 wins.

Does that happen? I don't think it does. If it did, you'd see a two-humped distribution of actual wins in the major leagues: one group around 60 or less, and another in the low 90s. I don't think that's the case. I think you see a lot of teams in the mid-to-high 60s and 70s.

So what's wrong?

I think what's missing from Silver's graph is what I suggested in the previous post: that the first few wins are very valuable, because they convince the fans and the world that the team is a reasonable contender. They keep the fans' faith.

Nate Silver has the 61st win worth only $750,000. I think it's worth more than that. I think that if you were to stop buying players when you got to 60-102, you'd be a laughingstock. There would be calls for salary floors and cancelling of revenue sharing and other rule changes. Even the big-market teams would complain -- it reflects badly on the Yankees' credibility when they get to 100 wins by beating up on teams that don't care about signing players.

Let's adjust Nate's graph to give lots of value to the wins at the bottom of the scale. While I'm doing that, I'll adjust it to account for inflation, and to spread out the 80-100 hump to reflect luck. That gives us something like this:



You can quibble about lots of things here ... maybe the straight line on the left needs to be curved, and maybe the hump needs to be a little taller or shorter or less steep. And the dip at 80, maybe you could argue that should be at 77 or so. But just concentrate on the basic shape.

What does it make sense for the average team to do now? Well, suppose wins cost $4 million. Now, it's obvious the team should buy all the wins from 60 to 70. They're all above the $4 million mark -- some of them far above the $4 million mark -- so they're profitable. But the 71st win costs $4 million, and returns only about $3.9 million. So there's an argument to be made that the team can logically stop at 70 wins.

But wait! There's a bunch of money to be made if the team keeps buying wins and winds up somewhere in the playoff hunt. If it does that, it has to lose money from 71 to 83, but then wins are worth more than $4 million right up to 96 wins. (The 97th win costs $4 million but results in only $3.5 million in revenue, so it's not worth buying.)

So the logic is that maybe the team should stop at 70 wins, or maybe it should keep going all the way to 96 wins.

What gives the team more profit? That depends on the shape of the curve. I've redrawn the curve with two areas filled in: red for loss, and green for profit:


If the team buys wins 71 to 96, the first 13 or so wins give it a loss, the red area. The next 13 or so give it a profit, the green area. Which is bigger? You can do the math to find out, but, to me, they look about the same. And so the team should be reasonably indifferent to which strategy it chooses.

(By the way, none of these conclusions change much even after you consider that some players are cheaper than free agents (being slaves or arbs). But more on that in a future post.)

That was an average team. But, now, a small-market team, that's different. The curve will be lower everywhere, and so the red area will be a lot bigger than the green area, something like this:



The loss from 66 to 85 is huge, much bigger than the subsequent maximum profit from 85 to 95. So, obviously, this team will stop at 66 wins.

And here's a high-revenue team:


This is like the Yankees ... it'll obviously buy wins all the way to 96, every year. The green area is obviously much bigger than the red area.

So that's my theory, that wins are valuable when you're very low in the standings, and that's what drives small-market teams to at least try to buy *some* wins, rather than lose 110 games every year. Those teams should finish low in wins. Average teams might sometimes find it most profitable to stop in the 70s, and sometimes find it most profitable to try to win 90 or more. And large-market teams should be shooting for 90+ wins every year.

If this theory is roughly correct, what should we observe?

1. High-revenue teams buy lots of wins, low-revenue teams buy few wins.

2. There will be teams who buy low numbers of wins, teams that buy medium-low numbers of wins, and teams that buy high numbers of wins. There should be no teams who try to buy between 81 and 89 wins, because, no matter what your team, 90 wins is always more profitable than anything in the 80s.

3. Notwithstanding that, the curve is close to horizontal around 90, so teams might decide to buy 87 wins instead of 93, because the extra wins require more investment but produce not that much more profit (look at 87-92 on the Yankee curve -- you could shave that off the green area and barely even notice).

4. Because, for many teams, both 70 and 90 wins are almost the same in terms of profit, some teams might find it advantageous to switch from being a bad 70 win team to an excellent 90 win team, and vice-versa.

5. Hardly any teams should find it most profitable to aim to lose 100 games or more, or to win 100 games or more.

I think all these are roughly consistent with what actually happens in MLB. The biggest discrepancy is that for #2 and #3, Bill James once observed that if you plot a graph of actual wins, the peak is not at 81-81, as you might expect, but, rather, at 83 or 84 wins. That's the area where we expect teams NOT to live. But, that Bill James study was 25 years ago, and salaries were much smaller then, compared to revenues, so things might have changed since then.

Also, there are things this analysis doesn't take into account. For instance: the excitement of a wild card race is good for revenues, regardless of whether the team actually winds up making it. That would mean that the 80-100 hump would be skewed, perhaps with the peak being at 87ish instead of 90, and a very sharp dropoff after that. That would help explain why there are many teams with actual wins in the low-80s. (And I do think the left side of the graph should be curved instead of straight, flattening out in the 60 range, but that doesn't change any of the conclusions, really.)

So in order to get everything to work out better, you want to ask: what does the curve *actually* look like? I'm sure that if you did some math, or just trial and error, you might be able to come up with a curve that would better match reality than what I've done here.

In any case, it seems to me that this is roughly the right answer for what goes on in real life. But I might be wrong -- as always, let me know if my logic is missing something.



Tuesday, April 20, 2010

Why teams pay equal prices for free agents

Big-market teams like the Yankees will sign more good free-agent players than small-market teams like the Royals. That's because wins attract more fans, and, also, existing fans get more excited and spend more money with the team. It seems like the Yankees, who have more fans, will get more benefit from the extra wins than (say) the Royals will. More fans to spend money means more dollars.

Now, suppose a new free agent becomes available, and he's good enough for 3 WAR (wins above replacement -- that is, a team that gets this player will win 3 more games than if they had the best available minimum-salary player in his place). Who will bid most for him?

Before I thought it through, it just seemed like it would be the Yankees, since his wins are worth more to them than to anyone else. But, after I thought about it a bit, prompted by a discussion over on Tango's site, I started to think that's not true. I realized that players go for the same price, regardless of which team signs them.

I think if you look at the empirical evidence, you'll find that's true; if you divide a free agent's salary by his projected performance, I'd bet you'd find it hovers around the same number ($4.5 million per win? I gotta ask Tango what the current number is), independent of which team signs him. There might be fluctuations, but I'd bet that there wouldn't be a huge difference between teams in the top half of the league and the bottom half of the league.

And that makes sense, if you think about it. It's not just players who are more valuable to the Yankees: it's everything. Baseball gloves, say. Obviously, Derek Jeter's glove is more important than Yuniesky Betancourt's glove: without a glove for Jeter, he can't play: and the Yankees lose a lot more money with Jeter sitting out gloveless than the Royals do with Betancourt sitting out gloveless. But both teams pay about the same amount for the actual glove. The same is true for everything a team uses: jet fuel, bus service, the food served in the clubhouse after the game, and so on. Why would wins be any different?

Anyway, once I thought about it a bit, I came to the conclusion that wins are like any other product they talk about in economics. Here's my logic, which won't be too surprising to economists, if I got it right. In fact, I'm writing it out mostly to get it straight in my own mind.

----

I'm going to start with a bunch of simplifications, which won't affect the argument much. I'll come back to some of them later. Those assumptions are:

-- Every team is about to sign 25 players to one-year free-agent contracts
-- Every team has the same accurate projection for every player
-- Every team has good information about the revenue projections for every other team
-- Every team cares only about wins, and not about the personalities of the players
-- Every team pays $0 for replacement players

Suppose a team, say, the Phillies, is trying to figure out how much to spend on players. They start by writing down revenue projections for each number of WAR they might buy. If they buy 0 WAR, they will finish with 47.666 wins (I chose this number arbitrarily for convenience -- it's reasonably close.)

But if they buy 0 WAR, the fans will revolt -- they'll see their team not spending any money at all. There will be a big scandal, and the team will get in trouble. It would be such a bad situation that it works out as if the team got no revenue at all.

0 WAR -- $0 revenue

If they buy 1 win, or 2 wins, or 10 wins, it's the same thing. Suppose the team figures that the lowest realistic option is to buy 20 wins (to win 68 games). That will net them $100 million in revenue. (I'm making that number up, as well as all other numbers in this post.)

20 wins -- $100MM

They also figure that the 21st win will draw an additional $7 million out of the fans' pockets:

21 wins -- $107 MM

And 22 wins another $7 million, and so on, with diminishing returns. Once their business analysts have done their projections, they wind up with a chart like this. I've left off many of the rows, to keep things easier to read:

20 wins -- $100 MM
21 wins -- $107 MM
22 wins -- $114 MM
23 wins -- $121 MM
30 wins -- $158 MM
35 wins -- $180 MM
40 wins -- $206 MM
41 wins -- $210 MM
45 wins -- $225 MM
50 wins -- $235 MM
51 wins -- $237 MM
52 wins -- $238 MM
60 wins -- $240 MM

So the Phillies can buy 20 wins, and make $100 million, or they can buy 60 wins, finish at 108-54, and make $240 MM. Which option is best? Well, it depends how much it costs to buy those wins.

Suppose wins are $3 million each. Then the Phillies can make a chart, which I'm going to call "Chart 1" because I might come back to it in a later post:

Chart 1 -- Hypothetical Phillies Business Analysis

Wins Revenue Salaries Profit
----------------------------------------
20 .. 100 ..... 60 ... 40
21 .. 107 ..... 63 ... 44
22 .. 114 ..... 66 ... 48
23 .. 121 ..... 69 ... 52
30 .. 158 ..... 90 ... 68
35 .. 180 .... 105 ... 75
40 .. 206 .... 120 ... 86
41 .. 210 .... 123 ... 87
45 .. 225 .... 135 ... 90
50 .. 235 .... 150 ... 85
51 .. 237 .... 153 ... 84
52 .. 238 .... 156 ... 82
60 .. 240 .... 180 ... 60

And so the Phillies conclude: if wins wind up costing $3 million each, we're best off if we buy 45 of them. They'll make $225 million in revenues, pay $135 million in salaries, and pocket $90 million profit.

But: what if wins wind up costing $4 million each? They repeat the above chart, updating the "cost" and "profit" columns, and discover that they make the most profit when they buy 40 wins: that's revenue of $206 million, salaries of $160 million, for profit of $46 million.

What if wins are $5 million? Then the best thing to do is buy 30 wins. Revenue $158MM, salaries $150MM, profit $8 MM. Any other number of wins leads to less profit.

What if wins are $1.1 million? Then the highest profit probably occurs at about 51 wins.

So the Phillies make a second chart:

Cost of Wins ... # Of Wins We Should Buy
---------------------------------------------------------
$1MM ................ 51
$3MM ................ 45
$4MM ................ 40
$5MM ................ 30
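
If you want to play along, here's a sketch that generates that second chart directly from the revenue projections above. Since I left out most of the rows, it only considers the win totals I listed, but the idea is the same.

# Hypothetical Phillies projections: wins of talent bought -> total revenue ($MM).
revenue = {20: 100, 21: 107, 22: 114, 23: 121, 30: 158, 35: 180, 40: 206,
           41: 210, 45: 225, 50: 235, 51: 237, 52: 238, 60: 240}

def wins_to_buy(price_per_win):
    # Profit at each win total = revenue minus (wins bought times the price);
    # buy the number of wins that maximizes it.
    profit = {w: rev - w * price_per_win for w, rev in revenue.items()}
    return max(profit, key=profit.get)

for price in (1, 3, 4, 5):
    print(price, wins_to_buy(price))   # 51, 45, 40, 30 -- matching the chart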

They repeat this process for every value that makes sense ... like $3.2 million, or $2.7 million, and so on, and put the results on a graph:


Now, the Yankees do the same thing, and come up with their own curve. You can assume that the Phillies know the Yankees curve, or that they don't -- it doesn't matter much. The Yankees curve will be higher, because their higher revenue makes it desirable for them to buy more wins:



And here's a chart for how many wins will be purchased by the Phillies and Yankees combined. For this chart, all I've done is add up the numbers for the two teams. For instance, how many wins will the two teams buy in total if wins cost $3MM each? Well, looking at the above chart, the Phillies will buy 45, and the Yankees will buy 60, for a total of 105.



The Phillies now repeat the process for the other 28 teams. They take the 30 curves, and add them up to create one composite curve. That might look something like this:




So, if wins are $1MM each, the 30 teams will try to buy 1350 of them. If they're $6MM each, they'll try to buy only 400 of them. (Reminder that I'm making these numbers up; they're probably not realistic.)

But what will the price actually be? Well, it turns out that we know for sure that there are exactly 1,000 WAR available for purchase. How do we know that? We know (well, we assumed) that a replacement-level team will win 47.66 games. But an average team must win 81 games. The difference is 33.33 WAR per team. Multiply that by 30 teams, and we get exactly 1,000 wins.

So when will teams choose to buy exactly 1,000 WAR? From the graph, we see that happens when wins cost exactly $4MM each. And so, teams bid up the price of free agents exactly to 4 million dollars.

No free agents will go for $3 million a win, because then teams would want to buy 1150 WAR when only 1000 are available, and the price will be bid up. Wins can't go for $5 million, because then teams would want to buy only 800 WAR when 1000 are available, and the remaining free agents would be knocking down GMs' doors offering to sign for less.

The equilibrium price, in our example, is $4 million, and that's what wins will go for.
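
Here's that market-clearing step in code, using the made-up demand numbers from this post and the fixed supply of 1,000 WAR:

import numpy as np

# Made-up league-wide demand curve: price per win ($MM) -> total WAR demanded.
prices   = np.array([1.0, 3.0, 4.0, 5.0, 6.0])
demanded = np.array([1350, 1150, 1000, 800, 400])
supply = 1000   # 30 teams times 33.33 WAR above replacement

# np.interp needs an increasing x-axis, so interpolate price as a function
# of quantity demanded (which falls as the price rises).
clearing_price = np.interp(supply, demanded[::-1], prices[::-1])
print(clearing_price)   # 4.0 -- the equilibrium price per win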

------

But still, it seems weird that every team pays the same amount per win. Aren't wins worth more to the Phillies than the Royals?

Yes, the *average* win is worth more to the Phillies than the Royals. The *Nth* win is worth more to the Phillies than the Royals. But the *marginal* win is worth almost exactly the same.

That is: the Royals may only have bought 20 wins. Why didn't they buy a 21st win? Because that 21st win cost $4 million, and the increase in revenue they'd get from it would be worth less than $4 million: maybe $3.9 million.

The Phillies *did* buy a 21st win: as you can see from the first chart, the 21st win was worth $7 million to them: it bumped their revenue up from $100 million to $107 million.

The 21st win IS worth more to the Phillies than the Royals. And the 22nd, and the 23rd, and probably almost every number.

But the big-market teams will keep buying wins even after the small-market teams have stopped. The Phillies will keep buying until they hit 40 wins -- from the chart, we see that the 41st win is worth exactly $4 million (bumping revenue from $206MM to $210MM). (Actually, you'll have to pretend that I rounded, and that win is worth only $3.95 million: that's why the Phillies didn't spend $4 million on it.)

The Phillies' marginal win -- their 41st -- is worth about the same as the Royals' marginal win -- their 21st.

And the same is true for every team. The last win they chose to buy was worth more than $4MM, or they wouldn't have bought it. And the next win they chose not to buy was worth less than $4MM, or they *would* have bought it. When you consider that teams can buy fractional wins, you can simplify to say that the teams all buy fractions of a win until the next fraction gains them revenue that exactly matches the cost ($4MM per win).

That means: suppose a 1 WAR player retires just after every team fills its roster, and another 1 WAR player comes out of retirement to replace him. Who will bid highest on the new player? It will probably be the team that lost the original player, even if it's a poor team like the Royals. The other 29 teams gain less than $4 million in revenues if they pick up the player -- we know that because they chose not to buy any more $4MM wins than they already did. But the team that lost the player, the team that's now 1 WAR short of where they want to be, values the player at more than $4 million. We know that because they chose to buy the original (now retired) player for $4 million in the first place.

Of course, in real life, teams don't go to that many decimal places. In that case, every one of the 30 teams is roughly equally likely to sign the new player. We know that the last WAR each team signed was worth $4MM to them ... so the *next* WAR probably isn't worth much less than $4MM. And so we shouldn't be surprised if the Royals are just as likely to sign the new guy as the Yankees are.

-----

I guess I'll stop here for now ... next post I'll try to explain what I think happens when you eliminate some of the oversimplifications. What happens when not everyone is a free agent? What about arbs and slaves? What happens when some players have marquee value beyond their wins? What happens when teams can't really give a pinpoint forecast of their wins (which is always)? What about wins being worth more around the playoff bulge (85-95 wins, say)? What if teams have no idea what other teams' revenue curves look like? What if certain players have extra fan value to only one team because of special circumstances? What if teams are risk-averse? [Spoiler: I don't think all that much happens differently even when you account for all these things.]

Most of this stuff you can figure out with a bit of logic ... I make no claim to having huge expertise here. Those of you who know more economics than I do, let me know if I got anything wrong so far.



Sunday, April 18, 2010

Does a cricketer's career really depend on luck?

A couple of days ago, the "Freakonomics" blog, and several others, quoted a study, based on cricket, that purported to show that random luck in your first job can have a big impact on your career. The paper is called "What Can International Cricket Teach Us About the Role of Luck in Labor Markets?" and it's by Shekhar Aiyar and Rodney Ramcharan.

It's probably true that luck plays a big part in your work life, but I don't think the study actually shows that.

Here's what the authors did (and apologies in advance if I get some of the cricket terminology wrong). They figured out home field advantage in a player's cricket batting average (runs per out made), and found that the average batter hits for about 25% more runs at home than on the road. Given that's the case, then decision-makers should take that into account when evaluating players. Obviously, a batter who hits for 30 runs on the road likely has more ability than a batter who hits for 30 runs at home.

Now, consider a batter who's playing in an elite "test" cricket match for the first time. Some portion of batters, after their debut match, will be dropped from the team -- presumably those who didn't bat well. (It turns out that figure is about 25% of first-time batters being immediately dropped.)

Obviously, the worse a player bats, the more likely he'll be dropped. But managers should take into account whether it's a home match or a road match. All things being equal, a batter who hits for X runs at home should be more likely to be dropped than a player who hits for X runs on the road.

The authors do a regression to predict the probability of being dropped, based on the player's runs in that match, a dummy for whether it was a home or road match, and an interaction term (the dummy times the number of runs). It turned out that both the dummy and the interaction term were not statistically significant.

Therefore, the authors concluded, managers neglect to take home field advantage (HFA) into account when evaluating players -- they just look at runs. Therefore, a player's career is strongly affected by random chance -- the luck of whether his first match happened to be at home (where he gets a better chance of making the team) or on the road (where he gets a worse chance of making the team).

----

Except that ... the regression does NOT show that the manager ignores HFA, not at all. The regression equation the authors found (Table 9, column 2) was

Chance of being dropped = -0.0043 * (runs) + .000356 * (runs) if at home + .0527 if at home

Now, suppose a batter hits for 10 fewer runs than average. If he does that on the road, his additional chance of being dropped is

0.0043 * 10 = 4.3%.

But suppose he does that at home. His additional chance of being dropped is:

0.0043 * 10 + .000356 * 10 + .0527 = 9.9%.

Doesn't that seem like a reasonable adjustment for HFA? I think it does. I'm not sure what an average batter hits for ... say, 35 runs? That means that if a player hits for 25 runs at home, his chance of being dropped goes up by about 10 percentage points; if he hits for 25 runs on the road, it goes up by only about 4. What's wrong with that?

What the authors would say is wrong with that is that the last two terms weren't statistically significant, so we have to drop them. To which I say, nonsense! They look almost exactly the way you'd expect them to look if your prior is that managers aren't dumb. If they're not significant, it's because you don't have enough data!

Looking at it in a different way: the authors chose the null hypothesis that the managers' adjustment of HFA is zero. They then fail to reject the hypothesis.

But, what if they chose a contradictory null hypothesis -- that managers' HFA *irrationality* was zero? That is, what if the null hypothesis was that managers fully understood what HFA meant and adjusted their expectations accordingly? The authors would have included a "managers are dumb" dummy variable. The equations would have still come up with 4% for a road player and 10% for a home player -- and it would turn out that the "managers are dumb" variable would not be statistically significant.

Two different and contradictory null hypotheses, neither of which would be rejected by the data. The authors chose to test one, but not the other. Basically, the test the authors chose is not powerful enough to distinguish the two hypotheses (manager dumb, manager not dumb) with statistical significance.

But if you look at the actual equation -- which shows that home players are about twice as likely to be dropped as road players for equal levels of underperformance -- it certainly looks like "not dumb" is a lot more likely than "dumb".

----

It's like this: suppose I want to sell lots of lottery tickets. So I claim that your chances of winning the Lotto 6/49 jackpot are 1 in 1000. Mathematicians and experts all say that I'm wrong, that the odds are really 1 in 13,983,816. But I don't think that's right, and I have a study to back me up!

I randomly took 500 ticket buyers, and, it turns out, none of them won the jackpot. But I ran an analysis on that dataset anyway. And you know what? I found that if the odds truly were 1 in 1000, the chance of nobody winning the jackpot would be 60%. That's really insignificant, not even close to the 5% level required to reject the null hypothesis that the chances are 1 in 1000!

What's the flaw in my logic? Well, technically, there isn't one: it's actually true that the data don't permit me to reject the 1 in 1000 hypothesis. But the data also don't permit me to reject the null hypothesis that the chances are 1 in 10000, or 1 in 100,000, or 1 in 1,000,000, or 1 in 10,000,000, or 1 in 13,983,816, or 1 in 1,000,000,000,000,000,000! Why should I specifically focus on 1 in 1000? Only because I want that one to be true? That's not right.

What I should have done, and what the authors should have done, is calculate a confidence interval. My confidence interval would be 1 in (168, infinity), and the reader would see that, even though 1 in 1000 is in the interval, so is the much more plausible result of 1 in 13,983,816.
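
For the record, here's the arithmetic behind both numbers -- the 60 percent figure and the "1 in 168" end of the interval -- in a few lines of Python:

# Chance that none of 500 random buyers wins, if the odds really are 1 in 1000
print((1 - 1/1000) ** 500)          # about 0.61 -- the 60% above

# Upper 95% confidence bound on the win probability, given 0 winners in 500:
# the largest p for which "nobody won" still has at least a 5% chance.
p_upper = 1 - 0.05 ** (1 / 500)
print(1 / p_upper)                  # about 167 -- the "1 in 168" above, roughly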

If the authors of this study had done that, they would have noticed that their confidence interval, which included "managers ignore home field advantage completely", also included "managers are perfectly rational." Not only is the "rational" hypothesis more plausible than the "dumb" hypothesis -- it sure does seem to fit the authors' data a lot better.


Friday, April 16, 2010

The flaw in the Scully pay/performance regression

In 1974, Gerald Scully published an academic article called "Pay and Performance in Major League Baseball." (Here's a Google search that finds a copy on David Berri's site.) It's a very famous paper, because it reached the conclusion that, in the pre-free-agent era, teams were paying players far, far less than the players were earning for their employers.

It was also one of the first academic papers to try to find a connection between individual performance and team wins. Unfortunately, Scully chose SLG and K/BB ratio as his measures of performance, but, I suppose, those probably seemed like reasonable choices at the time. But it turns out there's a much more serious problem.

In his set of variables to use in predicting winning percentage, the set of variables that included SLG and K/BB, Scully also included dummy variables for how far the team was out of first place. That is, in trying to predict how well a team did, Scully based his estimates partially on ... how well the team did!


That biases the results so much that I can't believe nobody's mentioned it before ... at least they haven't in all the mentions I've seen of this study.

If it's not obvious why that's the wrong thing to do, let me try to explain.

Suppose you predict that, this season, your favorite team will slug .390 and have a K/BB ratio of 1.5. What will its winning percentage be?

Well, if Scully had used only SLG and K/BB in his regression, it would be easy to figure out: you just take his regression equation, which would look something like

PCT = (a * SLG) + (b * K/BB) + c (if an NL team) + d


Plug in Scully's estimates for a, b, c, and d, plug in .390 and 1.5, and there you go -- your estimate.

But Scully's actual equation included those two extra terms:

PCT = (.92 * SLG) + (.90 * K/BB) - 38.57 (if an NL team) + 37.24 + 43.78 (if the team finished within 5 games of first) - 75.64 (if the team finished more than 20 games out)

So now how do you calculate your team's expected PCT? You can't! Because you don't know whether to include those last two variables. After all, how can you predict in advance whether your team will wind up having finished near the top or the bottom? You can't! If you could, you probably wouldn't need this regression in the first place!

-----

Not only does the regression not make sense, but, more importantly, by including those two dummy variables, Scully's estimates of productivity wind up completely wrong. For instance: what is the effect of raising your SLG by 10 points? Well, that depends. Keeping all the other variables constant, it's .92 * 10, or 9.2 points. That's .0092, or about 1.5 wins in a 162-game season.

But wait! Those dummy variables for standings position won't necessarily stay constant. What if those 1.5 wins lifted you from 21 games back to 19.5 games back? In that case, the equation would give 9.2 for the SLG, but an extra 75.7 for the change in the dummy, for a total of 86.9 points! And what if they lifted you from 6 games back to 4.5 games back? In that case, the equation would estimate an extra 43.8 point bump, for a total of 53.0!

So what's the benefit of an extra 10 points slugging on the team's winning percentage?

--> 9.2 points -- for a team 22 or more games out
--> 84.8 points -- for a team 21.5 to 20 games out
--> 9.2 points -- for a team 19.5 to 7 games out
--> 53.0 points -- for a team 6.5 to 5 games out
--> 9.2 points -- for a team less than 5 games out

Presented like that, it makes no sense. You can't figure out how much the player's productivity is worth unless you know which of the five groups the team is in. But which group the team is in is exactly what you're trying to predict!

In any case, it's obvious that using 9.2 points as the measure of the player's increased productivity is wrong. It's *at least* 9.2 points, but sometimes substantially more. You need to average all five cases out, in proportion to how often they'd occur (and how can you know how often they occur without further study?). When you do that, you'll obviously get more than 9.2 points. But, as far as I can tell, Scully just used the SLG coefficient as his measure -- the player only got credit for the 9.2 points! And so he *severely underestimated* how much a player's performance helps his team.
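To see how much difference that averaging makes, here's a quick Python sketch. The group frequencies are made-up numbers, purely for illustration -- the real weights would have to come from actual standings data -- but any reasonable set of weights lands above 9.2:

# (benefit in points, made-up frequency of being in that group)
cases = [
    (9.2, 0.10),    # 22 or more games out
    (84.8, 0.05),   # 21.5 to 20 games out: 9.2 plus the 75.64 dummy jump
    (9.2, 0.55),    # 19.5 to 7 games out
    (53.0, 0.10),   # 6.5 to 5 games out: 9.2 plus the 43.78 dummy jump
    (9.2, 0.20),    # already within 5 games
]

expected = sum(points * freq for points, freq in cases)
print(round(expected, 1))   # 17.4 points with these weights -- almost double 9.2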

------

Here's an example that will make it clearer. Suppose a lottery gives you a 1 in a million chance of winning $500,000. Then, if you do a regression to predict winnings based on how many tickets you buy, you'll probably get something close to

Winnings = 0.5 * tickets bought

Which makes sense: a 1 in a million chance of winning half a million dollars is worth 50 cents.

But now, suppose I add a term that says whether or not you won. Now, I'll get

Winnings = 0.0 * tickets bought + $500,000 (if you won)

That's true, but it completely hides the relationship between the ticket and the winnings. If you ignore the dummy variable, it looks like the ticket is worthless!

Same for Scully's regression. By including part of the "winnings" for having a good SLG or K/BB "ticket" in a different term, he underestimates the value of the "ticket".
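If you want to watch the masking happen, here's a small Python simulation. I've scaled the lottery down -- a 1-in-1,000 chance per ticket of winning $500, the same 50-cent expected value -- so you don't need millions of simulated buyers to get a few winners:

import numpy as np

rng = np.random.default_rng(0)

n = 200_000
tickets = rng.integers(1, 11, size=n)            # each buyer holds 1 to 10 tickets
won = rng.random(n) < tickets / 1_000.0          # 1-in-1,000 chance per ticket
winnings = np.where(won, 500.0, 0.0)

def ols(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

ones = np.ones(n)

# Regression 1: winnings on tickets only. The slope comes out near $0.50.
print(ols(np.column_stack([ones, tickets]), winnings))

# Regression 2: add the "won" dummy. The ticket coefficient collapses to zero,
# because the dummy soaks up the entire payoff.
print(ols(np.column_stack([ones, tickets, won.astype(float)]), winnings))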

------

So, since Scully's conclusion was that players are underpaid for their productivity, and Scully himself had underestimated that productivity ... well, the conclusion is completely unjustified by the results of the study. It may be true that players were underpaid -- I think it almost certainly is -- but this particular study, famous as it is, doesn't even come close to proving it.

UPDATE: As commenter MattyD points out (thanks!), I got it backwards. For the previous paragraph, I should have said:

However, Scully's conclusion on pay and productivity still holds. The study underestimated player productivity, but, if it found that players are paid less than even that underestimated production, it's certainly true that they are underpaid relative to their true production.



Friday, April 09, 2010

JSE: Zimbalist on salaries and revenues

The second article in the February, 2010 issue of "Journal of Sports Economics" is "Reflections on Salary Shares and Salary Caps," by Andrew Zimbalist.

It's an overview of the recent history of revenues and salaries in the four major sports. I say "overview" because Zimbalist implies there's a lot of detail that went into the numbers that wouldn't fit in the article. Zimbalist starts off by quoting some incorrect numbers that appeared in the press, and says,

"... it helps to get the numbers right before plowing ahead. ... Special knowledge of the real world, even though it may sometimes be proprietary or may entail diligent digging, can help sharpen and deepen research by sports economists ..."


The main question in the paper is: what percentage of revenues is paid to the players in salaries? Zimbalist touches on some of the issues involved in figuring that out. In three of the sports (MLB is the exception), there's a salary cap that's based on a percentage of revenues. You'd think it would be as simple as looking at the union agreements to see what the percentages are. But what counts as revenue? The details are different for the different leagues, as defined in their respective contracts. For instance, the New York Knicks and the MSG network that broadcasts their games are both owned by the same company (Cablevision). In order to avoid having MSG pay a too-low price for the broadcast rights (thus artificially lowering the Knicks' revenues), the contract contains a clause that values the TV contract at the same price as the Lakers' contract (which is a transaction between unrelated parties).

Also, NHL revenues are defined to include the value of complimentary tickets; NBA revenues are not. And so forth.

Anyway, here are the numbers, as Zimbalist calculates them:

In the NFL, salaries from 2001-2006 fluctuated in the range of 54 to 60 percent of total revenues. Before that, from 1994 to 2000, they were higher, between 60 and 65% in all seasons but one.

In the NBA, salaries from 2001 to 2006 were 57% of "basketball-related income" in all seasons but one (60% in 2002-03). In the six preceding years, they ranged from 53% to 65%.

Zimbalist doesn't give data for the NHL, perhaps because the salary cap is so recent. But the agreement calls for salaries to comprise 54% to 57% of revenues, with the higher numbers applying when revenues are high.

Finally, for MLB, Zimbalist's numbers fluctuate a fair bit. Here they are from 1990 to 2007:

1990-94: 42%, 47%, 54%, 54%, 63%
1995-99: 62%, 58%, 59%, 56%, 59%
2000-04: 56%, 61%, 67%, 63%, 55%
2005-07: 53%, 51%, 51%

Those numbers don't include minor-league salaries. If you add those in, the 2007 figure rises from 51% to 57% (since MLB pays minor league salaries but doesn't participate in minor-league revenues). The same principle holds for the NHL, but to a much lesser degree (since there are fewer minor-league players, and some NHL teams own their affiliates outright). With minor leagues included, the NHL ratio rises to 58%.
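The arithmetic behind that adjustment is simple: the minor-league payroll gets added to the numerator, but there's nothing to add to the denominator. These dollar figures are round, hypothetical numbers chosen only to reproduce the percentages above -- they're not Zimbalist's actual data:

mlb_revenue = 6_000_000_000
mlb_salaries = 3_060_000_000       # 51% of revenue
minor_lg_payroll = 360_000_000     # paid by MLB clubs, with no offsetting revenue

print(round(mlb_salaries / mlb_revenue, 2))                       # 0.51
print(round((mlb_salaries + minor_lg_payroll) / mlb_revenue, 2))  # 0.57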

If you're interested in some of the issues behind these estimates, I definitely recommend Zimbalist's article. There aren't a lot of hardcore details, but there is a discussion of some of the many issues that have to be considered to get accurate numbers.

-----

Generally, it looks like all four sports have about the same ratios, between 55 and 60 percent. Zimbalist expresses a bit of surprise at this, since MLB doesn't have a salary cap, while the other three leagues do. You might have expected the MLB ratio to be higher, because of that, but it doesn't work out that way.

I guess it shouldn't be that much of a surprise -- if the MLB ratio was too much higher, the teams would be demanding a cap, and that doesn't seem to be the case. To me, logic seems to suggest that MLB should be the healthiest of the four leagues: it spends the same ratio of revenues on player compensation, but the big-market teams pay more, and the small-market teams pay less. This lets all the teams make a decent profit, and puts the best teams where there are the most fans.

With a cap, the Yankees would have to spend the same as everyone else. They'd be an average team, and, since the Yankees are the biggest market, their revenues would drop more than the revenues of other teams would rise. And so the league as a whole would be worse off -- the Yankees would make less, and, with a salary floor like in the NHL, lots of small-market teams might start losing money.

Except ... well, as I wrote before, I wonder if, in the long term, the fans will be willing to put up with a system that virtually guarantees the Yankees and Red Sox so many more pennants than the Royals and Marlins. I guess time will tell. For my part, I much prefer the long-term competitive balance promised by the other three leagues.




Tuesday, April 06, 2010

JSE: Rodney Fort on sabermetrics and salary data

The February, 2010 issue of the "Journal of Sports Economics" contains eight articles, all of which are on topics of interest to sabermetricians. I'll try to review them all over the next couple of weeks. [2015 update: well, that didn't happen!]

I've already talked about one of them: the one by David Berri and JC Bradbury, "Working in the Land of the Metricians." This post is on one of the other seven articles: "Observation, Replication, and Measurement in Sports Economics," by Rodney Fort.

Dr. Fort's research here shows that MLB salary data, as found on the internet and elsewhere, is less consistent than you might expect: different sources report noticeably different numbers. For instance, of the 13 sources of payroll data that Fort found, seven of them have data for 1999. Those estimates of average player salary are:

$1,567,873
$1,604,972
$1,569,000
$1,733,557
$1,926,565
$1,720,050
$1,609,418

The difference between the high and low is 23%, which is a lot. Fort guesses that some of the difference is due to the time of year the calculations were made, since rosters change continuously, and that seems like a reasonable explanation. (In 2009, for instance, the USA Today data shows 812 players, as opposed to the 750 players that would appear if only opening-day rosters were used.)
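(If you want to check the 23%, it's just the spread between the highest and lowest of the seven estimates, relative to the lowest:

estimates = [1_567_873, 1_604_972, 1_569_000, 1_733_557, 1_926_565, 1_720_050, 1_609_418]
spread = (max(estimates) - min(estimates)) / min(estimates)
print(round(100 * spread, 1))   # 22.9

That's Python, but it's really just arithmetic.)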

Still, for the years where there are multiple estimates, why does every one seem to be different? That's a very good question to be asking.

The article contains multiple (different) estimates for every year between 1976 and 2006. I suspect the discrepancies won't make a huge difference in most studies, but, still, accurate data is better than inaccurate data, and I didn't realize before Dr. Fort's article that the differences were so large.

-----

Fort's article serves as an introduction to his guest-edited "Special Issue on the Challenges of Working With Sports Data," and so he also talks about the sabermetric community and his personal interactions with us. One of his examples mentions me:


"I have had a couple of informative interactions at ... Phil Birnbaum's [blog] ... Recently, Birnbaum attempted to decompose team valuations from Forbes into cash flow value and "Picasso Value". The posts that followed were all of a mind on improving Birnbaum's estimation approach. I posted a suggestion that his approach did not include other monetary values of ownership (documented in some of my work). Thus, he ran the danger of "inflating" Picasso value. Birnbaum acknowledged the input but the flow of discussion did not change."


Dr. Fort has a point -- he did disagree with my post and with the commenters, and, indeed, the flow of discussion didn't change. In the first post, Fort wrote that Forbes' team values didn't include the value of tax breaks and corporate cross-overs (such as when a team sells broadcasting rights to its own parent company at a discounted rate, thus keeping the team's stated profits artificially low). I should have acknowledged that in a subsequent comment.

In the second post, Dr. Fort mentioned that again, and I asked him what his estimate of Picasso Value is in the light of his estimates of the cash flow value of owning a team. He argued that in the absence of hard evidence for consumption value (i.e., evidence that team owners are willing to accept less profit for the thrill of owning the team), we should assume that value is zero. There, I disagree -- even accepting Dr. Fort's estimates of other-than-operating-profit, I think there is still evidence of consumption value. But, I should have said that online at the time. Certainly, when an expert like Dr. Fort drops by with some relevant knowledge on an issue, we absolutely should acknowledge the contribution. My apologies to Dr. Fort that such was not the case.

-----

In another example, Dr. Fort talks about his experience at an American Statistical Association conference, where a bunch of academic statisticians had a session to debate how to figure out the best slugger in baseball history:


"The primary issue for nearly all in the room had to do with holding all else constant (dead ball era, end of the spitball, changes in rules ... and so on), not at all unfamiliar to economists. I interrupted the back-and-forth question and answer session and offered a normalized pay measure: the highest-paid slugger must surely be the best. The rest of the discussion held fast to its previous course ..."


Here, I'd argue that Fort's suggestion is not really answering the question that was asked. The statisticians are asking, what is the proper algorithm for calculating how good the player is? And Fort is saying, "don't bother! The team GMs already know the algorithm. If you want to know who the best player is, just look at their payrolls."

That's fair enough. But it's only natural that the statisticians won't be satisfied by that: they don't just want the answer, they want to know how to calculate it.

This narrative reminds me of an old story: A physics student sits down to an exam, and one of the questions is: "show how you can use a barometer to determine the height of a tall building." The student is stumped at first, but then writes: "Find the superintendent of the building, and say to him: 'If you tell me how tall your building is, I will give you this fine barometer.'"

-----

In both the introduction and conclusions, Fort comments on the issues raised by the Berri/Bradbury paper:

"Berri and Bradbury relate to us their extensive experience with the SABR-metric and APBR-metric communities ... There are tensions and jealousies aroused over the issue of proper credential and peer review. There is the issue of proper assignation of credit; when incorporating a technique or measurement developed outside the [academic] peer-review process, what to do? Finally, there is the issue of dealing online with these fellow travelers who often adopt the anonymity of pseudonyms. ...

"I cannot help but think that part of the tension with the metricians revolves around desires for immortality in the naming of a thing. This seems patently absurd to me ... In sports, the authors point this out for the case of Dobson's creation of batting average in 1872. I do not ever remember seeing anybody cite Gauss (1821) and Karl Pearson (1984) on the origins of the standard deviation or the subsequent coinage of the variance by Ronald Fisher (1918). And I doubt these authors took much umbrage over the fact that their inventions simply became household terms in record time. However perhaps, we are wrong in this; if someone grouses, perhaps we should respond."


Hmmm ... from my standpoint, I don't think naming is the issue at all. I agree with Dr. Fort here that it's perfectly reasonable to omit citations for results that have become household terms. We all talk about "DIPS" or "Runs Created" or "Linear Weights" without mentioning Voros McCracken or Bill James or Pete Palmer every time; that's just normal. And I don't think I've ever seen any sabermetrician anywhere demanding that his particular term be used, or that anything be named after him.

One of the things we *do* expect from the academic community is that they be aware of our own research and conventions, and, unless they disagree with the findings, that they show as much respect for them as they would if the research had been published academically. That's just common sense, isn't it? Take DIPS, for example. Do a couple of quick Google searches, and you should find at least 20 studies testing and confirming the hypothesis. But when Berri and Bradbury write about it, in the very same journal as Fort's article, what do they do? They mention Voros McCracken, and then do a quick regression and announce that this "supports" DIPS. Which is fine, but, if those twenty studies had been published academically, they wouldn't dare omit those citations. The omission gives the reader the (perhaps unintended) impression that the subject hasn't been studied yet.

It would have been very easy for Berri and Bradbury to have quoted existing research in addition to their own. They could have said (taking two of the many existing studies at random), "the DIPS theory was repeatedly tested and supported in numerous studies after McCracken's article; for example, Tippett (2003) and Gassko (2005). A simpler confirmation is that we now calculate the persistence over time of batting average on balls in play, and find it much lower than for other statistics."

They deliberately chose not to do that.
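(As an aside, the persistence check in that hypothetical paragraph is easy enough to sketch. The file name and column names below are assumptions for illustration -- you'd need a real season-by-season pitcher dataset, built from Retrosheet or the Lahman database, say:

import pandas as pd

# One row per pitcher-season: pitcher_id, year, babip, k_rate (hypothetical columns).
df = pd.read_csv("pitcher_seasons.csv")

# Shift the year back by one so that merging on (pitcher_id, year) pairs each
# season with the same pitcher's following season.
nxt = df.copy()
nxt["year"] = nxt["year"] - 1
nxt = nxt.rename(columns={"babip": "babip_next", "k_rate": "k_rate_next"})
pairs = df.merge(nxt, on=["pitcher_id", "year"])

# DIPS says BABIP should persist much less from year to year than strikeout rate.
print("year-to-year r, BABIP: ", round(pairs["babip"].corr(pairs["babip_next"]), 3))
print("year-to-year r, K rate:", round(pairs["k_rate"].corr(pairs["k_rate_next"]), 3))

That's the sort of one-afternoon check that the existing studies have already done more carefully.)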

Most interestingly, Dr. Fort writes,


"Do not get me wrong. The internet seems a place of great potential. Suppose the living Nobel Prize winners start a blog. An idea occurs, they debate it, the courageous among us try to contribute, and the discipline moves forward on that issue. Few would doubt the value ... "


And you know what? That's exactly what's happening! Of course, there's no Nobel in Sabermetrics, but the best and brightest minds in the field are already online doing exactly what Dr. Fort wishes the best economists would do. And that's why the field moves so fast.

With perhaps the exception of computer software development, I bet that there is no other field of human knowledge today in which you can see progress move so quickly in real time, where the best minds in the field publish so much excellent, rigorous work so reliably, where collaboration happens so easily and instantly, and where even an unknown can have something to contribute to the work of even the most experienced sabermetrician.

Fort continues,

"In the publish-or-perish world [of academic economists, we] all know why this has not happened yet. The best we get is editorials by Nobel Prize winners, occasionally scolding each other. And the worst we get is anonymous yelling at each other behind pseudonyms in blog discussion areas. .... This may be entertaining ... but I wonder how it will all play out in the rigorous pursuit of answers to sports economics questions."

Well, in economics, maybe; I'll take Dr. Fort's word for it. But in sabermetrics, the internet has indeed provided the ideal breeding ground for the rigorous pursuit of answers.

As Dr. Fort implies, the incentives facing academic economists are very different from those facing amateur sabermetricians, or even professional sabermetricians (who write books or work for teams). To avoid "perishing," and suffering in their career, academics are forced to write formally and rigorously, submit to peer review by two or three colleagues, wait several months (or more) before getting a response, and publish in journals where relatively few people will read their work.

In contrast, non-academic sabermetricians have different goals: a love for baseball, a dream of working for a ballclub, the recognition and status in the community, or, yes, even just the thrill of scientific discovery. To those ends, they have incentives to explain their ideas informally, publish online instantly, have their work peer-reviewed within minutes or hours, have all the best researchers in the field be aware of their research almost immediately, and maintain credibility in the community by engaging critics and acknowledging shortcomings in their work.

My prediction: over the next few years, non-academic sabermetrics will gain more and more credibility, as more and more of it comes out of major league front offices, and journalists continue to acknowledge it. In that light, academia will find it harder and harder to ignore the accepted, established and growing body of knowledge on what will appear to be flimsy procedural grounds. (Does anyone outside of academia care that Bill James' ideas weren't peer reviewed?)

My view is that the academics, with incentives to maintain their established and expensive barriers to publication, will never be able to keep up with the freewheeling, open-source dynamism of the non-academic crowd. Eventually, economists will come to the realization that progress in sabermetrics is best done by the non-academic community of sabermetricians, just as the invention of better computers is best done by the non-academic community of computer manufacturers. Academics will still publish papers of interest to sabermetricians, and influence their work, just as materials scientists and computer theoreticians publish work that's of interest to Apple and Google. But the bulk of sabermetrics itself will be seen as coming from non-academic sabermetricians, not academic economists.

It's not a knock against economists, who are certainly as capable of doing sabermetrics as anyone. It's just that academic processes are too rigorous and expensive for a world that moves so fast. Academia, I think, will stick to areas where it has a monopoly, like the more traditional areas of economics, or expensive fields like physics.

