Saturday, August 30, 2014

Is MLB team payroll less important than it used to be?

As of August 26, about 130 games into the 2014 MLB season, the correlation between team payroll and wins is very low. So low, in fact, that *alphabetical order* predicts the standings better than salaries!

Credit that discovery to Brian MacPherson, writing for the Providence Journal. MacPherson calculated the payroll correlation to be +0.20, and alphabetical correlation to be +0.24. 

When I tried it, I got .2277 vs. .2346 -- closer, but alphabetical still wins. (I might be using slightly different payroll numbers, I used winning percentage instead of raw win totals, and I may have done mine a day or two later.)

The alphabetical regression is cute, but it's the payroll one that raises the important questions. Why is it so low, at .20 or .23? When Berri/Schmidt/Brook did it in "The Wages of Wins," they got around .40.

It turns out that the season-by-season correlation has shifted quite a bit over time, and MacPherson draws a nice graph of that, for 2000-2014. (I'll steal it for this post, but link it to the original article.)  Payroll became more important in the middle of last decade, but then dropped quickly, so that 2012, 2013, and 2014 are the lowest of all 15 years in the chart:

[Graph from MacPherson's article: payroll/wins correlation by season, 2000-2014]

What's going on? Why has the correlation dropped so much?

MacPherson argues it's because it's getting harder and harder to buy wins. There is an "inability of rich teams to leverage their financial resources."  The end of the steroids era means there are fewer productive free-agent players in their 30s for teams to buy. And the pool of available signings is reduced even further, because smaller-market teams can better afford to hang on to their young stars.


"Having money to spend remains better than not having money to spend. That might not ever change. Unfortunately for the Red Sox and their brethren, however, it matters far less than it once did."


------

My thoughts:

1.  The observed 2014 correlation is artificially low, because it's taken after only about 130 games (late-August), instead of a full season. 

Between now and October, you'd expect the SD due to luck to drop by about 10 percent. So, instead of 2 parts salary to 8 parts luck (for the current correlation of .20), you'll have 2 parts salary to 7.2 parts luck. That will raise the correlation to about .22.

Well, maybe not quite. The non-salary part isn't all binomial luck; there are some other things in there too, like the distribution of over- and underpriced talent. But I think .22 is still a reasonable projection.
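
If you want the arithmetic spelled out, here's a quick sketch of that projection -- nothing more than the binomial luck formula plus the crude "parts" bookkeeping above:

import math

# Binomial "luck" SD of winning percentage for a true .500 team after n games
def luck_sd(n):
    return math.sqrt(0.5 * 0.5 / n)

shrink = luck_sd(162) / luck_sd(130)   # about 0.90: luck SD drops roughly 10 percent

# Treat the current r of .20 as "2 parts salary, 8 parts luck"
salary_part, luck_part = 2.0, 8.0
projected_r = salary_part / (salary_part + luck_part * shrink)

print(round(shrink, 2))       # ~0.90
print(round(projected_r, 2))  # ~0.22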

It's a small thing, but it does explain a tenth of the discrepancy.

------

2.  The lower correlation doesn't necessarily mean that it's harder to buy wins. As MacPherson notes, it could just mean that teams are choosing not to do so. More specifically, that teams are closer in spending than they used to be, so payroll doesn't explain wins as well as it used to.

Here's an analogy I used before: in Rotisserie League Baseball, there is a $260 salary cap. If everyone spends between $255 and $260, the correlation between salary and performance will be almost zero -- the $5 isn't enough of a signal amidst the noise. But: if you let half the teams spend $520 instead, you're going to get a much higher correlation, because the high-spending half will do much, much better than the lower-spending half.
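
Here's a small simulation of that analogy, if you'd rather see the effect in numbers than take it on faith. The team count, the "wins per dollar" factor, and the luck SD are all made up for illustration:

import numpy as np

rng = np.random.default_rng(0)

def avg_corr(spend, sims=2000, luck_sd=15.0):
    """Average correlation between spending and performance, where
    performance = a fixed spending effect plus normally distributed luck."""
    rs = []
    for _ in range(sims):
        perf = 0.5 * spend + rng.normal(0, luck_sd, len(spend))
        rs.append(np.corrcoef(spend, perf)[0, 1])
    return np.mean(rs)

tight = rng.uniform(255, 260, 12)              # everyone near the $260 cap
spread = np.array([257.0] * 6 + [520.0] * 6)   # half the teams spend $520 instead

print(round(avg_corr(tight), 2))    # close to zero
print(round(avg_corr(spread), 2))   # much higher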

That could explain what's happening here.

In 2006, the SD of payroll was around 42% of the mean ($32MM, $78MM). In 2014, it was only 38% ($43MM, $115MM). It doesn't look that much different, but ... teams this year are about 10 percent closer to each other, relatively speaking, than they were in 2006, and that has to be contributing to the difference.

(This is the first time the "coefficient of variation" (the SD divided by the mean) has actually come in handy for me, here as a way to correct the SDs for inflation.

Also, this is a rare (for me) case where the correlation (or r-squared) is actually more relevant than the coefficient of the regression equation. That's because we're debating how much salary explains what we've actually observed -- instead of the usual question of how much salary leads to how many more wins.)
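
For the record, here's the coefficient-of-variation arithmetic with the rounded figures quoted above (the outputs land within a point or so of the 42% and 38%, since the $MM inputs are rounded):

cv_2006 = 32 / 78     # SD / mean of payroll, 2006: about 0.41
cv_2014 = 43 / 115    # SD / mean of payroll, 2014: about 0.37
print(round(cv_2006, 2), round(cv_2014, 2))
print(round(1 - cv_2014 / cv_2006, 2))   # roughly a 10 percent relative narrowing (about 0.09 here)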


------

3.  While doing these calculations, I noticed something unusual. The 2014 standings are much tighter than normal. 

So far in 2014, the SD of team winning percentage is .058 (9.4 games per 162). In 2006, the SD was larger, at .075 (12.2 games per 162). That might be a bit high ... I think .068 (11 games per 162) is the recent historical average.

But even 9.4 compared to 11 is a big difference.  It's even more significant when you remember that the 2014 figure is based on only 130 games. (I'd bet the historical average for late-August would be between 12 and 13 games, not 11.)

What's going on? 

Well, it could be random luck. But, it could be real. It could be that team talent "inequality" has narrowed -- either because of the narrowing of team spending (which we noted), or because all the extra spending isn't buying much talent these days.

I think the surrounding evidence shows that it's more likely to be random luck. 

Last year, the SD of team winning percentage was at normal levels -- .074 (12.04 games per 162). It's virtually impossible for the true payroll/wins relationship to have changed so drastically in the off-season, considering the vast majority of payrolls and players stay the same from year to year.

Also, it turns out that even though the correlation between 2014 payroll and 2014 wins is low, the correlation between 2014 payroll and 2013 wins is higher. That is: this year's payroll predicts last year's wins (0.37) better than it predicts this year's wins (0.23)! 

Are there other explanations than 2014 being randomly weird? 

Maybe the low-payroll teams have young players who improved since last year, and the high-payroll teams have old players who declined. You could test that: you could check if payroll correlates better to last year's wins than this year's for all seasons, not just 2013-2014.
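
That check is easy to script if you have a table of team payrolls and wins by season. Here's a sketch; the dataframe and its column names (year, team, payroll, wins) are assumptions for illustration, not a real dataset:

import pandas as pd

def payroll_corr_by_year(df):
    """For each season, correlate payroll with same-year wins and with
    previous-year wins. Expects columns: year, team, payroll, wins."""
    prev = df[["year", "team", "wins"]].copy()
    prev["year"] += 1                         # so year t picks up year t-1's wins
    prev = prev.rename(columns={"wins": "wins_prev"})
    merged = df.merge(prev, on=["year", "team"])
    return merged.groupby("year").apply(
        lambda g: pd.Series({
            "r_same_year": g["payroll"].corr(g["wins"]),
            "r_prev_year": g["payroll"].corr(g["wins_prev"]),
        })
    )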

If that happened to be true, though, it would partially contradict MacPherson's hypothesis, wouldn't it? It would say that the money teams spend on contracts *does* buy wins as strongly as before, but that those wins are front-loaded relative to payroll.

We can see how weird 2014 really is if we back out the luck variance to get an estimate of the talent variance.

After the first 130 games of 2014, the observed SD of winning percentage is .058. After 130 games, the theoretical SD of winning percentage due to luck is .044.

Since luck is generally independent of talent, we know

SD(observed)^2 - SD(luck)^2 = SD(talent)^2 

Plugging in the numbers: .058 squared minus .044 squared equals .038 squared. That gives us an estimate of SD(talent) of .038, or 6.12 games per 162.

I did the same calculation for 2013, and got 10.2.

2013: Talent SD of 10.2 games
2014: Talent SD of  6.1 games
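
Here's the whole back-out in one place, so you can see where the 6.1 and 10.2 come from. The observed SDs are the ones quoted above; the luck SD is the binomial value for a .500 team:

import math

def talent_sd_in_games(observed_sd, games_played, season_games=162):
    luck_sd = math.sqrt(0.25 / games_played)            # binomial luck SD of winning percentage
    talent_sd = math.sqrt(observed_sd**2 - luck_sd**2)  # SD(talent)^2 = SD(observed)^2 - SD(luck)^2
    return talent_sd * season_games                     # express as games per 162

print(round(talent_sd_in_games(0.058, 130), 1))   # 2014, about 130 games in: ~6.1
print(round(talent_sd_in_games(0.074, 162), 1))   # 2013, full season: ~10.2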

That kind of drop in one off-season is pretty much impossible, isn't it? 

If a compression that huge were real, it would have to be due to huge changes in the off-season -- specifically, a lot of good players retiring, or moving from good teams to bad teams.

But, the team correlation between 2013 wins and 2014 wins is +0.37. That's a bit lower than average, but not out of line (again, especially taking the short season into account). 

It would be very, very coincidental if the good teams got that much worse while the bad teams got that much better, but the *order* of the standings didn't change any more than normal.

So, I think a reasonable conclusion is that it's just random noise that compressed the standings. This year, for no reason, the good teams have tended to be unlucky while the bad teams have tended to be lucky. And that narrowed the distance between the high-payroll teams and the low-payroll teams, which is part of the reason the payroll/wins correlation is so low. 

------

4. We can just look at the randomness directly, since the regression software gives us confidence intervals. 

Actually, it only gives an interval for the coefficient, but that's good enough. I added 2 SDs to the observed value, and then worked backwards to figure out what the correlation would be in that case. It came out to 0.60. 

That's huge!  The confidence interval actually encompasses every season on the graph, even though 2014 is the lowest of all of them.

To confirm the 0.60 number, I used this online calculator. If the true correlation for the 30 teams is 0.4, the 95% confidence interval goes up to 0.66, and down to 0.05. That's close to my calculation for the high end, and easily captures the observed value of 0.23 in its low end. 
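
For what it's worth, that online calculator is presumably doing the standard Fisher z-transformation. This little function reproduces the same interval (my reconstruction, not necessarily the calculator's exact method):

import math

def corr_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation coefficient,
    via the Fisher z-transformation."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = corr_ci(0.4, 30)
print(round(lo, 2), round(hi, 2))   # roughly 0.05 to 0.66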

That's not to say that I think they really ARE all the same, that the differences are just random -- I've never been a big fan of throwing away differences just because they don't meet significance thresholds. I'm just trying to show how easy it is that it *could be* random noise.

I can try to rephrase the confidence interval argument visually. Here's the actual plot for the 2014 teams:

[Scatter plot: 2014 team payroll vs. winning percentage, with the regression line in green]

The correlation coefficient is a rough visual measure of how closely the dots adhere to the green regression line. In this case, not that great; it's more a cloud than a line. That's why the correlation is only 0.23.

Now, take a look at the teams between $77 million and $113 million, the ones in the second rectangle from the left.

There are eighteen teams in that group bunched into that small horizontal space, a payroll range of only $36 million. Even at the historically high correlations we saw last decade, and even if the entire difference were due to discretionary free-agent spending, the true talent difference in that range would be only about 3 or 4 games in the standings. That would be much smaller than the effects of random chance, which would be around 12 games between luckiest and unluckiest. 

What that means is:  no matter what happens, that second vertical block is dominated by randomness, and so the dots in that rectangle are pretty much assured of looking like a random cloud, centered around .500. (In fact, for this particular case, the correlation for that second block is almost perfectly random, at -.002.)

So those 18 teams don't help much. How much the overall curve looks like a straight line is going to depend almost completely on the remaining 12 points, the high-spending and low-spending teams. In our case, the two low-spending teams are somewhat worse than the cloud, and the ten high-spending teams are somewhat better than the cloud, so we get our positive correlation of +0.23. 

But, you can see, those two bad teams aren't *that* bad. In fact, the Marlins, despite the second-lowest payroll in MLB, are playing .496 ball.

What if we move the Marlins down to .400? If you imagine taking that one dot, and moving it close to the bottom of the graph, you'll immediately see that the dots would get a bit more linear. (The line would get steeper, too, but steepness represents the regression coefficient, not the correlation, so never mind.)  I made that one change, and the correlation went all the way up to 0.3. 

Let's now take the second-highest-payroll Yankees, and move them from their disappointing .523 to match the highest-payroll Dodgers, at .564. Again, you can see the graph will get more linear. That brings the correlation up to 0.34 -- almost exactly the average season, after mentally adjusting it a bit higher for 162 games.

Of course, the Marlins *aren't* at .400, and the Yankees *aren't* at .564, so the lower correlation of 0.23 actually stands. But my point is not to argue that it should actually be higher -- my point is that it only takes a bit of randomness to do the trick. 

All I did was move the Marlins down by less than 2 SDs worth of luck, and the Yankees by less than 1 SD worth of luck. And that was enough to bump the correlation from historically low, to historically average.
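
If you want to play with that yourself, here's the same kind of experiment on made-up numbers -- these are not the real 2014 payrolls or records, just a fake league built so the middle is a cloud -- showing how much a nudge to one or two extreme points can move the correlation:

import numpy as np

rng = np.random.default_rng(1)

# 30 fake teams: payroll in $MM, winning percentage = weak payroll effect + lots of luck
payroll = np.sort(rng.uniform(45, 240, 30))
winpct = 0.500 + 0.0003 * (payroll - payroll.mean()) + rng.normal(0, 0.045, 30)

print(round(np.corrcoef(payroll, winpct)[0, 1], 2))   # the baseline correlation

# Nudge the second-cheapest team down by a bit under two SDs of luck, and the
# second-priciest team up by under one SD, mirroring the Marlins/Yankees moves above
nudged = winpct.copy()
nudged[1] -= 0.080
nudged[-2] += 0.040
print(round(np.corrcoef(payroll, nudged)[0, 1], 2))   # should come out noticeably higher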

------

5. Finally: suppose the change isn't just random luck, that there's actually something real going on. What could it be?

-- Maybe money doesn't matter as much any more because low-spending teams are getting more of their value from arbs and slaves (arbitration-eligible and pre-arbitration players). They could be doing that so well that the high-spending teams are forced to spend more on free agents just to catch up. It wouldn't be too hard to check that empirically, just by looking at rosters.

-- It could be that, as MacPherson believes, there are fewer productive free agents to be bought. You could check that easily, too: just count how many free agents there are on team rosters now, as compared to, say, 2005. If MacPherson is correct that careers are ending after fewer years of free agency, that should show up pretty easily.

-- Maybe teams just aren't as smart as they used to be about paying for free agents. Maybe their talent evaluation isn't that great, and they're getting less value for their money. Again, you could check that, by looking at free-agent WAR, or expected WAR, and comparing it to contract value.

-- Maybe teams don't vary as much as they used to, in terms of how many free-agent wins they buy. I shouldn't say "maybe" -- as we saw, the SD of payroll, adjusted for inflation, is indeed lower in 2014 than it was in 2006, by about 10 percent. So that would almost certainly be part of the answer. 

-- More specifically: maybe the (otherwise) bad teams are *more* likely to buy free agents than before, and the (otherwise) good teams are *less* likely to buy free agents than before. That actually should be expected, if teams are rational. With more teams qualifying for the post-season, there's less point in making yourself into a 98-win team when a 93-win team will probably be good enough. And, even an average team has a shot at a wild card, if it gets lucky, so why not spend a few bucks to raise your talent from 79 games to (say) 83 games, like maybe the Blue Jays did last year?

-----

I'll give you my gut feeling, but, first, a disclaimer: I haven't really thought a whole lot about this, and some of these arguments occurred to me as I wrote. So, keep in mind that I'm really just thinking out loud.

On that basis, my best guess is ... that most of the correlation drop is just random noise. 

I'd bet that money buys free agents just as reliably as always, and at the usual price. The correlation is down not because spending buys fewer wins, but because more equal spending makes it harder for the regression to notice the differences.

But I'm thinking that part of the drop might really be the changing patterns of team spending, as MacPherson described. I wonder if that knot of 18 mid-range teams, clustered in such a small payroll range, might be a permanent phenomenon, resulting from more small-market teams moving up the payroll chart after deciding their sweet spot should be a little more extravagant than in the past. 

Because, these days, it doesn't take much to almost guarantee a team a reasonable shot at a wildcard spot -- which means meaningful games later in the season than before, which means more revenue. 

In fact, that's one area where it's not zero-sum among teams. If most of the fan fulfillment comes from being in the race and having hope, any team can enter the fray without detracting much from the others. What's more exciting for fans -- being four games out of a wildcard spot alone, or being four games out of a wildcard spot along with three other teams? It's probably about the same, right? 

Which makes me now think, the price of a free agent win could indeed change. By how much? It depends on how increased demand from the small market teams compares to decreased demand from the bigger-spending teams.

------

Anyway, bottom line: if I had to guess the reasons for the lower correlation:

-- 80% randomness
-- 20% spending patterns

But you can get better estimates with some research, by checking all those things I mentioned, and any others you might think of.





Hat Tip: Craig Calcaterra



9 Comments:

At Sunday, August 31, 2014 11:02:00 AM, Anonymous Alex said...

Do you know if that 2013 wins-2014 payroll correlation holds for other years as well? It just strikes me as a plausible relationship: teams who did well one year will pay more the next in an effort to get over the hump or get that 'final piece'.

 
At Sunday, August 31, 2014 4:30:00 PM, Blogger Don Coffin said...

Two additional possibilities:

1) This year's correlation will rise as we get to the end of the season. "Good teams" (which do have, now, a slightly higher tendency to have larger payrolls) will do better--raising their winning percentages (wins) and "bad teams" will do worse (either tanking to get a better draft position/higher signing budget, or just because who cares?). So the correlation will rise.

2) All teams might have begun doing a better job of "locking in" their good young talent, leaving less talent on the table in free agency. (I think Joe Posnanski has suggested this, and it has some prima facie probability).

But, in general, it's probably random variation.

 
At Monday, September 01, 2014 11:49:00 PM, Anonymous Jeremy B. Williams said...

Re: This year's wins or last?

I ran the payroll correlation in the 2009-10 offseason, and have not updated my analysis. I used payrolls from 2000-2009 (using Cot's) and win records from 2001-2009. This year's wins correlate better to next year's payroll (r=0.53) than to this year's (r=0.46). For each of the eight years where I had all the numbers, next year's salary was at least marginally better.

For each season, there was an average standard deviation in wins of 11.93 wins. The following calculation works better with variance (sigma^2), which averaged 142.42 win^2. Of that, 40.50 win^2 comes from the random Gaussian variation, 40.37 win^2 from the fit to payroll, leaving 61.55 win^2 (i.e., 7.85 wins per team per season) unexplained.

So, according to this fit, variation in team records is explained 28% by luck, 28% by next year's payroll, and 43% by "other" over the years I looked at.

 
At Tuesday, September 02, 2014 3:57:00 PM, Blogger Don Coffin said...

Jeremy--I'd suggest that a significant determinant of next year's payroll is *this year's wins*, and I'd bet you'd agree with that. (A comment from the "correlation does not necessarily mean causation" section of the bleachers.)

 
At Tuesday, September 02, 2014 4:14:00 PM, Blogger Phil Birnbaum said...

Jeremy:

Very nice, thanks for that! I wonder what causes that? I can think of a couple of things:

1. Salary is as of opening day. Teams that trade during the season will see their wins change, but the salary change won't be logged until the following opening day.

2. Actually, (1) sounds so plausible maybe I don't have to go any further! But ... as Alex suggests, maybe teams doing well this year upgrade next year, and vice versa.

3. Over the off-season, some slaves become arbs and arbs become free agents. So, next year's salary better reflects this year's talent, because the salaries become more realistic.

I bet it's (1).

 
At Wednesday, September 03, 2014 11:02:00 AM, Anonymous Alex said...

Phil's 1 and 2 go together in my mind; teams that are doing well in year X are going to spend more in year X+1 as they try to get better, both in year X and the offseason in between.

Although #1 might have something extra going for it, in that teams with 'artificially' low wins due to injuries or bad luck may become sellers during the season and then dump salary, which would also increase the correlation. In the offseason, teams might be a little more bullish on injured players coming back healthy and winning again.

 
At Wednesday, September 03, 2014 11:29:00 AM, Blogger Phil Birnbaum said...

Alex,

Fair enough. To my mind, the difference between (1) and (2) is that (1) reflects uncounted salary in the base year, whereas (2) reflects later actions by the team that apply only to the subsequent year.

 
At Thursday, September 04, 2014 12:37:00 PM, Anonymous Alex said...

Yeah, they're definitely different, but the team reasoning is about the same. Although I guess you don't (or I don't) hear of too many teams dumping salary in the offseason. It's almost always at the trade deadline. So that's another push for #1 (and for 1 being distinct from 2).

 
At Friday, September 05, 2014 12:21:00 AM, Anonymous Jeremy B. Williams said...

I agree with pretty much everything I see. Players who do well (or are lucky) get raises. Teams that overperform are more likely to upgrade for immediate results rather than waiting on long-term strategies.

My big take-away from this analysis is that only a little more than half of the variation in team performance is correlated with the two big factors of salary and the Gaussian distribution. That's a standard deviation of almost eight wins still on the table. Where is it coming from?

My only working hypothesis is the counter-cyclical subsidy of the draft to losing (usually low-payroll) teams, which pulls the importance of salary down. But I would need to use a much longer baseline to study that effect.

Another interesting possibility is to use only data from the first half of the season. This would wash out a lot of the midseason trade data. I speculate that it would substantially shrink the overall variation, leaving a greater proportion to be explained by randomness and salary.

 
