Wednesday, October 29, 2014

Do baseball salaries have "precious little" to do with ability?

Could MLB player salaries be almost completely unrelated to performance? 

That's the claim of social-science researcher Mike Cassidy, in a recent post at the online magazine "US News and World Report."

It offers an "economics lesson from America's favorite pastime." Specifically: how can it be true that high salaries are earned by merit in America, when that's not even the case in baseball -- one of the few fields in which we have an objective record of employee performance?

The problem is, though, that baseball players *are* paid according to ability. The author's own data shows as much, despite his claims to the contrary. 


Cassidy starts by charting the 20 highest-paid players in baseball last year, from Alex Rodriguez ($29 million) to Ryan Howard ($20 million). He notes that only two of the twenty ranked in the top 35 in Wins Above Replacement (WAR). The players in his list average only 2.2 WAR. That's not exceptional: it's "about what you need to be an everyday starter."  

It does sound like those players were overpaid. But it's not quite as conclusive as it seems.

WAR is a measure of bulk contribution, not a rate stat. So it depends heavily on playing time. A player who misses most of the season will have a WAR near zero. 

In 2013, Mark Teixeira played only 15 games with a wrist injury before undergoing surgery and losing the rest of the season. He hit only .151 in those games, which would explain his negative (-0.2) WAR. However, it's only 53 AB -- even if Teixeira had hit .351, his WAR would still have been close to zero. 

A-Rod missed most of the year with hip problems. Roy Halladay pitched only 62 innings as he struggled with shoulder and back problems, and retired at the end of the season.

If we take out those three, the remaining 17 players average out to around 2.6 WAR, at an average salary of $22 million. It works out to about $8.4 million per win. That's still expensive -- well above the presumed willingness-to-pay of $5 to $6 million per expected win.

If we *don't* take out those three, it's about $10 million per win. Even more expensive, but hardly suggestive of a wide disconnect between pay and performance. At best, it suggests that the one year's group of highest-paid players performed worse than anticipated, but still better than their lower-paid peers.

Furthermore: as the author acknowledges, many of these players have back-loaded contracts, where they are "underpaid" relative to their expected talent in the early years of the contract, and "overpaid" relative to it in the later years. 

Even a contract at a constant salary is back-loaded in terms of talent, since older players tend to decline in value as they age. I'm sure the Yankees didn't expect Alex Rodriguez to perform at age 37 nearly as well as he did at 33, even though his salary was comparable ($28MM to $33MM).

All things considered, the top-20 data is very good evidence of a strong link between pay and performance in baseball. Not as strong as I would have expected, but still pretty strong.


As further evidence that pay is divorced from performance, the author notes that, even limiting the analysis to players who have free-agent status, "performance explains just 13 percent of salary."  It's not just a one-year fluke. For each of the past 30 years, the r-squared has consistently hovered in a narrow band between 10 and 20 percent.

That sounds damning, but, as is often the case, it's based on a misinterpretation of what the r-squared means. 

Taking the square root of .13 gives a correlation of .36. That's not too bad: it means that 36 percent of a player's salary (above or below average) is reflected in (above- or below-average) performance.
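The arithmetic is easy to check. A minimal sketch, using only the .13 figure from the article:

```python
import math

r_squared = 0.13           # figure reported in the article
r = math.sqrt(r_squared)   # correlation between pay and performance
print(round(r, 2))         # 0.36
print(round(1 - r, 2))     # fraction regressed to the mean: 0.64
```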

Still, you do have to regress salary almost 64 percent to the mean to get performance. Doesn't that show that almost two-thirds of a player's salary is unrelated to merit?

No. It shows most of a player's salary is unrelated to *performance,* not that it's unrelated to *merit*. Performance is based on merit, but with lots of randomness piled on top that tends to dilute the relationship.

You might underestimate the amount of randomness relative to talent, especially if you're still thinking of those top-20 players. But most players in MLB are not far from the league minimum, both in salary and talent.

According to the article, the 358 lowest-paid players in baseball in 2013 made an average $534,000 each. 

With a league minimum of $500,000, those 358 players must be clustered very tightly together in pay. And the range of their talent is probably also fairly narrow. But the range of their performance will be wide, since they'll vary in how much playing time they get, and whether they have a lucky or unlucky year. 

For those 358 players alone, the correlation between pay and performance is going to be very close to zero, even if pay and talent correlate perfectly. (Actually, the author's numbers are based on only players with 6+ seasons in MLB, so it's a smaller sample size than 358 -- but the logic is the same.)
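Here's a toy simulation of that range-restriction effect. All the salary and talent numbers below are invented for illustration: pay is made to track talent *perfectly* ($500K minimum plus $6MM per WAR of talent), with a 1-WAR luck term on performance. The near-minimum group still shows a correlation of roughly zero, while the whole league shows a strong one.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

random.seed(358)

def pay_for(talent):
    return 0.5 + 6.0 * talent            # $MM: pay tracks talent perfectly

def perform(talent):
    return talent + random.gauss(0.0, 1.0)   # performance = talent + 1-WAR luck

# 358 near-minimum players: talent crammed into a tiny range, so pay is
# crammed too (about $500-570K), and luck dominates their results
low_talent = [random.uniform(0.0, 0.012) for _ in range(358)]
# 150 established players with a wide talent range (invented spread)
high_talent = [random.uniform(1.0, 7.0) for _ in range(150)]

low_pay = [pay_for(t) for t in low_talent]
low_perf = [perform(t) for t in low_talent]
all_pay = low_pay + [pay_for(t) for t in high_talent]
all_perf = low_perf + [perform(t) for t in high_talent]

r_low = corr(low_pay, low_perf)     # near zero, despite perfect pay-for-talent
r_all = corr(all_pay, all_perf)     # strong
print(round(r_low, 2), round(r_all, 2))
```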

When you add in the rest of the players, and the correlation rises to 0.36 ... that's pretty good evidence that there's a strong link between pay and performance overall. And when you take into account that there's also significant randomness in the performances of the highly-paid players, the link between pay and *merit* must be even stronger.


The author has demonstrated the "low r-squared" fallacy -- the idea that if the number looks low enough, the relationship must be weak enough to dismiss. As I have argued many times, that's not necessarily the case. Without context or argument, the "13 percent" figure could mean anything at all.

In fact, here's a situation where you have an r-squared much lower than .13, but a strong relationship between pay and performance.

Suppose that player salary were somehow exactly proportional to performance. That is, at the end of the season, the r-squared turned out to be 100 percent, instead of 13 percent. (Or some number close enough to 100 percent to satisfy the author.)

In baseball, as in life, people don't perform exactly the same every day. Mike Trout may be the highest-paid player in baseball, but some days he'll still wind up going 0-for-4 with three strikeouts.

So even if the correlation between season pay and season performance is 100% perfect, the correlation between *single game* pay and *single game* performance will be lower.

How much lower?  I ran a test with real data. I compiled batter stats for every game of the 2008 season, and ran a regression of each player's single-game on-base percentage (OBP) against his OBP for the season. 

The correlation was .016. That's an r-squared of .000256.
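My number came from real 2008 data, but the effect is easy to reproduce with simulated batters. Everything here is an assumption for illustration -- 300 hypothetical hitters, a made-up talent spread, 150 games of 4 PA each -- and the exact correlation you get depends heavily on the talent spread assumed; the point is only that the game-level correlation is tiny even though each player's season is built entirely from his own games.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

random.seed(2008)

game_obp, season_obp = [], []
for _ in range(300):                           # 300 hypothetical batters
    talent = random.gauss(0.330, 0.025)        # true OBP talent (assumed spread)
    games = [sum(random.random() < talent for _ in range(4))
             for _ in range(150)]              # times on base: 4 PA x 150 games
    season = sum(games) / 600.0                # season OBP
    for g in games:                            # one row per player-game
        game_obp.append(g / 4.0)
        season_obp.append(season)

r = corr(game_obp, season_obp)
print(round(r, 2), round(r * r, 3))            # small r, tiny r-squared
```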

The r-squared of .13 the article found between pay and performance is about *five hundred times* as large as the one I found between single-game and season performance. 

Even though my r-squared is tiny, we can agree that Mike Trout is still paid on merit, right? It would be hard to argue that there was a fundamental inequity in MLB pay practices for April 11, just because Mike Trout didn't produce that day.

Well, I suppose, on a technicality, you could argue that pay isn't based on merit for a game, but *is* based on merit for a season. But if you make that argument for game vs. season, you can make the same argument for season vs. expectation, or season vs. career. 

The r-squared might be only 13 percent for a single season, but higher for groups of seasons. Furthermore, if you could play the same season a million times over, luck would even out, performance would converge on merit, and the r-squared would move much closer to 100%.

And the article provides evidence of that! When the author repeated his regression by using the average of three seasons instead of one, the r-squared doubled -- now explaining "just a quarter of pay." An r-squared of 0.25 is a correlation of 0.5 -- half of performance now reflected in salary.
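That mechanical effect -- averaging more seasons raises the r-squared -- also shows up in a simulation. Again, all numbers here are invented: pay tracks talent perfectly, and the noise SD of 2.6 WAR is deliberately oversized just to land the single-season r-squared near the article's .13.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

random.seed(30)

talent = [random.gauss(2.0, 1.0) for _ in range(1000)]
pay = [0.5 + 6.0 * t for t in talent]      # pay tracks talent *perfectly*

def season(t):
    # one season's performance: talent plus oversized luck
    return t + random.gauss(0.0, 2.6)

one_season = [season(t) for t in talent]
three_season = [(season(t) + season(t) + season(t)) / 3 for t in talent]

r2_one = corr(pay, one_season) ** 2        # roughly the article's .13
r2_three = corr(pay, three_season) ** 2    # noticeably higher
print(round(r2_one, 2), round(r2_three, 2))
```

Even with pay a perfect function of talent, the single-season r-squared comes out low, and averaging three seasons pushes it up -- exactly the pattern the article reports.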

Half is a lot, considering the amount of luck in batting records, and taking into account that luck is much more important than talent for the bunch of players clustered at the bottom of the salary scale. 

Again, the article's own evidence is enough to refute its argument.


I think we can quantify the amount of luck in a batter's season WAR. 

A couple of years ago, I calculated the theoretical SD of a team's season Linear Weights batting line that's due to luck. It came out to 31.9 runs. 

Assuming a regular player gets one-ninth of a team's plate appearances, his own SD would be 1/3 the team's (the square root of 1/9). So, that's about 10.6 runs. Let's call it 10 runs, or 1.0 WAR. 
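The square-root scaling, in a few lines:

```python
team_sd = 31.9                       # luck SD of a team's season batting, in runs
player_share = 1.0 / 9.0             # a regular gets about 1/9 of team PA
player_sd = team_sd * player_share ** 0.5   # SDs scale with the square root
print(round(player_sd, 1))           # 10.6 runs -- roughly 1.0 WAR
```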

That one-win figure, though, counts only the kind of luck that results from over- or undershooting talent. It doesn't consider injuries, suspensions, or sudden unexpected changes in talent itself. 

Going back to the top 20 players in the chart ... we saw that three of those had injuries. Another three, it appears, had sudden drops in ability after they were signed (Vernon Wells, Tim Lincecum, and Barry Zito). 

Removing those six players from the list (which might be unfair selective sampling, but never mind for now), the remainder averaged 3.4 WAR. That's about $6.4 million per win -- very close to the consensus number. It would be even lower if we adjusted for back-loaded contracts.

At an SD of 1 WAR per player, the SD of the average of 14 players is 0.27 WAR. Actually, that's the minimum; it would be higher if any of the 14 were less than full-time. Also, the list includes starting pitchers -- I don't know if the luck SD of 1 win is reasonable for starters as well as batters, but I suspect it's close enough.

So, let's go with 0.27. We'll add and subtract 2 SD -- 0.54 -- from the observed average of 3.4. That gives us a confidence interval of 2.9 to 3.9 WAR.

At 3.9 WAR, we get $5.6 million per win: almost exactly the amount sabermetricians (and probably front offices) have calculated based on the assumption that teams want to pay exactly what the talent is worth.
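The interval arithmetic, spelled out (the inputs are the rough figures from the text -- the $22 million average salary is approximate):

```python
import math

n = 14            # top-20 list, minus the six injury/decline cases
luck_sd = 1.0     # luck SD per player-season, in WAR
avg_war = 3.4     # observed average WAR of the remaining 14
avg_salary = 22.0 # their approximate average salary, $MM

se = luck_sd / math.sqrt(n)                    # SD of the group's average WAR
lo, hi = avg_war - 2 * se, avg_war + 2 * se    # ~95% confidence interval
print(round(se, 2), round(lo, 1), round(hi, 1))   # 0.27 2.9 3.9
print(round(avg_salary / hi, 1))                  # 5.6 ($MM per win, high end)
```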

That is: it appears the results are not statistically significantly different from a pure "pay for performance" situation.


When the US News article talks about luck, it's different from the kind of luck I'm calculating here. The author isn't actually complaining that the overpaid players got unlucky and underperformed their pay. Instead, he believes that the highly-paid players were overpaid for their true ability, because they were "lucky" enough to fool everyone by having a career year at exactly the right time:

"In America, we tend to think of income as a reward for skill and hard work. ...

"But baseball shows us this view of the world is demonstrably flawed. 
Pay has preciously little to do with performance. Instead, being a top earner means having a good season immediately preceding free agency in a year where desperate, rich teams are willing to award outsized long-term contracts. ... 

"In other words, while ability and effort matter, it’s also about good luck."

Paraphrased, I think he's saying something like: "I've shown that pay is barely related to performance. Why, then, are some players paid huge sums of money, while others make the minimum?  It can't be merit. It must be that some players have a lucky year at a lucky time, and GMs don't realize the player doesn't deserve the money."

In other words: baseball executives are not capable of evaluating players well enough to realize that they're throwing away millions of dollars unnecessarily.   

The article gives no evidence to support that; furthermore, it appears the author doesn't himself try to evaluate players and factor out luck. Otherwise, he wouldn't say this:

"But among average players, salaries vary enormously. For every Francisco Cervelli (Yankees catcher, $523,000 salary, 0.8 WAR), there is a CC Sabathia (Yankees pitcher, $24.7 million salary, 0.3 WAR). Both contribute about the same to the Yankees’ success (or lack thereof), but Sabathia earns roughly 50 times more."

Does he really believe that Sabathia and Cervelli should have been paid as equal talents?  Isn't it obvious that their 2013 records are similar only because of luck and circumstance?

Francisco Cervelli earned his +0.8 WAR in 61 plate appearances. That's about one-and-a-half SDs above +0.3, his then-career average per 61 PA.

Sabathia's salary took a jump after the 2009 season, at a time when he was averaging around 4 WAR per season. From 2010 to 2012, he actually improved on that trend, creating +15.6 WAR total in those three years. It wasn't until 2013 that he suddenly lost effectiveness, dropping to 0.3 as reported. 

So it's not that Sabathia was just lucky to be in the right place at the right time. It's that he was an excellent player before and after signing his contract, but he suffered some kind of unexpected setback as he aged. (Too, his contract was structured to defer some of his peak years' value to his declining years.)

And it's not that Cervelli was unlucky to be in the wrong place at the wrong time, unable to find a "desperate" team otherwise willing to pay him $20 million. He's just a player recognized as not that much better than replacement, who had a good season in 2013 -- a "season" of 61 plate appearances where he was somewhat lucky.


In his bio, the author is described as "a policy associate at The Century Foundation working on issues of income inequality." That's really what the article is getting at: income inequality. The argument that MLB pay is divorced from performance is there to support the broader argument that inequality of income is caused by highly-paid employees who don't deserve it.

Here's his argument summarized in his own words:

"The first thing to appreciate is just how unequal baseball is. During the 2013 season, the eight players in baseball's 'top 1 percent' took home $197 million, or $6 million more than the 358 lowest-paid players combined. The typical, or 'median,' major league player would need to play 20 seasons to earn as much as a top player makes in one. ...

"But ... pay has preciously little to do with performance. ...

"In other words, while ability and effort matter, it’s also about good luck. And if that’s true of a domain where every aspect of performance is meticulously measured, scrutinized and endlessly debated, how much more true is it of our society in general?

"We end up with CEOs that make 300 times the average worker and 45 million poor people in a country with $17 trillion in GDP. And we accept it as fair."

Paraphrasing again, the argument seems to be: "Salary inequality in baseball is bad because it's caused by teams rewarding ability that isn't really there. If baseball players were paid according to performance instead of circumstance, those disturbing levels of inequality would drop substantially, and the top 1% would no longer dominate."

It sounds reasonable, but it's totally backwards. If the correlation between pay and performance were higher, players' pay would become MORE unequal.

Suppose salaries were based directly on WAR. At the end of the season, the teams pay every free agent $6 million for every win above zero, plus the $500,000 minimum. (That's roughly what they're paying now, on expectation. Since expected wins equal actual wins, that would keep the overall MLB free-agent payroll roughly the same.)

Well, if they did that, the top salary would take a huge, huge jump.

Among the top 20 in the chart, the top two WAR figures are 7.5 (Miguel Cabrera) and 7.3 (Cliff Lee).

Under the new salary scale, both players would get sharp increases. Cabrera would jump to $45 million, and Lee to $44 million. The highest salary in MLB would go to Carlos Gomez, whose 2013 season was worth 8.9 WAR (4.6 of that from defense). Under the new system, Gomez would earn some $53 million. 

Under pay-for-performance, it would take only around 4.8 WAR to earn more than the current real-life highest salary, A-Rod's $29.4 million. In 2013, that would have been accomplished by 32 players.

Carlos Gomez's salary would exceed the real-life A-Rod by 82 percent. Meanwhile, replacement players would still be making the minimum $500K. And Barry Zito, with his negative 2.6 wins, would *owe* the Giants $15 million. 
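The hypothetical salary scale is simple enough to write down (this is my sketch of the scheme described above, not anything a team actually uses):

```python
def pfp_salary(war, per_win=6.0, minimum=0.5):
    # hypothetical pure pay-for-performance scale, in $MM:
    # the $500K minimum plus $6MM for every win above zero --
    # and negative WAR literally costs the player money
    return minimum + per_win * war

for name, war in [("Cabrera", 7.5), ("Lee", 7.3), ("Gomez", 8.9),
                  ("replacement level", 0.0), ("Zito", -2.6)]:
    print(name, round(pfp_salary(war), 1))
```

With these WAR figures it prints 45.5, 44.3, 53.9, 0.5 and -15.1 ($MM), matching the rough numbers in the text.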

Clearly, inequality would increase, not decrease, if the connection between pay and performance became stronger. 

Mathematically, that *has* to happen. When luck is involved, and applies equally to everyone, the set of outcomes always has a wider range than the set of talents. As usual,

var(outcomes) = var(talent) + var(luck)

Since var(luck) is always positive, outcomes always have a wider range than expectations based on talent. 
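A quick simulation confirms the identity (the talent and luck SDs here are arbitrary, chosen only for illustration):

```python
import random
import statistics

random.seed(1)
n = 20000
talent = [random.gauss(2.0, 1.5) for _ in range(n)]    # var(talent) ~ 2.25
luck = [random.gauss(0.0, 1.0) for _ in range(n)]      # var(luck)   ~ 1.00
outcome = [t + l for t, l in zip(talent, luck)]

v_talent = statistics.pvariance(talent)
v_outcome = statistics.pvariance(outcome)
print(round(v_talent, 2), round(v_outcome, 2))   # outcomes spread wider
```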

In fairness to the author, he doesn't think players are paid by talent. As we saw, he believes teams pay by misinterpreting random circumstances, a "right place right time" or "team likes me" kind of good luck. 

If that's really happening, and you eliminate it by basing pay directly on measurable performance, then, yes, it's indeed possible for inequality to go down. If Francisco Cervelli were being paid $100 million per season, because he was Brian Cashman's favorite, then instituting straight pay-by-performance would lower the top salary from $100 million to $53 million, and inequality would decrease.

But, as we saw, that's not the case: the real-life top salaries are much lower than the "pay-by-performance" top salaries. That means that teams aren't systematically overpaying. Or, at least, that they're not overpaying by anything near as much as 82 percent.


Imagine an alternate universe in which players have always been paid under the "new" system, $6 million per WAR. In that universe, as we have seen, the ratio between the top and median salaries is much higher than it is now, maybe 50 times instead of 20.

Then, someone comes along and presents a case for more equality:

"MLB salaries aren't as fair as they could be. They're based on outcomes, where they should be based on talent. Francisco Cervelli gets credit for 0.8 wins in 61 PA, even though we know he's not that good, and he just happened to guess right on a couple of pitches. 

"Players should be paid based on their established and expected performance, by selling their services to the highest bidder, before the season starts. That eliminates luck from the picture, and salaries will be based more on merit. The salary ratio will drop from 50 to 20, the range will compress, and the top players will earn only what they merit, not what they produce by luck."

Isn't THAT the situation that you'd expect someone to advocate if they were concerned about (a) rewarding on merit, (b) not rewarding on luck, and (c) reducing inequality of salaries?

Why, then, is this author advocating a move in the exact opposite direction?

(Hat tip: T.M.)



At Wednesday, October 29, 2014 2:53:00 PM, Anonymous Anonymous said...

It seems the author is quite lucky we don't pay our social-science researcher based on merit.

In all seriousness though, as the vogue status of data increases, we see more and more of these at best lazy, at worst deliberately misleading analyses. It makes it really hard to trust anything without digging into it completely, which is unfortunate.

At Wednesday, October 29, 2014 7:55:00 PM, Anonymous Alex said...

Early on you refer to correlation as a percentage. It is not.

At Wednesday, October 29, 2014 8:08:00 PM, Blogger Phil Birnbaum said...

Changed that, thanks.

At Thursday, October 30, 2014 1:13:00 AM, Blogger Trent McBride said...

Two additional notes:

1. We can't forget that many of the higher-paid players, even if they are on the decline from an on-the-field standpoint, could be paid for non-WAR reasons, such as marketing and merchandise.

2. Let us not forget that baseball players are unionized! Forget about the politics of unions for a moment - it is inarguable that unions often favor non-merit-based pay, such as longevity, etc., and this could contribute to any dis-correlation of pay and merit in MLB. Of course, making such a point would be antithetical to the author's worldview, so he'd best ignore it.

At Thursday, October 30, 2014 2:41:00 AM, Anonymous Dave said...

It would appear that Cassidy has included players under their first 3 years of team control among the 358 lowest-paid players. To Trent's point, the system providing for 6 years of team control (first 3 years with salary set by team, then 3 arbitration years) is supported by the players union membership for obvious reasons. It focuses bidding (and therefore dollars) on the veteran free agents who, at any given point in time, dominate the players' association and therefore determine its stance in collective bargaining.

At Thursday, October 30, 2014 2:57:00 AM, Blogger Phil Birnbaum said...


1. Agreed, but I doubt that's a big factor in setting pay. But you never know.

2. Good point. Doesn't the NFL allow termination of any contract at any time? MLB doesn't; you have to pay the guy even if his performance drops. So the MLBPA is contributing to LOWER inequality (and lower correlation with merit) by insisting on that clause. Without contracts being guaranteed, teams would probably pay more, knowing that they don't have to factor in the risk of winding up paying for nothing.

At Thursday, October 30, 2014 3:05:00 AM, Blogger Phil Birnbaum said...


Right. Most of the article restricts the discussion to free-agent-status players only, but the "20:1 ratio" part does not.

And, as you say, the MLBPA has a part in creating the higher inequality by agreeing to keep younger players' salaries suppressed.

That's a very good point: if 20:1 salary inequality is so unjust, why doesn't the union fix it? As you say, it might be because the veteran free agents are pushing for it, since it's to their benefit.

In fairness, it's possible even the younger players support the current system because they believe the tradeoff is worth it. Doesn't seem likely to me, but it's not impossible.

It's also possible that the author knows this, and is willing to "blame" the union as much as management for the state of inequality in baseball. Trent doesn't think so (and I wouldn't bet on it either), but just to give the author the benefit of the doubt ...

At Thursday, October 30, 2014 8:10:00 AM, Anonymous Eric R said...

Since contracts are awarded based on expectations from some previous level of performance -- if you want to see how much performance and pay line up, shouldn't you look at FA contract value [some weighting of AAV and years] vs. the last 'n' seasons before the contract was signed?

At Thursday, October 30, 2014 9:25:00 AM, Blogger Brian Burke said...

I believe there is an ideological component to this, particularly in the social sciences. If professors can convince us that outcomes in life are nearly all luck, then all that envy and 99% vs 1% stuff is justified. I bet researchers like this guy understand r^2 well enough to frame things the way he wants. If he were advocating something that fit his ideological assumptions, he would be citing how high the r is instead how low the r^2 is.

At Thursday, October 30, 2014 2:49:00 PM, Blogger Phil Birnbaum said...

Well, in fairness, give the author credit: in another article, he DOES blame the union:

"The system is like this by design, justified as a way to keep small-market teams competitive (and influential veterans in the player’s union happy). But there are good reasons to believe age-based inequality imposes considerable costs in terms of fairness and efficiency."

Here's the link.

At Saturday, November 01, 2014 1:41:00 PM, Anonymous Josh H said...

The USN article is fundamentally flawed in more than one way. First, for pay to be merit-based does not mean that you have to get rate-based ($/WAR, $/HR, etc.) pay. Salaries are a function of past performance because that is what teams use to predict future performance. If you want to run a simple, bivariate regression of WAR on salary, use WAR of last six years (and don't use seasons where GP<80) for the regressor and use 2014 salary as the regressand.

