Friday, May 30, 2014

May, 2014 issue of "By the Numbers"

"By the Numbers" is the statistical analysis publication of SABR (The Society for American Baseball Research).  

A new issue is now available. Here's the SABR link (.pdf). If that doesn't work, you can always find it at my own website.  If you like it and want back issues, the SABR link is here.

There are three articles in the issue.

-- First, Charlie Pavitt reviews "The Sabermetric Revolution," the recent book by Ben Baumer and Andrew Zimbalist.

-- Next, Don Coffin argues that the biggest statistical change in baseball, over the decades, is something other than home runs.

-- Finally, John F. McDonald tries some variations on the Pythagorean and "10 runs equals one win" estimators, to see if accuracy can be improved.


Thursday, May 22, 2014

Are black NBA fans less loyal to their home teams?

Black NBA fans seem to be less loyal to their hometown teams than non-black NBA fans, the New York Times has found.

Here's the article, from Nate Cohn of The Upshot. It found data showing that, in ZIP codes where at least 40 percent of residents are black, the home team got a significantly smaller proportion of Facebook "likes" than in other ZIP codes. In Milwaukee, for instance, the map highlighting black areas is almost identical to the map highlighting areas where more fans prefer teams other than the Bucks. Here are those stolen maps:

It's a very interesting finding. But, I'm not convinced it's a race thing.

In general, what can you say about sports fans whose favorite team isn't their own city's?  It seems like they're more serious fans. Here in Ottawa, we have a lot of fans who are ... not bandwagon jumpers, but just people who support the team, by default, because it's the Ottawa team. A lot of them don't know that much about the players, or hockey in general.

But fans who support unlikely teams, like the Sharks or the Predators, probably have more than a passing interest in the NHL. Maybe they like the style of play, or one of their favorite players is there, or even, they want to root for a more successful team.  

Might that explain what's happening in Milwaukee? As it turns out, blacks are indeed more likely to be serious NBA fans. The second sentence of the article says,
"About 45 percent of people who watched N.B.A. games during the 2012-2013 regular season were black, even though African-Americans make up 13 percent of the country's population."

So, what I suspect is that at least part of the explanation is that black areas are being confounded with "high fan interest" areas.  I have no evidence of that, and I might be wrong.  (But, the article has no evidence against it, either.)


It's not just Milwaukee: Cohn discovered the same effect in Cleveland, Memphis, Atlanta, Detroit and Chicago. But, interestingly, there was no effect in Houston, Philadelphia, Dallas, and Washington.  

What would the difference be? Maybe the success of the teams?  The "disloyal" cities' teams averaged 35.5 wins this past season, and four of the six had losing records. The "loyal" fans' teams averaged 41.5, and only one of the four was below .500 (Philadelphia, at 19-63).

That's something, but it doesn't seem that strong.  

Are there some cities where it's culturally acceptable to root for a different team, and other cities where it's not? Is this one of those random "tipping point" things?

Are basketball fans -- or even black basketball fans -- more fervent in Cleveland than they are in Houston? Probably not ... the Nielsen TV demographics report (.pdf), from which the "45 percent of viewers were black" statistic was taken, shows that Dallas and Chicago were almost identical in the percentage of the population that watches or listens to games.

Is the distinction just one of statistical significance, where Houston *does* have an effect, just not strong enough to be 2 SD from zero? Maybe it's that some cities are less segregated by ZIP code than others, so the effect is still there but doesn't show up in maps?

Could it be that there are more natural, opposing loyalties some places than others? Here in Ottawa, we have a ton of Leafs fans and Canadiens fans, because the Ottawa team is relatively new, and people hang on to the teams they loved in their childhoods. Did Milwaukee fans grow up rooting for Michael Jordan and the Bulls, which is why they formed weaker attachments to their Bucks? That sounds plausible, but then, it's hard to explain why the same thing holds for the Chicago area.

Any other ideas? Anybody see anything else that might be a relevant distinction between the two groups of cities?


Sunday, May 18, 2014

Another "hot hand" false alarm

Here's a Deadspin article called "Gambling Hot Streaks are Actually Real." It's about a study by academic researchers in London who examined the win/loss patterns of online sports bettors. The more wagers in a row a client won, the more likely he was to also win his next bet. That is: gamblers appear to exhibit the proverbial "hot hand."  

It was a huge effect: bettors won only 48% of the time overall, but over 75% of the time after winning five in a row. Here's Deadspin's adaptation of the chart:

Keeping in mind the principle of "if it seems too good to be true, it probably is," you can probably think for a minute and come up with an idea of what might really be going on.


The most important thing: the bets that won 75% didn't actually win more money than expected -- they were just at proportionately low odds. That is: the "streaking" bettors were more likely to back the favorites on their next bet. (The reverse was also true: bettors on a losing streak were more likely to subsequently bet on a longshot.)

As the authors note, bettors are not actually beating the bookies in their subsequent wagers -- it's just that they're choosing bets that are easier to win. 

What the authors find interesting, as psychologists, is the pattern. They conclude that after winning a few wagers in a row, the bettors become more conservative, and after losing a few in a row, they become more aggressive. They suggest that the bettors must believe in the "Gambler's Fallacy," that after a bunch of losses, they're due for a win, and after a bunch of wins, they're due for a loss. That is: they take fewer chances when they think the Fallacy is working against them.

But, why assume that the bettors are changing their behavior?  Shouldn't the obvious assumption be that it's selective sampling, that bettors on a winning streak had *always* been backing favorites?

Some bettors like long shots, and lose many in a row. Some bettors like favorites, and win many in a row. It's not that people bet on favorites because they're on winning streaks -- it's that they're on winning streaks because they bet on favorites! 

Imagine that there are only two types of bettors, aggressive and conservative. Aggressives bet longshots and win 20% of the time; conservatives bet on favorites and win 80% of the time.

Aggressives will win five in a row one time in 3,125. Conservatives will win five in a row around one time in 3. So, if you look at all bettors on a five-win hot streak, there are 1,024 conservatives for every aggressive. (In fact, the factor quadruples with each additional win: 4:1 after one win, 16:1 after two wins, and so on, up to 1024:1 after five wins.)
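
That arithmetic is easy to check with a few lines of Python. (The 20/80 win rates are the made-up ones from this example, not figures from the study.)

```python
# Two hypothetical bettor types: "aggressives" back longshots and win 20%
# of their bets; "conservatives" back favorites and win 80%.
p_cons, p_aggr = 0.80, 0.20

for k in range(1, 6):
    ratio = (p_cons / p_aggr) ** k  # conservatives per aggressive on a k-win streak
    print(f"{k}-win streak: {p_cons ** k:.4f} vs {p_aggr ** k:.6f} -> {ratio:.0f}:1")
```

The ratio is 4 to the power of the streak length, so five straight wins leaves 1,024 conservatives for every aggressive.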

It seems pretty obvious that's what must be happening.


But wait, it's even more obvious when you look closer at the study. It turns out the authors combined three different sports into a single database: horse racing, greyhound racing, and soccer.

A soccer game result has three possibilities -- win, lose, draw -- so the odds (before vigorish) have to average 2:1. On the other hand, if there are 11 horses in a race, the odds have to average 10:1. 

Well, there you go!  The results probably aren't even a difference between "aggressives" and "conservatives". It's probably that some bettors wager only on soccer, some wager only on racing, and it's the soccer bettors who are much more likely to win five in a row!


There's strong evidence of that kind of "bimodality" in the data. The authors reported that, overall, bettors won 48% of their wagers -- but at average odds of 7:1. That doesn't make sense, right?  48% should be more like even money.

I suspect the authors just used a simple average of the odds numbers. They took a trifecta, with 500:1 odds, and a soccer match, with 1:1 odds, and came up with a simple average of 250:1. 

It doesn't work that way. You have to use the average of the probabilities of winning -- which, in this case, are 1/2 and 1/501. The average of those is 503/2004, which translates to average odds of 1501:503, or about 3:1. (Another way to put it: add 1 to all the odds, take the harmonic mean, and subtract 1 from the result. If you leave out the "add 1" and "subtract 1", you'll probably be close enough in most cases.)
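
Here's that calculation as a sketch in Python (`avg_odds` is my own name for it, not anything from the study):

```python
def avg_odds(odds_list):
    """Average odds-against figures (500 means 500:1) by averaging the
    implied win probabilities, then converting back to odds-against."""
    probs = [1.0 / (x + 1.0) for x in odds_list]  # odds X:1 -> p = 1/(X+1)
    p_bar = sum(probs) / len(probs)
    return (1.0 - p_bar) / p_bar                  # p -> odds-against

naive = (500 + 1) / 2        # simple average: a misleading 250.5:1
proper = avg_odds([500, 1])  # probability-weighted: about 3:1
print(naive, proper)
```

With the trifecta (500:1) and the soccer match (1:1), the probability-weighted average comes out to 1501:503, or about 3:1, versus the naive 250.5:1.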

The bigger the spread in the odds, the worse the simple average works. So, the fact that 48% is so far from 7:1 is an indication that they're mixing heavy favorites with extreme longshots. Well, actually, we don't need that indication -- the authors report that the SD of the odds was 38.


Finally, if none of that made sense, here's an analogy for what's going on.

I study people who have a five year streak of eating lox and bagels. I discover that, in Year Six, they're much more likely to celebrate Hanukkah than people who don't have such a streak. Should I conclude that eating lox and bagels makes people convert to Judaism?


Friday, May 16, 2014

Do rich people ignore declines in their wealth?

Here's a recent article at FiveThirtyEight that didn't make sense to me. It's called "Why the Housing Bubble Tanked the Economy and the Tech Bubble Didn't."  It's by two guest writers, Amir Sufi and Atif Mian, both professors of economics.

Their argument goes like this (my words):


In 2007, the housing bubble wiped out $6 trillion in real estate values. Coincidentally, in 2000, the tech stock bubble dropped stock prices by roughly the same $6 trillion. 

But, from 2007-2009, there was a bad recession, with consumer spending dropping 8 percent. In 2000, by contrast, consumer spending actually grew. 

Why the difference?

Sufi and Mian argue that it's because the tech stock losses fell on mostly the rich, while the housing bubble hurt poor homeowners' nest eggs more severely. In a series of charts, they show how the richest quintile of home owners lost only about 25% of their total wealth in the housing crash, while the poorest 20 percent -- with their mortgage barely paid off, resulting in high leverage -- lost about 90% of their net worth. (This is dramatically illustrated in their second chart, where the bottom-quintile line dives to almost zero. Hey, embedding a tweet is fair use, right?  In that case, here's the graph.) 

The authors write,
"The poor cut spending much more for the same dollar decline in wealth ... If Bill Gates loses $30,000 in a bad investment, he's not going to cut back his spending. If a household with only $30,000 suffers a similar loss, they're going to massively slash spending."

So, that's their argument for why 2007 was so much worse than 2000: because it wasn't just the rich who took a huge hit. 


The article's facts make sense to me, but I don't think the conclusion follows. 

While the poorer homeowners did indeed suffer a larger percentage loss, that's not directly relevant. The *dollar* loss is the better measure of what affects the broader economy. When the local coffee shop goes out of business, it doesn't matter if it's because its 200 poorest customers stopped spending entirely, or its 800 richer customers cut spending by a quarter.

The authors implicitly acknowledge that, when they note that the less wealthy are more sensitive to changes on a dollar-for-dollar basis -- the poor household's $30,000 versus Bill Gates's $30,000. But the bottom quintile didn't lose more on a dollar-for-dollar basis -- only on a percentage basis. 

If the poor had lost the same amount as the rich, or close to it, I might buy the argument. But it wasn't even close. From the chart, it looks like the poorest lost about $28,000 in net worth, dropping from $30,000 to $2,000. But the richest lost a million dollars in value, literally, from $4 million down to $3 million. 

That is, the top quintile lost more than thirty times as much as the bottom quintile.

So, for the authors to defend their hypothesis, it's not enough that they show that the poor cut spending per dollar more than the rich do. They have to show that the poor cut spending per dollar *more than 30 times as much* as the rich do.
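
Plugging the rough chart readings into Python makes the percentage-versus-dollar contrast explicit. (The inputs are my approximations from the article's chart, so the exact ratio depends on how you read it.)

```python
# Approximate net worth figures read off the article's chart
poor_before, poor_after = 30_000, 2_000          # bottom quintile
rich_before, rich_after = 4_000_000, 3_000_000   # top quintile

poor_loss = poor_before - poor_after   # $28,000
rich_loss = rich_before - rich_after   # $1,000,000

print(f"poor lost {poor_loss / poor_before:.0%} of net worth")  # 93%
print(f"rich lost {rich_loss / rich_before:.0%} of net worth")  # 25%
print(f"dollar ratio: {rich_loss / poor_loss:.0f}x")            # 36x
```

The bottom quintile's loss is devastating in percentage terms, but in dollars the top quintile lost vastly more.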

I doubt that's the case. A back-of-the-envelope guess shows that it's pretty much impossible. Let's guess at the average income of the bottom quintile -- $50,000, maybe?  And, suppose they spent all of it before the crash. 

After the crash, they get scared, and spend less. How much less?  Let's say, $5,000 less?  They still have to pay the mortgage, and taxes, and food, and clothing. 

Now, the top quintile. Let's suppose they spent all their income before the crash, and, to be generous to the authors, assume they again spent all their income after the crash. 

But, what about their wealth?  Do we really think they won't spend *any* less of their stash now that it's $3 million instead of $4 million?  

Most people accumulate wealth because they eventually want to spend it -- otherwise, what's the point?  Let's suppose they plan on spending half of it before they die, and leaving the other half to charity or heirs. After the crisis, they have $500,000 less to spend over their lifetime. If they've got 30 years left, simple arithmetic says they have to spend an average of $16,000 less per year. (And that doesn't even consider that their income from that wealth will drop proportionately.)
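
The amortization arithmetic, as a sketch (the half-spent assumption and 30-year horizon are the guesses above, not data):

```python
wealth_drop = 1_000_000   # net worth falls from $4M to $3M
spend_share = 0.5         # plans to spend half, leave half to heirs/charity
years_left = 30           # remaining lifetime

annual_cut = wealth_drop * spend_share / years_left
print(round(annual_cut))  # about $16,667 less spending per year
```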

Now, you could argue that they won't cut their spending by $16,000 a year *immediately*. But, why not?  Maybe they're saving for retirement, so the cut in spending comes later. But, some of the richest quintile is *already* retired, so their spending cuts would be immediate, and larger than $16K.

And, look at it the other way. If you're worth $3 million, and the market booms, and suddenly you're worth $4 million ... are you really not going to spend MORE?  I would. And if it works one way, it should work the other way.

(Sure, your personal spending might not change if you're so rich that you have everything you want either way -- like a Bill Gates, or Warren Buffett. But the top quintile isn't even close. I can assure you, personally, that I'd be spending more with $4 million to my name than with $3 million.)

It's undeniable logic that when your net worth drops, your lifetime future spending (or donating) has to drop by exactly the same amount. The idea that richer people's spending drops by zero ... that seems to contradict both arithmetic and human nature.


Coincidentally, I found an answer to the "how much more do the rich spend?" question in a recent book review by Larry Summers, a prominent economist who was Secretary of the Treasury during the Clinton administration. Summers writes,

"The determinants of levels of consumer spending have been much studied by macroeconomists. The general conclusion of the research is that an increase of $1 in wealth leads to an additional $.05 in spending [per year]."

That would mean the top quintile, dropping $1 million in wealth, would spend $50,000 less next year. (More than I would have guessed.)
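
Summers's rule of thumb, applied to the top quintile's loss, is just a multiplication:

```python
wealth_effect = 0.05          # ~$0.05 of annual spending per $1 of wealth
top_quintile_loss = 1_000_000

annual_spending_cut = wealth_effect * top_quintile_loss
print(round(annual_spending_cut))  # $50,000 a year
```

That's roughly three times the cruder lifetime-amortization guess of $16,000 or so per year.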

The above two sentences by themselves don't differentiate between richer and poorer. But the context is clearly about the rich, in a discussion about how the wealthy don't actually get that much wealthier because they tend to blow a lot of the money they've already got. 

Near the end of the article, Sufi and Mian themselves confirm that academic economists disagree with them:

"Former Federal Reserve Chairman Ben Bernanke described why academics doubt the importance of distribution issues ... suggesting that differences in spending propensities because of wealth would have to be "implausibly large" to explain the decline in spending during the 1930s."
"We disagree."

But there's nothing in their data that provides a basis for their disagreement. 


It occurred to me that there's evidence out there that we could check on our own. Specifically, sales of "rich people" goods. If the wealthy didn't cut spending during the crash, sales wouldn't have dropped -- or at least, not as much as for "normal people" goods. 

Looking at cars ... as a baseline, let's take Ford, the automaker least affected by the 2007 crisis. Here are total US Ford sales for 2006 to 2010, (and year-over-year percentage change):

2006 2,901,090
2007 2,507,366 (-14%)
2008 1,988,376 (-21%)
2009 1,620,888 (-18%)
2010 1,935,462 (+20%)
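
Here's a quick recomputation of those year-over-year changes from the totals above; they land within a point of the rounded figures in parentheses.

```python
# Total US Ford sales, from the table above
ford_sales = {2006: 2_901_090, 2007: 2_507_366, 2008: 1_988_376,
              2009: 1_620_888, 2010: 1_935_462}

years = sorted(ford_sales)
yoy = [100.0 * (ford_sales[b] - ford_sales[a]) / ford_sales[a]
       for a, b in zip(years, years[1:])]

for year, pct in zip(years[1:], yoy):
    print(f"{year}: {pct:+.1f}%")  # -13.6, -20.7, -18.5, +19.4
```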

Compare those percentage changes to some of the luxury brands. I've added Honda, too, as a second "middle-class brand" reference point.

Bentley:       +3% -33% -49%  +5%
BMW:           +7% -15% -21% +12%
Mercedes:      +2% -21% -17%  +6%
Cadillac:      -5% -25% -33% +35%
Jaguar:       -24%  +2% -25% +12%
Lexus:         +2% -21% -17%  +6%
Ford:         -14% -21% -18% +20% 
Honda:         +5%  -6% -19%  +5%

It does seem like luxury cars were hit almost exactly the same way as Ford and Honda, doesn't it?  


Among the large, publicly-traded US home builders, one of them, Toll Brothers, specializes in luxury homes. According to their annual report (.pdf, see first page), their average home sold for $639,000 last year.

If rich people didn't cut spending during the crisis, you'd expect that Toll Brothers' sales wouldn't have declined, or, at least, declined less than those of the builders of "middle-class" houses. 

Nope.  Here's the 2006-2009 percentage drop in sales for all the homebuilders Value Line covers:

82% Beazer 
76% D.R. Horton
74% Hovnanian
84% KB Home
81% Lennar
81% M.D.C.
72% Meritage
56% NVR
71% Pulte
73% Ryland
70% Std. Pacific
71% Toll Brothers

Toll Brothers fits right in.
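
A quick check in Python confirms that Toll Brothers sits near the middle of that list:

```python
import statistics

# 2006-2009 percentage sales drops, from the list above
drops = {"Beazer": 82, "D.R. Horton": 76, "Hovnanian": 74, "KB Home": 84,
         "Lennar": 81, "M.D.C.": 81, "Meritage": 72, "NVR": 56,
         "Pulte": 71, "Ryland": 73, "Std. Pacific": 70, "Toll Brothers": 71}

median_drop = statistics.median(drops.values())
print(median_drop)               # 73.5
print(drops["Toll Brothers"])    # 71 -- barely better than the median
```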


Here are some luxury-goods makers' revenue changes from 2006 to 2009:

-27% Sotheby's
+ 2% Tiffany
-27% Zale
-29% Movado
+71% Coach
-49% Brunswick
-17% Harley-Davidson
-75% Winnebago

All are down more than the overall 8%, except Coach and Tiffany. Even the simple average shows a decline much steeper than 8%. The weighted average decline is probably somewhat smaller, because Winnebago, Sotheby's, and Movado have lower sales than the other companies. 

Also, I'm not sure Coach should count ... it had been growing wildly throughout the decade, and continued to do so until recently. Still, its growth did slow significantly during the crisis. From 2002 to 2013, Coach's revenues grew by an average 13% annually -- but from 2008 to 2009, sales rose only 2%.

In any case, even if you include Coach, the results are at least as bad as the overall 8% drop. If you don't include Coach, then, wow, those companies' sales did much, much worse than -8%.
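
The averages, computed from the listed figures:

```python
# 2006-2009 revenue changes (percent), from the list above
changes = {"Sotheby's": -27, "Tiffany": 2, "Zale": -27, "Movado": -29,
           "Coach": 71, "Brunswick": -49, "Harley-Davidson": -17,
           "Winnebago": -75}

avg_all = sum(changes.values()) / len(changes)

ex_coach = {k: v for k, v in changes.items() if k != "Coach"}
avg_ex_coach = sum(ex_coach.values()) / len(ex_coach)

print(round(avg_all, 1))       # -18.9: worse than the economy-wide -8
print(round(avg_ex_coach, 1))  # -31.7: much worse without Coach
```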


Sufi and Mian also seem to think their hypothesis is somehow related to the issue of income inequality:

"[This] shows how important distributional issues should be in macroeconomics. As the recent craze over French economist Thomas Piketty's new book, 'Capital in the Twenty-First Century,' shows, both economists and lay people are beginning to understand that wealth inequality is crucial for understanding the broader economy."

But, inequality doesn't impact their argument at all. What their hypothesis says is that shocks to poorer people are more likely to cause recessions. In that case, what prevents recessions is not equality, but wealth. 

In fact, by their argument, inequality is almost completely uninformative as a predictor. If we were all equal and poorer, the recession would be severe. If we were all equal and richer, there would be no recession at all. It's *wealth* that matters for their theory, not inequality.


So, in summary: the authors are writing to refute conventional thinking on recessions. But, they don't really tell us why they think the established science is wrong. Their key premise seems to be contradicted by arithmetic and actual sales figures. And, the data they choose to show us doesn't actually bear on the disagreement. 

Do I have this wrong?  Am I missing something?  Any economists out there want to correct me?


Sunday, May 04, 2014

Salaries and merit

People like to say that salaries should be based on merit. Employees should be rewarded by their ability and performance, and not by connections, or family, or race, or class, or luck.  

In a previous post, I argued that it's something we say, but don't really mean.  We don't really act as if we believe that "merit" is what counts.  In fact, outside of sports and school, we don't seem to want to measure merit at all.


The classic example is teachers.  We love to talk about how valuable teachers are and how important education is, and almost everyone has a story about a great teacher and what a difference he or she made to their life.  But, even so ... we don't make much effort to separate the better teachers from the worse ones.  

Sure, that's not necessarily easy to do.  Ratings are arbitrary, and subject to all kinds of manipulation and favoritism.  We can't let the students vote on merit, because they'll just reward the teachers who give the highest grades and the least homework.  How do you judge "good" and "bad", anyway?  Isn't it mostly subjective?

All of those issues are legitimate, but not insurmountable. The students themselves know who the best and worst teachers are, independent of the grades they give, right?  In my high school, we all pretty much agreed on who were the best.  A good analogy is MLB broadcasters.  Here's one poll of the best and worst.  You probably agree with it, for the most part, right?  (Except that Vin Scully should really be #1.)

There is a movement to pay teachers based on merit, but mostly from policy wonks.  It's not something that parents seem to care about, or students.  And even when merit pay is discussed, the case for it is almost all about improving education. That's important, sure. But, what about from an ethical standpoint?  If we really believe in paying for merit, shouldn't the argument be that better teachers should be paid more on the principle of simple fairness?  Tellingly, the Wikipedia article on teacher merit pay doesn't mention that argument at all.


There are some situations where we *do* want to recognize merit.  It's obvious that someone who works 40 hours deserves --  "merits" -- twice as much pay as someone who works the same job for only 20 hours.  They do twice the work, so they deserve twice the pay.

But, what about when one employee can do twice the work in the same amount of time?  

In my field, software development, there's an adage that the difference between the best and worst programmer is 100 times the productivity: 10 times between best and average, and 10 times between average and worst.  Let's say it's really only 5 times instead of 10, and let's say we fire all the bad programmers.  That still leaves a 5x difference between best and worst.

Should a fast programmer make 500 percent of the salary of an average one?  If you believe in merit, then, you have to say yes.  Don’t you?  

You could even argue that the difference should be *more* than five times.  If you had to hire five slow employees to replace the one fast one, you'd need five times the desks, the heating bill, the parking lot, and lots of other fixed costs. Furthermore, critical situations often arise when something needs to be done very fast.  The single employee provides you that capability, but the five slow ones will just run into each other.  (Brooks's Law: "Adding manpower to a late software project makes it later.")

But no company could get away with that kind of salary differential.  There'd be a riot.  

Some of the average guys would feel underappreciated and quit.  Others would instead spend their time coming up with rationalizations that the fast programmers are producing lower-quality work, and perhaps even sabotaging them.  The managers would enviously rebel against their underling programmer making three times their salary (even though they'd have to grudgingly admit he's worth it).  The suits in head office would say, why is that geek with no social skills making more than me, when I have an MBA and a thousand dollar suit?  And, when the superstar programmer got older and slowed down, you'd have to find a way to cut his salary in half, without humiliating him.  

I’d argue that merit pay doesn't fail because merit is too difficult to measure.  It fails because merit CAN be measured, and we don't like the results.  


You're a teacher.  You've been doing it for a few years, and you're proud of your career choice.  The parents like you, and people show a lot of respect when they find out you're a teacher.  You have a satisfyingly high level of status in the community.

But, now, someone figures out a way to accurately evaluate all the teachers.  When they’re done, it turns out that you’re only 30th percentile.  Seven out of ten of your colleagues are better than you.  

Now, you look and feel like a bit of a failure.  Even if you kind of knew, before, that you weren't as wonderful a teacher as some of your colleagues, you didn't really have to face it. You could think, hey, I’m good in my own way.  I may not be able to hit home-run explanations like Barry Bonds, but I use time-tested approaches, so I don’t strike out much either!  

But now that "Teacher WAR" shows you’re barely above replacement, your rationalizations don't work any more.  Also, your pay gets cut.

Even the *good* teachers don’t necessarily benefit from evaluations.  When the world gets a reminder that some teachers are worse than others, everyone’s status takes a hit.  Then the mediocre teachers quit, better ones replace them, there's more competition, and suddenly you’re middle of the pack.  And what happens when you have a bad year? Whatever it is you’re doing right, you have to keep it up in order to keep your place in the pecking order.  In fact, you probably have to get substantially better.  It's lose/lose for almost everyone.  

Except: if they only rate the best.  Then, nobody else really gets hurt.  Sure, you didn't win, and that shows that you're not the very, very best, but that’s no real shame.  And, for all the world knows, you might still be the second best, or in the top 1%.  

I think that’s why, in many professions, you'll see awards given to the best in the field, while nobody else gets rated or mentioned.  (Google any job description followed by "awards," and, usually, you'll find something.)  The awards aren't just to benefit the winners ... they’re also there to raise the status of the profession in general.  When a local employee wins a national award for best mechanic, all mechanics gain a little bit of additional respect.  ("Geez, I thought that mechanics were just people too dumb to graduate high school, but this guy seems really smart!  Now that I think about it, my guy might be pretty good too.")

In the everyday world, the equivalent to an award is a promotion.  Instead of saying, "Bob, you fix cars only half as fast as Joe, so we're going to pay you only half as much," we say, "Bob, as you know, Joe is the very best mechanic here, so we're promoting him to senior technician."  You get to recognize the best without having to formally call out the worst.

Another benefit is we tend to compare ourselves to our peers, rather than people at different positions in the hierarchy.  So, after the promotion, Bob is less likely to become resentful, because he's less likely to compare himself to Joe.  Also, if Joe is doing a different job, Bob doesn't get reminded every day that Joe is twice as good.

But ... it's a promotion to a different job.  You're taking the best performers out of the environment where they've excelled, and moving them to something unproven.  In software in the government, they take the best programmers, and move them to something that’s very different -- like “project leader,” which is a middle-management job.  Sure, you need strong technical expertise to do that job, but almost any decent programmer can do it competently.  When you have a programmer who’s five times as good as average, why put him in a job he’s less suited for?  It’s like telling Mike Trout, "You’re so good at baseball that we’re promoting you to bench coach."

And there are only so many promotions to go around.  We've all seen lots of low-paid workers who are excellent at what they do, and lots of their colleagues who aren't that good.  I’m pretty sure the range of their salaries is much narrower than the quality and quantity of their output -- Walmart cashiers, for instance, seem to all be within a 25% range.

So if you're good, and you want to be rewarded on merit, you have to count on promotions.  But how many of the best can be promoted?  Walmart has 2.2 million associates, and promotes 170,000 of them annually.  That’s less than 8 percent.  That means you're going to have a lot of excellent cashiers making only slightly more money than average cashiers.  

If we really cared about merit, we'd be outraged by that.


In Canada, we have "pay equity" legislation, designed to close the gap between men’s and women’s earnings, by reclassifying male-dominated and female-dominated jobs in such a way that their pay scales become more equal.  

The rationale is "equal pay for work of equal value."  As in, "a secretary is just as valuable as a carpenter and should be paid no less."

But ... we only seem to care that the pay in question becomes literally equal.  We demand that equals be paid equally, but we don’t really care that non-equals are paid non-equally.

If you really believe in equal pay for work of equal value, the faster programmer should be earning much more than the slower programmer, and the better teacher should be earning much more than the worse teacher, and the industrious janitor should be earning much more than the average janitor. But we don’t take to the street to say, "Jan is working twice as hard at the ice cream parlor as Marsha, but being paid the same!  Stop the exploitation!"

When we do accept pay differences, we don’t seem to care much about whether they're proportional to ability.  We may agree that doctors are more "meritorious" than, say, welders. But, how much more?  Should a doctor be paid twice what a welder makes?  Ten times?  Twenty times?  Nobody cares, really, do they?  

The rare times that we do care, it’s almost always on the side of making everyone more equal.  Much is made of the ratio between CEO and worker pay, how it's too high.  Why do we think CEOs make too much?  For egalitarian reasons that have nothing to do with merit.  I have no idea, honestly, how much more a fast-food CEO is worth than the guy who flips the burgers.  Is it 100 times? 1000 times?  10,000 times?  If we did care about equal pay for equal value, it would matter, wouldn't it?

It certainly matters in sports.  We know Mike Trout is a bargain at 600 times the US minimum wage, but overpaid at 6000 times the minimum wage.  Why don’t we have the same argument for CEOs?  I think it's because we don’t care about merit for corporate bigwigs; we're just uncomfortable that they make millions. Maybe I'm wrong -- maybe we really do believe CEOs make more than they deserve on merit.  But, if that's the case, why does hardly anyone argue numbers about how much they're actually worth?

I stumbled onto this article that profiles Marie Sanders, a single mother who works for McDonald's.  After more than two years of full-time work at Mickey D's, she still earns only $7.75 an hour and has trouble making ends meet.  

There's nothing in the article about merit.  The article never asks: "Is Ms. Sanders worth more than $7.75?  Is she worth the $15 she implies she deserves?  How were her job evaluations?"  If the writer *did* ask that, there would be outrage.  "How dare you imply that she's not competent enough to deserve a living wage!  Don't big corporations owe their workers a decent enough paycheck to allow them the dignity of feeding their family?"

Actually, Ms. Sanders sounds intelligent and competent.  I bet she’s well above average among her peers.  If that's true, does that bolster her case for $15 an hour?  I think it does, and I think most everyone would agree.

But if her excellence is an argument for paying her more, that's the same thing as saying her co-workers' averageness (or even mediocrity) is an argument for paying them less.  If Marie should earn more than Andy, the reasonably competent fry cook, that means Andy should earn less than Marie. 

How much less?  Maybe even ... $7.75?  If not that low, then, what?  $10?  OK, say $10.  But, now, what if there's someone even worse than Andy, someone who's just marginally competent?  

I think we don't want to bite the "merit" bullet and face that some workers are worth a lot less than others.  We're uncomfortable if Marie earns the $15 she's worth, and Andy earns the $7 he's worth.  Instead, we're happier if Marie and Andy both make $11 -- even if Marie is a lot better at her job.  

We say we want salaries to be merit-based, but what we really seem to want is for salaries to be equal.  
