Sunday, September 28, 2014


Bill James doesn't like to be called an "expert." In the "Hey Bill" column of his website, he occasionally corrects readers who refer to him that way. And, often, Bill will argue against the special status given to "experts" and "expertise."

This, perhaps understandably, puzzles some of us readers. After all, isn't Bill's expertise the reason we buy his books and pay for his website?  In other fields, too, most of what we know has been told to us by "experts" -- teachers, professors, noted authors. Do we want to give quacks and ignoramuses the same respect as Ph.Ds?

What Bill is actually arguing, I think, is not that expertise is useless -- it's that in practice, it's used to fend off argument about what the "expert" is saying.  In other words, titles like "expert" are a gateway to the fallacy of "argument from authority."

On November 8, 2011 (subscription required), Bill replied to a reader's question this way:

"I've devoted my whole career to battling AGAINST the concept of expertise. The first point of my work is that it DOESN'T depend on expertise. I am constantly reminding the readers not to regard me an expert, because that doesn't have anything to do with whether what I have to say is true or is not true."

In other words: don't believe something because an "expert" is saying it. Believe it because of the evidence. 

(It's worth reading Bill's other comments on the subject; I wasn't able to find links to everything I remember, but check out the "Hey Bill" pages for November, 2011; April 18, 2012; and August/September, 2014.)

Anyway, I'd been thinking about this stuff lately, for my "second-hand knowledge" post, and Bill's responses got me thinking again. Some of my thoughts on the subject echo Bill's, but all opinions here are mine.


I think believing "experts" is useful when you're looking for the standard, established scientific answer.  If you want to know how far it is from the earth to the sun, an astronomer has the kind of "expertise" you can probably accept.

We grow up constantly learning things from "experts," people who know more than we do -- namely, parents and teachers. Then, as adults, if we go to college, we learn from Ph.D. professors. 

Almost all of our formal education comes from learning from experts. Maybe that's why it seems weird to hear that you shouldn't believe them. How else are you going to figure out the earth/sun distance if you're not willing to rely on the people who have studied astronomy?

As I wrote in that previous post, it's nice to be able to know things on your own, directly from the evidence. But there's a limit to how much we can know that way. For most factual questions, we have to rely on other people who have done the science that we can't do.


The problem is: in our adult, non-academic lives, the people we call "experts" are rarely used that way, to resolve issues of fact. Few of the questions in "Ask Bill" are about basic information like that. Most of them are asking for opinion, or understanding, or analysis. They want to pick Bill's brain.

From 1/31/2011: "Would you have any problem going with a 4-man rotation today?"

From 10/7/2013: "Bill, you wrote in an early Abstract that no one can learn to hit at the major league level. Do you still believe that?"

From 10/29/2012: "Do you think baseball teams sacrifice bunt too much?"

In those cases, sure, you're better off asking Bill than asking almost anyone else, in my opinion. Even so, you shouldn't be arguing that Bill is right because he's an "expert."  

Why?  Because those are questions that don't have an established, scientific answer based on evidence. In all three cases, you're just getting Bill's opinion. 

Moreover: all three of those issues have been debated forever, and there's *still* no established answer. That means there are opinions on both sides. What makes you think the expert you're currently asking is on the correct side? Bill James doesn't think a four-man rotation is a bad idea, but any number of other "experts" believe the opposite. 

Subject-matter experts should agree on the basic canon, sure. It should be rare that a physics "expert" picks up a textbook and has serious disagreements with anything inside.

But: they can only agree on answers that are known. In real life, most interesting questions don't have an answer yet. That's what makes them so interesting!

When will we cure cancer? What's the best way to fight crime? When should baseball teams bunt? Will the Seahawks beat the spread?

Even the expertest expert doesn't know the answer to those questions. Some of them are unknowable. If anyone was "expert" enough to predict the outcome of football games, he'd be the world's richest gambler. 


All you can really expect from an expert is that he or she knows the state of the science.  An expert is an encyclopedia of established knowledge, with enough understanding and experience to draw inferences from it in established ways.

Expertise is not the same as intelligence. It is not the same as wisdom. It is not the same as insight, or freedom from bias, or prescience, or rationality.

And that's why you can get different "experts" with completely different views on the exact same question, each of them thinking the other is a complete moron. That's especially true on controversial issues. (Maybe it's not that controversial issues are less likely to have real answers, but that issues that have real answers are no longer controversial.)

On those kinds of issues, where you know there are experts on both sides, you might as well flip a coin as rely on any given expert.

And hot-button issues are where you find most of the "experts" in the media or on the internet, aren't they?  I mean, you don't hear experts on the radio talking about how many neutrons are in an atom of vanadium. You hear them talking about what should be done to revive the sagging economy. Well, there's no consensus answer for that. If there were, the Fed would have implemented it long ago, and the economy would no longer be sagging. 

Indeed, the fact that nobody is taking the expert's advice is proof that there must be other experts who think he's wrong.

Sometimes, still, I find myself reading something an expert says, and nodding my head and absorbing it without realizing that I'm only hearing one side. We don't always consciously notice the difference, in real time, between consensus knowledge and the "expert's" own assertions. 

Part of the reason is that they're said in the same, authoritative tone, most of the time. Listen to baseball commentators. "Jeter is hitting .302." "Pitching is 75 percent of baseball." You really have to be paying attention to notice the difference. And, if you don't know baseball, you have no way of knowing that "75 percent of baseball" isn't established fact! At least, until you hear someone dispute it.

Also, I think we're just not used to the idea that "experts" are so often wrong. For our entire formal education, we absorb what they teach us about science as unquestionably true. Even though we understand, in theory, that knowledge comes from the scientific method ... well, in practice, we have found that knowledge comes from experts telling us things and punishing us for not absorbing them.  It's a hard habit to break.


The fact is: for every expert opinion, you can find an equal and opposite expert opinion. 

In that case, if you can't assume someone's right just because he's an expert, can you maybe figure out who's right by *counting* experts?  

Maybe, but not necessarily. As Bill James wrote (9/8/14),

"An endless list of experts testifying to falsehood is no more impressive than one."

It used to be that an "endless list" of experts believed that W-L record was the best indication of a pitcher's performance. It used to be that almost all experts believed homosexuality was a disease. It used to be that almost no experts believed that gastritis was caused by bacteria -- until a dissenting researcher proved it by drinking a beaker of the offending strain. 

Each of those examples (they're mine, not Bill's) illustrates a different way experts can be wrong. 

In the first case, pitcher wins, the expert conventional wisdom never had any scientific basis -- it just evolved, somehow, and the "experts" resisted efforts to test it. 

In the second case, homosexuality, I suspect a big part of it was the experts interpreting the evidence to conform to their pre-existing bias, knowing that it would hurt their reputations to challenge it. 

In the third case ... that's just the scientific method working as promised. The existing hypothesis about gastritis was refuted by new evidence, so the experts changed their minds. 

Bill has a fourth case, the case of psychiatric "expert witnesses" who just made stuff up, and it was accepted because of their credentials. From "Hey Bill," 11/10/2011 and 11/11/2011:

"Whenever and wherever someone is convicted of a crime he did not commit, there's an expert witness in the middle of it, testifying to something that he doesn't actually know a damned thing about.  In the 1970s expert witnesses would testify to the insanity of anybody who could afford to pay them to do so."

"Expert witnesses are PRAISED by professional expert witnesses for the cleverness with which they discuss psychological concepts that simply don't exist."

In none of those cases would you have got the right answer by counting experts. (Well, maybe in the third case, if you counted after the evidence came out.)  

Actually, I'm cheating here. I haven't actually shown that the majority isn't USUALLY right. I've just shown that the majority isn't ALWAYS right. 

It's quite possible that those four cases were rare exceptions: that, most of the time, when the majority of experts agree, they're generally right. Actually, I think that's true, that the majority is usually right -- but I'm only willing to grant that for the "established knowledge" cases, the "distance from the earth to the sun" issues. 

For issues that are legitimately in dispute, does a majority matter?  And does the size matter?  Does an 80/20 split among experts really mean significantly more reliability than a 70/30 split?  

Maybe. But if you go by that, it's not *knowing*, right?  It's just handicapping. 

Suppose 70% of doctors believe X, and, if you look at all the other times that 70 percent of doctors believed something, 9 out of 10 of those beliefs turned out to be true. In that case, you can't say, "you must trust the majority of experts."  You have to say, at best, "there's a 9 out of 10 chance that X is true."

But maybe I can say more, if I actually examine the arguments and evidence.

I can say, "well, I've examined the data, and I've looked at the studies, and I have to conclude that this is the 1 time out of 10 that the majority is dead wrong, and here is the evidence that shows why."  

And you have no reply to that, because you're just quoting odds.

And that's why evidence trumps experts. 

Here's Bill James on climate scientists, 9/9/2014 and 9/10/2014:

"[You should not believe climate scientists] because they are experts, no. You should believe them if they produce information or arguments that you find persuasive. But to believe them BECAUSE THEY ARE EXPERTS -- absolutely not.

"It isn't "consensus" that settles scientific disputes; it is clear and convincing evidence. An issue is settled in science when evidence is brought forward which is so clear and compelling that everyone who looks at the evidence comes to the same conclusion. ... The issue is NOT whether scientists agree; it is whether the evidence is compelling."

If you want to argue that something is true, you have two choices. You can argue from the evidence. Or, you can argue from the secondhand evidence of what the experts believe. 

But: the firsthand evidence ALWAYS trumps the secondhand evidence. Always. That's the basis of the entire scientific method: new evidence can drive out an old theory, no matter how many experts and Popes believe otherwise, and no matter how strongly they believe it.

You're talking to Bob, a "denier" who doesn't believe in climate change. You say to Bob, "how can you believe what you believe, when the scientists who study this stuff totally disagree with you?"

If Bob replies, "I have this one expert who says they're wrong" ... well, in that case, you have the stronger argument: you have, maybe, twenty opposing experts to his one. By Bob's own logic -- "trust experts" -- the probabilities must be on your side. You haven't proven climate change is real, but you've convincingly destroyed Bob's argument. 

However: if Bob replies, "I think your twenty experts are wrong, and here's my logic and evidence" -- well, in that case, you have to stop arguing. He's looking at firsthand evidence, and you're not. Your experts might still be right, because maybe he's got bad data, or he's misinterpreting his evidence, or his worthless logic comes out of the pages of the Miss America Pageant. Still, your argument has been rendered worthless because he's talking evidence, which you're not willing or able to look at directly.

As I wrote in 2010,

"Disbelieving solely because of experts is NOT the result of a fallacy. The fallacy only happens when you try to use the experts as evidence. Experts are a substitute for evidence. 

"You get your choice: experts or evidence. If you choose evidence, you can't cite the experts. If you choose experts, you can't claim to be impartially evaluating the evidence, at least that part of the evidence on which you're deferring to the experts. 

"The experts are your agents -- if you look to them, it's because you are trusting them to evaluate the evidence in your stead. You're saying, "you know, your UFO arguments are extraordinary and weird. They might be absolutely correct, because you might have extraordinary evidence that refutes everyone else. But I don't have the time or inclination to bother weighing the evidence. So I'm going to just defer to the scientists who *have* looked at the evidence and decided you're wrong. Work on convincing them, and maybe I'll follow."  

In other words: it's perfectly legitimate to believe in climate change because the scientific consensus is so strong. It is also legitimate to argue with people who haven't looked at the evidence and have no firsthand arguments. But it is NOT legitimate to argue with people who ARE arguing from the evidence, when you aren't. 

That they're arguing first-hand, and you're not, doesn't necessarily mean you're wrong. It  just means that you have no argument or evidence to bring to the table. And if you have no evidence in a scientific debate, you're not doing science, so you need to just ... well, at that point, you really need to just shut up.

The climate change debate is interesting that way, because most of the activist non-scientists who believe it's real haven't actually looked at the science enough to debate it. A large number have *no* firsthand arguments, except the number of scientists who believe it. 

As a result, it's kind of fun to watch their frustration. Someone comes up with a real argument about why the data doesn't show what the scientists think it does, and ... the activists can't really respond. Like me, most have no real understanding of the evidence whatsoever. They could say, like I do to the UFO people, "prove it to the scientists and then I'll listen," but they don't. (I suspect they think that sounds like they're taking the deniers seriously.)

So, they've taken to ridiculing and name-calling and attacking the deniers' motivations. 

To a certain extent, I can't blame them. I'm in the same situation when I read about Holocaust deniers. I mean the serious ones, the "expert" deniers, the ones who post blueprints of the death camps and prepare engineering and logistics arguments about how it wasn't possible to kill that many people in that short a time. And what can I do?  I let other expert historians argue their evidence (which fortunately, they do quite vigorously), and I gnash my teeth and maybe rant to my friends.

That's just the way it has to be. You want to argue, you have to argue the evidence. You don't bring a knife to a gunfight, and you don't bring an opinion poll to a scientific debate.


Sunday, September 14, 2014

Income inequality and the Fed report

The New York Yankees are struggling. Why don't they sign Reggie Jackson? Sure, he's 68 years old, but he'd still be a productive hitter if the Yankees signed him today.

Why do I say that? Because if you look at the data, you'll see that players' production doesn't decline over time. In 1974, the Oakland A's hit .247. In 2013, they hit .254. Their hitting was just as good -- actually, better -- even thirty-nine years later!

So how can you argue that players don't age gracefully?


It's obvious what's wrong with that argument: the 2013 Oakland A's aren't the same players as the 1974 Oakland A's. The team got better, but the individual players got worse -- much, much worse. Comparing the two teams doesn't tell us anything at all about aging.

The problem is ridiculously easy to see here. But it's less obvious in most articles I've seen that discuss trends in income inequality, even though it's *exactly the same flaw*.

Recently, the US Federal Reserve ("The Fed") published their regular report on the country's income distribution (.pdf). Here's a New York Times article reporting on it, which says, 

"For the most affluent 10 percent of American families, average incomes rose by 10 percent from 2010 to 2013."

Well, that's not right. The Fed didn't actually study how family income changed over time. Instead, they looked at one random sample of families in 2010, and a *different* random sample of families in 2013.  

The confusion stems from how they gave the two groups the same name. Instead of "Oakland A's," they called them "Top 10 Percent". But those are different families in the two groups.

Take the top decile both years, and call it the "Washington R's." What the Fed report says is that the 2013 R's hit for an average 10 points higher than the 2010 R's. But that does NOT mean that the average 2010 R family gained 10 points. In fact, it's theoretically possible that the 2010 R's all got poorer, just like the 1974 Oakland A's all got worse. 

In one sense, the effect is stronger in the Fed survey than in MLB. If you're a .320 hitter who drops to .260 while playing for the A's, Billy Beane might still keep you on the team. But if you're a member of the 2010 R's, and wind up earning only a middle-class wage in 2013, the Fed *must* demote you to the minor-league M's, because you're not allowed to stay on the R's unless you're still in the top 10 percent. 

The Fed showed that the R's, as a team, had a higher income in 2013 than in 2010. The individual R's? They might have improved, or they might have declined. There's no way of knowing from this data alone.
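
A two-person version makes the point concrete (the numbers here are invented, not from the Fed data):

```python
# Two earners: "A" is the entire top half in 2010 (invented numbers).
incomes_2010 = {"A": 100, "B": 50}
incomes_2013 = {"A": 60, "B": 120}  # A, the 2010 "rich," got poorer

def top_half_avg(incomes):
    """Average income of the top half of earners, re-ranked each year."""
    ranked = sorted(incomes.values(), reverse=True)
    top = ranked[: len(ranked) // 2]
    return sum(top) / len(top)

print(top_half_avg(incomes_2010))  # 100.0 -- that's A
print(top_half_avg(incomes_2013))  # 120.0 -- that's B

# The "top half" average rose from 100 to 120, even though A, the only
# member of the 2010 top half, saw his income fall from 100 to 60.
```

The "team" got richer while every member of the 2010 team got poorer -- exactly the Oakland A's comparison.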


So that quote from the New York Times is not justified. In fact, if even one family dropped out of the top decile from 2010 to 2013, you can prove, mathematically, that the statement must be false.

That has nothing to do with any other assumptions about wealth or inequality in general. It's true regardless, as a mathematical fact. 

Could it just be bad wording on the part of the Fed and the Times, that they understand this but just said it wrong? I don't think so. It sure seems like the Times writer believes the numbers apply to individuals. For instance, he also wrote, 

"There is growing evidence that inequality may be weighing on economic growth by keeping money disproportionately in the hands of those who already have so much they are less inclined to spend it."

The phrase "already have so much" implies the author thinks they're the same people, doesn't it? Change the context a bit. "Lottery winners picked up 10 percent higher jackpots in 2013 than 2010, keeping winnings disproportionately in the hands of those who already won so much."  

That would be an absurd thing to say for someone who realizes that the jackpot winners of 2013 are not necessarily the same people as the jackpot winners of 2010.

Anyway, I shouldn't fault the Times writer too much ... he's just accepting the incorrect statements he found in the Fed paper. 

And I don't think any of the misstatements are deliberate. I suspect that the Fed writers were sometimes careless in their phrasing, and sometimes genuinely thought that "team" declines/increases implied family declines/increases. 

Still, some of the statements, in both places, are clearly not justified by the data and should not have made it into print.


I've read articles in the past that made a similar point, that individuals and families might be improving significantly, even though the data appears to give the impression that their group is falling behind. 

It's not hard to think of an example of how that might be possible. 

Imagine that everyone gets richer every year. During the boom, immigration grows the population by 25 percent every year, and the new arrivals all start at $10 per hour.

What happens? 

(a) the bottom 20 percent of earners each year earn the same amount; but 
(b) everyone gets richer every year

That is: *everyone* is better off *every year*, even though the data may make it falsely appear that the poor are stagnating.
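
Here's a quick simulation of that scenario. (The 10 percent annual raise for incumbents is a number I made up; the example only requires that everyone already in the data gets richer.)

```python
# Incumbents' wages rise 10% a year (an assumed rate); the population
# grows 25% a year, with every new arrival starting at $10/hour.
wages = [10.0] * 16                         # year-0 population
for year in range(3):
    wages = [w * 1.10 for w in wages]       # every individual gets richer
    wages += [10.0] * (len(wages) // 4)     # newcomers = 25% growth

wages.sort()
bottom_fifth = wages[: len(wages) // 5]
print(bottom_fifth)  # still all $10.00 -- the "poor" look stagnant
```

Every person already in the data set earns more each year than the year before, yet the bottom quintile earns the same $10 every single year, because it's always populated by the newest arrivals.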

(Note: the words "rich" and "poor" are defined as "high wealth" and "low wealth," but in this post, I'm also going to [mis]use them to mean "high income" and "low income."  It should be obvious from the context which one I mean.)


Now, even if you agree with everything I've said so far, you could still have other reasons to be concerned about the Fed report. For me, the most important fact is the discovery that 2013's poor (bottom quintile) have 8 percent less income than 2010's poor. 

You can't conclude that any particular family dropped, but you *can* conclude that, even if they're different people, the bottom families of 2013 are worse off than the bottom families of 2010. That's real, and that's something you could certainly be concerned about. 

But, many people, like the New York Times writer, aren't just concerned about the poorer families -- they worry about how "income inequality" compares them to the richer ones. They're uncomfortable with the growing distance between top and bottom, even in good times when the "rising tide" lifts everyone's income. For them, even if every individual is made better off, it's the inequality that bothers them, not the absolute levels of income, or even how fast overall income is growing. If the "Washington R's" gain 20 percent, but the "Oakland P's" gain only 5 percent ... for them, that's something to correct.

They might say something like,

"It's nice that the overall pie is growing, and it's nice that the "P's" are getting more money than they used to. But, still, every year, it seems like the high-income "team" is getting bigger increases than the low-income "team". There must be something wrong with a system where, years ago, the top-to-bottom ratio used to be 5-to-1, but now it's 10-to-1 or 15-to-1 or higher."

"Clearly, the rich are getting richer faster than the poor are getting richer. There must be something wrong with a system that benefits the rich so much while the poor don't keep up."

Rebutting that argument is the main point of this post. Here's what I'm going to try to convince you:

Even when the rich/poor ratio increases over time, that does NOT necessarily imply that the rich are getting more benefit than the poor. 

That is: *even if inequality is a bad thing*, it could still be that the changes in the income distribution have benefited the poor more than the rich.

I can even go further: even if ALL the benefits of increased income go to the poor, it's STILL possible for the rich/poor inequality gap to grow. The government could freeze the income of every worker in the top half, and increase the income of every worker in the bottom half. And even after that, the rich/poor income gap might still be *higher*.


It seems that can't be possible. If everyone's income grows at the same rate, the ratio has to stay the same, right? If rich to poor is $200K / $20K one year, and rich and poor both double equally, you get $400K / $40K, and the ratio of 10:1 doesn't change. Mathematically, R/P has to equal xR/xP.

So if benefits that are equal keep the ratio equal, benefits that favor the poor have to change the ratio in favor of the poor. No? 

No, not necessarily. For instance:

Suppose that in 2017, the ratio between rich and poor is 1.25. In 2018, the ratio between rich and poor is 1.60. Pundits say, "this is because the system only benefited the rich!"

But it could be that the pundits have it 100% backwards, and the system actually only favored the poor. 

How? Here's one way. 

There are two groups, with equal numbers of people in each group. In 2017, everyone in the bottom group made $40K, and everyone in the top group made $50K. That's how the ratio between rich group and poor group was 1.25.

The government instituted a program to help the poor, the bottom group. Within a year, the income of the poor doubled, from $40K to $80K, while the top group stagnated at $50K. 

So, in 2018, the richest half of the population earned $80K, and the poorest half earned $50K. That's how inequality increased, from 1.25 to 1.60, only from helping the poor!
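
In code, with the same numbers:

```python
# Same numbers as above: the program doubles the POOR group's income,
# the rich group stagnates -- and measured inequality still rises.
poor_2017, rich_2017 = 40, 50                 # $K; ratio 50/40 = 1.25
incomes_2018 = [poor_2017 * 2, rich_2017]     # poor double, rich frozen
poor_2018, rich_2018 = min(incomes_2018), max(incomes_2018)

print(rich_2017 / poor_2017)  # 1.25
print(rich_2018 / poor_2018)  # 1.6 -- the old poor ARE the new rich
```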


What happened? How did our intuition go wrong? For the same reason as before: we didn't immediately realize that the groups were different people in different years. The 2017 rich aren't the same as the 2018 rich.

When the pundits argued "the system only benefited the rich," whom did they mean? The "old" 2017 rich, or the "new" 2018 rich? Without specifying, the statement is ambiguous. So ambiguous, in fact, that it almost has no meaning.

What really happened is that the system benefited the old poor, who happen to be the new rich. It failed to benefit the old rich, who happen to be the new poor.

Inequality increased from 1.25 to 1.60, but it's meaningless to say the increase benefited the "rich". Which rich? Obviously, it didn't benefit the "old rich."

But, isn't it true to say that the increase benefited the new rich? 

It's true, but it doesn't tell us much -- it's true by definition! In retrospect, ANY change will have benefited the "new rich" more than the "new poor."  If you used to be relatively poor, but now you're relatively rich, you must have benefited more than average. So when you say increasing inequality favors the "new rich," you're really saying "increasing inequality favors those who benefited the most from increasing inequality."  

These examples sound absurd, but they're exact illustrations of what's happening:

-- You have a program to help disadvantaged students go to medical school. Ten years later, you follow up, and they're all earning six-figure incomes as doctors. "Damn!" you say. "It turns out that in retrospect, we only helped the rich!"

-- Or, you do a study of people who won the lottery jackpot last year, and find that most of them are rich, in the top 5%. "Damn!" you say. "Lotteries are just a subsidy for the rich!"

-- Or, you do a study of people who were treated for cancer 10 years ago, and you find most of them are healthy. "Damn!" you say. "We wasted cancer treatments on healthy patients!"

It makes no sense at all to regret a sequence of events on the grounds that, in retrospect, it helped the people with better outcomes more than it helped the people with worse outcomes. Because, that's EVERY sequence of events!

If you want to complain that increasing inequality is disproportionately benefiting well-off people, that can make sense only if you mean it's those who were well off *before* the increase. But the Fed data doesn't give you any way of knowing whether that's true. It might be happening; it might not be happening. But the Fed data can't prove it either way.


Here's an example that's a little more realistic.

Suppose that in 2010, there are five income quintiles, where people earn $20K, $40K, $60K, $80K, and $100K, respectively. I'll call them "Poor," "Lower Class," "Middle Class," "Upper Class," and "Rich", for short. We'll measure inequality by the R/P ratio, which is 5 (100 divided by 20).

Using three representative people in each group, here's what the distribution looks like:

2010 group, 2010 income
P    L    M    U    R
20   40   60   80   100
20   40   60   80   100
20   40   60   80   100
R/P ratio: 5

From 2010 to 2013, people's incomes change, for the usual reasons -- school, life events, luck, shocks to the economy, whatever. In each group, it turns out that one-third of people make double what they did before, one third experience no change, and one third see their incomes drop in half. 

Overall, that means incomes have grown by 16.7%: the average of +100%, 0%, and -50%. Workers have 1/6 more income, overall. But the change gets spread unevenly, since life is unpredictable.

Here are the 2013 incomes, but still based on the 2010 grouping. The top row are the people who dropped, the middle row are the status quo, and the bottom row are the ones who doubled.

2010 group, 2013 income
P    L    M     U     R
10   20   30    40    50
20   40   60    80   100
40   80  120   160   200
R/P ratio: 5

You can easily calculate that every 2010 group got, on average, the same 16.7% increase. So, since life treated the groups equally, the 2010 rich/2010 poor ratio is still 5. In chart form:

2010 group, % change 2010-2013
 P     L     M     U     R  
+17%  +17%  +17%  +17%  +17%

But the Fed doesn't have any of those numbers, because it doesn't know which 2010 group the 2013 earners fell into. It just takes the 2013 data, and mixes it into brand new groups based on 2013 income:

2013 group, 2013 income
P    L     M     U     R  
10   30    40    80   120
20   40    50    80   160
20   40    60   100   200
R/P ratio: 9.6

What does the Fed find? Much more inequality in 2013 than in 2010. The ratio between rich and poor is 9.6 -- almost double what it was! 

The Fed method will also see that the bottom three groups are earning less than the corresponding group earned three years previous.  Only the top two groups, the "upper class" and "rich," are higher. Here are the changes between each new group and the corresponding old group:

Perceived change 2010-2013
  P     L     M     U     R  
-17%   -8%  -17%   +8%  +60%
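
The whole worked example fits in a few lines of Python, for anyone who wants to check the arithmetic:

```python
# Reproducing the tables above: 2010 quintile incomes of 20/40/60/80/100,
# with a third of each group halving, staying flat, and doubling.
groups_2010 = [20, 40, 60, 80, 100]
changes = [0.5, 1.0, 2.0]
incomes_2013 = [g * c for g in groups_2010 for c in changes]

# Matched comparison: follow each 2010 group. Every group grew 16.7%.
growth = [sum(g * c for c in changes) / 3 / g for g in groups_2010]
print(growth)  # five identical values of about 1.167

# Fed-style comparison: re-sort 2013 incomes into brand-new quintiles.
ranked = sorted(incomes_2013)
quintiles = [sum(ranked[i*3:(i+1)*3]) / 3 for i in range(5)]
print(quintiles[-1] / quintiles[0])  # 9.6, up from 100/20 = 5.0
```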

If you don't think about what's going on, you might be alarmed. You might conclude that none of the economy's growth benefited the lowest 60 percent at all -- that all the benefits accrued to the well off! 

But, that's not right: as we saw, the benefits accrued equally. And, as we saw, the "R" group ALWAYS has to be high, by definition, since it's selectively comprised of those who benefited the most!

In effect, comparing the 2010 sample to the 2013 sample is a subtle "cheat," creating an illusion that can be used (perhaps unwittingly) to falsely exaggerate the differences. When the poor improve their lot, the method moves them to another group, and winds up ignoring that they benefited. 

For instance, when a $30K earner moves to $90K, a $90K earner moves to $120K, and a $120K earner drops to $30K, the Fed method makes it look like they all benefited equally, at zero. In reality, the "poor" gained and the "rich" declined -- the $30K earner grew 200%, the $90K earner grew 33%, and the $120K earner dropped by 75%. 

No matter how you choose the numbers, as long as there is any movement between groups, the method will invariably overestimate how much the "rich" benefited, and underestimate how much the "poor" benefited. It never works the other way.


One last example.

This time, let's institute a policy that does something special for the disadvantaged groups, to try to make society more equal. For everyone in the P and L group in 2010, we institute a program that will double their eventual 2013 income. Starting with the same 20/40/60/80/100 distribution for 2010, here's what we see after the 2013 doubling:

2010 group, 2013 income
P     L    M    U    R  
20    40   30   40   50
40    80   60   80  100
80   160  120  160  200
R/P ratio: 2.5

Based on the 2010 classes, we've cut the rich/poor ratio in half! But, as usual, the Fed doesn't know the 2010 classes, so they sort the data this way:

2013 group, 2013 income
P    L    M    U     R  
20    40  60    80  160
30    40  80   100  160
40    50  80   120  200
R/P ratio: 5.8

Inequality has jumped from 5.0 to 5.8. That's even after we made a very, very serious attempt to lower it, doubling the incomes of the previous poorest 40 percent of the population!
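If you want to check the arithmetic, both ratios fall straight out of the two tables above. A short script (just re-doing the sums) reproduces the 2.5 and the 5.8:

```python
from statistics import mean

# 2013 incomes from the tables above, keyed by each family's 2010 group
by_2010_group = {
    "P": [20, 40, 80], "L": [40, 80, 160], "M": [30, 60, 120],
    "U": [40, 80, 160], "R": [50, 100, 200],
}

# Grouping by where families STARTED (which the Fed can't do):
ratio_2010_groups = mean(by_2010_group["R"]) / mean(by_2010_group["P"])

# The Fed's method: pool everyone and re-sort into 2013 groups of three
pooled = sorted(x for grp in by_2010_group.values() for x in grp)
ratio_2013_groups = mean(pooled[-3:]) / mean(pooled[:3])

print(round(ratio_2010_groups, 1))  # 2.5
print(round(ratio_2013_groups, 1))  # 5.8
```

Same fifteen incomes; the only difference is how you group them.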


There's an easy, obvious mathematical explanation of why this happens.

When you look at income inequality, you're basically looking at the variance of the income distribution. But incomes don't all change by the same amount from year to year, so the changes have their own built-in variance.

If the changes in income are independent of where you started -- that is, if the system treats rich and poor equally, in terms of unpredictability -- then

var(next year) = var(this year) + var(changes)

Which means, as long as rich and poor are equal in how their incomes change, inequality HAS TO INCREASE. 
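Here's a tiny numeric check of that identity -- four made-up starting incomes (units don't matter), with changes deliberately chosen so their sample covariance with the start is exactly zero:

```python
from statistics import pvariance

# Toy data: starting incomes, and changes whose sample covariance
# with the start is exactly zero ("independent" changes)
start   = [1, 2, 3, 4]
changes = [1, -1, -1, 1]

nxt = [s + c for s, c in zip(start, changes)]

print(pvariance(start))    # 1.25
print(pvariance(changes))  # 1.0
print(pvariance(nxt))      # 2.25 = 1.25 + 1.0
```

The variance of next year's incomes is the sum of the two variances -- strictly bigger than this year's, as long as the changes vary at all.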

Take 100 people, start them with perfect equality, $1 each. 

Every day, they roll a pair of dice. They multiply their money by the amount of the roll, then divide by 7. 

Obviously, on Day 2, equality disappears: some people will have $12/7, while others will have only $2/7. The third day, they'll be even more unequal. The fourth day, even more so. Eventually, some of them will be filthy, filthy rich, having more money than exists on the planet, while others will have trillionths of a dollar, or less.

That's just the arithmetic of variation. Increasing inequality is what happens naturally, not just in incomes, but in everything -- everything where things change independently of each other and independently over time. 
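The dice game is easy to simulate. A minimal sketch -- the 30-day horizon and the random seed are arbitrary choices of mine -- tracking the variance of log wealth as the days go by:

```python
import math
import random
import statistics

def log_var(wealth):
    """Inequality measured as the variance of log wealth."""
    return statistics.pvariance(math.log(w) for w in wealth)

random.seed(1)
people = [1.0] * 100  # perfect equality: $1 each

history = []
for day in range(30):
    # roll two dice, multiply wealth by (roll / 7) -- average roll is 7,
    # so nobody is favored on average
    people = [w * (random.randint(1, 6) + random.randint(1, 6)) / 7
              for w in people]
    history.append(log_var(people))

print(history[0], history[-1])  # the spread keeps growing, day after day
```

The variance of log wealth grows roughly linearly with the number of days, which is exactly the var(next) = var(this) + var(changes) arithmetic playing out.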

What if you want to fight nature, and keep inequality from growing? You have to arrange for year-to-year changes to benefit the poor more than the rich. That effect has to be large -- as we saw earlier, doubling the income of the 40 poorest percent wasn't enough. (It was a contrived example, but, still, it sure *seemed* like it should have been enough!)


How much do you have to tilt the playing field in favor of the poor? Thinking out loud, scrawling equations ... I didn't double-check, so try this yourself because I may have screwed up ... but here's what I got:

Without independence, 

var(next year) = var(this year) + var(changes) + 2 cov(this year, changes)

Solving on the back of my envelope ... if I've done it right, using logarithm of income and some rough assumptions ... I get that the correlation between this year's income and the change to next year's income has to be around -0.25.

My scrawls say that if you're in the top 2.5% of income, your next-year change has to be in the bottom 30%. And if you're in the bottom 2.5%, your next-year change has to be in the top 30%. 

That seems really tough to do. In a typical year, when the economy grows normally, what percentage of incomes in the Fed survey would be lower than last year's? If it's 30 percent, then ... to keep inequality constant, just ONE of the things you need to do is make sure high-income people, on average, never earn more this year than last year.

You'd almost have to repeal compound interest!


I don't mean to imply that increasing inequality is *completely* just the result of normal variation. There are lots of other factors. Progressive taxation has a small equalizing effect. Increased savings while the economy grows contributes to inequality. A growing population increases inequality too -- bestselling authors, for instance, command an ever-larger market. And so on. 

But the point is: because increasing inequality happens naturally, you can't conclude anything just from *the fact that there's an increase*. At the very least, you have to back out the natural effects if you want to really explain what's going on. You have to do some math, and some arguing. 

The argument, "Inequality is growing -- therefore, we must be unfairly favoring the rich" is not a valid one. It is true that inequality is growing. And it *might* be true that we are unfairly favoring the rich. But, the one doesn't necessarily follow from the other. 

It's like saying, "Philadelphia was warmer in June than April; therefore, global warming must be happening."


Again, I'm not trying to argue that inequality is a good thing, or that you shouldn't be concerned about it. Rather, I'm arguing that increasing inequality does NOT tell you anything reliable about who benefits from the "system" or how much (if at all) the increase favors the rich over the poor.

I am arguing that, even if you think increasing inequality is a bad thing, the following are still, objectively, true:

-- increasing inequality is a natural mathematical consequence of variation;
-- it is not necessarily the result of any deliberate government policy;
-- it does not necessarily disproportionately favor the rich or hurt the poor;
-- there is no way to know which individuals it favors just from the Fed data;
-- the natural forces that cause inequality to increase are very strong;
-- natural inequality growth may be so strong that it will persist even after successful attempts to benefit the poor generously and significantly;
-- the poor could be gaining relative to the rich even while measured inequality increases.

As for the Fed study itself,

-- the Fed statistics do not measure income changes for any family or specific group of families;
-- the Fed statistics that measure distributional income changes for percentile groups are a biased, exaggerated estimate of the income changes for the average family starting in that percentile;
-- it is impossible to tell, from the Fed's numbers, how the poor are faring relative to the rich.

Finally, and most importantly,

-- all of these statements follow necessarily from basic logic and math -- and do not require any other arguments from politics, economics, compassion, greed, fairness, or partisanship.