### More r-squared analogies

OK, so I've come up with yet another analogy for the difference between the regression equation coefficient and the r-squared.

The coefficient is the *actual signal* -- the answer to the question you're asking. The r-squared is the *strength of the signal* relative to the noise for an individual datapoint.

Suppose you want to find the relationship between how many five-dollar bills someone has, and how much money those bills are worth. If you do the regression, you'll find:

**Coefficient = 5.00** (signal)

**r-squared = 1.00** (strength of signal)

**1 minus r-squared = 0.00** (strength of noise)

**Signal-to-noise ratio = infinite** (1.00 / 0.00)

The signal is: a five-dollar bill is worth $5.00. How strong is the signal? Perfectly strong -- the r-squared is 1.00, the highest it can be. (In fact, the signal-to-noise ratio is infinite, because there's no noise at all.)
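If you want to see this in action, the five-dollar-bill regression is easy to run. Here's a minimal sketch in Python with numpy (the variable names and the choice of up to 100 bills per person are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "person" holds some number of five-dollar bills.
bills = rng.integers(0, 100, size=1000)
value = 5.0 * bills  # worth is exactly $5 per bill -- no noise at all

# Ordinary least squares, with r-squared computed by hand.
slope, intercept = np.polyfit(bills, value, 1)
pred = slope * bills + intercept
ss_res = np.sum((value - pred) ** 2)
ss_tot = np.sum((value - value.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(slope, 2), round(r_squared, 2))  # coefficient 5.0, r-squared 1.0
```

Because the relationship is exact, the residuals are all zero, and the r-squared comes out at its maximum of 1.00.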

Now, change the example a little bit. Suppose a lottery ticket gives you a one-in-a-million chance of winning five million dollars. Then, the expected value of each ticket is $5. (Of course, most tickets win nothing, but the *average* is $5.)

You want to find out the relationship between how many tickets someone has, and how much money those tickets will win. With a sufficiently large sample size, the regression will give you something like:

**Coefficient = 5.00** (signal)

**r-squared = 0.0001** (strength of signal)

**1 minus r-squared = 0.9999** (strength of noise)

**Signal-to-noise ratio = 0.0001** (0.0001 / 0.9999)

The average value of a ticket is the same as a five-dollar bill: $5.00. But the *noise* around $5.00 is very, very large, so the r-squared is small. For any given ticketholder, the distribution of his winnings is going to be pretty wide.

In this case, the signal-to-noise ratio is something like 0.0001 divided by 0.9999, or 1:10,000. There's a lot of noise mixed in with the signal. If you hold 10 lottery tickets, your expected winnings are $50. But there's so much noise that you shouldn't count on the result being anywhere close to $50. The noise could turn it into $0, or $5,000,000.

On the other hand, if you own 10 five-dollar bills, then you *should* count on the $50, because it's all signal and no noise.
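The lottery regression can be simulated, too. A caveat: the true one-in-a-million lottery needs an enormous sample before the coefficient settles down, so the sketch below uses a scaled-down lottery of my own invention -- a 1-in-1,000 chance at $5,000, which keeps the same $5 expected value per ticket while still drowning the signal in noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down lottery: each ticket has a 1-in-1,000 chance of
# winning $5,000, so a ticket is still worth $5 on average.
n_people = 100_000
tickets = rng.integers(0, 100, size=n_people)
winnings = rng.binomial(tickets, 0.001) * 5000.0

slope, intercept = np.polyfit(tickets, winnings, 1)
pred = slope * tickets + intercept
ss_res = np.sum((winnings - pred) ** 2)
ss_tot = np.sum((winnings - winnings.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(slope, 2), round(r_squared, 3))
# The slope lands near 5, but the r-squared is tiny.
```

Same coefficient as the five-dollar bills, radically different r-squared: that's the whole point of the analogy.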

It's not a perfect analogy, but it's a good way to get a gut feel. In fact, you can simplify it a bit and make it even easier:

**-- the coefficient is the signal.**

**-- the r-squared is the signal-to-noise ratio.**

You can even think of it this way, maybe:

**-- the coefficient is the "mean" effect.**

**-- the (1 - r-squared) is the "variance" (or SD) of the effect.**

Five-dollar bills have a mean value of $5, and variance of zero. Five-dollar lottery tickets have a mean value of $5, but a very large variance.
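The mean/variance framing checks out numerically. Using the same scaled-down lottery as an assumption (1-in-1,000 chance at $5,000, so the expected value per ticket is still $5):

```python
import numpy as np

rng = np.random.default_rng(0)

# One million single tickets in the scaled-down lottery.
wins = rng.binomial(1, 0.001, size=1_000_000) * 5000.0

print(round(wins.mean(), 2))  # close to $5, same as a five-dollar bill
print(round(wins.std(), 2))   # but the SD is enormous relative to $5
```

A five-dollar bill would give the same mean with a standard deviation of exactly zero.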

------

So, keeping in mind these analogies, you can see that this is wrong:

*"The r-squared between lottery tickets and winnings is very close to zero, which means that lottery tickets have very little value."*

It's wrong because the r-squared doesn't tell you the actual value of a ticket (mean). It just tells you the noise (variance) around the realized value for an individual ticket-holder. To really see the value of a ticket, you have to look at the coefficient.

From the r-squared alone, however, you *can* say this:

*"The r-squared between lottery tickets and winnings is very close to zero, which means that it's hard to predict what your lottery tickets are going to be worth just based on how many you have."*

You can conclude "hard to predict" based on the r-squared. But if you want to conclude "little value on average," you have to look at the coefficient.

------

In the last post, I linked to a Business Week study that found an r-squared of 0.01 between CEO pay and performance. Because the 0.01 is a small number, the authors concluded that there's no connection, and CEOs aren't paid by performance.

That's the same problem as the lottery tickets.

If you want to see if CEOs who get paid more do better, you need to know the size of the effect. That is: you want to know the signal, not the *strength* of the signal, and not the signal-to-noise ratio. You want the coefficient, not the r-squared.

And, in that study, the signal was surprisingly high -- around 4, by my lower estimate. That is: for every $1 in additional salary, the CEO created an extra $4 for the shareholders. That's the number the magazine needs in order to answer its question.

The low r-squared just shows that the noise is high. The *expected value* is $4, but, for a particular case, it could be far from $4, in either direction. I haven't checked, but I bet that some companies with relatively low-paid executives might create $100 per dollar, and some companies that pay their CEOs double or triple the average might nonetheless wind up losing value, or even going bankrupt.

------

Now that I think about it, maybe a "lottery ticket" analogy would be good too:

Think of every effect as a combination of lottery tickets and cash money.

-- The regression coefficient tells you the total value of the tickets and money combined.

-- The r-squared tells you what proportion of that total value is in money.

That one works well for me.

------

Anyway, the idea is not that these analogies are completely correct, but that they make it easier to interpret the results, and to spot errors of interpretation. When Business Week says, "the r-squared is 0.01, so there is no relationship," you can instantly respond:

**"... All that r-squared tells you is, whatever the relationship actually turns out to be, the signal-to-noise ratio is 1:99. But, so what? Maybe it's still an important signal, even if it's drowned out by noise. Tell us what the coefficient is, so we can evaluate the signal on its own!"**

Or, when someone says, "the r-squared between team payroll and wins is only .18, which means that money doesn't buy wins," you can respond:

**"... All that r-squared tells you is, whatever the relationship actually turns out to be, 82 percent of it comes in the form of lottery tickets, and only 18 percent comes in cash. But those tickets might still be valuable! Tell us what the coefficient is, so we can see that value, and we can figure out if spending money on better players is actually worth it."**

------

Does either one of those work for you?

(You can find more of my old stuff on r-squared by clicking here.)

Labels: r-squared, regression, statistics

## 1 Comment:

I think this analogy works somewhat well. In the world of simple correlation I have to admit to not seeing what would be difficult to figure out about R squared; it's literally the square of the correlation, and so it's a direct measure of effect size, or importance, which it seems is what you want the regression coefficient to be. There's no disconnect for me.

The analogy breaks down as soon as you move to multiple regression though. In that case R squared is still about effect size but of all predictors. And the coefficient of any one predictor has its own error/noise, and the value can be influenced by factors like correlation with the other predictors, missing variables, etc.

