Tuesday, November 23, 2010

Explaining the sibling study

I promised to revisit Frank Sulloway and Richie Zweigenhaft's sibling study in light of their response to me last week. And I thought it might be best if I start over from scratch.

Today, I'm going to try to just *describe* the study and the results. In a future post, I'll add my own opinions. For now, my presumption is that on everything in this post, the authors and I would agree.

In the original study, the authors' primary finding was that younger baseball players attempt more stolen bases (per opportunity) than their older brothers, with an odds ratio of 10.58. If I explain this right, by the end of this post, it should be clear what the authors did and what the 10.58 figure actually means. I've been a bit unclear on it in the past, but I think I've got it now.


The purpose of the study was to check whether younger brothers attempt more stolen bases than older brothers. The authors hypothesized that would be the case. That's because the psychology literature on siblings finds that younger siblings tend to develop a more risk-taking personality than their older siblings, and stolen base attempts are the baseball manifestation of taking risks.

The authors found approximately 95 pairs of siblings for their study. When they compared the younger player to the older in each pair, they found support for their hypothesis. It turned out that the younger brother "beat" the older (in the sense of trying more SB attempts per time on base) 58 times, and the older "beat" the younger only 37 times. That's a .610 winning percentage for the younger brother.

Younger brother: 58-37 (.610)

By my calculation, that's statistically significant at the 5% level (2.15 SDs from .500).
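That significance calculation is easy to verify; here's a quick sketch in Python (my check, not anything from the paper):

```python
import math

wins, losses = 58, 37
n = wins + losses
pct = wins / n                     # 58/95, about .61

# Under the null hypothesis of no effect, the winning percentage is .500,
# with standard deviation sqrt(.5 * .5 / n)
sd = math.sqrt(0.25 / n)
z = (pct - 0.5) / sd
print(f"{pct:.3f}, {z:.2f} SDs from .500")
```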

However, for reasons that will become clear later, the authors preferred to give the results in terms of odds. Stated that way, the younger brothers had odds of 58:37, which is odds of 1.57:1.

Younger brother: 1.57:1

However, that's still not quite the way the authors chose to describe results like this one. In both the paper and the response, the authors quote the "odds ratio" of dividing the younger player's odds by the older player's odds. The younger brother is 1.57:1, and the older brother is obviously the reverse, 1:1.57. If you divide the first odds (1.57/1) by the second (1/1.57), you get 1.57 squared, or 2.46. That means:

The "odds ratio" of younger brother to older brother is 2.46.

This 2.46 does not appear in the actual paper; I'm doing it here for comparison later.
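In Python, that arithmetic looks like this (again, just a check of the numbers above):

```python
wins, losses = 58, 37

odds = wins / losses             # 1.57:1 in favor of the younger brother
odds_ratio = odds / (1 / odds)   # younger's odds divided by older's; equals odds squared
print(round(odds, 2), round(odds_ratio, 2))   # → 1.57 2.46
```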


However, the authors argue, the numbers above are misleading. They don't adequately represent the true effects of being an older or younger brother. That's because, they argue, there is a confounding effect not controlled for -- which player got called up first.

The authors and I disagree on whether this is an appropriate control, but both of us agree that controls are often useful and revealing.

Take an unrelated example. Suppose you did a study, and you found that there was an equal proportion of Canadians and Brazilians who were world-class hockey players. Would you be able to conclude that Canadians and Brazilians are generally equal in hockey talent?

No, you wouldn't, because you'd see that Canadians tend to get a lot more practice time than Brazilians. Canadians live in a cold climate, with lots of frozen ponds. And, Canada has a lot more ice rinks per capita than Brazil does. That means Canadians should get a lot more practice than Brazilians.

Given those facts, you'd expect a lot more hockey players from Canada than from Brazil. If you find equal numbers, that's evidence that Brazilians must have a higher aptitude for hockey than Canadians, to be able to succeed equally despite fewer opportunities to practice.

More formally, you might control for the number of ice rinks that each group has access to. You'd find that *holding ice rinks constant*, Brazilians are a lot better at hockey than Canadians are. I won't do a numerical example, but you can probably see how this would work.

So, going back to baseball siblings: the authors argue that "getting called up first" is this study's equivalent to "having lots of ice rinks". If you control for that, the odds ratio gets a lot bigger.

To prove that, they took 80 pairs of siblings where one got called up before the other, and they split them up according to whether it was the older or the younger who got called up first. The results were:

-- When the younger brother got called up first, the brother called up first went 5:1.

-- When the *older* brother got called up first, the brother called up first went 32:42.

The authors now calculate the odds ratio as it stands with this control. They divide (5/1) by (32/42), and get 6.56.

What they're saying is something like this: "The raw data makes it look like the odds ratio is 2.46. However, that's because we didn't control for callup order, which is important, just as important as the "access to hockey rinks" control. If we do that, we see that the raw data understates the true odds ratio, which is 6.56."
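Here's the controlled calculation, using the counts from the two groups above:

```python
# First-callup brother's record in each group
younger_first_wins, younger_first_losses = 5, 1    # younger brother called up first
older_first_wins, older_first_losses = 32, 42      # older brother called up first

odds_younger = younger_first_wins / younger_first_losses   # 5.0
odds_older = older_first_wins / older_first_losses         # about 0.76
print(round(odds_younger / odds_older, 2))                 # → 6.56
```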


Putting in the control for callup order is like adding a variable to a regression. Whenever you do that, you check the apparent size of the effect, and also the significance level.

As we just saw, the effect size is fairly large: we went from an odds ratio of 2.46, without the control, to 6.56 with the control.

But what about the significance level? As it turns out, the control variable is not significant.

My simple argument goes like this: the six pairs in the "younger called up first" control went 5-1, or .833. If the control had no effect, we'd expect them to go .588, like the 80-pair sample as a whole. The chance of a .588 team going 5-1 or better is more than 21%, far higher than the 5% required for significance.
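That 21% figure comes straight from the binomial distribution:

```python
from math import comb

p = 0.588                          # expected winning percentage under the null
# chance of a .588 team going 5-1 or better in six tries
tail = comb(6, 5) * p**5 * (1 - p) + p**6
print(round(tail, 3))              # → 0.215
```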

The evidence from the authors' paper goes something like this: for their entire sample (which we'll talk about a bit later), the authors wound up with an odds ratio of 10.58 (instead of 6.56). On page 13 of their response, they report a 95% confidence interval of (2.21, 50.73). That is easily wide enough to include the odds ratio that would have resulted if the control had no effect, which is 2.46 for our sample, and a bit higher for the authors' larger sample.

The authors never explicitly say that the "called up first" variable is not statistically significant, but it seems clear that that's the case.


Now: what does an odds ratio actually *mean*? In our example above, where we came up with an odds ratio of 6.56, what does that 6.56 mean? How do we use it in an English sentence?

The answer, I think, is this:

The Vegas odds if you bet on the older brother are 6.56 times higher than the Vegas odds if you bet on the younger brother.

If you bet on the older brother attempting more steals than the younger brother when the older brother is called up first, your odds are 42:32, which is 1.3125 to 1. If you bet on the younger brother attempting more steals than the older brother when the younger brother is called up first, you get 1:5, which is 0.2 to 1. Divide 1.3125 by 0.2, and you get 6.56.

The numbers in the odds are 6.56 times higher, but maybe it's easier to understand that your *winnings* are also 6.56 times higher. Let's check:

If you bet on the older brother, you'll get odds of 42:32. If you bet $32, and the older brother steals more, you'll make a profit of $42.

If you bet the same $32 on the younger brother, you'll get odds of 1:5. That means if the younger brother steals more, you'll make a profit of $6.40.

Divide $42 by $6.40, and you get ... 6.56, as expected.
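If you want to check the betting arithmetic yourself (the `profit` helper is mine):

```python
def profit(stake, odds_for, odds_against):
    """Profit on a winning bet placed at odds of odds_for:odds_against."""
    return stake * odds_for / odds_against

profit_older = profit(32, 42, 32)     # $32 on the older brother at 42:32
profit_younger = profit(32, 1, 5)     # $32 on the younger brother at 1:5
print(profit_older, profit_younger)            # → 42.0 6.4
print(round(profit_older / profit_younger, 4)) # → 6.5625
```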


So that's what odds ratio means in terms of betting. Are there other intuitive ways of explaining it?

The authors don't really give any, but they do agree that it's difficult to interpret. They write,

"Ironically, although odds ratios are often used in an attempt to clarify complex statistical findings, people who are not familiar with them sometimes misinterpret what odds ratios do and do not mean.... [A]n odds ratio of 10.58 ... does not mean that younger brothers attempted 10.58 times the number of steals as did their older brothers. Similarly, this statistic does not mean, as Schwarz mistakenly reported in the New York Times, that more than 90 percent of younger brothers attempted more steals per opportunity than their own older brothers."

The authors write what it *doesn't* mean, but not what it *does* mean. Let me try to tackle that now, in a couple of different ways.


The most important thing is that the odds ratio of 6.56 doesn't tell you anything about how much the younger brothers outsteal the older brothers, or vice versa. It only tells you how that ratio *changes* when you swap "younger" for "older".

Again, going back to the actual numbers, which I'll repeat from a few paragraphs ago:

-- When the younger brother got called up first, the brother called up first went 5:1.

-- When the *older* brother got called up first, the brother called up first went 32:42.

What are the odds the younger brother beats the older brother? Well, if the younger brother got called up first, the odds are 5:1. If the older brother got called up first, the odds are 42:32 (1.31:1). So the younger brother occasionally wins 5:1 (6 out of 80 times), and frequently wins 42:32 (74 out of 80 times). Neither of those numbers, alone, has anything to do with 6.56 to 1.

What are the odds the brother called up first beats the other brother? Well, if the younger brother got called up first, the odds are 5:1. If the older brother got called up first, the odds are 32:42 (0.76:1). So the first-callup brother occasionally wins 5:1 (6 out of 80 times), and frequently wins only 32:42 (74 out of 80 times). Again, neither of those numbers, alone, has anything to do with 6.56:1.

The 6.56 only comes in when you divide the 5:1 by the 0.76:1. It's the *difference* between betting on the younger brother and betting on the older brother.

Look at this sentence:

"The _________ brother was called up first. The odds that he beats his sibling are _____:1."

There are two ways to fill in this sentence:

"The younger brother was called up first. The odds that he beats his sibling are 5:1."

"The older brother was called up first. The odds that he beats his sibling are 0.76:1."

What the 6.56 is saying is that, if you switch the word "older" and "younger", you have to divide or multiply the odds number by 6.56 for the sentence to still be true.

That's regardless of what the actual odds are. If, instead of 5:1 and 0.76:1, the odds turned out to be 65.6:1 and 10:1, the odds ratio would *still* be 6.56. Again, the 6.56 represents the *difference* in how the odds change, not what the odds actually are.


Odds and odds ratios are used a lot in regressions that try to predict probabilities. If you're trying to figure out how various factors affect cancer survival, for instance, you'll probably use odds. Why? Because, if you use probabilities, you can wind up with something greater than 1, which doesn't make sense.

Suppose you test a new treatment for cancer. If you figure that it doubles your chances of a cure, you're in trouble. Because, suppose a patient presents with a 60% chance of surviving without the new treatment. Do you really want to say that, with the treatment, his chances go up to 120%?

To solve that problem, statisticians use odds, instead. The 60% patient is actually 3:2. Suppose the treatment triples the odds. Now, he's 9:2 to survive, instead of 3:2. That's perfectly OK. 3:2 is 60%, and 9:2 is 82%. Nothing over 100%.
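The probability-to-odds conversion is simple enough to sketch; the helper names here are mine:

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

base = prob_to_odds(0.60)       # 1.5, i.e. odds of 3:2
treated = base * 3              # treatment triples the odds: 4.5, i.e. 9:2
print(round(odds_to_prob(treated), 3))   # → 0.818
```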

When there are two factors, the usual assumption is that you can just multiply out the odds. If chemotherapy doubles the odds, and surgery triples the odds, then the model will say that if you get both treatments, you multiply your odds by 6. (This is the idea behind logit regression -- you just take the log of the odds so you can add linearly, like a regular regression.)

This is done all the time in statistics, but whether it's actually how nature works, I'm not sure. I can't think of an intuitive reason why it *would* work. (And I could easily make up an example where it doesn't.)

Generally, I believe it works well when the probabilities are really small, like when the odds double from 2,000,000:1 to 1,000,000:1 -- but not when they're bigger, like when they go from 1:4 to 2:4. In any case, the technique is used all the time, so there may be something to it that I don't see, and so I certainly won't argue against it.


So, in the baseball context, I think the implication of quoting an odds ratio of 6.56 goes something like this:

Suppose you look at two siblings, and evaluate their baseball skills and psychology from head to toe.

Brother A is a little fatter and slower than brother B. And you figure that, if Brother A is older than brother B, and called up first, he has only a 20% chance of attempting more steals than brother B.

But then, you ask, what if Brother A is *younger* than brother B, and still called up first? Well, you start with the original 20% probability, and convert it to odds, giving you 1:4. Then, you multiply by the odds ratio of 6.56. That gives you new odds of 6.56:4, which works out to 62%.

The conclusion: A may have a 20% chance to beat a *younger* brother called up first, but he'd have a 62% chance to beat an *older* brother called up first.

If you want, you can start with something other than 20%. Suppose if A is older, he's 1:1 to beat his brother. Then, if A is younger, he's 6.56:1. The probability goes from 50% to 87%.

Or, suppose A is so much faster that even when he's older, he's 9:1. Then, if he's younger, he'd be 59:1. So he'd go from 90% to 98.3%.

Or, just use the example from the actual data. If A is older, he's 32:42. If he's younger, multiply that by 6.56, and you get that he's now 5:1.

That's what the 6.56 implies, in brothers-stealing terms.
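All of those conversions follow the same recipe, which can be sketched like this (the `shift` helper is mine, not the authors'):

```python
def shift(p, odds_ratio=6.56):
    """Convert a probability to odds, multiply by the odds ratio, convert back."""
    odds = p / (1 - p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

for p in (0.20, 0.50, 0.90, 32 / 74):
    print(round(p, 2), "->", round(shift(p), 2))
# prints 0.2 -> 0.62, 0.5 -> 0.87, 0.9 -> 0.98, 0.43 -> 0.83
```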

As an aside, I know I promised no criticism, but here's just one quick thing. In their second paper, the authors write,

"For brothers called up to the major leagues first, a younger brother was 6.56 times more likely than an older brother to have a higher rate of attempted steals."

I think they didn't mean to say that. While the younger player has 6.56 times the odds, he's not 6.56 times more likely. He's only 1.93 times more likely -- from 43% (32:42) to 83% (5:1). The authors do argue elsewhere that odds ratios have to be interpreted carefully, so I think this was just a slip of the tongue.


There's nothing in the method that tells you *how* the 6.56 change happens. Maybe if A is the younger brother, he keeps himself in better shape, so he's not as fat as if he were the older brother. Or maybe firstborn siblings tend to be fatter and slower than secondborn siblings for genetic reasons, so that when A is older than B, he's more likely to have been born fatter, and B more likely to have been born skinnier.

Or, maybe the authors' hypothesis is correct: if A is the younger brother, his personality develops him into a bigger risk-taker. If the authors are correct that personality is the primary driver of the 6.56, the implication is that even if A is fatter than B, the personality factor alone is enough to more than compensate, since it can bring him from 20% to 62%, or from 43% to 83%.

Also, the study found that being the younger brother instead of the older brother increases your chances from 43% to 83%. It does NOT speculate on what happens if the other player you're being compared to is not related to you at all. The study compares "has an older brother" to "has a younger brother", but ignores the "doesn't have a brother at all" case.

Furthermore, the study doesn't consider a pair of siblings unless both of them make the major leagues. But, the authors hypothesize that the risk-taking strategy is adopted in childhood. If that's the case, you'd think that it doesn't matter if one of the brothers doesn't make it past the low minor leagues; the effect should still be there. Probably, even if one brother never made it out of little league, the effect should still exist.

That would make an interesting follow-up study.


I've been talking about an odds ratio of 6.56, but the authors' conclusion is an even higher odds ratio, 10.58. How do they get the 10.58?

It's a combination of the 6.56, and two other cases. One other case is where the brothers got called up the same year (giving an odds ratio of 7.00). The third case is almost exactly the same as the first case, but slightly different because of the way the authors dealt with families of more than two siblings. For that third case, they wound up with a "younger brother called up first" ratio of 5:0, instead of 5:1. That works out to "infinity".

So the authors combined the three odds ratios: 6.56, 7.00, and "infinity".

How do you combine an "infinity"? Well, they used the "Mantel-Haenszel common odds ratio" technique, which is able to handle ratios with a zero denominator. They wound up with an overall odds ratio of 10.58.
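The post doesn't give the cell counts for the other two strata, so the full 10.58 can't be reproduced here, but here's a sketch of how a Mantel-Haenszel calculation handles a zero (the function is mine, and the second, zero-cell stratum is illustrative, not the authors' actual data):

```python
def mantel_haenszel(strata):
    """Mantel-Haenszel common odds ratio over 2x2 tables (a, b, c, d),
    where the single-table odds ratio is (a*d)/(b*c). A zero cell only
    zeroes out one term of a sum -- there's no division by zero."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# The one stratum the post spells out: younger called up first went 5-1;
# when the older brother was called up first, he went 32-42.
print(round(mantel_haenszel([(5, 1, 32, 42)]), 2))   # → 6.56

# A hypothetical 5-0 stratum (odds ratio "infinity") combines without trouble:
print(round(mantel_haenszel([(5, 0, 32, 42), (5, 1, 32, 42)]), 2))
```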

So that's where the "10 times" in the original NYT article comes from, and that's where the "10.58" in the paper and the response comes from.


There you go. As I said, I think it's right this time. Let me know if you find anything that needs correcting. If all is OK, I'll eventually prepare a response to the authors' response, outlining more clearly why I disagree with some of their conclusions.


Tuesday, November 16, 2010

Do younger brothers steal more bases than older brothers? Part VI

Note: I'm not happy with this post. I've already revised it twice because I found things that were wrong ... as it is now, I think it's right, but it's not very focused and I think some of the emphasis might be on the wrong issues.

So, just a warning that I plan to redo it soon.


Although this is "Part 6" of the discussion of the sibling study, I'm going to try to make it stand alone, so if you're coming to this thread for the first time, read on.

A few months ago, Frank Sulloway and Richie Zweigenhaft published a study on brothers (siblings) in baseball. They came to the conclusion that a younger brother is about 10 times as likely to attempt more stolen bases in his career (adjusted for opportunities) than his older brother. Ten times is a LOT.

After I read the paper, I believed the result was incorrect. My previous posts on the subject explained why. Following that, I had an e-mail conversation with one of the authors. I don't believe either of us was able to convince the other of the rightness of our respective positions.

Two weeks ago, the authors released a second paper, which attempted to clarify the arguments and address some of my points. I remain unconvinced. And, indeed, I think I've been able to come up with a better, more easily understood argument that explains why.


The authors' study comprised approximately 95 sets of siblings. In those 95 pairs,

58 times the younger brother attempted more steals
37 times the older brother attempted more steals.

I think the authors and I would agree on this (although my numbers might be off by one or two because of the way the authors handled cases where there were more than two brothers, like the Alous).

So the younger brothers had a 58-37 record against their older siblings, which works out to .610. The SD of winning percentage in 95 games is about .051. So .610 is a little more than 2 standard deviations above .500. That's statistically significant at the 5% level.

I believe this is a legitimate finding, and if the authors had left it at that, we'd have no disagreement.

Another thing they did was to express the result as odds instead of a winning percentage. If you divide 58 by 37, you get 1.57. So you can say something like,

"Younger brothers had odds of 1.57 to 1 of beating their older brothers in steal attempts."

That's perfectly accurate. Again, we have no disagreement.


What the authors did next is where we start to disagree. They took the 95 cases, and split them up into groups, based on the order in which the brothers were called up to the major leagues. To keep things simple, I'm going to leave out the case where the brothers were called up the same season, and concentrate on the case where the brothers were called up in different seasons.

That leaves 80 cases, in which the younger brother's record was 47-33. That's odds of 1.42:1 (47 divided by 33). But that's not what the authors come up with.

Because, look what happens when you split them up based on who was called up first:

-- When the older brother was called up first, the brother called up first went 32-42 against the other brother.

-- When the younger brother was called up first, the brother called up first went 5-1 against the other brother.

Converting that to odds:

-- When the older brother was called up first, the brother called up first had odds of .76:1 (32:42).

-- When the younger brother was called up first, the brother called up first had odds of 5.00:1 (5:1).

Now, to compare the younger to the older, the authors divide 5.00 by .76 and get an "odds ratio" of 6.56.

As it turns out, the authors get an even more extreme result, 10.58 instead of 6.56. Why? Mainly (and I'm simplifying here) because the 5-1 can also be interpreted as 5-0, depending on how you handle families with more than two siblings. 5-0 is an odds ratio of infinity. The authors use a mathematical technique to average the "infinity," the "6.56", and the result for when the siblings get called up the same year ("7.0"), and it works out to 10.58.

Which is where the authors get their statement,

"It may be seen that the common odds ratio is 10.58, as previously reported [in our original paper]."

That sentence, actually, is pretty much correct.


So if the sentence is correct, what's the problem? The problem is the authors' interpretation of what an odds ratio means. Remember that odds ratio of 6.56 above? That's the correct number. But the authors write,

"For brothers called up to the major leagues first, a younger brother was 6.56 times more likely than an older brother to have a higher rate of attempted steals."

That is not true. That is not what odds ratio means.

Let's suppose you have 100 younger brothers called up first, and 100 older brothers called up first. 6.56 times more likely implies that you'll have 6.56 times as many "wins" in the first group as in the second group. But that's not the case. The younger brothers would have a ratio of 5:1, which, for 100 trials, is 83-17. The older brothers would have a ratio of 32:42, which, for 100 trials, is 43-57.

That means that the likelihood of winning rises from 43 (out of 100) to 83. That's 1.93 times more likely, not 6.56 times more likely.

The "1.93" figure is called the "relative risk". Relative risk is not the same thing as odds ratio.
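The distinction is easy to see numerically:

```python
younger_p = 5 / 6      # 5:1 favorite -> .833
older_p = 32 / 74      # 32:42 -> .432

odds_ratio = (younger_p / (1 - younger_p)) / (older_p / (1 - older_p))
relative_risk = younger_p / older_p
print(round(odds_ratio, 2), round(relative_risk, 2))   # → 6.56 1.93
```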

So if "6.56 times more likely" is not the correct interpretation of an odds ratio of 6.56, what IS the correct interpretation? It's this:

An odds ratio of 6.56 means that if you place a $100 bet on the less likely outcome, your potential winnings will be 6.56 times as high as if you bet $100 on the more likely outcome.

Specifically for this case: If you bet $100 on the 5:1 favorite, you'll win $20. If you bet $100 on the 32:42 underdog, you'll win $131.25. And, $131.25 divided by $20 is 6.56.

*That* is what the odds ratio really means. You can decide how intuitively meaningful it is. It probably means more to you if you're a sports bettor than if you're not.


So why is that a problem? Isn't it a real sense of what the 6.56 (or 10.58) figure actually means? Why, then, do I say it's misleading?

Because it exaggerates the scale of the effect. Roughly, it squares it.

Suppose home field advantage is 2:1 -- the home team has twice the chance of winning as the visiting team. That means that, in turn, the visiting team has half the chance of winning, which is an odds ratio of 0.5:1.

If I do what the authors did, and divide the home team odds by the visiting team odds, I get 2 divided by 0.5, which is 4. But I cannot say, "a home team is 4 times more likely to win than a visiting team." That would be wrong: the correct odds are obviously 2:1. What I'm actually saying is, "if I bet $100 on the visiting team, I'll win 4 times as much money as if I bet on the home team."

Now, that's all well and good, but I would argue that the important measure is the 2:1, not the 4:1. We get the "4" by comparing the 2:1 favorite to the 2:1 underdog. In effect, the odds ratio is roughly "squaring the odds". Which makes sense: if you divide X by the reciprocal of X, you get X squared.

If you take the 6.56 odds ratio, and figure the square root, you get 2.56. That, I think, is a reasonable guess at what the effect actually is.

Put another way: the 6.56 occurs when you switch the status of *two* players -- you make the young one get called up first, and you make the old one get called up last. How do you split up the effect between the two players? The most obvious way is to "give" them 2.56 each.
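Numerically, the square-root relationship looks like this:

```python
import math

odds_ratio = (5 / 1) / (32 / 42)        # 6.5625
per_player = math.sqrt(odds_ratio)      # each player's share of the swap
print(round(per_player, 2))             # → 2.56

# The home-field example: dividing 2:1 odds by the reciprocal 1:2 squares them
home_odds = 2.0
print(home_odds / (1 / home_odds))      # → 4.0
```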


Anyway, that's mostly semantics, and it's about the odds ratio, which is not the interesting question.

The interesting question, to me, is, how often will a younger brother have more steal attempts than his older brother, even controlling for callup order? The answer is nothing near 10.58.

Look at it this way:

-- if the younger player gets called up first, the odds are 5:1.
-- if the younger player gets called up last, the odds are 1.31:1 (42:32).

Doesn't it follow that the younger player's odds have to be somewhere between 1.31 and 5? After all, the younger player is either called up first, or he's called up last. The best case is when he's called up first, and the odds are 5:1 that he'll beat his brother. So the *overall* odds of beating his brother can't be *more* than 5:1, right?


Back to the odds ratio: if the authors agreed with me, and reverted to 3.25 (the square root of 10.58) instead of 10.58, would I believe it? Well, no, because of the confidence interval issue.

There were only 5 or 6 pairs of brothers in the "younger player gets called up first" group. They either went 5-0 or 5-1. The "3.25" figure is almost entirely based on that fact. If they had gone, say, 3-3, instead, the odds would work out to something like the 1.57 we got just by counting. (If you recall, the younger brothers went 58-37 overall, without dividing the sample into "first callup" and "last callup".)

Suppose the actual odds for the "younger called up first" were really the same as the "older called up first". Then, we'd have expected a .610 winning percentage.

The chance of a .610 team going 5-0 is 8.4%. The chance of a .610 team going 5-1 is 20%.

So the observed p-value is somewhere between .084 and .2 -- both higher than the .05 required for significance.
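Those two probabilities, again via the binomial distribution:

```python
from math import comb

p = 0.610
p_sweep = p ** 5                              # chance of a .610 team going 5-0
p_five_one = comb(6, 5) * p**5 * (1 - p)      # chance of going exactly 5-1
print(round(p_sweep, 3), round(p_five_one, 3))   # → 0.084 0.198
```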


The authors don't do any explicit significance testing, but they say their confidence interval for the 10.58 odds ratio is (2.21, 50.73).

Again, suppose the odds for both callup groups were actually 1.57 in favor of the younger brother. Then the odds ratio we'd observe would be 1.57 squared, which is 2.46.

The authors did things a little differently, and used more data, but, overall, I'd say their confidence interval is pretty consistent with what we found above. We found "almost significant, but not really," and the authors are close. I'm actually not sure whether, if we did exactly what they did, our null hypothesis would fall inside their confidence interval, but it would be close either way.


So my conclusions are:

1. A basic look at the overall data shows younger players with odds of 1.57:1 to beat their older brothers in career steal attempts.

2. Dividing the data into "called up first" and "called up last" appears to increase the odds to somewhere between 1.31 and 5.00.

3. The authors' odds ratio of 10.58 does not easily translate into anything intuitive about the odds of one brother beating another, except for the difference in the amount you'd win if you bet.

4. The authors' odds ratio of 10.58 is not how most sabermetricians would express the effect. Going by the example of home field advantage, we'd be more likely to quote odds of 3.25 (the square root of 10.58).

5. In any case, the difference between the 3.25 and the 1.57 we would obtain (if there were no "callup first" effect) is not statistically significant.

6. As I have argued in previous posts, the "called up first / called up last" split is not an appropriate control, because it reverses cause and effect. (You can disagree with this point, if you choose, and the overall argument still holds.)


Bottom line: the data show that younger brothers attempt more steals than their older brothers at a statistically significant rate, with odds of 1.57 to 1. Isn't that interesting enough on its own?


Note: this post is substantially revised. First post was 11/16 am. Took that down, reposted 11/16 pm. Revised again 11/17 am.

Vigorous hug (think Cournoyer/Henderson) to Tango for pointing out that the reported odds ratio is actually approximately the square of the true odds ratio. I hadn't realized that was what's going on until he pointed it out.


Wednesday, November 10, 2010

Do younger brothers steal more bases than older brothers? Part V -- the authors' response

This past summer, I posted a few times about Frank Sulloway and Richie Zweigenhaft's "sibling" study. The authors have now responded online.

Their response is here (.pdf).

I'm away on vacation, and don't have the original paper, or copies of the discussion I had with the authors via e-mail, so I'll just link the paper for now, in order to get it out there as soon as possible.

Thanks to Dr. Sulloway and Dr. Zweigenhaft for the discussion, and I'll probably comment further within a couple of weeks.


Friday, November 05, 2010

Everyone hates the KFC "Double Down"

Why are we humans so gullible as to believe in things that are obviously not true, like astrology, homeopathy, and stuff Joe Morgan says?

One argument is that it's a question of being numerate and understanding the scientific method. If people who believe in homeopathy truly understood that there isn't even one molecule of active ingredient in their "medicines," they might be swayed. And if only Joe Morgan learned a bit about how the Runs Created formula actually works, he'd understand why we've reached the conclusions we have, and come over to our side. It's just a matter of education.

Well, I'm not so sure that's true. Case in point: the reaction to KFC's "Double Down" sandwich.

The "sandwich" consists of two breaded, boneless chicken breasts, with bacon and cheese and sauce in between. There's no bun. (If you've never seen one before, here's a picture.)

The Double Down has been available in the US for a few months now, but it just came to Canada about a month ago, and all the usual nutrition and obesity spokespeople are flipping out. Google "double down Canada," and see for yourself:

-- "KFC's Double Down hits Canada; nutritionists worried - CTV News"

-- "Double Down, Canada: We gobble artery-clogging fat fare from KFC in record numbers"

-- "KFC’s Double Down raises eyebrows among Canadian nutrition experts."

There's lots more than that. For instance, my local paper, the Ottawa Citizen, had TWO editorial cartoons about the Double Down in the last couple of weeks. (Here's one of them.)

So what's the big problem? I don't see it. The Double Down isn't really any worse than other fast food items. It has 540 calories (including 30g of fat) and 1,740 mg of sodium. The Big Mac, which nutritionists don't seem to be pooping themselves over, has the same 540 calories (28g fat) and 1,020 mg of sodium. And the double Wendy's Baconator has 980 calories (63g fat) and 1,830 mg of sodium. (Let's not even talk about the triple.)

So the Double Down doesn't seem like that big a deal. Yes, it's got a little more sodium per calorie than the other two items, but, really, not that much more. And it's not like KFC is any worse than chicken anywhere else.
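The sodium-per-calorie comparison is easy to make explicit. Here's a quick sketch using the figures quoted above (the item names and numbers come straight from this post; the script is just illustrative arithmetic, not official nutrition data):

```python
# Sodium per calorie, using the numbers quoted above for each item.
items = {
    "KFC Double Down":         {"calories": 540, "sodium_mg": 1740},
    "Big Mac":                 {"calories": 540, "sodium_mg": 1020},
    "Wendy's dbl. Baconator":  {"calories": 980, "sodium_mg": 1830},
}

for name, n in items.items():
    ratio = n["sodium_mg"] / n["calories"]
    print(f"{name}: {ratio:.2f} mg sodium per calorie")
# Double Down ~3.22, Big Mac ~1.89, Baconator ~1.87
```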

A KFC breaded boneless chicken breast has 560 mg of sodium. I went down to my kitchen and pulled out a box of President's Choice frozen breaded chicken breasts. Each one has ... exactly the same, 560 mg of sodium. I checked the "blue menu" variation of those chicken breasts, the ones with less fat and fewer calories: 450 mg. And, finally, I checked non-breaded honey-lemon chicken breasts: each breast had 460 mg of sodium. Those are the healthiest, least-fatty ones, at only 150 calories each.

So, again, what's the big deal? It looks to me like seasoned chicken breasts routinely have salt added to make them taste good.

So why blame KFC? Is it because of innumeracy? Do the nutritionists and the journalists simply not understand what the numbers mean? No, it's not. It's the opposite, in fact. Every article I've seen points out that other fast food products aren't much better, and they all print the numbers for calories and sodium. But, somehow, it goes in one sentence and out the other.

Why is the Double Down singled out for so much opprobrium? Because it has no bun. I think it's a status thing. Having no bun -- or rather, having a bun made out of meat -- is weird, and weird in a lower-class, non-gourmet way. It looks like gluttony -- chicken and more chicken, stacked so high it's hard to bite into. It's something that truckers might eat, but cultured nutritionists, and educated people who care about obesity of the lower classes -- well, when they want to eat chicken, they use a knife and fork ... or, at least, put it in a bun like civilized people.

Furthermore, the existence of the Double Down feels like a personal attack on nutritionists. Because it has no bun, it's kind of fun, with overtones of gluttony. People who take nutrition seriously *hate* that. Food isn't supposed to be fun. It's supposed to be serious. Nutritionists have spent years and years getting degrees and becoming experts on what's good to eat and what's not ... and now what happens? KFC tries something completely different, appealing to people's appetites in a whole new way, *without even consulting the nutritionists*!

Were it not for the missing bun, none of this would happen, because there's really nothing intrinsically special about the product. Think about it:

1. The Double Down is made of two breaded boneless chicken breasts. But KFC has had those same boneless chicken breasts around for some time now. They actually offered them in a meal, but flat on a plate, with a knife and fork, and two side dishes. Nobody complained, and nobody took notice.

If I were to order that, take the tray to my table, and eat it, nobody would look twice. But if I took those two patties, stacked them on top of each other, spread the honey mustard sauce between them, and then picked up my newly-constructed double-decker greasy chicken thingy in both hands and ate it that way ... well, people would stare, and some of them would be grossed out. Even though it's the same meal, they'd think I was a disgusting pig, and wonder why I'm not ill and obese.

Same product, different presentation, completely different reactions. It's the shape of the Double Down, not the content. To some, stacked chicken patties with sauce in the middle is disgusting. But "that's disgusting" won't get them taken seriously, so they have to complain about the nutritional content.

2. McDonald's has a breaded chicken sandwich. They'll let you make it a double for an additional charge, and they'll be happy to add cheese and bacon. Nobody complains. Why? Because it's got a bun. Take away the bun, and what have you got? The McDonald's version of a Double Down.

3. There's a well-regarded product that's almost exactly the equivalent of the Double Down, and nobody complains: Chicken McNuggets. Well, it's not exactly the same, because there's no cheese or bacon. But a 10-piece McNuggets, with two sauces, otherwise has almost exactly the same profile as the Double Down: 610 calories (35g fat) and 1,560 mg of sodium.

But McNuggets are bite sized. You eat them individually and daintily. You don't stack them all up and ooze the sauce in between them.

4. Okay, here's an established main course that really IS almost exactly a Double Down: Chicken Cordon Bleu. It's a breaded chicken breast, and when you cut it open, there's cheese and butter and ham inside. (Yes, it's ham instead of bacon, but close enough.) If you look up nutrition information for Chicken Cordon Bleu, it's pretty close. It has to be -- the ingredients are almost the same.

Chicken cordon bleu is classy, classy enough to serve at weddings. Foodies don't mind it, and nobody campaigns against it. But if KFC sold it, and made you eat it with your hands? Yuck! It's gross, and the multinational corporate megagiant doesn't care about the health of its customers!


We humans have a "blink" mentality -- we see a situation and come to a conclusion in an instant. Often, those conclusions are wrong, fed by our prejudices. Overcoming those prejudices does indeed take numeracy and intelligence.

But the most important thing it takes is a willingness to think about the issue with an open mind.

No matter how smart you are, and no matter how many math courses you've taken, you still have to be able to put aside your instinctive first impression and take a fair look at the issue and the evidence. Of all the people who'd be able to evaluate a foodstuff with a clear eye, you'd think nutritionists and health experts would be at the top of the list. But despite all their expertise -- or perhaps because of it -- they're among the worst offenders.

(In fairness, it could be selective sampling; the nutritionists who don't have a problem with it wouldn't be out for media attention. But you'd think if they were that common, some reporter would have found a few to present a countering view.)

In Ontario, the minister in charge of health promotion even started talking about *banning* the Double Down. She was quickly shot down by the Premier. But still -- the public official in charge of public health, the one who you'd think would have the most responsibility to see both sides and evaluate the situation, had the most extreme, prejudiced reaction. It's as if the head of the American Bar Association suddenly called for an impromptu lynching, with a thousand lawyers lining up behind her.

The solution is not just to teach more math. The solution is to make it socially unacceptable to accept silly, unjustified arguments. There are some places where we've been partially successful. We're able to do that, at least a little bit, in academia, where peer review at least calls out the cranks. We're able to do that in the justice system -- I'd bet that few judges would give credence to the argument that the Double Down is significantly worse than the Baconator just because you consume it more lustily.

We just have to promote that way of thinking in everyday life -- which is difficult, and perhaps impossible. But in my utopia, when nutritionist Joe Morgans say that Chicken Cordon Bleu is worse for you when you use your hands instead of a plate, we stop quoting them in approving newspaper articles, and laugh at them instead.
