Wednesday, December 25, 2013

Probabilities, genetic testing, and doctors

Skeptic magazine features a regular medical column by a doctor, Harriet Hall.  This month (subscription required), she talks about how patients demand too much certainty from doctors, when the science is often unsettled and doctors are often imperfect.  Mostly good stuff, except one of her points bothered me:


"Direct-to-consumer genetic testing can be misleading.  ... Testers only look for specific SNPs (single nucleotide polymorphisms) and report probabilities based on imperfect information.  They may report that people with your SNP are 30% more likely to develop Parkinson's disease than people with other SNPs.  But disease is not destiny*.  Even if you have the gene for that disease, that gene may or may not be expressed.  Gene expression depends on environmental and epigenetic factors and on interactions with other genes.  Our access to genetic information currently exceeds our understanding of what that information actually means."

[* I think she means "But genetics is not destiny."]

Maybe I'm misunderstanding her point, but ... her argument does not debunk genetic testing probabilities.  It *supports* them.

If what Dr. Hall means is that having a certain SNP doesn't necessarily mean you'll get Parkinson's ... well, of course not.  It only means you have a 30% higher chance than you would otherwise, as stated.  If that's her argument, she's obviously just attacking a straw man.  I'm going to assume that's not really her argument.

In which case, what I think she's saying is something like this (my paraphrase):


"People with gene X get Parkinson's 30% more often than average.  But, if you have gene X, it doesn't mean that you're *certain* to have a 30% higher probability, in the sense that a weighted coin has a 30% higher probability of landing heads.  Gene X might interact with gene Y.  If you have both X and Y, you might have a 90% greater chance.  But if have X and not Y, you may be completely average.  Or, in connection with other genes, you might even have a *lower* than average chance of getting Parkinson's!"

But even if that's true -- and it's almost certain that it is, that gene X acts in combination with other genetic traits -- it doesn't change the fact that you DO have a 30% increased probability of winding up with Parkinson's.  Because you still don't know if you're in the 90% group, or the 0% group.  

----

Here's an analogy.  God has a collection of red urns and blue urns.  Each urn has 20 coins in it.  The red urns have 20 fair coins.  The blue urns have 14 fair coins, and 6 two-headed coins.

Your DNA determines which urn you draw a coin from.  You draw a coin without looking.  At some later date, you'll flip the coin.  If the coin lands heads, you eventually get Parkinson's.  

Your chance of getting Parkinson's from the red urn is 50%.  Your chance of getting Parkinson's from the blue urn is 65%.

You take a direct-to-consumer DNA test, and it says you drew from the blue urn.  You say, "Oh, no, I have a 30% higher Parkinson's probability than someone whose DNA tested red!"  You are correct.  But, as I read it, Dr. Hall is saying, "No, that's misleading.  You might wind up having drawn a fair coin, and still be at 50%.  You might be normal!"

Well, yes, but ... that doesn't change the fact that, without knowing which coin you actually wind up with,  your probability is still 65%, because 65% of blue urn drawers will get Parkinson's, *including* the ones who are normal.  So, you are correct in being worried!
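
If you want to check that, here's a quick simulation sketch (mine, with the urn numbers from above):

```python
import random

random.seed(0)

def blue_urn_outcome():
    # 14 of the 20 blue-urn coins are fair, 6 are two-headed
    fair = random.random() < 14 / 20
    p_heads = 0.5 if fair else 1.0
    return random.random() < p_heads   # heads = eventually gets Parkinson's

trials = 1_000_000
heads = sum(blue_urn_outcome() for _ in range(trials))
print(heads / trials)   # ~0.65, even though 70% of drawers hold a fair coin
```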

----

What I think might be going on, with this argument, is that Dr. Hall isn't actually thinking of the probability of Parkinson's.  She's thinking of an intermediate result: which kind of coin was drawn.  

The 65% chance of Parkinson's is the combination of 

(a) having drawn an unfair coin and getting Parkinson's for sure; or
(b) having drawn a fair coin and getting Parkinson's at a normal rate.

If you look at it that way, you might think: "how can you state flat out that you're at higher risk for Parkinson's, when there's a 70% chance you drew a fair coin and are completely normal?"

That argument implies that you can't say anything about the probability of getting Parkinson's unless you know what coin you drew.  That's not correct.  It's not how probabilities work.   

What's important is your overall risk of heads, not your overall risk of getting a high-probability coin.  I think what's going on is that we're not so much worried about getting Parkinson's as we are about *being at high risk* for Parkinson's.  A "normal" risk registers as ... well, almost zero, because we're just used to it; we tolerate it.  But a "bigger than normal" risk sets off alarm bells.  

What Dr. Hall's argument does is make someone feel better by addressing that cognitive fallacy.  It says, "yes, if you look at it that way, you're a higher risk, but ... that's really just a big pool of normal people with a small minority with MUCH higher risk.  You're probably just one of the normals, so don't worry about it."

-----

To see the fallacy another way ... 

You're playing Russian roulette.  There are two guns, with six chambers each.  One of the chambers in one of the guns has a bullet.  You've picked a gun at random, and spun the cylinder.  You're due to pull the trigger when you reach middle age.

You think, "I have a 1 in 12 chance of dying".

Now, experts have done some analysis, and they've noticed that chamber 3 winds up containing the bullet twice as often as any other chamber.  They're not sure why, but they know it's a real effect, and not random.

You now sign up to get your random selection "tested," and it comes back that you wound up with chamber 3.  You are distressed.  You think, "Instead of a 1 in 12 chance of dying, I'm up to 1 in 7.  I'm 71 percent more likely to die than I thought!!"

But, the doctor says, "No, that's misleading!  You may have chosen the empty gun, in which case your chance is zero!"

That, obviously, is BS, just by common sense.  But the doctor states it in a form where the missing common sense is harder to notice.  Something like:


"You don't know your chance is down to 1 in 7.  Whether you die depends not just on the chamber, but on the interaction between the chamber and other factors, like the gun.  If you don't have the "Gun A" gene, the "chamber 3" gene will not be "expressed," and you won't have any chance of dying at all.  Our access to the information of what chamber you have currently exceeds our understanding of what that information will really mean when you pull the trigger."

Sorry, it's still 1 in 7.   

If half the people are zero, and half the people are 1 in 3.5, and nobody knows which group you're in ... you're 1 in 7.  The fact that you *might* be in the zero group doesn't change that fact.
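
The arithmetic, if you want to verify it (my sketch):

```python
# Chamber 3 holds the bullet twice as often as any other chamber,
# so its weight is 2 out of 7.
p_my_chamber_loaded = 2 / 7     # given the "test" says you have chamber 3
p_my_gun_loaded = 1 / 2         # still a coin flip which gun you picked

print(p_my_chamber_loaded * p_my_gun_loaded)   # 1/7, about 0.143
print((1 / 7) / (1 / 12) - 1)                  # about 0.71: 71% more likely
```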

------

It's interesting that the author of this article specifically mentions "direct-to-consumer genetic testing."  Recently, in the US, the FDA banned the company "23andMe" from supplying genetic information to customers, on the grounds that test results are a "medical device for the diagnosis and prevention of disease."

That actually seems flimsy to me.  First, that information is a "device"?  You'd have to ban all medical books from the library on those grounds.  Second, the information doesn't diagnose a disease, it just gives you information about your probability of contracting that disease.  Third, if you really wanted to crack down on diagnosis, you'd ban Consumer Reports magazine, which recently ran an article on how to tell a cold from the flu.  Fourth, the information is really no different, in kind, than the information that Parkinson's is hereditary.  The sentence, "Your parents died of it, so you might be higher risk" is certainly not a "medical device."  Fifth ... well, I'll stop here, but, obviously, I could go on for a while.

And, the "information" is not that complicated.  I purchased the service last year, before the ban, and I still have access to my results.  Among other things, my genes suggest I have six times the normal chance of contracting Type 1 Diabetes.  What they told me was something like this (my paraphrase):


"On average, 1.0 in 100 males will develop Type I Diabetes in their lifetimes.  We estimate your risk at 6.0 percent, which is six times as high.
"Why do we think that?  Academic study X found that one of your particular gene combinations was related to an 18% increase in risk.  Study Y found another one of your combinations was related to a 4% decrease.  A third study  was related to a 400% increase.  And so on.  Overall, it works out to 6 times the chance.
"Here's a few sentences on the biological details in the studies, the presumed mechanism by which the genes translate to diabetes, if you care, and to make sure you get the idea that we understand the science.
"Also, we survey our members in hopes of mining the data to find empirical connections.  In this case, we haven't made any of our own discoveries yet.
"That's all we know.  Remember, and there are other factors that contribute to whether you get diabetes -- like environment and lifestyle -- so don't go assuming that you're going to get the disease just because you have this genetic makeup."

Not that complicated, and pretty well explained.  What should I do with the information?  Well, for my part, knowing that my risk is six times as high (6 percent probability from birth, but only 2 percent from my current age), I might, you know, keep an eye on it, especially because they told me my type 2 diabetes risk is also a little high.

But, they didn't try to sell me anything, or tell me what to do, or suggest treatment, or anything.  They do, at some point, suggest genetic counselling, or talking to my doctor, if the results bother me.

And, they're not a fraud or anything.  I think the information and probabilities, are, for the most part, correct.  I trust that 23andMe got it right.  They identified things that I've heard run in my family.  And, they even found me an actual relative I had never heard of, based on DNA profile alone.

-----

Going off-topic here, but what's the FDA's problem?  

It might be a turf war.  According to this article, the FDA is p*ssed off that 23andMe didn't respond deferentially enough to their investigation.  And, doctors tend to think that anything to do with disease needs to go through them, as gatekeepers.  But, never mind that, and let's just look at the rationales they actually give.

Mostly, they think that patients are too uneducated to handle the information:


Robert Field, a Wharton health care management lecturer, believes the 23andMe technology would not have generated so much regulatory concern if it had been marketed to doctors instead of consumers. "Any kind of genetic testing has to be combined with professional counseling to do the patient any real good," he notes. "The concern is that when you do a home test, you’re not going to get that counseling, and you’re not going to know how to act appropriately on the results." If the test had been marketed to doctors instead, Field adds, "you would have built into the process the professional advice needed."

Well, that's kind of arrogant, isn't it, that we need a doctor to tell us what "6 times as high a risk" means?  I mean, doctors may be expert in diagnosing diabetes, and treating it, but do they also somehow have some god-given expertise in explaining probabilities?  In fact, from the same article:


"Most of the physicians said they didn't know what they were going to do with that kind of information, [medical-genetics professor Reed] Pyeritz says."

I mean, seriously, if you think that counselling is needed ... I have a degree in statistics.  I think, you know, *I* should be the counsellor.  In fact, I think the FDA should ban doctors from advising patients on risk without a trained statistician in the room. 

Now, I don't mean to trivialize the customers' confusion about what the results actually mean.  In the forums on the 23andMe site, I've seen a lot of posts like, "They were wrong.  They told me I had only a 1% chance, but I was diagnosed last week."  Or, "Oh my God, I'm ten times the [1 in a million] risk for disease Y, I'm going to die!"

But, aside from those obvious cases, I suspect doctors might be *worse* at evaluating the information, because they understand the medical side too much.  If I tell you that you have a 1 in 7 chance of dying, you get it.  But if I tell you that you have a 1 in 7 chance of dying and then tell you the rules of the Russian Roulette game ... now, you have knowledge with which to rationalize your disbelief.  "How can they misleadingly say I'm at a higher risk?  I may have the empty gun!"  Even though that extra knowledge should make you MORE certain that the 1 in 7 is correct.

In articles on the web, I read different doctors making that same argument, that, after all, there are many genes that cause Parkinson's (or whatever), and those haven't been discovered yet, so how can the results be accurate?

But they can.  And they can, for the same reason that they tell overweight people that they're a higher risk for a heart attack, even though there are multiple causes of that, too.  

Anyway ... I've gone on too long about this, which was meant to be just a statistical post.  Still, I think the faulty statistical argument presents an excellent example of how doctors overreach -- in this case, to push to make it illegal for anybody but them to obtain and interpret my own genetic information, even though they clearly don't know how to interpret it themselves.

Knowing how to diagnose and treat Parkinson's Disease is something in which medical professionals have expertise.  Understanding and interpreting probabilities about Parkinson's -- even those based on genetic testing -- is not.



-----

UPDATE: Part II is here.




Wednesday, December 11, 2013

Explaining

When I was a kid, the adult science writer I read the most was Isaac Asimov.  He wasn't the most expert in any of the fields he wrote about (except, perhaps, biochemistry, which was his Ph.D.), but he was easy to read and understand.  

Some call that kind of writing "accessible," which, I guess, means that you don't need a lot of background to follow what the author is saying.  But I don't think that really captures it. It's been a while since I read any Asimov, but I bet that even in subjects where I have a fair bit of background -- math, say -- Asimov would still be a cleaner read than other authors.  I think Asimov's real skill is: he's just really, really good at explaining things.  In fact, he's been nicknamed "The Great Explainer."

Explaining is one of those important skills that, in my view, gets no respect at all.  Ask what makes a good teacher, and what do people say?  Motivating the students, and understanding every pupil's strengths and weaknesses, and being able to gauge the mood of the classroom, and being an interesting and varied speaker, and using multimedia and experiments, and knowing the subject, and stuff like that. But to me, the biggest thing is: finding explanations that students will actually understand.

In my life, there have been things that confused me for years, or that I understood but didn't "really" understand. Then, one day, either I figured it out for myself, or I read something that instantly ended my confusion.  And I asked myself, "Why the hell didn't anyone explain it properly before?"

For example ... for years, I was confused about one aspect of regression analysis.  Statisticians would say, "IQ accounts for 35 percent of the variance of salary, and parental income accounts for another 40 percent."  I wondered, how does that work?  What if the effect of education turns out to be as important as IQ?  Then, you have 110 percent!  What's going on?

But I just lived with it, until eventually I figured it out. What's the explanation?  It's a law of nature that standard deviations are Pythagorean.  So you can never find independent factor "triangle sides" whose squares add up to more than 100% of the overall "hypotenuse".

And with that, it made sense.  Even though I knew the sum-of-squares thing in another context, I never made the connection, and it was never explained to me -- until I figured it out, two decades after my last statistics class.  
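
If you want to see the triangle sides in action, here's a quick simulation, with made-up factor names and SDs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three independent factors that add up to an outcome; SDs of 3, 4, and 2
iq     = rng.normal(0, 3.0, n)
income = rng.normal(0, 4.0, n)
other  = rng.normal(0, 2.0, n)
salary = iq + income + other

# SDs combine like triangle sides: the hypotenuse is sqrt(3^2 + 4^2 + 2^2)
print(salary.std(), (3**2 + 4**2 + 2**2) ** 0.5)

# Each factor's share of the variance -- the shares can never top 100%
for name, factor in (("iq", iq), ("income", income), ("other", other)):
    print(name, factor.var() / salary.var())
```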

OK, was that one too mathy?  Here's an easier one: why it takes 10 runs in baseball to create one additional win.  I knew it was right, but I understood why only in a roundabout kind of way.  My gut still had a vague notion that 10 runs was much too high, and I had to keep correcting my gut.  

But then I found an explanation where it really made sense to me.  If you prefer a shorter summary, here's Eric T. with the hockey version (6 goals = 1 win):


"... imagine taking an average team, picking six of its games at random, and giving the team an extra goal in each game.

"Three of those games will be games it won anyway, so your extra goal doesn't change the result. In another game or two, the team lost by two or more and your goal still doesn't help. Only occasionally do you turn a loss into a win (or overtime loss), and so in the end, your six extra goals only produce roughly two extra points."

-----

It's not just me, right?  "Why didn't anyone explain it that way before?" happens to everyone.

Think about something you understand well, but had a bit of trouble with in the beginning.  Don't you think that you could have learned it in half the time if it had been explained differently?  How much time is wasted struggling through murky explanations in pedantic textbooks, or incoherent notes from class, when you might have been able to understand it in five minutes if it had been done a bit better?

-----

There are many reasons I admire Bill James.  One of the biggest is his ability to explain the things he's discovered.  His explanations are ... well, I think they're nearly perfect.   He explains what happens, and why, and how his method works, and it all comes together so well that you can read it once, at normal human reading speed, and ... you just get it.  His explanations just penetrate your brain effortlessly.

A lot of that is that he's such a good writer, but that's not enough.  William Shakespeare was a good writer, but I wouldn't bet on him being able to explain Runs Created.  A good writer will say things well, but a good explainer will also choose the right things to say.

-----

I used to regularly teach a class on how to use a certain niche software product.  There was no proper textbook, so I had to figure out how to explain the material so that the students would actually get it.  Some things I did worked better than others, and I'd adjust what I did from class to class. Occasionally, I would think, "Geez, you know, it's a complicated subject ... this is probably as clear as it can be explained, and they're going to have to work a bit to actually get it."

But then, I would stop and think, "What would Bill James do?" And I would realize that if it were Bill, he would have a way to get the point across.  He would have found the right analogy, or the right story, or the right thread of logic.  

I've learned a lot of things, beyond just baseball, from following Bill's work over the last thirty years.  One of the most important is: nothing is so complicated that it can't be explained well.  If I try to explain something, and it's not working, and people are having to work hard at understanding ... I have to think: it's my fault.  My explanation isn't good enough. 




Monday, December 09, 2013

Do Western teams dominate in NFL night games?

You're getting ready for an NFL night game.  A team from the Eastern time zone is playing a team from the Pacific time zone.  How should you bet?

From 1970 to 2011, you should have bet on the West Coast team.  In games starting at 8:00 pm (Eastern) or later, they beat the spread 70 out of 106 times.  That's a record of 70-36, or .660.  The odds of something that extreme happening by chance (either way) are 1 in 806.  It's 3.3 standard deviations from the mean.
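
You can reproduce those numbers with an exact binomial tail.  A quick sketch:

```python
from math import comb

n, covers = 106, 70
# chance of at least 70 covers out of 106 fair coin flips
one_tail = sum(comb(n, k) for k in range(covers, n + 1)) / 2 ** n

print((covers - n / 2) / (n * 0.25) ** 0.5)   # about 3.3 standard deviations
print(1 / (2 * one_tail))                     # about 806: 1 in 806 either way
```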

In a control sample of afternoon games, there was no such effect.  In fact, the Western teams went only 143-150 (against the spread) in those.

What's going on?  Well, the academic authors who found this result claim it's due to the circadian rhythms of the human body.  Physiologists and psychologists believe athletic performance peaks in the late afternoon.  So, for a game that starts at 8:00 pm Eastern time, the players from the West are actually playing at 5:00 pm "body time," which is why they perform better.

That result comes from a recent academic study: "The Impact of Circadian Misalignment on Athletic Performance in Professional Football Players," by Roger S. Smith, Bradley Efron, Cheri D. Mah, and Atul Malhotra.  Here's a Business Week rundown that actually just came out today.  Deadspin mentioned it here, and Brian Burke mentioned it here.

When I read the reports, I couldn't believe that the 70-36 could actually be accurate.  I downloaded the study ($8), and then went to Pro Football Reference to confirm for myself with their game finder.  And, yup, it all checks out!  

------

Wow, eh?  Could that actually be what's happening, that time-of-day effects on the human body are so big that they're almost twice the size of the home field advantage?  

Well, as you might expect, I'm skeptical.  I can think of a whole bunch of other things that might be going on.

Nothing of what I'm going to say is conclusive ... you should take this post not as a definitive rebuttal, but, perhaps, as a case for the defense, a "devil's advocate" kind of argument.

------

1.  There's no actual evidence that the West teams played better.  The only data we have is that they consistently beat the spread.  

It's just as possible, isn't it, that the bookmakers were shading the spread in favor of the east teams, at least in the night games?

I broke down the night games by day of week.  (They don't quite add up because I did everything manually, and probably screwed up somewhere.)

Sunday: 18-12
Monday: 46-19
Other:   3- 7

It turns out almost all the effect happens on Monday.

Does this support the "line shading" hypothesis?  I think it does, a little bit.  Monday night games, I'd imagine, get the most action from bettors ... if bookmakers shade the line when betting gets too heavy and unbalanced, it seems like Monday games should be the best candidates for when that happens.

---

2.  Night games are not random.  They're selected specifically because the NFL believes that they'll be the best games.  Maybe they're what the NFL thinks will be the games with the best teams, or the most exciting games, or the games with the most serious playoff implications.

For the most part, the NFL schedules the night games before the season starts (the exception: late-season Sunday games, which, in recent years, are chosen on the fly).  So, the league is, to some extent, guessing which teams will be the good ones.  

Because of that, certain teams will appear on Monday nights more than others.  In this particular sample, the San Francisco 49ers appeared 28 different Mondays; the Seattle Seahawks, only six.

Could it be that there's something about the 49ers that caused them to beat the spread so much?  Perhaps the oddsmakers, and bettors, consistently underestimated San Francisco, those years.  For 16 consecutive years -- 1983 to 1998 -- the 49ers went 10-6 or better.  You'd think they'd have regressed to the mean at some point, but they didn't.  So, perhaps they were consistently lucky?

There's a bit of support for that -- every year from 1983 to 1989, the Niners had a winning record against the spread. Over their entire streak, they never went worse than 7-9.  Of course, a lot of that is due to their 19-9 record on Monday night ... but, still.

---

3.  Suppose that we assume that SOME of the effect is due to these kinds of factors.  Suppose that, because of shaded or incorrect spreads, the Western teams had a 53% chance of beating the spread, instead of 50%.

In that case, the odds of a 70-36 record now drop to only 1 in 225.  That gets a bit easier to accept as random chance.  

At 55 percent, you're down to 1 in 73.  
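
Same binomial calculation as before, just with the true cover probability bumped up (my sketch; the rounding may come out slightly different from the figures above):

```python
from math import comb

def one_in(p, n=106, covers=70):
    # odds against at least `covers` covers out of n games,
    # if the true per-game cover probability is p
    tail = sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(covers, n + 1))
    return 1 / tail

print(one_in(0.53))   # roughly the 1 in 225 above
print(one_in(0.55))   # roughly the 1 in 73
```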

---

4.  If an effect actually exists, it's most likely a lot lower than what was actually observed.  

First, you need to regress the 70-36 to the mean, since, in nature, small effects are found much more frequently than large ones.  

Second, any effect smaller than 2 SD wouldn't have been published, which means, in general, effects that DO get published are overestimates.  

The study found the western teams beat the spread by an average of 5.26 points, with an SD of 1.33.  That means any result less than 2.66 points wouldn't have made it into print.  

Suppose that, unbeknownst to us, the actual circadian effect is 2.5 points.  If every study takes a different random sample of 106 games, fewer than half the studies will find statistical significance.  And ALL of those studies that *do* find significance will overestimate the real effect, because the minimum significant effect is 2.66.

That wouldn't be a problem if the SD was, say, 0.01, or something, because then almost ANY real effect would be found.  But, in this case, when the bar is set so high, the selectively-sampled observed effects are likely to be inflated by luck.  
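
Here's a sketch of that selection effect, assuming a true effect of 2.5 points and the study's 1.33-point SD:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 2.5      # hypothetical real circadian edge, in points
SD = 1.33              # the study's standard error
CUTOFF = 2 * SD        # the ~2 SD bar a result must clear to get published

studies = rng.normal(TRUE_EFFECT, SD, 100_000)   # many hypothetical studies
published = studies[studies >= CUTOFF]

print(len(published) / len(studies))   # under half reach significance
print(published.mean())                # ~3.7: published estimates run high
```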

---

5.  The effect for home games and road games is almost the same.  That is, no matter which of the teams is jet-lagged, the West team has the same advantage.  

That's fine, if the theory is that only time of day matters.  

It does imply, though, that jet lag, or adjusting to a new time zone, doesn't matter much at all.  Which may be true, but I've seen other psychologists argue the opposite.  

---

6.  If there's such a huge effect for a three-hour difference, you'd expect to still have a substantial effect for a two-hour difference.  So I checked west-coast teams playing night games on the road in Central Time.  In those games, the effect disappeared.  The Western teams went 17-26 against the spread.

---

7.  If the effect does depend on time of day, then the effect should be similar for the fourth quarter of late-afternoon games, right?  Those games might end at 7:00 pm, while the night games start at 8:00 pm.  Not much difference there.  

Does that happen?  I haven't checked, but that would be a good test.  You could also see if the effect diminishes as the night game goes on ... by the end of the Monday night game, the West team is playing at 8:00 or 9:00 pm circadian time.  Of course, the East team is at the actual time of 11:00 pm to midnight, but, as far as I read, the paper doesn't posit that there should be a big difference between early evening and late evening.

(UPDATE: One author says the effect is based on distance from the 3:00 am physiological low point.  Going with that ... an 8:00 game is 10 hours benefit for the PST team, and only 7 hours benefit for the EST team.  But, then, an afternoon 1:00 game would be the opposite, 7 hours to 10!)

---

8.  You could also check late-afternoon games in general.  The East team is on 4:00 time, while the West team is on 1:00 time.  So East should have the advantage.  But, the authors explicitly say (in the Business Week article) that the "ramp up" effect is smaller than the "ramp down" effect, so maybe you wouldn't see anything.  

And, while we're here ... Daylight Savings Time.  The effect should be different the week the clocks change, right?  Instead of the West team playing at 5:00 and the East playing at 8:00, it's really (from a circadian standpoint) 6:00 and 9:00.  

---

9.  The author published a similar study in 1997, with similar results.  But the effect continued -- which suggests that bettors didn't react to it, and bookmakers didn't bother adjusting their lines.  

It's possible that the betting community was just shortsighted in not believing the paper's claims, but ... sharp bettors are usually quick to seize on inefficiencies like this.  To me, that's at least a bit of evidence that it might be something else.

---

10.  There are afternoon games in many other sports -- college football, NBA, NHL, major-league baseball.  You could check to see if this happens there.  

Even better, you could check sports in which *actual* performance can be measured, not just performance relative to another team.  Do golfers hit better in late-afternoon?  Do Rubik's-cube solvers have better times in events that take place later in the day?  What about bowlers, or dart-throwers?  There should be lots of ways to check.  

---

11.  In fairness, my view is that someone would have noticed if there were large time-of-day effects in other sports.  

From my own introspection, I think I'm better at certain times of day than others.  I tend to get drowsy late in the afternoon, and I wouldn't be surprised at all if my (extremely minimal) athletic ability drops during that time, or at other times when I'm tired.  

But, I *notice* my tiredness.  Shouldn't professional athletes have noticed something, too, especially when they're so focused on their bodies and their performances?

Maybe it's possible for a team to drop from .500 to .333 without noticing the physiological changes that caused it, just like they may not notice any difference when they play worse on the road.  But ... I dunno, my gut says that's too big an effect for nobody to have even *suggested* it before the academics did.

---

12.  Oops!  As I was doing the final edit for this post, I remembered an NBA study that found Western teams travelling east had an advantage.  I checked, and the advantage was huge, just like this one.  And most NBA games are at night.  So ... hmmm.  

However, that study wasn't as clean as this one, with a complicated regression.  And it was denominated in actual winning percentage, rather than against the spread.  But, still ... hmmm.

In fairness, I have to say that study supports the pro-circadian argument, to some extent.


-------

My completely arbitrary, intuitive, Bayesian guess as to what's actually causing this effect?  I'd say ... 20 percent line shading, 75 percent luck, and maybe 5 percent physiology.  

I'd also guess -- again, without any justification other than my gut -- that there's a 35 percent chance that there's no real measurable effect of circadian physiology at all, and a 65 percent chance that there's a measurable, but small, effect.  (I had it at 50/50 before I recalled the study in #12.)

Regardless, I definitely don't want to imply that this study isn't important.  Any time you find a huge, 70-36 result, after a prior prediction with a plausible mechanism ... well, that's something you definitely want to put out there for serious consideration.  I'm just not as confident as the authors that what they've found is an actual thing.

Prove me wrong, somebody!









Monday, December 02, 2013

Do nice players make their teammates worse?

Dick Allen was known for being an unpleasant presence who warred with teammates and divided clubhouses.  In "The Politics of Glory," Bill James called him "a manipulator of extraordinary skill," and wrote,


"[Allen] did more to *keep* his teams from winning than anybody else who ever played major league baseball."

That led me to wonder: how would that happen?  What would make teams less likely to win with Allen in the dugout?  I guess, maybe, if the players don't trust each other, and team morale disappears ... the players won't care as much about winning.  They won't try as hard, or do the little things they'd normally do.  Maybe they wouldn't stay in shape, or they wouldn't study the opposition pitchers and hitters as intently, or they wouldn't be as receptive to coaching.  Stuff like that.

In other words: with Dick Allen on the team, their individual performances would be worse than you'd expect otherwise.  Can we find evidence of that in the statistics?  

------

Back in 2005, I created a little algorithm to determine if a player had a "lucky" year at the plate.  Basically, I looked at his stats two years before, and two years after, and took a weighted average of those four years.  Then, I regressed to the mean a little bit, and I figured, that's what the guy "should have" done.  Any deviation from that, I attributed to random variation.  (I've attached a description at the bottom of this post.)

In my original study, I treated all the discrepancies as luck.  But, of course, if a player does worse than he "should have," it might also be due to other factors, like Dick Allen.  

Of course, you probably wouldn't see Allen's entire influence: if he played with the same players for many years, they'd be consistently worse than they could have been, and the algorithm would have no way of seeing that.  But, enough players come and go that maybe we could see at least *some* effect.

So, for each of Dick Allen's seasons, I looked at the luck numbers for all the batters on his teams.  (I used only batters, in part because, for some reason, my database was very slow processing pitchers).  I omitted players who spent time, that year, on more than one team.

The result: Over 15 seasons, the batters on Dick Allen's teams were 125 runs unlucky -- about 8.3 runs per season, or 5/6 of a win.

------

Of course, that doesn't really mean anything -- small sample size, and all that.  I think if you did a significance test, you'd find that "-8.3 runs" isn't even a single SD from zero.

But, maybe we can try a larger sample, of a larger group of "controversial" players.

This web page lists the 15 "meanest players" in baseball.  Ten of them were position players.  I ran the same "Dick Allen" test for all ten.  (My database includes estimates only up to the 2009 season.)

The results ... pretty much completely random.  Five had lucky teammates, overall, and five had unlucky teammates.  

If you care, Mark Teixeira was the "worst" at -30.9 runs per season, while Prince Fielder was "best" at 21.7.

-------

Then, I found a list of the "nicest" players in baseball (who, strangely, are all position players).  I ran the test for those, too.  I expected the same, random, non-result.  I was surprised.

Of the fifteen nicest guys in baseball, thirteen of them had unlucky teammates.  That is, in fifteen tests, "unlucky" had a winning record of 13-2.  The probability of something that extreme happening by chance is about 1 in 271.* 

[*Update, 12/3: The 1 in 271 is for something that extreme happening in the "negative" direction.  The chance of something happening that extreme in *either* direction is only 1 in 135, which is more relevant here.  Thanks to Jared Cross, in the comments, for pointing that out.]

Here, let me list all fifteen players:

-14.0 Thome
-29.0 Ibanez
-13.1 Damon
-26.8 Mauer
-11.1 Granderson
- 7.9 Jeter
+17.4 Pujols
-23.5 Hudson O.
-20.1 Hunter Torii
-16.7 Pena Carlos
+28.1 McCann
-30.9 Teixeira
-11.2 Giambi Jason
-21.7 Young Michael
- 6.0 Holliday Matt

Strangely, Pujols and Teixeira made the "mean" list as well as this "nice" list.  If you take them both out, you're left with 12 out of 13.  That's a 1 in 585 shot. 
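
The 1 in 271, 1 in 135, and 1 in 585 figures are all just binomial tails.  A quick check:

```python
from math import comb

def tail(n, k):
    # chance of at least k "unlucky" results out of n fair coin flips
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(1 / tail(15, 13))        # about 271
print(1 / (2 * tail(15, 13)))  # about 135, counting either direction
print(1 / tail(13, 12))        # about 585
```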

-------

Could it actually be the case that nice guys make their teammates worse?  I suppose ... maybe "nice guys" means players who are less intense, and that makes the clubhouse a bit too laid back?

Or, it could be that cause-and-effect are backwards.  Maybe when you play for a lot of teams that disappoint, and you take it well, you get a reputation for niceness.  But ... I dunno, the average is really only around one win a year.  It seems unlikely that would be it.

It seems more likely that it might be some unknown third factor, that explains both (a) players being nice, and (b) their teams being unlucky.  Maybe, for instance, nice guys play for nice managers, and it's nice managers that are causing the underachievement?  Or something like that.  Is there anything in that list those players have in common that could be the answer?  

Of course, it could be just random, despite the 1-in-271 odds.  

--------

Let's try something else.  If there's something real going on, and it actually is common for players to influence their teammates for good or bad, then that's probably some kind of "leadership" characteristic we're looking at.  And, you'd think, players who are good leaders are more likely to go on to become managers.  So, you'd expect that major-league managers should have been more likely to have "lucky" teams during their playing careers.

And, again ... yup, there seems to be something there.  As of August 29, twenty-three of the 30 MLB managers had major-league careers as position players.  (Only two managers were major-league pitchers, and the remaining five never made it to the bigs.)  15 of the 23 were "positive luck".  As a won-lost record, that's 15-8.  That probability is about 1 in 9.5.

Here they are.  (I've included the number of the player's seasons that were considered.)

+ 8.9 Sandberg (16)
+ 8.7 Ventura (15)
- 6.0 Mattingly (14)
+15.2 Baker (19)
+ 9.0 Gibson (17)
+ 7.2 Scioscia (13)
+17.2 Johnson Dave (12)
+ 1.3 Weiss (14)
+ 5.7 Matheny (13)
+12.2 Girardi (15)
- 1.1 Redmond (12)
-15.7 Hurdle (10)
+12.5 Bochy (9)
+ 5.5 Roenicke (7)
+ 8.2 Sveum (11)
- 8.0 Melvin (9)
-25.5 Washington (10)
- 7.2 Gardenhire (5)
- 4.3 Francona (10)
-39.5 Wedge (4)
+16.8 Yost (6)
+53.9 Gibbons (only 2)
+ 7.6 Porter Bo (3)

The managers are in the same order as in this link, which is from most to least illustrious career.  The negatives seem to be concentrated near the bottom, so you might think that maybe it's just that good players have more pluses than bad players.  But, no, when I looked at everyone, not just managers, there was no such effect.

Also, the "positive" managers seem to have had more years in the majors.  But, again, I checked, and again there's no general effect like that.  Players with longer careers are just as likely to have had unlucky teammates than players with shorter careers.

But, maybe ... maybe, all else being equal, you're more likely to be considered for a managerial job if you played on winning teams.  Teams with good luck are more likely to have won.  So, maybe there's that kind of selective sampling effect going on here.  

Or, maybe it's just random.

------

Finally, I checked an arbitrary bunch of other players I thought of who maybe had negative reputations for something or other.  Those results were back to random:

+ 0.4 Reggie Jackson
- 8.3 Dick Allen
- 6.9 Rick Cerone
+20.1 Barry Bonds
-25.7 John Mayberry
+11.9 Garry Templeton
+13.6 Willie Stargell
- 3.4 Thurman Munson
- 6.5 Jose Canseco

-------

So, there you have it.  Taken at face value, it seems that clubhouse cancers don't seem to affect their teammates.  But, nice guys make their fellow batters worse.  And, having a future manager on the team makes them better.  

I'm not really sure I want to take it at face value, though. Still, I have no idea what's really going on.



-----

Addendum:

Here's the explanation I promised, for how the "luck" is calculated.  As an example, I'll estimate Tommy Herr for 1985.  

In the four years surrounding 1985, Herr's offensive WARs were: 

2.4, 2.1, [1985], 2.4, 1.3

What number would make 1985 fit right in?  Maybe, 2.4 or so?  

Well, Herr actually had a 6.0 offensive WAR in 1985.  That makes him 3.6 wins "lucky," which is 36 runs.  My algorithm actually comes up with a luck estimate of 32.7 runs (I didn't use WAR, and my algorithm is a bit more complicated).  But that's roughly how it works.

The idea is, it's a way of doing roughly the same thing you'd do if you looked at the record by eye.
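
Here's a minimal sketch of that kind of estimate.  (My reconstruction, for illustration only -- the real algorithm's weights and regression amount are different.)

```python
def luck_estimate(two_before, one_before, actual, one_after, two_after,
                  league_avg=2.0, regress=0.2):
    # weight the adjacent seasons double, then regress toward league average
    baseline = (two_before + 2 * one_before + 2 * one_after + two_after) / 6
    expected = (1 - regress) * baseline + regress * league_avg
    return actual - expected

# Tommy Herr, 1985 (offensive WAR): 2.4, 2.1, [6.0], 2.4, 1.3
print(luck_estimate(2.4, 2.1, 6.0, 2.4, 1.3))   # about +3.9 wins -- same ballpark
```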

And, BTW, this is not exactly the same algorithm as I used in the past ... not sure what the difference is, unfortunately.  But the results are very similar to the old ones.

For more details, go to my website and search for "1994 Expos".  
