Tuesday, February 20, 2018

How much of success in life is luck instead of skill?

How much of MLB teams' success is due to skill, and how much due to luck? We have a pretty good idea of the answer to that. But what about success in life, in general? If a person is particularly successful in their chosen field, how much of that success is due to luck?

That's the question Robert Frank asks in his 2016 book, "Success and Luck."  He believes that luck is a substantial contributor to success, as evidenced by his subtitle: "Good Fortune and the Myth of Meritocracy."

On the basic question, I agree with him that luck is a huge factor in how someone's life turns out. There is a near-infinite number of alternative paths our lives could have taken. If a butterfly had flapped its wings differently in China decades ago, I might not even exist now, never mind be sitting here typing this blog post.

In his preface, Frank favorably quotes Nicholas Kristof:

"America's successful people['s] ... big break came when they were conceived in middle-class American families who loved them, read them stories, and nurtured them .... They were programmed for success by the time they were zygotes."

But ... that's not a very practical observation, is it? Sure, I am phenomenally lucky that my parents decided to have sex that particular moment that they did, and that the winning sperm cell turned out to be me. In that light, you could say that luck explains almost 100 percent of my success. 

So, maybe a better question is: suppose I was born as me, but in random circumstances, in a random place and time. How much more or less successful would I be, on average?

As Frank writes:

"I often think of Birkhaman Rai, the young hill tribesman from Bhutan who was my cook long ago when I was a Peace Corps volunteer in a small village in Nepal. To this day, he remains perhaps the most enterprising and talented person I've ever met....

"... Even so, the meager salary I was able to pay him was almost certainly the high point of his life's earnings trajectory. If he'd grown up in the United States or some other rich country, he would have been far more prosperous, perhaps even spectacularly successful."

Agreed. Those of us who are alive in a wealthy society in 2017 are pretty much the luckiest people, in terms of external circumstances, of anyone in the history of the world.  For all of us, almost all of our success is due to having been born at the right time in the right place. 

But, again, that's not a very useful answer, is it? Even the most talented, hardest-working person would have nothing if he had been born in the wrong place and time, so you have to conclude that every successful person has been overwhelmingly lucky.

I think we have to hold our personal characteristics as a given, too. Because, almost everyone who is successful in a given field has far-above-average talent or interest in that field. I was lucky to have been born with a brain that likes math. Wilt Chamberlain was lucky to have been born with a genetic makeup that made him grow tall. Bach was born with the brain of a musical genius.

It gets even worse if you consider not just innate talent for a particular field, but other mental characteristics that we usually consider character rather than luck. Suppose you have an ability to work hard, or to persevere under adversity. Those likely have at least some genetic -- which is to say, random -- basis. So when someone with only average musical talent becomes a great composer by hard work, we can say, "well, sure, but he was lucky to have been born with that kind of drive to succeed."

Frank says:

"I hope we can agree that success is much more likely for people with talents that are highly valued by others, and also for those with the ability and inclination to focus intently and work tirelessly. But where do those personal qualities come from? We don't know, precisely, other than to say that they come from some combination of genes and the environment. ...

"In some unknown proportion, genetic and environmental factors largely explain whether someone gets up in the morning feeling eager to begin work. If you're such a person, which most of the time I am not, you're fortunate."

So, even if you got to where you are by working hard, Frank says, that's still luck! Because, you're lucky to have the kind of personality that sees the value of hard work.

I don't disagree with Frank that the kind of person you are, in terms of morals and virtues, is partly determined by luck. But, in that case, what *isn't* luck?


That's the problem with Frank's argument. Drill down deep enough, and everything is luck. You don't even need a book for that; I can do it in one paragraph, like this:

There are seven billion people in the world right now. Which one I am, out of those seven billion, is random, as far as I'm concerned; I had no say in which person I would be born as. Therefore, if I wind up being Bill Gates, the richest man in the world, I hit a 6,999,999,999 to 1 shot, and I am very, very lucky!

What Frank never explicitly addresses is: what kind of success does he consider NOT caused by luck? I don't think that anywhere, in his 200-page book, he even gives one example. 

We can kind of figure it out, though. At various points in the book, Frank illustrates his own personal lucky moments. There was the time he got his professor job at Cornell by the skin of his teeth (he was the last professor hired, in a year where Cornell hired more economics professors than ever before). Then, there was the time he almost drowned while windsurfing, but just in time managed to free himself from under his submerged sail. "Survival is sometimes just a matter of pure dumb luck, and I was clearly luck's beneficiary that day."

Frank's instances of luck are those that occurred on his path while he was already himself. He doesn't say how he was almost born in Nepal and destined for a life of poverty, or how he was lucky that one of his cells didn't mutate while in the womb to make him intellectually disabled. 

I'll presume, then, that the luck Frank is talking about is the normal kind of career and life luck that most of us think about, and that the "your success is mostly luck because you were born smart" is just a rhetorical flourish.


We don't have a definition problem in our usual analysis of baseball luck, because we are careful to talk about what we consider luck and what we don't. For a team's W-L record, we specify that the "luck" we're talking about is the difference between the team's talent and the team's outcome. So, if a team is good enough to finish with an average of 88 wins, but it actually wins 95 games, we say it was lucky by 7 games.

We specifically include certain types of luck, such as injuries and weather and bad calls by the umpire. And, we specifically exclude certain types of luck, like how an ace pitcher randomly happened to meet and marry a woman from Seattle, which led him to sign at a discount with the Mariners, which meant that they wound up more talented than they would have otherwise.

By specifically defining what's luck and what's not, we can come up with a specific answer to the specific question. We know the difference between talent (as we define it) and luck (as we define it) can be measured using the normal approximation to the binomial distribution. So, we can calculate that the effect of luck is a standard deviation of about 6.4 games per season, and the variation in talent is about 9 games per season.

From that, we can calculate a bunch of other things. Such as: on average, a team that finishes with a 96-66 record is most likely a 91-71 team that got lucky. In other words, if the season were replayed again, like in an APBA simulation, that team would be more likely to finish with 91 wins than with 96.
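Under those definitions, the arithmetic can be checked in a few lines (the 6.4-game luck SD is just the binomial SD over a 162-game season, and the 9-game talent SD is taken as given):

```python
import math

# SD of luck for a team over a 162-game season, from the binomial model
sd_luck = math.sqrt(162 * 0.5 * 0.5)          # about 6.4 wins
sd_talent = 9.0                               # spread of team talent, in wins
sd_observed = math.hypot(sd_talent, sd_luck)  # SDs add in quadrature

# Regress the observed record toward the mean by 1 - (SD(talent)/SD(observed))^2
regress = 1 - (sd_talent / sd_observed) ** 2  # about one-third

observed_wins, mean_wins = 96, 81
estimated_talent = mean_wins + (observed_wins - mean_wins) * (1 - regress)

print(round(sd_luck, 1), round(regress, 2), round(estimated_talent))  # 6.4 0.33 91
```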

I think that's the question Frank really wants to answer -- that if you took Bill Gates, and made him play his life over, he wouldn't come close to being the richest man in the world. He just had a couple of very lucky breaks, breaks that probably wouldn't have come his way if God rolled the dice again in his celestial APBA simulation of humanity.


Another reason to think that's what Frank means is that, when he gets down to mathematical business, that seems to be the definition he uses. There, he talks about luck as distinct from "skill" and "effort". 

When Frank does that, his view of success and luck is a lot like the sabermetrician's view of success and luck. We assume a person (or team) has a certain level of talent, and the observed level of success might be higher or lower than expectations depending on whether good luck or bad luck dominates.

In his Chapter 4, and its appendix, Frank tries to work that out mathematically.

Suppose everyone has a skill level distributed uniformly between 0 and 100, and a level of luck distributed uniformly between 0 and 100 (where 50 is average). And, suppose that the level of success is determined 95 percent by skill and 5 percent by luck.

Even though luck creates only 5 percent of the outcome, it's enough to almost ensure that the most skilled person winds up NOT the most successful. With 1,000 participants, the most skilled will "win" about 55 percent of the time. With 100,000 participants, the most skilled will win less than 13 percent of the time.

Frank gives an excellent explanation of why that happens:

"The most skilled competitor in a field of 1,000 would have an expected skill level of 99.9, but an expected luck level of only 50.
"It follows that the expected performance level of the most skillful of 1,000 contestants is P=0.95 * 99.9 + 0.05 * 50 = 97.4 ... but with 999 other contestants, that score usually won't be good enough to win.

"With 1,000 contestants, we expect that 10 will have skill levels of 99 or higher. Among those 10, the highest expected luck level is ... 90.9. The highest expected performance score among 1,000 contestants must therefore be at least P = 0.95 * 99 + 0.05 * 90.9 = 98.6, which is 1.2 points higher than the expected performance score of the most skillful contestant. 

"... The upshot is that even when luck counts only for a tiny fraction of total performance, the winner of a large contest will seldom be the most skillful contestant but will usually be one of the luckiest."*

(* I feel like I should point out that this sentence, while true, is maybe misleading. Frank is comparing the chance of being the *very highest* in skill with the chance of being *one of the highest* in luck. When skill is more important than luck (it's 19 times as important in Frank's example), it's also true (perhaps "19 times as true") that "the winner of a large contest will seldom be the luckiest contestant but will usually be one of the most skillful."  And, it's also true that "the winner of a large contest will seldom be the most skillful contestant, but even more seldom be the most lucky.")
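Frank's setup is simple to simulate. Here's a minimal sketch (the exact winning percentage depends on simulation details the book doesn't fully spell out, but the pattern -- winners drawn from the luckiest of the highly skilled -- shows up clearly):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 500
top_skill_wins = 0
winner_luck = []

for _ in range(trials):
    skill = rng.uniform(0, 100, n)
    luck = rng.uniform(0, 100, n)
    performance = 0.95 * skill + 0.05 * luck   # Frank's 95/5 weighting
    winner = performance.argmax()
    top_skill_wins += int(winner == skill.argmax())
    winner_luck.append(luck[winner])

frac = top_skill_wins / trials                 # how often the most skilled wins
avg_winner_luck = sum(winner_luck) / trials    # winners are among the luckiest
print(frac, round(avg_winner_luck, 1))
```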


So, the most skilled of 1,000 competitors will wind up the winner only 55 percent of the time. Doesn't that prove that success is largely due to luck?

It depends what you mean by "largely due to luck."  Frank's experiment does show that, often, the luckier competitor wins over the more skillful competitor. Whether that alone constitutes "largely" is up to you, I guess. 

You could argue otherwise. As it turns out, the competitor with the most skill is still the one most likely to win the tournament, with a 55 percent chance. The person with the most luck is much less likely to win. Indeed, in Frank's simulation, perfect luck is only a 2.5 point bonus over average luck. So if the luckiest competitor isn't in the top 5 percent of skill, he or she CANNOT win.

It's true that the most successful competitors were likely to have been very lucky. But it's not true that the luckiest competitors were also the most successful.

Having said that ... I agree that in Frank's simulation, luck was indeed important, and the winner of the competition should realize that he or she was probably lucky -- especially in the 100,000 case, where the best player wins only 13 percent of the time. But Frank doesn't just talk about winners -- he talks about "successful" people. And you can be successful without finishing first. More on that later.


A big problem with Frank's simulation is that the results wind up enormously overinflated on the side of luck. That's because he uses uniform distributions for both luck and skill, rather than a bell-shaped (normal) distribution. This has the effect of artificially increasing competition at the top, which makes skill look much less important than it actually is. 

Out of 100,000 people in Frank's uniform distribution, more than 28,000 are within 1 SD of the highest-skilled competitor. But in a normal distribution, that number would be ... 70. So Frank inflates the relevant competition by a factor of 400 times.

To correct that, I created a version of Frank's simulation that used normal distributions instead of uniform. 

What happened? Instead of the top-skilled player winning only 13 percent of the time, that figure jumped to 88 percent.
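Here's a minimal sketch of that normal-distribution check (only the ratio of the skill and luck weights matters for who finishes first, so standard normals work fine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100_000, 100
top_skill_wins = 0

for _ in range(trials):
    skill = rng.standard_normal(n)
    luck = rng.standard_normal(n)
    performance = 0.95 * skill + 0.05 * luck   # same 95/5 split, normal inputs
    top_skill_wins += int(performance.argmax() == skill.argmax())

frac = top_skill_wins / trials
print(frac)   # the most skilled now wins the large majority of the time
```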

Still ... Frank's use of the uniform distribution doesn't actually ruin his basic argument. That's because he assumed only 5 percent luck, and 95 percent skill. This, I think, vastly understates the amount of luck inherent in everyday life. 

It's easy to see that luck is important. The important question is: *how* important? I don't know how to find the answer to that, and when I discovered Frank's book, I was hoping he'd at least have taken a stab at it.

But, since we don't know, I'm just going to pick an arbitrary amount of luck and see where that leads. The arbitrary amount I'm going to pick is: 40 percent luck, and 60 percent skill. Why those numbers? Because that's roughly the breakdown of an MLB team's season record. Most readers of this blog have an intuitive idea of how much luck there is in a season, how often a team surprises the oddsmakers and its fans.

In effect, we're asking: suppose there are 100,000 teams in MLB, with only one division. How often does the most talented team finish at the top of the standings?

The answer to that question appears to be: about 11 percent of the time. 

(That's pretty close to the 13 percent that Frank gave, but it's coincidence that his uniform distribution with a 5/95 luck/talent split is close to my normal distribution with a 40/60 split.)
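That version is easy to sketch in code, too. Here I treat the 60/40 split as a split in standard deviations -- my assumption, so the simulated percentage may not land exactly on 11:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100_000, 200
top_talent_wins = 0

for _ in range(trials):
    talent = rng.standard_normal(n)
    luck = rng.standard_normal(n)
    # 60/40 talent/luck, interpreted as a split of standard deviations
    performance = 0.6 * talent + 0.4 * luck
    top_talent_wins += int(performance.argmax() == talent.argmax())

frac = top_talent_wins / trials
print(frac)   # the most talented "team" tops the standings only occasionally
```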

Here's something that surprised me. Suppose now, instead of 100,000 competitors, you make the competition ten times as big, so there's 1,000,000. How often does the best competitor win now?

I would have expected it to drop significantly lower than 11 percent. It doesn't. It actually rises a bit, to 14 percent. (Both these numbers are from simulations, so I'm not sure they're "statistically significantly" different.)

Why does this happen? I think it's because of the way the normal distribution works. The larger the population, the farther the highest value pulls away from the pack. 

On average, the most talented one-millionth of the population are more than around 4.75 SD from the mean. Suppose the average of those is 4.9 SD. So, we'll say the best competitor out of a million is around 4.9 SD from the mean.

If "catching distance" is 0.7 SD, you need to be 4.2 SD from the mean, which means your main competition consists of 13 competitors (out of a million).

But if there are only 100,000 in the pool, the most talented player is only around 4.4 SD from the mean, so "catching distance" puts the threshold at 3.7 SD. How many competitors are there above 3.7 SD? About 11 (out of 100,000).

The more competitors, the farther out in front the best one pulls away -- so even as the field grows tenfold, the number of rivals with a decent chance to catch him barely budges (13 per million, versus 11 per 100,000).
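You can check that arithmetic without simulating, using the standard approximation that the expected maximum of n standard normals sits near the (1 - 0.5/n) quantile:

```python
from statistics import NormalDist

nd = NormalDist()

def approx_top_sd(n):
    # expected maximum of n standard normals, via the (1 - 0.5/n) quantile
    return nd.inv_cdf(1 - 0.5 / n)

def contenders(n, catch=0.7):
    # how many of n competitors sit within "catching distance" of the top
    return n * (1 - nd.cdf(approx_top_sd(n) - catch))

top_100k, top_1m = approx_top_sd(100_000), approx_top_sd(1_000_000)
print(round(top_100k, 2), round(top_1m, 2))              # about 4.4 and 4.9 SD
print(round(contenders(100_000)), round(contenders(1_000_000)))  # both small
```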


I decided to use the larger simulation, with a million competitors. A couple of results:

On average, the top performer in the simulation was the 442nd overall in talent. At first that may sound like merit doesn't matter much, but 442nd out of one million is still the top one-twentieth of one percent -- the 99.96 percentile.

Going the other way, if you searched for the top player by talent, how did he or she perform? About 99th overall, or the 99.99 percentile. 


We know (from Tango and others) that to get from observed performance to talent, we regress to the mean by this amount:

1 - (SD(talent)/SD(observed))^2

Assume SD(talent) = 60, and SD(luck) = 40. That means that SD(observed) = 72.1, which is the square root of 60 squared plus 40 squared.

So, we regress to the mean by 1-(60/72.1)^2, which is about 31 percent. 

If our top performer is at 4.9 SD observed, that's 72.1*4.9 = 353.29 units above average. Regressing 31 percent gives us an estimate of 243.77 units of talent. Since talent has an SD of 60, that's the equivalent of 4.06 SD of talent.

That means if the top performer comes in at 4.9 SD above zero, his or her likeliest talent is 4.06 SD. That's about 27th out of a million, or some such.

In other words, the player with top performance should be around 27th in talent.
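The whole chain of arithmetic can be checked directly (it lands at roughly 23rd rather than 27th, depending on how much you round along the way):

```python
import math
from statistics import NormalDist

sd_talent, sd_luck = 60.0, 40.0
sd_observed = math.hypot(sd_talent, sd_luck)      # about 72.1
regress = 1 - (sd_talent / sd_observed) ** 2      # about 31 percent

top_observed_sd = 4.9                             # top performer, in observed SDs
units_above_avg = top_observed_sd * sd_observed   # about 353 units
talent_units = units_above_avg * (1 - regress)    # about 244 units
talent_sd = talent_units / sd_talent              # about 4.1 SDs of talent

# Convert that talent level back into an expected rank out of a million
n = 1_000_000
expected_rank = n * (1 - NormalDist().cdf(talent_sd))
print(round(sd_observed, 1), round(regress, 2), round(talent_sd, 2), round(expected_rank))
```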

(Why, then, did the simulation come up with 442nd instead of 27th? I think it's because converting SDs to rankings isn't symmetrical when there's a lot of variation.

For instance: suppose you wind up with two winners, one at 3.06 SD and one at 5.06 SD. The average of the SDs is 4.06, like we said. But, the 5.06 ranks first, while the 3.06 ranks 1000th or something. The average of the ranks doesn't wind up at 27 -- it's about 500.)


The book is called "Success and Luck," but it really could be called "Money and Luck," because when Frank talks about "success," he really means "high income."  The point about luck is to support his idea of a consumption tax on the rich.

Frank's argument is that successful people should be willing to put up with higher taxes. His case, paraphrased, goes like this: "Look, the ultra-rich got that way because they were very lucky. So, they shouldn't mind paying more, especially once they understand how much their success depended on luck, and not their own actions."

About half the book is devoted to Frank discussing his proposal to change the tax system to get the ultra-rich to pay more. That plan comes from his 1999 book, "Luxury Fever." There and here, Frank argues that the ultra-rich don't actually value luxuries for their intrinsic qualities, but, rather, for their ability to flaunt their success. If we tax high consumption at a high rate, Frank argues, the wealthiest person will buy a $100K watch instead of a $700K watch (since the $100K watch will still cost $700K after tax). But he or she will be just as happy, because his or her social competitors will also downgrade their watches, and the wealthiest person will still own the most expensive one -- which was the primary goal in the first place.

So, the rich still get the status of their expensive purchases, but the government has an extra $600K to spend on infrastructure, and that benefits everyone, including the rich. 

There are only a few pictures in the book, but one of them is a cartoon showing a $150,000 Porsche on a smooth road, as compared to a $333,000 Ferrari on a potholed road. Wouldn't the rich prefer to spend the extra $183,000 on taxes, Frank asks, so that the government can pave the roads properly and they can have a better driving experience overall? 

Almost every chapter of the book mentions that consumption tax ... especially Chapter 7, which is completely devoted to Frank's earlier proposal.


Since money is really the topic here, it would be nice to translate luck into dollars, instead of just standard deviations. Especially if we want to make sure Frank's consumption tax burden is fair, when compared to estimates of luck.

If money were linear with talent, it would be easy: we just regress 31 percent to the mean, and we're done. But, it's not. Income accelerates all the way up the percentile scale: slowly at the bottom, but increasingly as you get to the top. 

If you look at the bottom 97% of income tax filers, their income goes from zero to about a million dollars. If income were linear, the top 3% would go from $1 million to $1.03 million, right? But it doesn't: it explodes. In fact, the top 3% go from $1 million to maybe $500 million or more. 

(Income numbers come from IRS Table 1.1 here, for 2015, and articles discussing the 400 highest-income Americans.)

That means plain old regression to the mean won't work. So, I ran another simulation.

Well, it's actually the same simulation, but I added one thing. I assigned each performance rank an income, based on the IRS table, in order down, as the actual value of "talent". I assumed the most talented person "deserved" $500 million, and that's what he or she would earn if there were no luck involved. I assigned the second most talented person $300 million, and the third $100 million. Then, I used the IRS table to assign incomes all the way down the list of the 1 million people in the simulation. I rescaled the table to a million people, of course, and I assumed income was linear within an IRS category.

(BTW, if you disagree with the idea that even the most talented individuals deserve the high incomes seen in the IRS chart, that's fine. But that's a separate issue that has nothing to do with luck, and isn't discussed in the book.)

With the IRS table, I was able to calculate, for all performance ranks, how much they "should have" earned if their luck was actually zero.

The best performer earned $500 million. How much would he or she have earned based on talent alone, and no luck? A lot less: $129 million. The second-place finisher earned $300 million but deserved only $78 million. The third-place finisher earned $100 million instead of $48 million.

So, the top three finishers were lucky by $371 million, $222 million, and $52 million, respectively.

The 4-10 finishers were lucky by an average of $62 million. 

The 11 to 100 finishers were lucky by less, only $40 million.

The 101 to 500 finishers were lucky by much less -- only about $4 million each.

At this point, we're only at the first 500 competitors out of a million. You'd expect that trend to continue -- that the next few thousand high-earners would also have been lucky, right? I mean, we're still in the multi-million-dollar range.

But, no.

At around 500, luck turns *negative*. Starting there, the participants actually made *less* than their skill was worth.

Those who finished 501-1000 are still in the income stratosphere -- they're the top 0.05% to the top 0.1%, earning between $10 million and $2.3 million. But, on average, their incomes were $460,000 less than what each would have earned based on skill alone.

It continues unlucky from there. The next 8,000 people -- that is, the top 0.2 to 0.9 percent -- lost significant income to luck, more than $250,000 each. It's not just random noise in the simulation, either, because (a) every group shows up as unlucky, (b) there's a fairly smooth trend, and (c) I ran multiple simulations and they all came out roughly the same.

Here's a chart of all the ranges, dollar figures in thousands:

     1-10   +$61107
   11-100   +$39906
  101-500   +$ 4227
 501-1000   -$  460
1001-2000   -$  503
2001-3000   -$  401
3001-4000   -$  320
4001-5000   -$  265
5001-6000   -$  135
6001-7000   -$  224
7001-8000   -$  178
8001-9000   -$  201

(My chart stops at 9,000, because 9000 was about all I could keep track of with the software I was using. I believe the results would soon swing from unlucky back to lucky, and stay lucky until the average income of around $68,000.)

If we believe the data, we find that it's true that the ultra, ultra rich benefitted from good luck -- at least the top 0.05% of the population. The "only" ultra-rich, the top 0.05 to 0.9 percent -- the vast majority of the "one percenters" -- actually *lost* income due to *bad* luck.

This surprised me, but then I thought about it, and it makes sense. It's a consequence of the fact that income rises so dramatically at the top, where the top 0.01 percent earn ten times as much as the next 0.99 percent.

Suppose you finish 3,000th in performance, earning $1 million. If you're 2500th in talent, you should have had $2 million. If you were 3500th in talent but lucky, you should have earned maybe $900,000.

If you were lucky, you gained $100,000. If you were unlucky, you lost $1 million. 

So if those two have equal probabilities (which they almost do, in this case), the unlucky lose much more than the lucky gain. And that's why the "great but not really great" finishers were, on average, unlucky in terms of income.
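Here's a simplified sketch of that dollar simulation, scaled down to 100,000 people. The convex income curve is a hypothetical stand-in for the IRS table, so the dollar amounts won't match the chart above, but the structure comes through: dollar luck sums to zero overall, so the big gains at the very top have to be paid for somewhere below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

talent = rng.standard_normal(n)
luck = rng.standard_normal(n)
performance = 0.6 * talent + 0.4 * luck           # 60/40 split, as before

# Hypothetical convex income curve standing in for the IRS table:
# rank 1 gets $500 million, and income falls off steeply from there.
ranks = np.arange(1, n + 1)
income_by_rank = 5e8 * ranks ** -0.7

perf_rank = (-performance).argsort().argsort()    # 0 = best performance
talent_rank = (-talent).argsort().argsort()       # 0 = most talented
earned = income_by_rank[perf_rank]                # income follows performance
deserved = income_by_rank[talent_rank]            # what talent alone would pay
luck_dollars = earned - deserved

by_perf = (-performance).argsort()                # indices, best performer first
top10_luck = luck_dollars[by_perf[:10]].mean()    # very top: positive luck
mid_luck = luck_dollars[by_perf[500:5000]].mean() # "merely very rich" band
total_luck = luck_dollars.sum()                   # zero by construction

print(round(top10_luck / 1e6, 1), round(mid_luck / 1e6, 2), round(total_luck, 2))
```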


Here's a baseball analogy. 

Normally, we think of team luck in MLB in terms of wins. But, instead, think of it in terms of pennants. 

The team that wins the pennant was clearly lucky, winning 100% of a pennant instead of (say) the true 40% probability given its talent. The other teams must have all been unlucky.

Which teams were the *most* unlucky? Clearly not the second division, which wouldn't have come close to winning the pennant even if the winning team hadn't gotten hot. The most unlucky, obviously, must be the teams that came close. Those are the teams that, if the winning team had had worse luck, would have been able to take advantage and finish first instead.

In our income simulation, the top 100 is like a pennant, since it's worth so much more than the rankings farther below. So, when a participant gets lucky and finishes in the top 100, where did the offsetting bad luck fall? On the participants who actually had a good chance, but didn't make it.

Suppose only the top 1 percent in skill have an appreciable chance to make the top 100 in income. That means that if the top 0.01 percent had good luck and made more than they were worth, it must have been the next 0.99 percent who had bad luck and made less than they were worth, since they were the only ones whose failure to make the top 100 was due to luck at all.


Frank does seem to understand that it's the very top of the scale that's benefitted disproportionately from luck. In 1995, he co-wrote a book called "The Winner-Take-All Society", which argues that, over time, the rewards from being the best rise much faster than the rewards from being the second best or third best.

Recapping that previous book, Frank writes:

"[Co-author Philip] Cook and I argued that what's been changing is that new technologies and market institutions have been providing growing leverage for the talents of the ablest individuals. The best option available to patients suffering from a rare illness was once to consult with the most knowledgeable local practitioner. But now that medical records can be sent anywhere with a single click, today's patients can receive advice from the world's leading authority on that illness.

"Such changes didn't begin yesterday. Alfred Marshall, the great nineteenth-century British economist, described how advances in transportation enabled the best producers in almost every domain to extend their reach. Piano manufacturing, for example, was once widely dispersed, simply because pianos were so costly to transport ...

"But with each extension of the highway, rail, and canal systems, shipping costs fell sharply, and at each stop production became more concentrated. Worldwide, only a handful of the best piano producers now survive. It's of course a good thing that their superior offerings are now available to more people. But an inevitable side effect has been that producers with even a slight edge over their rivals went on to capture most of the industry's income.

"Therein lies a hint about why chance events have grown more important even as markets have become more competitive ..."

In other words: these days, the best doctor nationally has taken business away from the best doctor locally. But, the best doctor is the best doctor in part because of luck. So, luck rewards the best doctor nationally, but hurts the best doctor locally. And the best doctor locally is still pretty successful, maybe one of the richest people in town.

Which is what we see here, that the "ultra rich" gained from luck, and the merely "very rich" were actually hurt by it. Frank writes about the first part, but ignores the second part.


Frank's implicit argument is that if people's success is more due to luck, it's more appropriate to tax them at a higher rate. I say "implicit" because I don't think he actually says it outright. I can't say for sure without rereading the book, but I think Frank's explicit argument is that if the rich are made to realize that they got where they were substantially because of good luck, they would be less resistant to his proposed high-rate consumption tax.

But if Frank *does* believe the lucky should pay tax at a higher rate, it follows logically that he has to also believe that the unlucky should pay tax at a lower rate. If Joe has been taxed more than Mary (at an identical income) because he was luckier, then Mary must have been taxed less than Joe because she was unluckier.

By Frank's own logic (but my simulation), that would mean that those who earned between $3,000,000 and $300,000 last year were unlucky, and deserve to pay less tax. I bet that's not what Frank had in mind.


Of course, the model and numbers are debatable. In fact, they're almost certainly wrong. The biggest problem is probably the assumption that luck is normally distributed. There must be thousands of cases where a bit of luck turns a skilled performer, maybe someone normally in the $100K range, into a multi-million-dollar CEO or something. 

But who knows who those people are? They must be the minority, if we continue to assume that talent matters more than luck. But how small a minority, and how can we identify them to tax them more?

Anyway, regardless of what model you use, it does seem to me that the "second tier" of success, whoever those are, must be unlucky overall. 

In most cases, when you look at whoever is at the top of their category, they were probably lucky. If they hadn't been, who would be at the top instead? Probably the second or third in the category. Steve Wozniak instead of Bill Gates. Betamax (Sony) instead of VHS (JVC). Al Gore instead of George W. Bush. 

It seems pretty obvious to me that Wozniak, Betamax, and Al Gore have been very, very successful -- but not nearly as successful as they could have been, in large part because of bad luck. 

The main point of "The Winner-Take-All Society" is that the lucky (rich) winner winds up with a bigger share of the pie compared to the unlucky (but still rich) second-best, the unlucky (but still pretty rich) third best, and so on. In other words, the more "winner take all" there is, the bigger the difference between first and second place. 

The same forces that make the winner's income that much more a matter of good luck, must make the second-place finisher's income that much more a matter of bad luck. In a "Winner-Take-All Society," where only pennants pay off ... that's where luck becomes less important to the second division, not more.


Friday, November 17, 2017

How Elo ratings overweight recent results

"Elo" is a rating system widely used to rate players in various games, most notably chess. Recently, FiveThirtyEight started using it to maintain real-time ratings for professional sports teams. Here's the Wikipedia page for the full explanation and formulas.

In Elo, everyone gets a number that represents their skill. The exact number only matters in the difference between you and the other players you're being compared to. If you're 1900 and your opponent is 1800, it's exactly the same as if you're 99,900 and your opponent is 99,800.

In chess, they start you off with a rating of 1200, because that happens to be the number they picked. You can interpret the 1200 either as "beginner," or "guy who hasn't played competitively yet so they don't know how good he is."  In the NBA system, FiveThirtyEight decided to start teams off with 1500, which represents a .500 team. 

A player's rating changes with every game he or she plays. The winner gets points added to his or her rating, and the loser gets the same number of points subtracted. It's like the two players have a bet, and the loser pays points to the winner.

How many points? That depends on two things: the "K factor," and the odds of winning.

The "K factor" is chosen by the organization that does the ratings. I think of it as double the number of points the loser pays for an evenly matched game. If K=20 (which FiveThirtyEight chose for the NBA), and the odds are even, the loser pays 10 points to the winner.

If it's not an even match, the loser pays the number of points by which he underperformed expectations. If the Warriors had a 90% chance of beating the Knicks, and they lost, they'd lose 18 points, which is 90% of K=20. If the Warriors win, they only gain 2 points, since they only exceeded expectations by 10 percentage points.

How does Elo calculate the odds? By the difference between the two players' ratings. The Elo formula is set so that a 400-point difference represents a 10:1 favorite. An 800 point favorite is 100:1 (10 for the first 400, multiplied by 10 for the second 400). A 200 point favorite is 3.16 to 1 (3.16 is the square root of 10). A 100 point favorite is 1.78 to 1 (the fourth root of 10), and so on.
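That mapping from rating gap to odds is just an exponential, so it's easy to check. A quick sketch (the specific gaps chosen here are just the ones from the paragraph above):

```python
# Elo's odds mapping: a d-point rating gap implies odds of 10 ** (d / 400) : 1
odds = {d: round(10 ** (d / 400), 2) for d in (100, 200, 400, 800)}
print(odds)  # {100: 1.78, 200: 3.16, 400: 10.0, 800: 100.0}
```

Each extra 400 points multiplies the odds by another factor of 10, which is why the 800-point gap comes out to 100:1.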

In chess, the K factor varies depending on which chess federation is doing the ratings, and the level of skill of the players. For experienced non-masters, K seems to vary between 15 and 32 (says Wikipedia). 


Suppose A and B have equal ratings of 1600, and A beats B. With K=20, A's rating jumps to 1610, and B's rating falls to 1590. 

A and B are now separated by 20 points in the ratings, so A is deemed to have odds of 1.12:1 of beating B. (That's because 10 to the power of 20/400 -- the 20th root of 10 -- is 1.12.)  That corresponds to an expected winning percentage of .529.

After lunch, they play again, and this time B beats A. Because B was the underdog, he gets more than 10 points -- 10.6 points (.529 times K=20), to be exact. And A loses the identical 10.6 points.

That means after the two games, A has a rating of 1599.4, and B has a rating of 1600.6.
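The whole two-game sequence fits in a few lines of code. This is just a sketch of the standard Elo update, with K=20 as in the example:

```python
def update(winner, loser, K=20):
    # winner gains K * (1 - expected score); loser pays the same amount
    exchange = K * (1.0 - 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0)))
    return winner + exchange, loser - exchange

a, b = 1600.0, 1600.0
a, b = update(a, b)               # A wins game 1: A goes to 1610, B to 1590
b, a = update(b, a)               # B (now the underdog) wins game 2 after lunch
print(round(a, 1), round(b, 1))  # 1599.4 1600.6
```

The underdog's win is worth 10.6 points, slightly more than the favorite's earlier 10, so B ends up ahead despite the 1-1 split.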


That example shows one of the properties of Elo -- it weights recent performance higher than past performance. In their two games, A and B effectively tied, each going 1-1. But B winds up with a higher rating than A because his win was more recent.

Is that reasonable? In a way, it is. People's skill at chess changes over their lifetimes, and it would be weird to give the same weight to a game Garry Kasparov played when he was 8, as you would to a game Garry Kasparov played as World Champion.

But in the A and B case, it seems weird. A and B played both games the same day, and their respective skills couldn't have changed that much during the hour they took their lunch break. In this case, it would make more sense to weight the games equally.

Well, according to Wikipedia, that's what would actually happen. Instead of updating the ratings every game, the Federation would wait until the end of the tournament, and then compare each player to his or her overall expectation based on ratings going in. In this case, A and B would be expected to go 1-1 in two games, which they did, so their ratings wouldn't change at all.

But, if A and B's games were days or weeks apart, as part of different tournaments, the two games would be treated separately, and B might indeed wind up 1.2 points ahead of A.


Is that a good thing, giving a higher weight to recency? It depends how much higher a weight. 

People's skill does indeed change daily, based on mood, health, fatigue -- and, of course, longer-term trends in skill. In the four big North American team sports, it's generally true that players tend to improve in talent until a certain age (27 in baseball), then decline. And, of course, there are non-age-related talent changes, like injuries, or cases where players just got a lot better or a lot worse partway through their careers.

That's part of the reason we tend to evaluate players based on their most recent season. If a player hit 15 home runs last year, but 20 this year, we expect the 20 to be more indicative of what we can expect next season.

Still, I think Elo gives far too much weight to recent results, when applied to professional sports teams. 

Suppose you're near the end of the season, and you're looking at a team with a 40-40 record -- the Bulls, say. From that, you'd estimate their talent as average -- they're a .500 team.

Now, they win an even-money game, and they're 41-40, which is .506. How do you evaluate them now? You take the .506, and regress to the mean a tiny bit, and maybe estimate they're a .505 talent. (I'll call that the "traditional method," where you estimate talent by taking the W-L record and regressing to the mean.)

What would Elo say? Before the 81st game, the Bulls were probably rated at 1500. After the game, they've gained 10 points for the win, bringing them to 1510.

But, 1510 represents a .514 record, not .505. So, Elo gives that one game almost three times the weight that the traditional method does.
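Both numbers are easy to verify. A sketch of the comparison (the "traditional" figure here is the raw 41-40 record, before any regression to the mean):

```python
def elo_pct(diff):
    # winning percentage implied by an Elo rating gap of `diff` points
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

traditional = 41 / 81       # .506 -- the raw record, before regressing
elo_implied = elo_pct(10)   # the Bulls' 10-point gain over an average team

print(round(traditional, 3), round(elo_implied, 3))  # 0.506 0.514
```

The one win moves the traditional estimate .006 (less after regressing), but moves the Elo-implied estimate .014 -- roughly a triple weight.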

Could that be right? Well, you'd have to argue that maybe because of personnel changes, team talent changes so much from the beginning of the year to the end that the April games are worth three times as much as the average game. But ... well, that still doesn't seem right. 


Technical note: I should mention that FiveThirtyEight adds a factor to their Elo calculation -- they give more or fewer points to the winner of the game based on the score differential. If a favorite wins by 1 point, they'll get a lot fewer points than if they won by 15 points. Same for the underdog, except that the underdog always gets more points than the favorite for the same point differential -- which makes sense.

FiveThirtyEight doesn't say so explicitly, but I think they set the weighting factor so that the average margin of victory corresponds to the number of points the regular Elo would award the winner.

Here's the explanation of their system.


Elo starts with a player's rating, then updates it based on results. But, when it updates it, it has no idea how much evidence was behind the player's rating in the first place. If a team is at 1500, and then it wins an even game, it goes to 1510 regardless of whether it was at 1500 because it's an expansion team's first game, or because it was 40-40, or (in the case of chess) it's 1000-1000.

The traditional method, on the other hand, does know. If a team goes from 1-1 to 2-1, that's a move of .167 points (less after regressing to estimate talent, of course). If a team goes from 40-40 to 41-40, that's a move of only .005 points. 

Which makes sense; the more evidence you already have, the less your new evidence should move your prior. But if your prior moves the same way regardless of the previous evidence, you're seriously underweighting that previous evidence (which means you're overweighting the new evidence).

The chess Federations implicitly understand this, that you should give less weight to new results when you have more older results. That's why they vary the K-values based on who's playing. 

FIDE, for instance, weights new players at K=40, experienced players at K=20, and masters (who presumably have the most experience) at K=10.


As I said last post, I did a simulation. I created a team that was exactly average in talent, and assumed that FiveThirtyEight had accurately given them an average rating at the beginning of the year. I played out 1000 random seasons, and, on average, the team wound up at right where it started, just as you would expect.

Then, I modified the simulation as if FiveThirtyEight had underrated the team by 50 points, which would peg them as a .429 team. (They use their "CARM-Elo" player projection system for those pre-season ratings. I'm not saying that system is wrong, just checking what happens when a projection happens to be off.)

It turned out that, at the end of the 82-game season, Elo had indeed figured out the team was better than their initial rating, and had restored 45 of the 50 points. They were still underrated, but only by 5 points (.493) instead of 50 (.429). 

Effectively, the current season wiped out 90% of the original rating. Since the original rating was based on the previous seasons, that means that, to get the final rating, Elo effectively weighted this year at 90%, and the previous years at 10%. 

10% is close to 12.5%. I'll use that because it makes the calculation a bit easier. At 12.5%, which is one-eighth, it means the NBA season contains three "half lives" of about 27 games each. 

That is: after 27 games, the gap of 50 points is reduced by half, to 25. After another 27 games, it's down to 12. After a third 27 games, it's down to 6, which is 12.5% of where the gap started.
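You can check that half-life with a noise-free sketch: track Elo's *expected* movement for a team whose true rating sits 50 points above its current rating, with K=20 as in FiveThirtyEight's NBA setup. The setup here is hypothetical -- it assumes every opponent is an average (1500) team and ignores margin-of-victory adjustments -- so it lands a little under the simulation's 27-game figure, but in the same ballpark:

```python
def expected(r_a, r_b):
    # Elo expected score for a player rated r_a against one rated r_b
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

K = 20
true_rating, elo_rating, opponent = 1550.0, 1500.0, 1500.0

games = 0
while true_rating - elo_rating > 25.0:   # run until the 50-point gap halves
    # each game, Elo's expected gain is K * (true win prob - predicted win prob)
    elo_rating += K * (expected(true_rating, opponent) - expected(elo_rating, opponent))
    games += 1

print(games)  # roughly two dozen games to close half the gap
```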

That means that to calculate a final rating, the thirds of seasons are effectively weighted in a ratio of 1:2:4. A game in April has four times the weight of a game in November. Last post, I argued why I think that's too high.


Here's another way of illustrating how recency matters. 

I tweaked the simulation to do something a little different. Instead of creating 1,000 different seasons, I created only one season, but randomly reordered the games 1,000 times. The opponents and final scores were the same; only the sequence was different. 

Under the traditional method, the talent estimates would be the same, since all 1,000 teams had the same W-L record. But the Elo ratings varied, because of recency effects. They varied with an SD of about 26 points. That's about .037 in winning percentage, or 3 wins per 82 games.

If you consider the SD to be, in a sense, the "average" discrepancy, that means that, on average, Elo will misestimate a team's talent by 3 wins. That's for teams with the same actual record -- based only on the randomness of *when* they won or lost. 
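A stripped-down version of that reordering experiment is easy to reproduce. The sketch below simplifies my setup -- it fixes a 41-41 record of pick-em games against average (1500) opponents and ignores margin-of-victory adjustments -- so its SD comes out in the low-to-mid 20s in Elo points, the same ballpark as the 26 from the fuller simulation:

```python
import random
import statistics

random.seed(42)

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

K = 20
season = [1] * 41 + [0] * 41    # one fixed 41-41 season of pick-em games

finals = []
for _ in range(1000):
    random.shuffle(season)      # same games, different order
    r = 1500.0
    for result in season:
        # every opponent assumed to be an average (1500) team
        r += K * (result - expected(r, 1500.0))
    finals.append(r)

print(round(statistics.stdev(finals), 1))  # SD of final ratings, in Elo points
```

Every one of the 1,000 "teams" has the identical record; the entire spread comes from the order in which the wins and losses arrive.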

And you can't say, "well, those three wins might be because talent changed over time."  Because, that's just the random part. Any actual change in talent is additional to that. 


If all NBA games were pick-em, the SD of team luck in an 82-game season would be around 4.5 wins. Because there are lots of mismatches, which are more predictable, the actual SD is a bit lower -- say, 4.1 games. 
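The 4.5 figure is just the binomial SD for 82 coin-flip games:

```python
import math

# SD of wins from pure binomial luck: sqrt(n * p * (1 - p)), for 82 coin flips
sd = math.sqrt(82 * 0.5 * 0.5)
print(round(sd, 1))  # 4.5
```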

Elo ratings are fully affected by that 4.1 games of binomial luck, but also by another 3 games worth of luck for the random order in which games are won or lost. 

Why would you want to dilute the accuracy of your talent estimate by adding 3 wins worth of randomness to your SD? Only if you're gaining 3 wins worth of accuracy some other way. Like, for instance, if you're able to capture team changes in talent from the beginning of the season to the end. If teams vary in talent over time, like chess players, maybe weighting recent games more highly could give you a better estimate of a team's new level of skill.

Do teams vary in talent, from the beginning to the end of the year, by as much as 3 games (.037)?

Actually, 3 games is a bit of a red herring. You need more than 3 games of talent variance to make up for the 3 games of sequencing luck.

Because, suppose a team goes smoothly from a 40-win talent at the beginning of the year to a 43-win talent at the end of the year. That team will have won 41.5 games, not 40, so the discrepancy between estimate and talent won't be 3 games, but just one-and-a-half games.

As expected, Elo does improve on the 1.5 game discrepancy you get from the traditional method. I ran the simulation again, and found that Elo picked up about 90% of the talent difference rather than 50%. That means that Elo would peg the (eventual) 43-game talent at 42.7.

For a team that transitions from a 40- to a 43-game talent, the traditional method was off by 1.5 games. The Elo method was off by only 0.3 games. 

It looks like Elo is only a 1.2 game improvement over the traditional method, in its ability to spot changes in talent. But it "costs" a 3-game SD for extra randomness. So it doesn't seem like it's a good deal.

To compensate for the 3-game recency SD, you'd need the average within-season talent change to be much more than 3 games. You'd need it to be 7.5 games.
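Here's where the 7.5 comes from, as a few lines of arithmetic (using the 50% and 90% recovery figures from the simulation above):

```python
# Hypothetical: a team drifts smoothly by T games of talent over the season.
# The traditional method ends up off by T/2; Elo recovers ~90% of the drift,
# so it's off by only 0.1 * T. Elo's accuracy gain is therefore 0.4 * T.
# Set that gain equal to the 3-game sequencing-noise SD Elo pays, solve for T.
noise_sd = 3.0
gain_per_talent_game = 0.5 - 0.1    # 0.4 games of accuracy per game of drift
T = noise_sd / gain_per_talent_game
print(round(T, 1))  # 7.5
```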

Do teams really change in talent, on average, by 7.5/82 games, over the course of a single season? Sure, some teams must, like they have injury problems to their star players. But on average? That doesn't seem plausible.


Besides, what's stopping you from adjusting teams on a case-by-case basis? If Stephen Curry gets hurt ... well, just adjust the Warriors down. If you think Curry is worth 15 games a season, just drop the Warriors' estimate by that much until he comes back.

It's when you try to do things by formula that you run into problems. If you expect Elo to automatically figure out that Curry is hurt, and adjust the Warriors accordingly ... well, sure, that'll happen. Eventually. As we saw, it will take 27 games, on average, until Elo adjusts just for half of Curry's value. And, when he comes back, it'll take 27 games until you get back only half of what Elo managed to adjust by. 

In our example, we assumed that talent changed constantly and slowly over the course of the season. That makes it very easy for Elo to track. But if you lose Curry suddenly, and get him back suddenly 27 games later ... then Elo isn't so good. If losing Curry is worth -.100 in winning percentage, Elo will start at .000 Curry's first game out, and only reach -.050 by Curry's 27th game out. Then, when he's back, Elo will take another 27 games just to bounce back from -.050 to -.025.

In other words, Elo will be significantly off for at least 54 games. Because Elo does weight recent games more heavily, it'll still be better than the traditional method. But neither method really distinguishes itself. When you have a large, visible shock to team talent, I don't see why you wouldn't just adjust for it based on fundamentals, instead of waiting a whole season for your formula to figure it out.


Anyway, if you disagree with me, and believe that team talent does change significantly, in a smooth and gradual way, here's how you can prove it.

Run a regression to predict a team's last 12 games of the season, from their previous seven ten-game records (adjusted for home/road and opponent talent, if you can). 

You'll get seven coefficients. If the seventh group has a significantly higher coefficient than the first group, then you have evidence it needs to be weighted higher, and by how much.
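To show the mechanics, here's a sketch of that regression on synthetic data -- every number here is made up for illustration, not real NBA data. Since the synthetic talent never changes within a season, all seven coefficients should come out roughly equal; that's what the "no recency effect" null looks like, and a real recency effect would tilt the later coefficients upward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 team-seasons whose talent never changes
n = 500
talent = rng.uniform(0.3, 0.7, n)

def block_pct(p, games):
    # observed winning percentage over a block of games with true talent p
    return rng.binomial(games, p) / games

X = np.column_stack([block_pct(talent, 10) for _ in range(7)])  # seven 10-game blocks
y = block_pct(talent, 12)                                       # last 12 games

X1 = np.column_stack([np.ones(n), X])             # add an intercept
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(np.round(coefs[1:], 3))                     # the seven block weights
```

With real data you'd substitute actual team game logs (adjusted for home court and opponent) in place of the simulated blocks, and look at the ratio of the last coefficient to the first.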

If the weight for the last group turns out to be three or four times as high as the weight for the first group, then you have evidence that Elo might be doing it right after all.

I doubt that would happen. I could be wrong. 


Wednesday, November 01, 2017

Does previous playoff experience matter in the NBA?

Conventional wisdom says that playoff experience matters. All else being equal, players who have been in the post-season before are more capable of adapting to the playoff environment -- the pressure, the intensity, the refereeing, and so on.

FiveThirtyEight now has a study they say confirms this in the NBA:

"In the NBA postseason since 1980, the team with the higher initial Elo rating has won 74 percent of playoff series. But if a team has both a higher Elo rating and much more playoff experience, that win percentage shoots up to 86 percent. Conversely, teams with the higher Elo rating but much less playoff experience have won just 52 percent of playoff series. These differences are highly statistically and practically significant."

I don't dispute their results, but I disagree that they have truly found evidence that playoff experience matters.


There's a certain amount of random luck in the outcomes of games. NBA results have less luck than most other sports, but there's significant randomness nonetheless.

So, if you have two teams, each of which finishes with identical records and identical Elo ratings, they're not necessarily equal in talent. One team is probably better than the other, but just had worse luck in the regular season. That's the team from which you would expect better playoff performance. 

But how do you tell them apart? If you have two teams, each of which finishes 55-27, with an Elo of (say) 1600, how can you tell which one is better?

One way is to look at their previous season records. If team A was .500 last year, while team B was .650, it's more likely that B is better. Sure, maybe not: it could be that team B lost a hall-of-famer in the off-season, while team A got a superstar back from injury. But, most of the time, that didn't happen, and you're going to find that team B is still the better team.

If you're looking at last season, the team with the better record is probably the team that got farther in the playoffs. And the team that got farther in the playoffs is probably the one whose players have more playoff experience.

So, when FiveThirtyEight notices that teams with playoff experience tend to outperform Elo expectations, it's not necessarily the actual playoff experience that's the cause. It could be that it's actually that the teams are better than their ratings -- a situation that correlates with playoff experience.

And, of course, "team being better" is a much more plausible explanation for good performance than "team has more playoff experience."


Here's a possible counterargument. 

The study doesn't just look at players' *last year's* playoff experience -- it looks at their *career* playoff experience. You'd think that would dilute the effect, somewhat. But, still. Teams tend to stay good or bad for a few years before you could say their talent has changed significantly. Also, players with a lot of playoff experience, even with other teams, are more likely to be good players, and good players tend to play for more talented teams (even if all that makes the team "more talented" comes from them).


Another counterargument might be: well, the previous season's performance is already baked into Elo. If a team did well last season, it starts out with a higher rating than a team that didn't. So, checking again what a team did last season shouldn't make any difference. It would be like checking which team played better on Mondays. That shouldn't matter, because Elo's conclusion that the teams are equal has already used the results of Monday games. 

That's a strong argument, and it would hold up if Elo did, in fact, give last season the appropriate consideration. But I don't think it does. When it calculates the rating, Elo gives previous seasons a very low weighting.

I did a little simulation (which I'll describe next post), and found that, when two NBA teams start with different Elo ratings, but perform identically, half the difference is wiped out after about 27 games.

So, team A starts out with a rating of 1500 (projection of 41-41). Team B starts out with 1600 (52-30). After 27 games playing identically against identical opponents, the Elo difference drops from 100 points to 50. 

After a second 27 games, the difference gets cut in half again, and the teams are now only 25 points apart. After a third 27 games, the difference cuts in half again, to around 12 points. That takes us to 81 games, roughly an NBA season. 

So, at the beginning of the season, Elo thought team B was 100 points better than team A. Now, because both teams had equal seasons, Elo thinks B is only 12 points better than A.

And that's even after considering personnel changes between seasons. If the two ratings started out 100 points apart, their performance last season was actually 133 points apart, because FiveThirtyEight regresses a quarter the way to the mean during the off-season, to account for player aging and roster changes. 

133 points is about 15 games out of 82. So, last year B was 56-26, while A was 41-41. They now have identical 49-33 seasons, and Elo thinks B is only 1.4 games better than A.
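Both conversions follow from the Elo odds formula. A quick sketch:

```python
def elo_pct(diff):
    # winning percentage implied by an Elo rating gap of `diff` points
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

def games_better(diff, season=82):
    # how many games above .500 that gap is worth over a full season
    return (elo_pct(diff) - 0.5) * season

print(round(games_better(133)))     # the 133-point pre-season gap: 15 games
print(round(games_better(12), 1))   # the 12-point end-of-season gap: 1.4 games
```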

In other words, after a combined 162 games, the system thinks A was less than two games luckier than B.

That seems like too much convergence. 


Under the FiveThirtyEight system, previous years' performance contributes only 12 percent of the rating; this year's performance is the remaining 88 percent.  And that's *in addition* to adjusting for personnel changes off-season. 

That's far lower than traditional sabermetric standards. For baseball player performance, Bill James (and Tango, and others) have traditionally put previous seasons at 50 percent. They use a "1/2/3" weighting. By contrast, FiveThirtyEight's NBA system is effectively using "1/5/44".

Of course, the "1/2/3" is for players; for teams, it should be lower, because of personnel changes. Especially in the NBA, where personnel changes make a much bigger difference because a superstar has such a big impact. 

But, still, 12 percent is far too little weight to give to previous NBA seasons. That's why, when you want to know whose Elo ratings are unlucky, you actually do add valuable information about talent, by checking which teams played much better the last few years than they did this year. 

And that's why playoff experience seems to matter. It correlates highly with teams that did well in the past. 


I could be wrong; it shouldn't be too hard to test. 

1. If this hypothesis is right, then playoff experience won't just predict playoff success; it will also predict late regular season success, because the same logic holds. That might be tricky to test because some teams might not give their stars as many minutes in those April games. But you could still check using teams that are fighting for a playoff spot.

2. Instead of breaking Elo ties by looking at previous playoff experience, look instead at the teams' start-of-season rating. I bet you find an even larger effect. And I bet that after you adjust for that, the apparent effect of playoff experience will be much smaller.

3. For teams whose Elo ratings at the end of the season are close to their ratings at the beginning of the season, I predict that the apparent effect of playoff experience will be much smaller. That's because those teams will tend to have been less lucky or unlucky, so you won't need to look as hard at previous performance (playoff experience) to counterbalance the luck.

4. Instead of using Elo as an estimate of team talent entering the playoffs, estimate talent from Vegas odds in April regular-season games (or late-March games, if you're worried about teams who bench their stars in April). I predict you'll find much less of a "playoff experience" effect.

Hat Tip and thanks: GuyM, for link and e-mail discussion


Friday, August 04, 2017

Deconstructing an NBA time-zone regression

Warning: for regression geeks only.


Recently, I came across an NBA study that found an implausibly huge effect of teams playing in other time zones. The study uses a fairly simple regression, so I started thinking about what could be happening. 

My point here isn't to call attention to the study, just to figure out the puzzle of how such a simple regression could come up with such a weird result. 


The authors looked at every NBA regular-season game from 1991-92 to 2001-02. They tried to predict which team won, using these variables:

-- indicator for home team / season
-- indicator for road team / season
-- time zones east for road team
-- time zones west for road team

The "time zones" variable was set to zero if the game was played in the road team's normal time zone, or if it was played in the opposite direction. So, if an east-coast team played on the west coast, the west variable would be 3, and the east variable would be 0.

The team indicators are meant to represent team quality. 


When the authors ran the regression, they found the "number of time zones" variable large and statistically significant. For each time zone moving east, teams played .084 better than expected (after controlling for teams). A team moving west played .077 worse than expected. 

That means a .500 road team on the West Coast would actually play .756 ball on the East Coast. And that's regardless of how long the visiting team has been in the home team's time zone. It could be a week or more into a road trip, and the regression says it's still .756.

The authors attribute the effect to "large, biological effects of playing in different time zones discovered in medicine and physiology research." 


So, what's going on? I'm going to try to get to the answer, but I'll start with a couple of dead ends that nonetheless helped me figure out what the regression is actually doing. I should say in advance that I can't prove any of this, because I don't have their data and I didn't repeat their regression. This is just from my armchair.

Let's start with this. Suppose it were true that, for physiological reasons, teams always play worse going west, and teams always play better going east. If that were the case, how could you ever know? No matter what you see in the data, it would look EXACTLY like the West teams were just better quality than the East teams. (Which they have been, lately.)  

To see that argument more easily: suppose the teams on the West Coast are of true major-league quality. The MST teams are minor-league AAA. The CST teams are AA. And the East Coast teams are minor-league A ball. But all the leagues play against each other.

In that case, you'd see exactly the pattern the authors got: teams are .500 against each other in the same time zone, but worse when they travel west to play against better leagues, and better when they travel east to play against worse leagues.

No matter what results you get, there's no way to tell whether it's time zone difference, or team quality.

So is that the issue, that the regression is just measuring a quality difference between teams in different time zones? No, I don't think so. I believe the "time zone" coefficient of the regression is measuring something completely irrelevant (and, in fact, random). I'll get to that in a bit. 


Let's start by considering a slightly simpler version of this regression. Suppose we include all the team indicator variables, but, for now, we don't include the time-zone number. What happens?

Everything works, I think. We get decent estimates of team quality, both home and road, for every team/year in the study. So far, so good. 

Now, let's add a bit more complexity. Let's create a regression with two time zones, "West" and "East," and add a variable for the effect of that time zone change.

What happens now?

The regression will fail. There's an infinite number of possible solutions. (In technical terms, the regression matrix is "singular."  We have "collinearity" among the variables.)

How do we know? Because there's more than one set of coefficients that fits the data perfectly. 

(Technical note: a regression will always fail if you have an indicator variable for every team. To get around this, you'll usually omit one team (and the others will come out relative to the one you omitted). The collinearity I'm talking about is even *after* doing that.)

Suppose the regression spit out that the time-zone effect is actually  .080, and it also spit out quality estimates for all the teams.

From that solution, we can find another solution that works just as well. Change the time-zone effect to zero. Then, add .080 to the quality estimate of every West team. 

Every team/team estimate will wind up working out exactly the same. Suppose, in the first result, the Raptors were .400 on the road, the Nuggets were .500 at home, and the time-zone effect is .080. In that case, the regression will estimate the Raptors at .320 against the Nuggets. (That's .400 - (.500 - .500) - .080.)

In the second result, the regression leaves the Raptors at .400, but moves the Nuggets to .580, and the time-zone effect to zero. The Raptors are still estimated at .320 against the Nuggets. (This time, it's .400 - (.580 - .500) - .000.)

You can create as many other solutions as you like that fit the data identically: just add any X to the time-zone estimate, and add the same X to every Western team.
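The Raptors-Nuggets arithmetic makes the collinearity concrete. A sketch using the hypothetical prediction rule from the example (road team's road percentage, minus the home team's edge over an average home team, minus the time-zone effect):

```python
def predicted(road_pct, home_edge_vs_avg, tz_effect):
    # hypothetical prediction rule from the example above
    return road_pct - home_edge_vs_avg - tz_effect

# Solution 1: Nuggets rated .500 at home, time-zone effect .080
s1 = predicted(0.400, 0.500 - 0.500, 0.080)
# Solution 2: Nuggets shifted up to .580, time-zone effect folded to zero
s2 = predicted(0.400, 0.580 - 0.500, 0.000)

print(round(s1, 3), round(s2, 3))  # 0.32 0.32 -- identical fits
```

Two different parameter sets, one identical set of predictions: the data can't choose between them, which is exactly what a singular regression matrix means.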

The regression is able to figure out that the data doesn't give a unique solution, so it craps out, with a message that the regression matrix is singular.


All that was for a regression with only two time zones. If we now expand to include all four zones, that gives six different effects each direction (E moving to C, C to M, M to P, E to M, C to P, and E to P). What if we include six time-zone variables, one for each effect?

Again, we get an infinity of solutions. We can produce new solutions almost the same way as before. Just take any solution, subtract X from each E team quality, and add X to the E-C, E-M and E-P coefficients. You wind up with the same estimates.


But the authors' regression actually did have one unique best fit solution. That's because they did one more thing that we haven't done.

We can get to their regression in two steps.

First, we collapse the six variables into three -- one for "one time zone" (regardless of which zone it is), one for "two time zones," and one for "three time zones". 

Second, we collapse those three variables into one, "number of time zones," which implicitly forces the two-zone effect and three-zone effect to be double and triple, respectively, the value of the one-zone effect. I'll call that the "x/2x/3x rule" and we'll assume that it actually does hold.

So, with the new variable, we run the regression again. What happens?

In the ideal case, the regression fails again. 

By "ideal case," I mean one where all the error terms are zero, where every pair of teams plays exactly as expected. That is, if the estimates predict the Raptors will play .350 against the Nuggets, they actually *do* play .350 against the Nuggets. It will never happen that every pair will go perfectly in real life, but maybe assume that the dataset is trillions of games and the errors even out.

In that special "no errors" case, you still have an infinity of solutions. To get a second solution from a first, you can, for instance, double the time zone effects from x/2x/3x to 2x/4x/6x. Then, subtract x from each CST team, subtract 2x from each MST team, and subtract 3x from each PST team. You'll wind up with exactly the same estimates as before.


For this particular regression to not crap out, there have to be errors. Which is not a problem for any real dataset. The Raptors certainly won't go the exact predicted .350 against the Nuggets, either because of luck, or because it's not mathematically possible (you'd need to go 7-for-20, and the Raptors aren't playing 20 games a season in Denver).

The errors make the regression work.

Why? Before, x/2x/3x fit all the observations perfectly. So you could create duplicate solutions by shifting the team estimates by X, 2X, and 3X, and shifting the one-, two-, and three-zone effects to match. Now, because of errors, the observed two- and three-zone effects aren't exactly double and triple the one-zone effects. So not everything cancels out, and you get different residuals. 

That means that this time there's a unique solution, and the regression spits it out.


In this new, valid, regression, what's the expected value of the estimate for the time-zone effect?

I think it must be zero.

The estimate of the coefficient is a function of the observed error terms in the data. But the errors are, by definition, just as likely to be negative as positive. I believe (but won't prove) that if you reverse the signs of all the error terms, you also reverse the sign of the time zone coefficient estimate.

So, the coefficient is as likely to be negative as positive, which means by symmetry, its expected value must be zero.

In other words: the coefficient in the study, the one that looks like it's actually showing the physiological effects of changing time zone ... is actually completely random, with expected value zero.

It literally has nothing at all to do with anything basketball-related!


So, that's one factor that's giving the weird result, that the regression is fitting the data to randomness. Another factor, and (I think) the bigger one, is that the model is wrong. 

There's an adage, "All models are wrong; some models are useful." My argument is that this model is much too wrong to be useful. 

Specifically, the "too wrong" part is the requirement that the time-zone effect must be proportional to the number of zones -- the "x/2x/3x" assumption.

It seems reasonable to assume the effect should be proportional to the time lag. But if it isn't, the assumption can distort the results quite a bit. Here's a simplified example showing how that distortion can happen.

Suppose you run the regression without the time-zone coefficient, get talent estimates for the teams, and compare predicted results to actual ones. For East teams, suppose you find the errors are

+.040 against Central
+.000 against Mountain
-.040 against Pacific

That means that East teams played .040 better than expected against Central teams (after adjusting for team quality). They played exactly as expected against Mountain Time teams, and .040 worse than expected against West Coast teams.

The average of those numbers is zero. Intuitively, you'd look at those numbers and think: "Hey, there's no appreciable time-zone effect. Sure, the East teams lost a little more than normal against the Pacific teams, but they won a little more than normal against the Central teams, so it's mostly a wash."

Also, you'd notice that it really doesn't look like the observed errors follow x/2x/3x. The closest fit seems to be when you make x equal to zero, to get 0/0/0.

So, does the regression see that and spit out 0/0/0, accepting the errors it found? No. It actually finds a way to make everything fit perfectly!

To do that, it increases its talent estimate for every Eastern team by .080. Now, every East team appears to underperform against the other three time zones by .080 on average, which means the observed errors are now

-.040 against Central
-.080 against Mountain
-.120 against Pacific

And that DOES follow the x/2x/3x model -- which means you can now fit the data perfectly. Using 0/0/0, the .500 Raptors were expected to be .500 against an average Central team (.500 minus 0), but they actually went .540. Using -.040/-.080/-.120, the .580 Raptors are expected to be .540 against an average Central team (.580 minus .040), and that's exactly what they did.

So the regression says, "Ha! That must be the effect of time zone! It follows the x/2x/3x requirement, and it fits the data perfectly, because all the errors now come out to zero!"
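You can verify the arithmetic of that toy example directly. These are the illustrative numbers from above (an East team against average .500 opponents), not real NBA data:

```python
# An East team's observed record vs. an average (.500) team in each zone.
observed = {"CST": 0.540, "MST": 0.500, "PST": 0.460}
zones_crossed = {"CST": 1, "MST": 2, "PST": 3}

def residuals(talent, x):
    # Error = observed minus predicted, where predicted is the team's
    # talent minus x per zone crossed (opponent is a .500 team).
    return {z: observed[z] - (talent - x * zones_crossed[z])
            for z in observed}

r0 = residuals(0.500, 0.000)   # the 0/0/0 fit: errors +.040, .000, -.040
r1 = residuals(0.580, 0.040)   # inflated talent plus x = .040: all zero

assert abs(r0["CST"] - 0.040) < 1e-9 and abs(r0["PST"] + 0.040) < 1e-9
assert all(abs(v) < 1e-9 for v in r1.values())
print("talent .580 with x = .040 fits the example perfectly")
```

The honest fit (talent .500, no zone effect) leaves small errors; the distorted fit (talent .580, x = .040) leaves none, so least squares prefers the distorted one.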

So you conclude that 

(a) over a 20-year period, the East teams were .580 teams but played down to .500 because they suffered from a huge time-zone effect.

Well, do you really want to believe that? 

You have at least two other options you can justify: 

(b) over a 20-year period, the East teams were .500 teams and there was a time-zone effect of +40 points playing in CST, and -40 points playing in PST, but those effects weren't statistically significant.

(c) over a 20-year period, the East teams were .500 teams and due to lack of statistical significance and no obvious pattern, we conclude there's no real time-zone effect.

The only reason to choose (a) is if you are almost entirely convinced of two things: first, that x/2x/3x is the only reasonable model to consider; and, second, that an effect of 40/80/120 points is plausible enough that you don't write it off as random crap in spite of its statistical significance.

You have to abandon your model at this point, don't you? I mean, I can see how, before running the regression, the x/2x/3x assumption seemed as reasonable as any. But, now, to maintain that it's plausible, you have to also believe it's plausible that an Eastern team loses .120 points of winning percentage when it plays on the West Coast. Actually, it's worse than that! The .120 was from this contrived example. The real data shows a drop of more than .200 when playing on the West Coast!

The results of the regression should change your mind about the model, and alert you that the x/2x/3x is not the right hypothesis for how time-zone effects work.


Does this seem like cheating? We try a regression, we get statistically-significant estimates, but we don't like the result so we retroactively reject the model. Is that reasonable?

Yes, it is. Because you have to either reject the model or accept its implications. If we accept the model, then we're forced to accept that there's a 240-point West-to-East time-zone effect, and we're forced to accept that West Coast teams that play at a 41-41 level against other West Coast teams somehow raise their game to a 61-21 level against East Coast teams that are their equals on paper.

Choosing the x/2x/3x model led you to an absurd conclusion. Better to acknowledge that your model, therefore, must be wrong.

Still think it's cheating? Here's an analogy:

Suppose I don't know how old my friend's son is. I guess he's around 4, because, hey, that's a reasonable guess, from my understanding of how old my friend is and how long he's been married. 

Then, I find out the son is six feet tall.

It would be wrong for me to keep my assumption, wouldn't it? I can't say, "Hey, on the reasonable model that my friend's son is four years old, the regression spit out a statistically significant estimate of 72 inches. So, I'm entitled to conclude my friend's son is the tallest four-year-old in human history."

That's exactly what this paper is doing.  

When your model spews out improbable estimates for your coefficients, the model is probably wrong. To check, try a different, still-plausible model. If the result doesn't hold up, you know the conclusions are the result of the specific model you chose. 


By the way, if the statistical significance concerns you, consider this. When the authors repeated the analysis for a later group of years, the time-zone effect was much smaller. It was .012 going east and -.008 going west, which wasn't even close to statistical significance.

If the study had combined both samples into one, it wouldn't have found significance at all.

Oh, and, by the way: it's a known result that when you have strong correlation in your regression variables (like here), you get wide confidence intervals and weird estimates (like here). I posted about that a few years ago.  


The original question was: what's going on with the regression, that it winds up implying that a .500 team on the West Coast is a .752 team on the East Coast?

The summary is: there are three separate things going on, all of which contribute:

1.  there's no way to disentangle time zone effects from team quality effects.

2.  the regression only works because of random errors, and the estimate of the time-zone coefficient is only a function of random luck.

3.  the x/2x/3x model leads to conclusions that are too implausible to accept, given what we know about how the NBA works. 
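Point 1 can be seen directly in the design matrix. This sketch assumes the simplified signed version of the model from my examples (one hypothetical team per zone, a single "zones crossed" covariate): that covariate turns out to be an exact linear combination of the team dummies, so the matrix is rank-deficient.

```python
import numpy as np
from itertools import permutations

# Hypothetical four-team league, one team per zone (0 = EST ... 3 = PST).
zone = {"TOR": 0, "CHI": 1, "DEN": 2, "LAL": 3}
teams = list(zone)

rows = []
for away, home in permutations(teams, 2):
    dummies = [0.0] * len(teams)
    dummies[teams.index(away)] = 1.0    # +1 for the away team
    dummies[teams.index(home)] = -1.0   # -1 for the home team
    # Signed "zones crossed" covariate: positive when traveling west.
    rows.append(dummies + [zone[home] - zone[away]])

X = np.array(rows)
# The zone column equals minus (the team dummies weighted by each team's
# zone number), so it adds nothing to the column space: the design matrix
# is rank-deficient, and zone effects can't be separated from team quality.
print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns independent")
assert np.linalg.matrix_rank(X) < X.shape[1]
```

Any zone effect you postulate can be absorbed into the team constants (and vice versa), which is exactly the infinity-of-solutions problem from the ideal case.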


UPDATE, August 6/17: I got out of my armchair and built a simulation. The results were as I expected. The time-zone effect I built in wound up absorbed by the team constants, and the time-zone coefficient varied around zero in multiple runs.
