Monday, October 01, 2018

When is defense more valuable than offense?

Is it possible, as a general rule, for a run prevented to be worth more than a run scored?

I don't think so. 

Suppose every team in the league scored one fewer run, and allowed one fewer run. If runs prevented were more valuable than runs scored, every team would improve. But, then, the league would no longer balance out to .500.

But the values of offensive and defensive runs *are* different for individual teams.

Suppose a team scores 700 runs and allows 600. That's an expected winning percentage of .57647 (Pythagoras, exponent 2). 

Suppose it gains a run of offense, so it scores 701 instead of 700. At 701-600, its expectation becomes .57717, an improvement of .00070.

Now, instead, suppose its extra run comes on defense, and it goes 700-599. Now, its expectation is .57728, an improvement of .00081.

So, for that team, the run saved is more valuable than the run scored.

It turns out that if a team scores more than it allows, a run on defense is more valuable than a run on offense. If a team allows more than it scores, the opposite is true. 
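That crossover is easy to verify directly from the Pythagorean formula. Here's a minimal sketch (the function name is my own):

```python
def pythag(rs, ra, exponent=2):
    """Pythagorean expected winning percentage from runs scored and allowed."""
    return rs ** exponent / (rs ** exponent + ra ** exponent)

base = pythag(700, 600)                        # .57647
extra_run_scored = pythag(701, 600) - base     # ~.00070
extra_run_saved  = pythag(700, 599) - base     # ~.00081

print(round(base, 5), round(extra_run_scored, 5), round(extra_run_saved, 5))
```

Flip the team around to 600 runs scored and 700 allowed, and the comparison reverses: now the extra run scored is worth more than the extra run saved.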


Just recently, I figured out an intuitive way to show why that happens, without having to use Pythagoras at all. I'm going to switch from baseball to hockey, because if you assume that goals scored have a Poisson distribution, the explanation works out easier.

Suppose the Edmonton Oilers score 5 goals per game, and allow 4. If they improve their offense by a goal a game, the 5-4 advantage becomes 6-4. If they improve their defense by a goal, the 5-4 becomes 5-3.

Which is better? 

Even though both scenarios have the Oilers scoring an average two more goals than the opposition, that doesn't happen every game, because there's random variation in how the goals are distributed among the games. With zero variation, the Oilers win every game 5-3 or 6-4. But, with the kind of variation that actually occurs, there's a good chance that the Oilers will lose some games. 

For instance, Edmonton might win one game 7-1, but lose the next 5-3. Over those two games, the Oilers do indeed outscore their opponents by two goals a game, on average, but they lose one of the two games.

The average is "Oilers finish the game +2". The Oilers fail to win when the result falls short of that average by at least two goals -- in other words, when the result varies from expectation by -2 goals or more.

The more variation around the mean of +2, the greater the chance the Oilers lose. Which means the team with the advantage wants less variation in scores, and the underdog wants more variation.

Now, let's go to the assumption that goals follow a Poisson distribution.*  

(*Poisson is the distribution you get if you assume that in any given moment, each team has its own fixed probability of scoring, independent of what happened before. In hockey, that's a reasonable approximation -- not perfect, but close enough to be useful.)

For a Poisson distribution, the SD of the difference in goals is exactly the square root of the total goals scored.

In the 5-3 case, the SD of goal differential is the square root of 8. In the 6-4 case, the SD is the square root of 10. Since root-10 is higher than root-8, the underdog should prefer 6-4, but the favored Oilers should prefer 5-3.

Which means, for the favorite, a goal of defense is more valuable than a goal of offense.
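A quick simulation backs this up. Here's a sketch, assuming independent Poisson scoring (the game counts and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated games per scenario

def win_pct(gf_rate, ga_rate):
    """Winning percentage (ties count half) when goals are independent Poissons."""
    gf = rng.poisson(gf_rate, n)
    ga = rng.poisson(ga_rate, n)
    return (np.sum(gf > ga) + 0.5 * np.sum(gf == ga)) / n

print("6-4:", win_pct(6, 4))   # more total goals, more variance
print("5-3:", win_pct(5, 3))   # same +2 margin, less variance, more wins
```

The 5-3 scenario should come out a couple of percentage points higher for the favorite, and the SDs of the goal differentials land near root-8 and root-10, as the Poisson math says they should.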

This "proof" is only for Poisson, but, for the other sports, the same logic holds. In baseball, football, soccer, and basketball, the more goals/runs/points per game, the more variation around the expectation.

Think about what a two goal/point/run spread means in the various sports leagues. In the NBA, where 200 points are scored per game, a 2-point spread is almost nothing. In the NFL, it means more. In MLB, it means a lot more. In the NHL, more still. And, in soccer, where the average is fewer than three goals per game, a two-goal advantage is almost insurmountable.


Thursday, September 13, 2018

Are soccer goals scored less valuable than goals prevented?

During this year's World Cup of Soccer, I found a sabermetric soccer book discounted at a Toronto bookstore. It's called "The Numbers Game," and subtitled "Why Everything You Know About Soccer Is Wrong."

Actually, I don't know that much about soccer, but much of the book fails to convince me -- for instance, when the authors argue that defense is more important than offense:

"To see if attacking leads to more wins, and whether defense leads to fewer wins and more draws, we conducted a set of rigorous, sophisticated regression analyses on our Premier League data."

As far as I can tell, the regressions tried to predict team wins based on team goals scored and conceded. The results:

0.230 wins -- value of a goal scored
0.216 wins -- value of a goal conceded

The authors write,

"That means goals created and goals prevented contribute about equally to manufacturing wins in English soccer."

But, when it came to losses:

0.176 losses -- value of a goal scored
0.235 losses -- value of a goal conceded


"... defense provides a more powerful statistical explanation for why teams lose. ... when it came to avoiding defeat, the goals that clubs didn't concede were each 33 percent more valuable than the goals they scored."


The authors argue that 

(a) goals scored and conceded contribute equally to wins;
(b) goals conceded contribute more to losses than goals scored.

Except ... aren't those results logically inconsistent with each other?

Suppose you look at the last 20 games where Chelsea faced Arsenal. From (b), you would deduce, 

If Chelsea had scored one goal fewer, but also conceded one goal fewer, they'd probably have had fewer losses.

That's because, according to the authors' numbers, the lost goal would have cost Chelsea 0.176 losses, but the goal prevented would have saved them 0.235 losses. Net gain: 0.059 fewer losses.

But Chelsea's goals scored are Arsenal's goals conceded, and vice versa. Also, Chelsea's losses are Arsenal's wins, and vice versa. So, you can rephrase that last quote as,

If Arsenal had conceded one goal fewer, but also scored one goal fewer, they'd probably have had fewer wins.

Except ... the authors just argued that goals scored and conceded are *equal* in terms of wins.

Without realizing it, the book simultaneously makes two contradictory arguments!


So why did the coefficients for goals scored and goals allowed come out so different in the regression? I think it's just random chance.

If a team scores 20 goals and concedes 20 goals, you'd expect them to win as many games as they lose. But that might not happen if the goals aren't evenly distributed over games. For instance, the team might have lost 19 games by a score of 1-0, while winning a 20th game 20-1. 

In other words, team wins and losses vary randomly from their goal differential expectation. If the teams that underperformed happened to be teams that scored more than they conceded, and the teams that overperformed happened to be teams that conceded more than they scored ... in that case, the regression notices that overperformance is correlated with defense, and adjusts accordingly. And you wind up with the result the authors got.

(Another source of error is that performance isn't linear in terms of goals; it's pythagorean. But that's probably a minor issue compared to simple randomness.)

I'd bet that, for the "wins" regression, there was no pattern for which teams randomly outperformed their win projections. But for the "losses" regression, there *was* that kind of pattern, where the teams with better defense did lose fewer games than projected.

I'd bet that if you grouped the games differently, and reran the regression, you'd get a different result. Instead of your regression rows being team-based, like "Chelsea's 38 games from 2007-08," make them time-based, like "the first four weeks of the 2007-08 schedule." That will scramble up the projection anomalies a different way, and I'd bet that the four coefficient estimates wind up much closer to each other.
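That hunch about randomness in the coefficients is easy to illustrate with a simulation. This is entirely my own sketch -- a made-up double round-robin league with Poisson scoring, not the authors' Premier League data:

```python
import numpy as np

rng = np.random.default_rng(42)
N_TEAMS = 20

def simulate_season():
    """One double round-robin season; returns goals for, goals against, losses."""
    rates = rng.uniform(1.0, 2.0, N_TEAMS)  # each team's true scoring rate
    gf = np.zeros(N_TEAMS); ga = np.zeros(N_TEAMS); losses = np.zeros(N_TEAMS)
    for i in range(N_TEAMS):
        for j in range(N_TEAMS):
            if i == j:
                continue                     # each ordered pair = one game
            gi, gj = rng.poisson(rates[i]), rng.poisson(rates[j])
            gf[i] += gi; ga[i] += gj
            gf[j] += gj; ga[j] += gi
            if gi > gj:
                losses[j] += 1
            elif gj > gi:
                losses[i] += 1
    return gf, ga, losses

def loss_coefficients():
    """Regress team losses on goals scored and goals conceded."""
    gf, ga, losses = simulate_season()
    X = np.column_stack([np.ones(N_TEAMS), gf, ga])
    beta, *_ = np.linalg.lstsq(X, losses, rcond=None)
    return beta[1], beta[2]  # (coef on goals scored, coef on goals conceded)

for _ in range(5):
    print(loss_coefficients())
```

Run it a few times and the two magnitudes bounce around from season to season, even though the underlying process treats offense and defense completely symmetrically -- which is exactly the kind of chance asymmetry that could produce the authors' result.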


Thursday, May 03, 2018

NHL referees balance penalty calls between teams

A finding from Michael Lopez shows that the next penalty in an NHL game is significantly less likely to go to the team that's had more penalties so far in the game.

That was a new finding to me. A few years ago, I found that the next penalty is more likely to go to the team that had the (one) most recent penalty -- but I hadn't realized that quantity matters, too.

(My previous research can be found here: part one, two, three.)

So, I dug out my old hockey database to see if I could extend Michael's results. All the findings here are based on the same data as my other study -- regular season NHL games from 1953-54 to 1984-85, as provided by the Hockey Summary Project as at the end of 2011.


Quickly revisiting the old finding: referees do appear to call "make-up" penalties. The team that got the benefit of the most recent power play is almost 50 percent more likely to have the next call go against them. That team got the next penalty 59.7% of the time, versus only 40.3% for the previously penalized team.

39599/98167 .403 -- team last penalized
58568/98167 .597 -- other team

Now, let's look at total numbers of penalties instead. I've split the data into home and road teams, because road teams do get more penalties -- 52 percent vs. 48 percent overall.  (That difference is mitigated by the fact that referees balance out the calls. The first penalty of the game goes to the road team 54 percent of the time. The drop from 54 percent for the first call, down to 52 percent overall, is due to the referees balancing out the next call or calls.)

So far, nothing exciting. But here's something. It turns out that the *second* call of the game is much more likely than average to be a makeup call:

.703 -- visiting penalty after home penalty
.297 -- home penalty after home penalty

.653 -- home penalty after visiting penalty
.347 -- visiting penalty after visiting penalty

Those numbers are huge. Overall, there are more than twice as many "make up" calls as "same team" calls.

In this case, quantity and recency are the same thing. Let's move on to the third penalty of the game, where they can be different.  From now on, I'll show the results in chart form:

.705 0-2 
.462 1-1
.243 2-0

Here's how to read the chart: when the home team has gone "0-2" in penalties -- that is, both previous penalties to the visiting team -- it gets 70.5% of the third penalties. When the previous two penalties were split, the home team gets 46.2%, similar to the overall average. When the home team got both previous penalties, though, it draws the third only 24.3% of the time (in other words, the visiting team drew 75.7%).

Here's the fourth penalty. I've added sample sizes, in parentheses.

.701 0-3 (755)
.559 1-2 (6951)
.373 2-1 (5845)
.261 3-0 (468)

It's a very smooth progression, from .701 down to .261, exactly what you would expect given that make-up calls are so common. 

Here's the fifth penalty:

.677 0-4 ( 195)
.619 1-3 (3244)
.465 2-2 (6950)
.351 3-1 (2306)
.316 4-0 ( 117)

That's the chart that corresponds to Michael Lopez's finding, and these numbers are pretty close to his.

Sixth penalty:

.667 0-5 (  48)
.637 1-4 (1182)
.520 2-3 (4930)
.413 3-2 (4134)
.323 4-1 ( 773)
.226 5-0 (  31)

Again, the percentages drop every step ("monotonically," as they say in math).

Seventh penalty:

.692 0-6 (  13)

.585 1-5 ( 369)
.577 2-4 (2528)
.489 3-3 (4140)
.399 4-2 (1798)
.379 5-1 ( 219)
.200 6-0 (  13)

Eighth penalty:

.667 0-7 (   3)
.607 1-6 ( 122)
.588 2-5 ( 969)
.527 3-4 (2721)
.422 4-3 (2414)
.374 5-2 ( 652)
.412 6-1 (  68)
.000 7-0 (   1)

Still almost a perfect pattern -- the only blip is the tiny-sample 6-1 row. It breaks up just a little bit more here, for the ninth penalty, but that's probably just small sample size.

.000 0-8 (   1)
.553 1-7 (  38)
.586 2-6 ( 348)
.566 3-5 (1358)
.484 4-4 (2063)
.392 5-3 (1037)
.340 6-2 ( 191)
.333 7-1 (  21)

(This is getting boring, so here's a technical note to break the monotony. I included all penalties, including misconducts. I omitted all cases where both teams took a penalty at the same time, even if one team took more penalties than the other. In fact, I treated those as if they never happened, so they don't break the string. This may cause the results to be incorrect in some cases: for instance, maybe Boston takes a minor, then there's a fight and Montreal gets a major and a minor while Boston gets only a major. Then, Montreal takes a minor. In that case, the study will treat the Montreal minor as a make-up call, when it's really not. I think this happens infrequently enough that the results are still valid.)

I'll give two more cases. Here's the twelfth penalty:

.692 2-9 ( 13)
.623 3-8 ( 61)
.532 4-7 (250)
.506 5-6 (478)
.488 6-5 (459)
.449 7-4 (198)
.457 8-3 ( 35)
.200 9-2 (  5)

Almost perfect.  But ... the pattern does seem to break down later on, at the 14th to 16th penalty (I stopped at 16), probably due to sample size issues. Here's the fourteenth, which I think is the most random-looking of the bunch. You could almost argue that it goes the "wrong way":

.000  2-11 (  1)
.375  3-10 (  8)
.333  4- 9 ( 27)
.516  5- 8 ( 95)
.438  6- 7 (169)
.480  7- 6 (148)
.465  8- 5 ( 71)
.577  9- 4 ( 26)
.600 10- 3 (  5)

Still, I think the overall conclusion isn't threatened, that quantity is a factor in make-up calls.


OK, so now we know that quantity matters. But couldn't that mean that recency doesn't matter? We did find that the team with the most recent penalty was less likely to get the next one -- but that might just be because that team is also more likely to have a higher quantity at that point. After all, when a team takes three of the first four penalties, there's a 75 percent chance* it also took the most recent one. 

(* It's actually not 75 percent, because make-up calls make the sequence non-random. But the point remains.)

So, maybe the recency effect is just an illusion created by the quantity effect. Or vice versa.

So, here's what I did: I broke down every row in every table by who got the more recent call. It turns out: recency does matter.

Let's take that 3-for-4 example I just used:

.613 home team overall     (3244)
.508 after VVVH            ( 486)
.639 after other sequences (2758)

From this, it looks like both effects are at work. When the home team is "up 3-1" in penalty advantage, it gets only 51 percent of the penalties if its previous penalty was the most recent of the four. That's still more than the 46.1 percent it gets to start the game, or the 46.5 percent it would get if it had been 2-2 instead of 3-1.

This seems to be true for most of the breakdowns -- maybe even all the ones with large enough sample sizes. I'll just arbitrarily pick one to show you ... the ninth penalty, home team 3-5.

.392 home team overall     (1037)
.362 when most recent was H (743)
.469 when most recent was V (294)

Even better: here's the entire chart for the eighth penalty: overall vs. last penalty went to home team ("last H") vs. last penalty went to visiting team ("last V"). 

overall   last H    last V
 .607      .750      .596      1-6 
 .588      .477      .609      2-5 
 .527      .446      .584      3-4 
 .422      .372      .518      4-3 
 .374      .357      .466      5-2 
 .412      .406      .500      6-1 

Clearly, both recency and quantity matter. Holding one constant, the other still follows the "make-up penalty" pattern. 

Can we figure out *how much* is recency and *how much* is quantity?  It's probably pretty easy to get a rough estimate with a regression. I'm about to leave for the weekend, but I'll look at that next week. Or you can download the results (spreadsheet here) and do it yourself.


Tuesday, February 20, 2018

How much of success in life is luck instead of skill?

How much of MLB teams' success is due to skill, and how much due to luck? We have a pretty good idea of the answer to that. But what about success in life, in general? If a person is particularly successful in their chosen field, how much of that success is due to luck?

That's the question Robert Frank asks in his 2016 book, "Success and Luck."  He believes that luck is a substantial contributor to success, as evidenced by his subtitle: "Good Fortune and the Myth of Meritocracy."

On the basic question, I agree with him that luck is a huge factor in how someone's life turns out. There is a near-infinite number of alternative paths our lives could have taken. If a butterfly had flapped its wings differently in China decades ago, I might not even exist now, never mind be sitting here typing this blog post.

In his preface, Frank favorably quotes Nicholas Kristof:

"America's successful people['s] ... big break came when they were conceived in middle-class American families how loved them, read them stories, and nurtured them .... They were programmed for success by the time they were zygotes."

But ... that's not a very practical observation, is it? Sure, I am phenomenally lucky that my parents decided to have sex that particular moment that they did, and that the winning sperm cell turned out to be me. In that light, you could say that luck explains almost 100 percent of my success. 

So, maybe a better question is: suppose I was born as me, but in random circumstances, in a random place and time. How much more or less successful would I be, on average?

As Frank writes:

"I often think of Birkhaman Rai, the young hill tribesman from Bhutan who was my cook long ago when I was a Peace Corps volunteer in a small village in Nepal. To this day, he remains perhaps the most enterprising and talented person I've ever met....

"... Even so, the meager salary I was able to pay him was almost certainly the high point of his life's earnings trajectory. If he'd grown up in the United States or some other rich country, he would have been far more prosperous, perhaps even spectacularly successful."

Agreed. Those of us who are alive in a wealthy society in 2017 are pretty much the luckiest people, in terms of external circumstances, of anyone in the history of the world.  For all of us, almost all of our success is due to having been born at the right time in the right place. 

But, again, that's not a very useful answer, is it? Even the most talented, hardest-working person would have nothing if he had been born in the wrong place and time, so you have to conclude that every successful person has been overwhelmingly lucky.

I think we have to hold our personal characteristics as a given, too. Because, almost everyone who is successful in a given field has far-above-average talent or interest in that field. I was lucky to have been born with a brain that likes math. Wilt Chamberlain was lucky to have been born with a genetic makeup that made him grow tall. Bach was born with the brain of a musical genius.

It gets even worse if you consider not just innate talent for a particular field, but other mental characteristics that we usually consider character rather than luck. Suppose you have an ability to work hard, or to persevere under adversity. Those likely have at least some genetic -- which is to say, random -- basis. So when someone with only average musical talent becomes a great composer by hard work, we can say, "well, sure, but he was lucky to have been born with that kind of drive to succeed."

Frank says:

"I hope we can agree that success is much more likely for people with talents that are highly valued by others, and also for those with the ability and inclination to focus intently and work tirelessly. But where do those personal qualities come from? We don't know, precisely, other than to say that they come from some combination of genes and the environment. ...

"In some unknown proportion, genetic and environmental factors largely explain whether someone gets up in the morning feeling eager to begin work. If you're such a person, which most of the time I am not, you're fortunate."

So, even if you got to where you are by working hard, Frank says, that's still luck! Because, you're lucky to have the kind of personality that sees the value of hard work.

I don't disagree with Frank that the kind of person you are, in terms of morals and virtues, is partly determined by luck. But, in that case, what *isn't* luck?


That's the problem with Frank's argument. Drill down deep enough, and everything is luck. You don't even need a book for that; I can do it in one paragraph, like this:

There are seven billion people in the world right now. Which one I am, out of those seven billion, is random, as far as I'm concerned; I had no say in which person I would be born as. Therefore, if I wind up being Bill Gates, the richest man in the world, I hit a 6,999,999,999 to 1 shot, and I am very, very lucky!

What Frank never explicitly addresses is: what kind of success does he consider NOT caused by luck? I don't think that anywhere, in his 200-page book, he even gives one example. 

We can kind of figure it out, though. At various points in the book, Frank illustrates his own personal lucky moments. There was the time he got his professor job at Cornell by the skin of his teeth (he was the last professor hired, in a year where Cornell hired more economics professors than ever before). Then, there was the time he almost drowned while windsurfing, but just in time managed to free himself from under his submerged sail. "Survival is sometimes just a matter of pure dumb luck, and I was clearly luck's beneficiary that day."

Frank's instances of luck are those that occurred on his path while he was already himself. He doesn't say how he was almost born in Nepal and destined for a life of poverty, or how he was lucky that one of his cells didn't mutate while in the womb to make him intellectually disabled. 

I'll presume, then, that the luck Frank is talking about is the normal kind of career and life luck that most of us think about, and that the "your success is mostly luck because you were born smart" is just a rhetorical flourish.


We don't have a definition problem in our usual analysis of baseball luck, because we are careful to talk about what we consider luck and what we don't. For a team's W-L record, we specify that the "luck" we're talking about is the difference between the team's talent and the team's outcome. So, if a team is good enough to finish with an average of 88 wins, but it actually wins 95 games, we say it was lucky by 7 games.

We specifically ignore certain types of luck, such as injuries and weather and bad calls by the umpire. And, we specifically exclude certain types of luck, like how an ace pitcher randomly happened to meet and marry a woman from Seattle, which led him to sign at a discount with the Mariners, which meant that they wound up more talented than they would have otherwise.

By specifically defining what's luck and what's not, we can come up with a specific answer to the specific question. We know the difference between talent (as we define it) and luck (as we define it) can be measured by the binomial approximation to the normal distribution. So, we can calculate that the effect of luck is a standard deviation of about 6.4 games per season, and the variation in talent is about 9 games per season.

From that, we can calculate a bunch of other things. Such as: on average, a team that finishes with a 96-66 record is most likely a 91-71 team that got lucky. In other words, if the season were replayed again, like in an APBA simulation, that team would be more likely to finish with 91 wins than with 96.
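The arithmetic behind those numbers is short. A sketch (the 9-game talent SD is taken as given from the text):

```python
import math

GAMES = 162
sd_luck = math.sqrt(GAMES * 0.5 * 0.5)        # binomial luck: ~6.4 wins
sd_talent = 9.0                                # from the text
sd_observed = math.hypot(sd_talent, sd_luck)   # ~11.0 wins

# A 96-66 team is 15 wins above .500; shrink toward the mean
# by the share of observed variance that is luck:
shrink = sd_luck ** 2 / sd_observed ** 2
talent_estimate = 81 + 15 * (1 - shrink)       # ~91 wins
print(round(sd_luck, 1), round(talent_estimate))
```

With these SDs, about a third of the variance in team records is luck, which is why the 96-win team shrinks all the way back to an estimated 91 wins of talent.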

I think that's the question Frank really wants to answer -- that if you took Bill Gates, and made him play his life over, he wouldn't come close to being the richest man in the world. He just had a couple of very lucky breaks, breaks that probably wouldn't have come his way if God rolled the dice again in his celestial APBA simulation of humanity.


Another reason to think that's what Frank means is that, when he gets down to mathematical business, that seems to be the definition he uses. There, he talks about luck as distinct from "skill" and "effort". 

When Frank does that, his view of success and luck is a lot like the sabermetrician's view of success and luck. We assume a person (or team) has a certain level of talent, and the observed level of success might be higher or lower than expectations depending on whether good luck or bad luck dominates.

In his Chapter 4, and its appendix, Frank tries to work that out mathematically.

Suppose everyone has a skill level distributed uniformly between 0 and 100, and a level of luck distributed uniformly between 0 and 100 (where 50 is average). And, suppose that the level of success is determined 95 percent by skill and 5 percent by luck.

Even though luck creates only 5 percent of the outcome, it's enough to almost ensure that the most skilled person winds up NOT the most successful. With 1,000 participants, the most skilled will "win" about 55 percent of the time. With 100,000 participants, the most skilled will win less than 13 percent of the time.

Frank gives an excellent explanation of why that happens:

"The most skilled competitor in a field of 1,000 would have an expected skill level of 99.9, but an expected luck level of only 50.
"It follows that the expected performance level of the most skillful of 1,000 contestants is P=0.95 * 99.9 + 0.05 * 50 = 97.4 ... but with 999 other contestants, that score usually won't be good enough to win.

"With 1,000 contestants, we expect that 10 will have skill levels of 99 or higher. Among those 10, the highest expected luck level is ... 90.9. The highest expected peformance socre among 1,000 contestants must therefore be at least P = 0.95 * 99 + 0.05 * 90.9 = 98.6, which is 1.2 points higher than the expected performance score of the most skillful contestant. 

"... The upshot is that even when luck counts only for a tiny fraction of total performance, the winner of a large contest will seldom be the most skillful contestant but will usually be one of the luckiest."*

(* I feel like I should point out that this sentence, while true, is maybe misleading. Frank is comparing the chance of being the *very highest* in skill with the chance of being *one of the highest* in luck. When skill is more important than luck (it's 19 times as important in Frank's example), it's also true (perhaps "19 times as true") that "the winner of a large contest will seldom be the luckiest contestant but will usually be one of the most skillful."  And, it's also true that "the winner of a large contest will seldom be the most skillful contestant, but even more seldom be the most lucky.")
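Frank's arithmetic checks out. Here's a sketch, using the fact that the expected maximum of n uniform draws on [0, 100] is 100·n/(n+1):

```python
n = 1000
w_skill, w_luck = 0.95, 0.05

exp_max_skill = 100 * n / (n + 1)                            # 99.9
perf_most_skilled = w_skill * exp_max_skill + w_luck * 50    # 97.4

# Among the ~10 contestants at skill 99+, the best expected luck of 10 draws:
exp_max_luck_of_10 = 100 * 10 / 11                           # 90.9
perf_best = w_skill * 99 + w_luck * exp_max_luck_of_10       # 98.6

print(round(perf_most_skilled, 1), round(perf_best, 1))
```

The lucky near-top-skill contestant beats the most skilled contestant's expected score by about 1.2 points, just as Frank says.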


So, the most skilled of 1,000 competitors will wind up the winner only 55 percent of the time. Doesn't that prove that success is largely due to luck?

It depends what you mean by "largely due to luck."  Frank's experiment does show that, often, the luckier competitor wins over the more skillful competitor. Whether that alone constitutes "largely" is up to you, I guess. 

You could argue otherwise. As it turns out, the competitor with the most skill is still the one most likely to win the tournament, with a 55 percent chance. The person with the most luck is much less likely to win. Indeed, in Frank's simulation, perfect luck is only a 2.5 point bonus over average luck. So if the luckiest competitor isn't in the top 5 percent of skill, he or she CANNOT win.

It's true that the most successful competitors were likely to have been very lucky. But it's not true that the luckiest competitors were also the most successful.

Having said that ... I agree that in Frank's simulation, luck was indeed important, and the winner of the competition should realize that he or she was probably lucky -- especially in the 100,000 case, where the best player wins only 13 percent of the time. But Frank doesn't just talk about winners -- he talks about "successful" people. And you can be successful without finishing first. More on that later.


A big problem with Frank's simulation is that the results wind up enormously overinflated on the side of luck. That's because he uses uniform distributions for both luck and skill, rather than a bell-shaped (normal) distribution. This has the effect of artificially increasing competition at the top, which makes skill look much less important than it actually is. 

Out of 100,000 people in Frank's uniform distribution, more than 28,000 are within 1 SD of the highest-skilled competitor. But in a normal distribution, that number would be ... 70. So Frank inflates the relevant competition by a factor of 400 times.

To correct that, I created a version of Frank's simulation that used normal distributions instead of uniform. 

What happened? Instead of the top-skilled player winning only 13 percent of the time, that figure jumped to 88 percent.

Still ... Frank's use of the uniform distribution doesn't actually ruin his basic argument. That's because he assumed only 5 percent luck, and 95 percent skill. This, I think, vastly understates the amount of luck inherent in everyday life. 

It's easy to see that luck is important. The important question is: *how* important? I don't know how to find the answer to that, and when I discovered Frank's book, I was hoping he'd at least have taken a stab at it.

But, since we don't know, I'm just going to pick an arbitrary amount of luck and see where that leads. The arbitrary amount I'm going to pick is: 40 percent luck, and 60 percent skill. Why those numbers? Because that's roughly the breakdown of an MLB team's season record. Most readers of this blog have an intuitive idea of how much luck there is in a season, how often a team surprises the oddsmakers and its fans.

In effect, we're asking: suppose there are 100,000 teams in MLB, with only one division. How often does the most talented team finish at the top of the standings?

The answer to that question appears to be: about 11 percent of the time. 

(That's pretty close to the 13 percent that Frank gave, but it's coincidence that his uniform distribution with a 5/95 luck/talent split is close to my normal distribution with a 40/60 split.)

Here's something that surprised me. Suppose now, instead of 100,000 competitors, you make the competition ten times as big, so there's 1,000,000. How often does the best competitor win now?

I would have expected it to drop significantly lower than 11 percent. It doesn't. It actually rises a bit, to 14 percent. (Both these numbers are from simulations, so I'm not sure they're "statistically significantly" different.)
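Here's roughly how such a simulation goes. This is my own sketch, with a smaller field than 100,000 so it runs quickly (the exact percentage moves around with field size, as the text notes):

```python
import numpy as np

rng = np.random.default_rng(7)
N, TRIALS = 20_000, 300
SD_TALENT, SD_LUCK = 60.0, 40.0   # the 60/40 skill/luck split

wins_by_most_talented = 0
for _ in range(TRIALS):
    talent = rng.normal(0, SD_TALENT, N)
    luck = rng.normal(0, SD_LUCK, N)
    # Did the most talented competitor also post the best performance?
    if np.argmax(talent + luck) == np.argmax(talent):
        wins_by_most_talented += 1

frac = wins_by_most_talented / TRIALS
print(frac)   # far above 1/N, but far below certainty
```

The most talented competitor wins far more often than chance (1-in-N) would suggest, but still loses most of the time -- the same qualitative result as the text's 11-to-14 percent figures.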

Why does this happen? I think it's because of the way the normal distribution works. The larger the population, the farther the highest value pulls away from the pack. 

On average, the most talented one-millionth of the population are more than around 4.75 SD from the mean. Suppose the average of those is 4.9 SD. So, we'll say the best competitor out of a million is around 4.9 SD from the mean.

If "catching distance" is 0.7 SD, you need to be 4.2 SD from the mean, which means your main competition consists of 13 competitors (out of a million).

But if there are only 100,000 in the pool, the most talented player is only around 4.4 SD from the mean, and "catching distance" only 3.7 SD. How many competitors are there above 3.7 SD? About 11 (out of 100,000).

The more competitors, the farther out a lead the best one has, which means the fewer competitors there are with a decent chance to catch him.
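Those tail counts can be checked with the normal distribution directly (`tail_count` is my own helper):

```python
import math

def tail_count(z, n):
    """Expected number of draws, out of n standard normals, above z SDs."""
    return n * 0.5 * math.erfc(z / math.sqrt(2))

print(tail_count(4.2, 10**6))   # ~13 rivals within catching distance, pool of a million
print(tail_count(3.7, 10**5))   # ~11 rivals, pool of 100,000
```

So the bigger pool leaves the leader with barely more rivals in striking range, even though there are ten times as many competitors.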


I decided to use the larger simulation, with a million competitors. A couple of results:

On average, the top performer in the simulation was the 442nd overall in talent. At first that may sound like merit doesn't matter much, but 442nd out of one million is still the top one-twentieth of one percent -- roughly the 99.96 percentile.

Going the other way, if you searched for the top player by talent, how did he or she perform? About 99th overall, or the 99.99 percentile. 


We know (from Tango and others) that to get from observed performance to talent, we regress to the mean by this amount:

1 - (SD(talent)/SD(observed))^2

Assume SD(talent) = 60, and SD(luck) = 40. That means that SD(observed) = 72.1, which is the square root of 60 squared plus 40 squared.

So, we regress to the mean by 1-(60/72.1)^2, which is about 31 percent. 

If our top performer is at 4.9 SD observed, that's 72.1*4.9 = 353.29 units above average. Regressing 31 percent gives us an estimate of 243.77 units of talent. Since talent has an SD of 60, that's the equivalent of 4.06 SD of talent.
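
The arithmetic in the last few paragraphs can be checked in a few lines (exact values differ slightly from the rounded ones above):

```python
import math

sd_talent, sd_luck = 60.0, 40.0
sd_observed = math.hypot(sd_talent, sd_luck)     # sqrt(60^2 + 40^2) = 72.1
regress = 1 - (sd_talent / sd_observed) ** 2     # fraction to regress: ~0.31

observed_units = 4.9 * sd_observed               # top performer, ~353 units above average
talent_units = observed_units * (1 - regress)    # after regressing ~31 percent
talent_sd = talent_units / sd_talent             # back to SDs of talent

print(round(sd_observed, 1), round(regress, 3), round(talent_sd, 2))
```

Without intermediate rounding the estimate comes out 4.08 SD rather than 4.06; the small difference is just from rounding the 31 percent to 0.69 above.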

That means if the top performer comes in at 4.9 SD above zero, his or her likeliest talent is 4.06 SD. That's about 27th out of a million, or some such.

In other words, the player with top performance should be around 27th in talent.

(Why, then, did the simulation come up with 442nd instead of 27th? I think it's because converting SDs to rankings isn't symmetrical when there's a lot of variation.

For instance: suppose you wind up with two winners, one at 3.06 SD and one at 5.06 SD. The average of the SDs is 4.06, like we said. But, the 5.06 ranks first, while the 3.06 ranks 1000th or something. The average of the ranks doesn't wind up at 27 -- it's about 500.)


The book is called "Success and Luck," but it really could be called "Money and Luck," because when Frank talks about "success," he really means "high income."  The point about luck is to support his idea of a consumption tax on the rich.

Frank's argument is that successful people should be willing to put up with higher taxes. His case, paraphrased, goes like this: "Look, the ultra-rich got that way because they were very lucky. So, they shouldn't mind paying more, especially once they understand how much their success depended on luck, and not their own actions."

About half the book is devoted to Frank discussing his proposal to change the tax system to get the ultra-rich to pay more. That plan comes from his 1999 book, "Luxury Fever." There and here, Frank argues that the ultra-rich don't actually value luxuries for their intrinsic value, but, rather, for their ability to flaunt their success. If we tax high consumption at a high rate, Frank argues, the wealthiest person will buy a $100K watch instead of a $700K watch (since the $100K watch will still cost $700K after tax) -- but he or she will still be as happy, since his or her social competitors will also downgrade the price of their watch, and the wealthiest person will still have the most expensive watch, which was his or her primary goal in the first place.

So, the rich still get the status of their expensive purchases, but the government has an extra $600K to spend on infrastructure, and that benefits everyone, including the rich. 

There are only a few pictures in the book, but one of them is a cartoon showing a $150,000 Porsche on a smooth road, as compared to a $333,000 Ferrari on a potholed road. Wouldn't the rich prefer to spend the extra $183,000 on taxes, Frank asks, so that the government can pave the roads properly and they can have a better driving experience overall? 

Almost every chapter of the book mentions that consumption tax ... especially Chapter 7, which is completely devoted to Frank's earlier proposal.


Since money is really the topic here, it would be nice to translate luck into dollars, instead of just standard deviations. Especially if we want to make sure Frank's consumption tax burden is fair, when compared to estimates of luck.

If money were linear with talent, it would be easy: we just regress 31 percent to the mean, and we're done. But, it's not. Income accelerates all the way up the percentile scale: slowly at the bottom, but increasingly as you get to the top. 

If you look at the bottom 97% of income tax filers, their income goes from zero to about a million dollars. If income were linear, the top 3% would go from $1 million to $1.03 million, right? But it doesn't: it explodes. In fact, the top 3% go from $1 million to maybe $500 million or more. 

(Income numbers come from IRS Table 1.1 here, for 2015, and articles discussing the 400 highest-income Americans.)

That means plain old regression to the mean won't work. So, I ran another simulation.

Well, it's actually the same simulation, but I added one thing. I assigned each performance rank an income, based on the IRS table, in order down, as the actual value of "talent". I assumed the most talented person "deserved" $500 million, and that's what he or she would earn if there were no luck involved. I assigned the second most talented person $300 million, and the third $200 million. Then, I used the IRS table to assign incomes all the way down the list of the 1 million people in the simulation. I rescaled the table to a million people, of course, and I assumed income was linear within an IRS category.

(BTW, if you disagree with the idea that even the most talented individuals deserve the high incomes seen in the IRS chart, that's fine. But that's a separate issue that has nothing to do with luck, and isn't discussed in the book.)

With the IRS table, I was able to calculate, for all performance ranks, how much they "should have" earned if their luck was actually zero.
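
For readers who want to play along, here's a stripped-down sketch of that simulation. I don't have the IRS table handy, so the income schedule below is a stand-in power curve with a similarly explosive top end -- the exact dollar figures won't match the ones that follow, but the mechanism is the same: pay by performance rank, compute what each person "deserved" by talent rank, and take the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

talent = rng.normal(0, 60, n)
luck = rng.normal(0, 40, n)
performance = talent + luck

# Stand-in income schedule by rank (an assumption, not the IRS table):
# rank 1 gets $500 million, falling off steeply.
ranks = np.arange(1, n + 1)
income_by_rank = 5e8 * ranks ** -0.7

perf_order = np.argsort(-performance)        # index 0 = best performance
talent_rank = np.empty(n, dtype=np.int64)
talent_rank[np.argsort(-talent)] = ranks     # 1 = most talented

earned = np.empty(n)
earned[perf_order] = income_by_rank          # income paid by performance rank
deserved = income_by_rank[talent_rank - 1]   # income "owed" by talent rank

dollar_luck = earned - deserved
for lo, hi in [(1, 10), (11, 100), (101, 500), (501, 1000), (1001, 2000)]:
    grp = perf_order[lo - 1:hi]
    print(f"finishers {lo:>4}-{hi:<4}: mean dollar luck {dollar_luck[grp].mean():>15,.0f}")
```

Because everyone's "earned" and "deserved" incomes come from the same schedule, total dollar luck sums to zero -- the top finishers' windfalls have to come out of someone else's pocket, which is the effect described below.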

The best performer earned $500 million. How much would he or she have earned based on talent alone, and no luck? A lot less: $129 million. The second-place finisher earned $300 million but deserved only $78 million. The third-place finisher earned $100 million instead of $48 million.

So, the top three finishers were lucky by $371 million, $222 million, and $52 million, respectively.

The 4-10 finishers were lucky by an average of $62 million. 

The 11 to 100 finishers were lucky by less, only $40 million.

The 101 to 500 finishers were lucky by much less still, about $4.2 million each.

At this point, we're only at the first 500 competitors out of a million. You'd expect that the trend to continue, that the next few thousand high-earners would also have been lucky, right? I mean, we're still in the multi-million-dollar range.

But, no.

At around 500, luck turns *negative*. Starting there, the participants actually made *less* than their skill was worth.

Those who finished 501-1000 are still in the income stratosphere -- they're the top 0.05% to the top 0.1%, earning between $10 million and $2.3 million. But, on average, their incomes were $460,000 less than what each would have earned based on skill alone.

It continues unlucky from there. The next 8000 people -- that is, the top 0.1 to 0.9 percent -- lost significant income to luck, over $250,000 each on average. It's not just random noise in the simulation, either, because (a) every group shows up as unlucky, (b) there's a fairly smooth trend, and (c) I ran multiple simulations and they all came out roughly equivalent.

Here's a chart of all the ranges, dollar figures in thousands:

     1-10   +$61107
   11-100   +$39906
  101-500   +$ 4227
 501-1000   -$  460
1001-2000   -$  503
2001-3000   -$  401
3001-4000   -$  320
4001-5000   -$  265
5001-6000   -$  135
6001-7000   -$  224
7001-8000   -$  178
8001-9000   -$  201

(My chart stops at 9,000, because that was about all I could keep track of with the software I was using. I believe the results would eventually swing back from unlucky to lucky, and stay lucky all the way down to the average income of around $68,000.)

If we believe the data, then, yes, the ultra, ultra rich -- at least the top 0.05% of the population -- benefitted from good luck. But the "only" ultra-rich, the 0.05 to 0.9 percent range, the vast majority of the "one percenters" -- those people actually *lost* income due to *bad* luck.

This surprised me, but then I thought about it, and it makes sense. It's a consequence of the fact that income rises so dramatically at the top, where the top 0.01 percent earn ten times as much as the next 0.99 percent.

Suppose you finish 3,000th in performance, earning $1 million. If you're actually 2,500th in talent, you should have earned $2 million. If you're 3,500th in talent but got lucky, you should have earned maybe $900,000.

If you were lucky, you gained $100,000. If you were unlucky, you lost $1 million. 

So if those two have equal probabilities (which they almost do, in this case), the unlucky lose much more than the lucky gain. And that's why the "great but not really great" finishers were, on average, unlucky in terms of income.
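
That asymmetry is just an expected-value calculation:

```python
# Payoffs from the example above: roughly 50/50 odds, very unequal stakes.
p = 0.5
gain_if_lucky = 100_000        # 3,500th in talent, but finished 3,000th
loss_if_unlucky = 1_000_000    # 2,500th in talent, but finished 3,000th

expected_dollar_luck = p * gain_if_lucky - p * loss_if_unlucky
print(expected_dollar_luck)    # -450000.0: unlucky on average, despite even odds
```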


Here's a baseball analogy. 

Normally, we think of team luck in MLB in terms of wins. But, instead, think of it in terms of pennants. 

The team that wins the pennant was clearly lucky, winning 100% of a pennant instead of (say) the true 40% probability given its talent. The other teams must have all been unlucky.

Which teams were the *most* unlucky? Clearly not the second division, which wouldn't have come close to winning the pennant even if the winning team hadn't gotten hot. The most unlucky, obviously, must be the teams that came close. Those are the teams where, if the winning team had had worse luck, they would have been able to take advantage and finish first instead.

In our income simulation, the top 100 is like a pennant, since it's worth so much more than the rankings farther below. So, when a participant gets lucky and finishes in the top 100, where did the offsetting bad luck fall? On the participants who actually had a good chance, but didn't make it.

Suppose only the top 1 percent in skill have an appreciable chance to make the top 100 in income. That means that if the top 0.01 percent had good luck and made more than they were worth, it must have been the next 0.99 percent who had bad luck and made less than they were worth, since they were the only ones whose failure to make the top 100 was due to luck at all.


Frank does seem to understand that it's the very top of the scale that's benefitted disproportionately from luck. In 1995, he co-wrote a book called "The Winner-Take-All Society", which argues that, over time, the rewards from being the best rise much faster than the rewards from being the second best or third best.

Recapping that previous book, Frank writes:

"[Co-author Philip] Cook and I argued that what's been changing is that new technologies and market institutions have been providing growing leverage for the talents of the ablest individuals. The best option available to patients suffering from a rare illness was once to consult with the most knowledgeable local practitioner. But now that medical records can be sent anywhere with a single click, today's patients can receive advice from the world's leading authority on that illness.

"Such changes didn't begin yesterday. Alfred Marshall, the great nineteenth-century British economist, described how advances in transportation enabled the best producers in almost every domain to extend their reach. Piano manufacturing, for example, was once widely dispersed, simply because pianos were so costly to transport ...

"But with each extension of the highway, rail, and canal systems, shipping costs fell sharply, and at each stop production became more concentrated. Worldwide, only a handful of the best piano producers now survive. It's of course a good thing that their superior offerings are now available to more people. But an inevitable side effect has been that producers with even a slight edge over their rivals went on to capture most of the industry's income.

"Therein lies a hint about why chance events have grown more important even as markets have become more competitive ..."

In other words: these days, the best doctor nationally has taken business away from the best doctor locally. But, the best doctor is the best doctor in part because of luck. So, luck rewards the best doctor nationally, but hurts the best doctor locally. And the best doctor locally is still pretty successful, maybe one of the richest people in town.

Which is what we see here, that the "ultra rich" gained from luck, and the merely "very rich" were actually hurt by it. Frank writes about the first part, but ignores the second part.


Frank's implicit argument is that if people's success is more due to luck, it's more appropriate to tax them at a higher rate. I say "implicit" because I don't think he actually says it outright. I can't say for sure without rereading the book, but I think Frank's explicit argument is that if the rich are made to realize that they got where they were substantially because of good luck, they would be less resistant to his proposed high-rate consumption tax.

But if Frank *does* believe the lucky should pay tax at a higher rate, it follows logically that he has to also believe that the unlucky should pay tax at a lower rate. If Joe has been taxed more than Mary (at an identical income) because he was luckier, then Mary must have been taxed less than Joe because she was unluckier.

By Frank's own logic (but my simulation), that would mean that those who earned between $3,000,000 and $300,000 last year were unlucky, and deserve to pay less tax. I bet that's not what Frank had in mind.


Of course, the model and numbers are debatable. In fact, they're almost certainly wrong. The biggest problem is probably the assumption that luck is normally distributed. There must be thousands of cases where a bit of luck turns a skilled performer, maybe someone normally in the $100K range, into a multi-million-dollar CEO or something. 

But who knows who those people are? They must be the minority, if we continue to assume that talent matters more than luck. But how small a minority, and how can we identify them to tax them more?

Anyway, regardless of what model you use, it does seem to me that the "second tier" of success, whoever those are, must be unlucky overall. 

In most cases, when you look at whoever is at the top of their category, they were probably lucky. If they hadn't been, who would be at the top instead? Probably the second or third in the category. Steve Wozniak instead of Bill Gates. Betamax (Sony) instead of VHS (JVC). Al Gore instead of George W. Bush. 

It seems pretty obvious to me that Wozniak, Betamax, and Al Gore have been very, very successful -- but not nearly as successful as they could have been, in large part because of bad luck. 

The main point of "The Winner-Take-All Society" is that the lucky (rich) winner winds up with a bigger share of the pie compared to the unlucky (but still rich) second-best, the unlucky (but still pretty rich) third best, and so on. In other words, the more "winner take all" there is, the bigger the difference between first and second place. 

The same forces that make the winner's income that much more a matter of good luck, must make the second-place finisher's income that much more a matter of bad luck. In a "Winner-Take-All Society," where only pennants pay off ... that's where luck becomes less important to the second division, not more.


Friday, November 17, 2017

How Elo ratings overweight recent results

"Elo" is a rating system widely used to rate players in various games, most notably chess. Recently, FiveThirtyEight started using it to maintain real-time ratings for professional sports teams. Here's the Wikipedia page for the full explanation and formulas.

In Elo, everyone gets a number that represents their skill. The exact number only matters in the difference between you and the other players you're being compared to. If you're 1900 and your opponent is 1800, it's exactly the same as if you're 99,900 and your opponent is 99,800.

In chess, they start you off with a rating of 1200, because that happens to be the number they picked. You can interpret the 1200 either as "beginner," or "guy who hasn't played competitively yet so they don't know how good he is."  In the NBA system, FiveThirtyEight decided to start teams off with 1500, which represents a .500 team. 

A player's rating changes with every game he or she plays. The winner gets points added to his or her rating, and the loser gets the same number of points subtracted. It's like the two players have a bet, and the loser pays points to the winner.

How many points? That depends on two things: the "K factor," and the odds of winning.

The "K factor" is chosen by the organization that does the ratings. I think of it as double the number of points the loser pays for an evenly matched game. If K=20 (which FiveThirtyEight chose for the NBA), and the odds are even, the loser pays 10 points to the winner.

If it's not an even match, the loser pays the number of points by which he underperformed expectations. If the Warriors had a 90% chance of beating the Knicks, and they lost, they'd lose 18 points, which is 90% of K=20. If the Warriors win, they only gain 2 points, since they only exceeded expectations by 10 percentage points.

How does Elo calculate the odds? By the difference between the two players' ratings. The Elo formula is set so that a 400-point difference represents a 10:1 favorite. An 800 point favorite is 100:1 (10 for the first 400, multiplied by 10 for the second 400). A 200 point favorite is 3.16 to 1 (3.16 is the square root of 10). A 100 point favorite is 1.78 to 1 (the fourth root of 10), and so on.
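
All of those conversions come from one formula: a rating difference D implies odds of 10^(D/400), or equivalently an expected score of 1/(1 + 10^(-D/400)). A quick check of the numbers above:

```python
def elo_odds(diff):
    """Odds (favorite:underdog) implied by a rating difference."""
    return 10 ** (diff / 400)

def elo_expected(diff):
    """Expected score (winning probability) for the higher-rated side."""
    return 1 / (1 + 10 ** (-diff / 400))

for diff in (400, 800, 200, 100):
    print(f"{diff}-point favorite: {elo_odds(diff):.2f}:1, "
          f"expected score {elo_expected(diff):.3f}")
```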

In chess, the K factor varies depending on which chess federation is doing the ratings, and the level of skill of the players. For experienced non-masters, K seems to vary between 15 and 32 (says Wikipedia). 


Suppose A and B have equal ratings of 1600, and A beats B. With K=20, A's rating jumps to 1610, and B's rating falls to 1590. 

A and B are now separated by 20 points in the ratings, so A is deemed to have odds of 1.12:1 of beating B. (That's because 10 to the power 20/400 -- the 20th root of 10 -- is 1.12.)  That corresponds to an expected winning percentage of .529.

After lunch, they play again, and this time B beats A. Because B was the underdog, he gets more than 10 points -- 10.6 points (.529 times K=20), to be exact. And A loses the identical 10.6 points.

That means after the two games, A has a rating of 1599.4, and B has a rating of 1600.6.
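
Here's that two-game sequence worked through in code, using the standard Elo update with K=20:

```python
K = 20

def expected(r_a, r_b):
    """Expected score for the player rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def play(r_a, r_b, a_won):
    """Winner collects K * (actual - expected) from the loser."""
    delta = K * ((1.0 if a_won else 0.0) - expected(r_a, r_b))
    return r_a + delta, r_b - delta

a, b = 1600.0, 1600.0
a, b = play(a, b, True)     # A wins the even match: 1610 / 1590
a, b = play(a, b, False)    # B, now a 20-point underdog, wins back 10.6
print(round(a, 1), round(b, 1))   # 1599.4 1600.6
```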


That example shows one of the properties of Elo -- it weights recent performance higher than past performance. In their two games, A and B effectively tied, each going 1-1. But B winds up with a higher rating than A because his win was more recent.

Is that reasonable? In a way, it is. People's skill at chess changes over their lifetimes, and it would be weird to give the same weight to a game Garry Kasparov played when he was 8, as you would to a game Garry Kasparov played as World Champion.

But in the A and B case, it seems weird. A and B played both games the same day, and their respective skills couldn't have changed that much during the hour they took their lunch break. In this case, it would make more sense to weight the games equally.

Well, according to Wikipedia, that's what would actually happen. Instead of updating the ratings every game, the Federation would wait until the end of the tournament, and then compare each player to his or her overall expectation based on ratings going in. In this case, A and B would be expected to go 1-1 in two games, which they did, so their ratings wouldn't change at all.

But, if A and B's games were days or weeks apart, as part of different tournaments, the two games would be treated separately, and B might indeed wind up 1.2 points ahead of A.


Is that a good thing, giving a higher weight to recency? It depends how much higher a weight. 

People's skill does indeed change daily, based on mood, health, fatigue -- and, of course, longer-term trends in skill. In the four big North American team sports, it's generally true that players tend to improve in talent until a certain age (27 in baseball), then decline. And, of course, there are non-age-related talent changes, like injuries, or cases where players just got a lot better or a lot worse partway through their careers.

That's part of the reason we tend to evaluate players based on their most recent season. If a player hit 15 home runs last year, but 20 this year, we expect the 20 to be more indicative of what we can expect next season.

Still, I think Elo gives far too much weight to recent results, when applied to professional sports teams. 

Suppose you're near the end of the season, and you're looking at a team with a 40-40 record -- the Bulls, say. From that, you'd estimate their talent as average -- they're a .500 team.

Now, they win an even-money game, and they're 41-40, which is .506. How do you evaluate them now? You take the .506, and regress to the mean a tiny bit, and maybe estimate they're a .505 talent. (I'll call that the "traditional method," where you estimate talent by taking the W-L record and regressing to the mean.)

What would Elo say? Before the 81st game, the Bulls were probably rated at 1500. After the game, they've gained 10 points for the win, bringing them to 1510.

But, 1510 represents a .514 record, not .505. So, Elo gives that one game almost three times the weight that the traditional method does.
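
To see where the .514 comes from: a 10-point Elo edge over an average (1500) opponent converts to a winning percentage via the same odds formula, while the traditional estimate is just the raw record, lightly regressed:

```python
def implied_pct(diff):
    """Winning percentage implied by an Elo rating edge."""
    return 1 / (1 + 10 ** (-diff / 400))

elo_view = implied_pct(1510 - 1500)   # Elo's view after the 81st game
record = 41 / 81                      # raw record, before any regression
print(round(elo_view, 3), round(record, 3))   # 0.514 0.506
```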

Could that be right? Well, you'd have to argue that maybe because of personnel changes, team talent changes so much from the beginning of the year to the end that the April games are worth three times as much as the average game. But ... well, that still doesn't seem right. 


Technical note: I should mention that FiveThirtyEight adds a factor to their Elo calculation -- they give more or fewer points to the winner of the game based on the score differential. If a favorite wins by 1 point, they'll get a lot fewer points than if they won by 15 points. Same for the underdog, except that the underdog always gets more points than the favorite for the same point differential -- which makes sense.

FiveThirtyEight doesn't say so explicitly, but I think they set the weighting factor so that the average margin of victory corresponds to the number of points the regular Elo would award the winner.

Here's the explanation of their system.


Elo starts with a player's rating, then updates it based on results. But, when it updates it, it has no idea how much evidence was behind the player's rating in the first place. If a team is at 1500, and then it wins an even game, it goes to 1510 regardless of whether it was at 1500 because it's an expansion team's first game, or because it was 40-40, or (in the case of chess) it's 1000-1000.

The traditional method, on the other hand, does know. If a team goes from 1-1 to 2-1, that's a move of .167 points (less after regressing to estimate talent, of course). If a team goes from 40-40 to 41-40, that's a move of only .005 points. 

Which makes sense; the more evidence you already have, the less any new evidence should move your estimate. But if your estimate moves by the same amount regardless of the previous evidence, you're seriously underweighting that previous evidence (which means you're overweighting the new evidence).

The chess Federations implicitly understand this, that you should give less weight to new results when you have more older results. That's why they vary the K-values based on who's playing. 

FIDE, for instance, weights new players at K=40, experienced players at K=20, and masters (who presumably have the most experience) at K=10.


As I said last post, I did a simulation. I created a team that was exactly average in talent, and assumed that FiveThirtyEight had accurately given them an average rating at the beginning of the year. I played out 1000 random seasons, and, on average, the team wound up at right where it started, just as you would expect.

Then, I modified the simulation as if FiveThirtyEight had underrated the team by 50 points, which would peg them as a .429 team. (They use their "CARM-Elo" player projection system for those pre-season ratings. I'm not saying that system is wrong, just checking what happens when a projection happens to be off.)

It turned out that, at the end of the 82-game season, Elo had indeed figured out the team was better than their initial rating, and had restored 45 of the 50 points. They were still underrated, but only by 5 points (.493) instead of 50 (.429). 

Effectively, the current season wiped out 90% of the original rating. Since the original rating was based on the previous seasons, that means that, to get the final rating, Elo effectively weighted this year at 90%, and the previous years at 10%. 

10% is close to 12.5%. I'll use that because it makes the calculation a bit easier. At 12.5%, which is one-eighth, it means the NBA season contains three "half lives" of about 27 games each. 

That is: after 27 games, the gap of 50 points is reduced by half, to 25. After another 27 games, it's down to 12. After a third 27 games, it's down to 6, which is 12.5% of where the gap started.

That means that to calculate a final rating, the thirds of the season are effectively weighted in a ratio of 1:2:4. A game in April has four times the weight of a game in November. Last post, I argued why I think that's too high.
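
Here's a sketch of that simulation -- my reconstruction, with an assumed all-even schedule against 1500-rated opponents, not FiveThirtyEight's actual code. A truly average team starts the season rated 50 points too low, and we watch how much of the gap Elo claws back over 82 games:

```python
import numpy as np

rng = np.random.default_rng(7)
K, GAMES, SEASONS = 20, 82, 1000

def expected(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

finals = []
for _ in range(SEASONS):
    r = 1450.0                       # Elo's (wrong) starting rating
    for _ in range(GAMES):
        won = rng.random() < 0.5     # the team really is a .500 talent
        r += K * (won - expected(r, 1500.0))
    finals.append(r)

gap = 1500.0 - np.mean(finals)
print(f"average gap remaining after a season: {gap:.1f} of the original 50 points")
```

In runs of this sketch the remaining gap comes out around 5 points -- Elo recovers roughly 90 percent of the mis-rating, consistent with the numbers above.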


Here's another way of illustrating how recency matters. 

I tweaked the simulation to do something a little different. Instead of creating 1,000 different seasons, I created only one season, but randomly reordered the games 1,000 times. The opponents and final scores were the same; only the sequence was different. 

Under the traditional method, the talent estimates would be the same, since all 1,000 teams had the same W-L record. But the Elo ratings varied, because of recency effects. They varied with an SD of about 26 points. That's about .037 in winning percentage, or 3 wins per 82 games.

If you consider the SD to be, in a sense, the "average" discrepancy, that means that, on average, Elo will misestimate a team's talent by 3 wins. That's for teams with the same actual record -- based only on the randomness of *when* they won or lost. 

And you can't say, "well, those three wins might be because talent changed over time."  Because, that's just the random part. Any actual change in talent is additional to that. 
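
Here's a sketch of the reordering experiment (again my reconstruction, with an assumed all-even schedule): fix one season's results at 41-41 against 1500-rated opponents, shuffle the order 1,000 times, and measure how much the final Elo rating moves on sequence alone:

```python
import numpy as np

rng = np.random.default_rng(11)
K = 20

def expected(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

season = np.array([1.0] * 41 + [0.0] * 41)   # one fixed 41-41 season

finals = []
for _ in range(1000):
    r = 1500.0
    for result in rng.permutation(season):
        r += K * (result - expected(r, 1500.0))
    finals.append(r)

sd = np.std(finals)
print(f"SD of final rating across orderings: {sd:.1f} points")
```

With this even-schedule assumption the SD comes out in the low twenties of points; the figure of about 26 points quoted above, with realistic mismatches, is in the same range. Either way, the W-L record is identical in every ordering, so all of that spread is sequencing.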


If all NBA games were pick-em, the SD of team luck in an 82-game season would be around 4.5 wins. Because there are lots of mismatches, which are more predictable, the actual SD is lower, maybe, say, 4.1 games. 

Elo ratings are fully affected by that 4.1 games of binomial luck, but also by another 3 games worth of luck for the random order in which games are won or lost. 

Why would you want to dilute the accuracy of your talent estimate by adding 3 wins worth of randomness to your SD? Only if you're gaining 3 wins worth of accuracy some other way. Like, for instance, if you're able to capture team changes in talent from the beginning of the season to the end. If teams vary in talent over time, like chess players, maybe weighting recent games more highly could give you a better estimate of a team's new level of skill.

Do teams vary in talent, from the beginning to the end of the year, by as much as 3 games (.037)?

Actually, 3 games is a bit of a red herring. You need more than 3 games of talent variance to make up for the 3 games of sequencing luck.

Because, suppose a team goes smoothly from a 40-win talent at the beginning of the year to a 43-win talent at the end of the year. That team will have won 41.5 games, not 40, so the discrepancy between estimate and talent won't be 3 games, but just one-and-a-half games.

As expected, Elo does improve on the 1.5-game discrepancy you get from the traditional method. I ran the simulation again, and found that Elo picked up about 90% of the talent change, versus the traditional method's 50%. That means Elo would peg the (eventual) 43-win talent at 42.7.

For a team that transitions from a 40- to a 43-game talent, the traditional method was off by 1.5 games. The Elo method was off by only 0.3 games. 

It looks like Elo is only a 1.2 game improvement over the traditional method, in its ability to spot changes in talent. But it "costs" a 3-game SD for extra randomness. So it doesn't seem like it's a good deal.

To compensate for the 3-game recency SD, you'd need the average within-season talent change to be much more than 3 games. You'd need it to be 7.5 games.

Do teams really change in talent, on average, by 7.5 games out of 82 over the course of a single season? Sure, some teams must, like when they have injury problems with their star players. But on average? That doesn't seem plausible.


Besides, what's stopping you from adjusting teams on a case-by-case basis? If Stephen Curry gets hurt ... well, just adjust the Warriors down. If you think Curry is worth 15 games a season, just drop the Warriors' estimate by that much until he comes back.

It's when you try to do things by formula that you run into problems. If you expect Elo to automatically figure out that Curry is hurt, and adjust the Warriors accordingly ... well, sure, that'll happen. Eventually. As we saw, it will take 27 games, on average, until Elo adjusts just for half of Curry's value. And, when he comes back, it'll take 27 games until you get back only half of what Elo managed to adjust by. 

In our example, we assumed that talent changed constantly and slowly over the course of the season. That makes it very easy for Elo to track. But if you lose Curry suddenly, and get him back suddenly 27 games later ... then Elo isn't so good. If losing Curry is worth -.100 in winning percentage, Elo will start at .000 Curry's first game out, and only reach -.050 by Curry's 27th game out. Then, when he's back, Elo will take another 27 games just to bounce back from -.050 to -.025.
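
That lag is easy to trace deterministically. Suppose the injury drops the team's true winning percentage to .400 while Elo still has it rated as average; replacing each game's random result with its expected value shows how slowly the rating drifts (a sketch under the same all-even-schedule assumption as before):

```python
def expected(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

K = 20
r = 1500.0        # rating when the star goes down
p_true = 0.400    # true strength with him out of the lineup

path = {}
for game in range(1, 55):
    r += K * (p_true - expected(r, 1500.0))   # expected drift, noise removed
    path[game] = expected(r, 1500.0)

print(f"implied pct after 27 games: {path[27]:.3f}, after 54: {path[54]:.3f}")
```

After 27 games the implied strength has only fallen to about .445 -- roughly halfway to .400 -- and it still hasn't arrived by game 54.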

In other words, Elo will be significantly off for at least 54 games. Because Elo does weight recent games more heavily, it'll still be better than the traditional method. But neither method really distinguishes itself. When you have a large, visible shock to team talent, I don't see why you wouldn't just adjust for it based on fundamentals, instead of waiting a whole season for your formula to figure it out.


Anyway, if you disagree with me, and believe that team talent does change significantly, in a smooth and gradual way, here's how you can prove it.

Run a regression to predict a team's last 12 games of the season, from their previous seven ten-game records (adjusted for home/road and opponent talent, if you can). 

You'll get seven coefficients. If the seventh group has a significantly higher coefficient than the first group, then you have evidence it needs to be weighted higher, and by how much.

If the weight for the last group turns out to be three or four times as high as the weight for the first group, then you have evidence that Elo might be doing it right after all.

I doubt that would happen. I could be wrong. 
