Privately and publicly, Bill James, Tom Tango, and Joe Posnanski have been arguing about Baseball Reference's version of Wins Above Replacement. Specifically, they're questioning the 2016 WAR totals for Justin Verlander and Rick Porcello:
Verlander is evaluated to have created 1.6 more wins than Porcello. But their stat lines aren't that much different:
            W-L    IP    K   BB   ERA
Verlander  16-9   227  254   57  3.04
Porcello   22-4   223  189   32  3.15
So why does Verlander finish so far ahead of Porcello?
Baseball Reference credits Verlander with an extra 13 runs, compared to Porcello, to adjust for team fielding. 13 runs corresponds to 1.3 WAR -- roughly a half-run per nine innings pitched (13 runs over 227 innings, times 9, is about 0.5).
Why so big an adjustment? Because the Red Sox fielders were much better than the Tigers'. Baseball Info Solutions (who evaluate fielding performance from ball-trajectory data) had Boston at 108 runs better than Detroit for the season. The 13-run difference between Porcello and Verlander is their share of that difference.
It all seems to make sense, except ... it doesn't. Posnanski, backed by the stats, argues plausibly that even though Detroit's defense was worse than Boston's over the season as a whole, the Tigers' fielders DID play well when Verlander was on the mound:
"For one thing, I think it’s quite likely that Detroit played EXCELLENT defense behind Verlander, even if they were shaky behind everyone else. I’m not sure how you can expect a defense to allow less than a .256 batting average on balls in play (the second-lowest of Verlander’s career and second lowest in the American League in 2016) or allow just three runners to reach on error all year (the lowest total of Verlander’s career).
"For another, the biggest difference in the two defenses was in right and centerfield. The Red Sox centerfielder and rightfielder saved 44 runs, because Jackie Bradley and Mookie Betts are awesome. The Tigers centerfield and rightfielder cost 49 runs because Cameron Maybin, J.D. Martinez and a cast of thousands are not awesome.
"But the Tigers outfield certainly didn’t cost Verlander. He allowed 216 fly balls in play, and only 16 were hits. Heck, the .568 average he allowed on line drives was the lowest in the American League. I find it almost impossible to believe that the Boston outfield would have done better than that."
In 2016, team forecasts for the National League turned out more accurate than they had any right to be, with FiveThirtyEight's predictions coming in with a standard error (SD) of only 4.5 wins. The forecasts for the American League, however, weren't nearly as accurate ... FiveThirtyEight came in at 8.9, and Bovada at 8.8.
That isn't all that great. You could have hit 11.1 just by predicting each team to duplicate their 2015 record. And, 11 wins is about what you'd get most years if you just forecasted every team at 81-81.
Which is kind of what the forecasters did! Well, not every team at 81-81 exactly, but every team *close* to 81-81. If you look at FiveThirtyEight's actual predictions, you'll see that they had a standard deviation of only 3.4 wins. No team was predicted to win or lose more than 87 games.
Generally, team talent has an SD of around 9 wins. If you were a perfect evaluator of talent, your forecasts would also have an SD of 9. If, however, you acknowledge that there are things that you don't know (and many that can't be known, like injuries and suspensions), you'll forecast with an SD somewhat less than 9 -- maybe 6 or 7.
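To make that concrete, here's a quick simulation sketch (my illustration, with a made-up evaluator error of 8 wins, not anyone's actual forecasting model). The best forecast regresses what you observe toward the mean, so the spread of the forecasts comes out narrower than the spread of talent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

talent = rng.normal(81, 9, n)            # true talent, SD = 9 wins
signal = talent + rng.normal(0, 8, n)    # what an imperfect evaluator sees

# Best (least-squares) forecast regresses the signal to the mean:
reliability = 9**2 / (9**2 + 8**2)
forecast = 81 + reliability * (signal - 81)

print(round(forecast.std(), 1))          # about 6.7 -- "somewhat less than 9"
```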
But, 3.4? That seems way too narrow.
Why so narrow? I think it was because, last year, the AL standings were themselves exceptionally narrow. In 2015, no American League team won or lost more than 95 games. Only three teams were at 89 or more.
The SD of team wins in the 2015 AL was 7.2. That's much lower than the usual figure of around 11. In fact, I checked, and it's not just the lowest for either league since 1961 -- it's the lowest for any league in baseball history! (Second narrowest: the 1974 American League, at 7.3.)
Why were the standings so compressed? There are three possibilities:
1. The talent was compressed;
2. There was less luck than normal;
3. The bad teams had good luck and the good teams had bad luck, moving both sets closer to .500.
I don't think it was #1. In 2016, the SD of standings wins was back near normal, at 10.2. And the year before the compressed season, 2014, it was 9.6. It doesn't really make sense that team talent regressed so far to the mean between 2014 and 2015, and then suddenly jumped back to normal in 2016. (I could be wrong -- if you can find trades and signings from those years showing that good teams got significantly worse in 2015 and then significantly better in 2016, that would change my mind.)
And I don't think it was #2, based on Pythagorean luck. The SD of the discrepancy in "first-order wins" was 4.3, which is actually a bit larger than the usual 4.0.
So, that leaves #3 -- and I think that's what it was. In the 2015 AL, the correlation between first-order-wins and Pythagorean luck was -0.54 instead of the expected 0.00. So, yes, the good teams had bad luck and the bad teams had good luck. (The NL figure was -0.16.)
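That figure is just an ordinary Pearson correlation across the 15 teams. A minimal sketch, with random stand-ins where the real Baseball Prospectus numbers would go:

```python
import numpy as np

rng = np.random.default_rng(1)

first_order_wins = rng.normal(81, 9, 15)   # stand-in: BP first-order wins
pythag_luck = rng.normal(0, 4, 15)         # stand-in: actual minus first-order

# With the real 2015 AL numbers, this came out -0.54:
r = np.corrcoef(first_order_wins, pythag_luck)[0, 1]
print(round(r, 2))
```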
When luck compresses the standings like that, it definitely makes forecasting harder, because there's not as much information about how teams differ. To see that, consider the extreme case. If, by some weird fluke, every team wound up 81-81, how would you know which teams were talented but unlucky, and which were less skilled but lucky? You wouldn't, and so you wouldn't know what to expect next season.
Of course, that's only a problem if there *is* a wide spread of talent, one that got overcompressed by luck. If the spread of talent actually *is* narrow, then forecasting works OK.
That's what many forecasting methods assume, that if the standings are narrow, the talent must be narrow. If you do the usual "just take the standings and regress to the mean" operation, you'll wind up implicitly assuming that the spread of talent shrank at the same time as the spread in the standings shrank.
Which is fine, if that's what you think happened ... but, do you really think that's plausible? The AL talent distribution was pretty close to average in 2014. It makes more sense to me to guess that the difference between 2014 and 2015 was luck, not wholesale changes in personnel that made the bad teams better and the good teams worse.
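Here's the arithmetic behind that implicit assumption (my sketch, using the standard binomial-luck figure of about 6.4 wins per 162 games). Subtract luck variance from observed variance, and the compressed 2015 AL standings force you to a very narrow talent estimate:

```python
from math import sqrt

luck_sd = sqrt(162 * 0.5 * 0.5)               # binomial luck, about 6.4 wins

# var(standings) = var(talent) + var(luck), so:
talent_sd_2015 = sqrt(7.2**2 - luck_sd**2)    # about 3.4 wins (2015 AL)
talent_sd_usual = sqrt(11.0**2 - luck_sd**2)  # about 9 wins (typical season)
```

That first number landing right at the forecasters' 3.4-win spread is, I'd guess, no coincidence.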
Of course, I have the benefit of hindsight, knowing that the AL standings returned to near-normal in 2016 (with an SD of 10.2). But it's happened before -- the record-low 7.3 figure for the 1974 AL jumped back to an above-average 11.9 in 1975.
If I had been forecasting the 2016 standings, I'd have wanted to make an effort to figure out which teams were lucky and which weren't, in order to forecast a more realistic talent SD than 3.4 wins.
Besides, you have more than the raw standings. If you adjust for Pythagoras, the SD jumps from 7.2 to 8.6. And, according to Baseball Prospectus, when you additionally adjust for cluster luck, the SD rises to 9.4. (As I wrote in the P.S. to the last post, I'm not confident in that number, but never mind for now.)
An SD of 9.4 is still smaller than 11, but it should be workable.
Anyway, my gut says that you should be able to differentiate the good teams from the bad with a spread higher than 3.4 games ... but I could be wrong. Especially since Bovada's spread was even smaller, at 3.3.
It's a bad idea to second-guess the bookies, but let's proceed anyway.
Suppose you thought that the standings compression of 2015 was a luck anomaly, and that the distribution of talent for 2016 should still be as wide as ever. So, you took FiveThirtyEight's projections and expanded them away from the mean by a factor of 1.5. Since FiveThirtyEight predicted the Red Sox at four games above .500 (85-77), you'd bump that up to six games (87-75).
If you did that, the SD of your actual predictions is now a more reasonable 5.1. And those predictions, it turns out, would have been better. The accuracy of your new predictions would have been an SD of 8.4. You would have beat FiveThirtyEight and Bovada.
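In code, the adjustment is one line (the 1.5 factor and the Red Sox example are from above; the forecast array is a random stand-in for FiveThirtyEight's actual fifteen numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.normal(0, 1, 15)
forecasts = 81 + 3.4 * (raw - raw.mean()) / raw.std()  # stand-in, SD exactly 3.4

expanded = 81 + 1.5 * (forecasts - 81)                 # e.g., Red Sox 85 -> 87

print(round(expanded.std(), 1))                        # 5.1 wins
```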
If that's too complicated, try this. If you had tried to take advantage of Bovada's compressed projections by betting the "over" on their top seven teams, and the "under" on their bottom seven teams, you would have gone 9-5 on those bets.
Now, I'm not going to go so far as to say this is a workable strategy ... bookmakers are very, very good at what they do. Maybe that strategy just happened to get lucky. But it's something I noticed, and something to think about.
If compressed standings make predicting more difficult, then a larger spread in the standings should make it easier.
Remember how the 2016 NL predictions were much more accurate than expected, with an SD of 4.5 (FiveThirtyEight) and 5.5 (Bovada)? As it turns out, the SD of the 2015 NL standings was higher than normal, at 12.65 wins. That's the highest of the past three years:
2014:  AL  9.59   NL  9.20
2015:  AL  6.98   NL 12.65
2016:  AL 10.15   NL 10.71
It's not historically high, though. I looked at 1961 to 2011 ... if the 2015 NL were included, it would be well above average, but only 70th percentile.*
(* If you care: of the 10 most extreme of the 102 league-seasons in that timespan, most were expansion years, or years following expansion. But the 2001, 2002, and 2003 AL made the list, with SDs of 15.9, 17.1, and 15.8, respectively. The 1962 National League was the most extreme, at 20.1, and the 2002 AL was second.)
A high SD won't necessarily make your predictions beat the speed of light, and a low SD won't necessarily make them awful. But both contribute. As an analogy: just because you're at home doesn't mean you're going to pitch a no-hitter. But if you *do* pitch a no-hitter, odds are, you had the help of home-field advantage.
So, given how accurate the 2016 NL forecasts were, I'm not surprised that the SD of the 2015 NL standings was higher than normal.
Can we quantify how much compressed standings hurt next year's forecasts? I was curious, so I ran a little simulation.
First, I gave every team a random 2015 talent, so that the SD of team talent came out between 8.9 and 9.1 games. Then, I ran a simulated 2015 season. (I ran each team with 162 independent games, instead of having them play each other, so the results aren't perfect.)
Then, I regressed each team's 2015 record to the mean, to get an estimate of its talent. I assumed that I "knew" the SD of talent was around 9, so I "unregressed" each regressed estimate away from the mean by the exact amount that brings the SD of the estimates to exactly 9.00. That became the official forecast for 2016.
Finally, I ran a simulation of 2016 (with team talent being the same as 2015). I compared the actual to the forecast, and calculated the SD of the forecast errors.
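Here's roughly what that procedure looks like in code (a reconstruction from the description above, not the exact script; the regress-then-unregress step is collapsed into a single rescaling, which has the same net effect):

```python
import numpy as np

rng = np.random.default_rng(3)
TEAMS, GAMES = 15, 162

def simulate(talent):
    """Each team plays 162 independent games; talent is P(win)."""
    return rng.binomial(GAMES, talent) / GAMES

def one_trial():
    # Random talent with an SD of about 9 wins (9/162 in win-probability units)
    talent = 0.5 + rng.normal(0, 9 / GAMES, TEAMS)

    year1 = simulate(talent)

    # Regress to the mean, then expand so the estimates have an SD of exactly 9:
    forecast = 0.5 + (year1 - year1.mean()) * (9 / GAMES) / year1.std()

    year2 = simulate(talent)                       # same talent the next season

    sd_standings = year1.std() * GAMES             # spread of year-1 standings
    sd_errors = (year2 - forecast).std() * GAMES   # forecast accuracy
    return sd_standings, sd_errors

results = np.array([one_trial() for _ in range(4000)])
```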
The results came out, I think, very reasonable.
Over 4,000 simulated seasons, the average accuracy was an SD of 7.9. But, the higher the SD of last year's standings, the better the accuracy:
SD of standings    SD of next year's forecast errors
      7.0             8.48  (2015 AL)
     12.6             7.54  (2015 NL)
     20.1             6.29  (1962 NL)
So, by this reckoning, you'd expect the 2016 NL predictions to have been about one win more accurate than the AL predictions.
They were "much more accurater" than that, of course, by 3.4 or 4.5. The main reason, of course, is that there's a lot of luck involved. Less importantly, this simulation is very rough. The model is oversimplified, and there's no assurance that the relationship is actually linear. (In fact, the relationship *can't* be linear, since the "speed of light" limit is 6.4, and the model says the 1974 AL would beat that, at 6.3).
It's just a very rough regression to get a very rough estimate.
But the results seem reasonable to me. In 2016, we had (a) the narrowest standings in baseball history in the 2015 AL, and (b) a wider-than-average, 70th-percentile spread in the 2015 NL. In that light, an expected difference of one win in forecasting accuracy seems very plausible.
So that's my explanation of why this year's NL forecasts were so accurate, while this year's AL forecasts were mediocre. A large dose of luck -- assisted by a small (but significant) dose of extra information content in the standings.
FiveThirtyEight predicted the National League surprisingly accurately this year.
The standard error of their predictions -- that is, the SD of the difference between their team forecasts, and what actually happened -- was only 4.5 games.* (Here's the link to their forecast -- go to the bottom and choose "April 2".)
(* The SD is the square root of the average squared error. If you prefer just the average error, in this case, it was three-and-a-third games. But I'll be using just the SD in the rest of this post. In most cases, to estimate the average error when you only have the SD, you can multiply by the square root of 2/pi (approximately 0.8).)
4.5 games is very, very good. In fact, it's so good it can't possibly be all skill. The "speed of light" limit on forecasting MLB is about 6.4 games. That is, even if you knew absolutely everything about the talent of a team and its opposition, every game, an SD of 6.4 is the very best you could expect to do.
Of course, you can get lucky, and beat 6.4 games. You could even get to zero, if fortune smiles on you and every team hits your projection exactly. But, 6.4 is the best you can do by skill.**
(** Actually, it might be a bit less, 6.3 or something, because 6.4 is what you get when teams are evenly matched ... mismatches are somewhat easier to predict. But never mind.)
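(In case you're wondering where 6.4 comes from: it's binomial luck. Even if you knew a team was exactly .500 in talent, its actual win total over 162 games still varies:)

```python
from math import sqrt

# SD of wins for a true-.500 team over a 162-game season:
print(sqrt(162 * 0.5 * 0.5))   # about 6.36 wins
```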
How unusual is an SD of 4.5? Well, not *that* unusual. By my estimate, the SD of the observed SD -- sorry if that's a little confusing -- is somewhere around 1.7, for a league of 15 teams. So, FiveThirtyEight was a little over one standard deviation lucky, which isn't really a big deal. Even taking into account that FiveThirtyEight couldn't have been perfectly accurate in their talent assessments, it's still not that big a deal. If they were off, on talent, by around 3 games per team, that would bring them to only about 1.5 SDs of luck.
Still not a huge deal, but interesting nonetheless.
It wasn't just FiveThirtyEight whose projections did well ... the Vegas bookmakers did OK too. Well, at least the one I looked at, Bovada. (I assume the others would be pretty close.) They had an SD of 5.5 games, which is also better than the "speed of light." (I can't find the page I got them from, but this one, from a month earlier, is close.)
That suggests that it probably wasn't any particular flash of brilliance from either FiveThirtyEight or Bovada ... it must have been something about the way the season unfolded.
Maybe, in 2016, there was less random deviation than usual? One type of random variation is whether a team exceeds their Pythagorean Projection -- that is, whether they win more (or fewer) games than you'd expect from their runs scored and allowed. To check that, I used Baseball Prospectus's numbers -- specifically, the difference between actual and "first-order wins."***
(*** Why didn't I use second-order wins? See the P.S. at the bottom of the post.)
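For readers who haven't seen it, the classic Pythagorean projection looks like this. (Baseball Prospectus's first-order wins are a refinement, with a variable exponent, so take this exponent-2 version as an illustration only:)

```python
def pythagorean_wins(rs, ra, games=162, exponent=2.0):
    """Classic Pythagorean expectation: win% = RS^x / (RS^x + RA^x)."""
    num = rs ** exponent
    return games * num / (num + ra ** exponent)

# Example: a team scoring 765 runs and allowing 694 projects to about 89 wins.
print(round(pythagorean_wins(765, 694), 1))
```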
In the National League in 2016, the SD of Pythagorean error was 3.55. That is indeed a little smaller than the average of around 4.0. But that small difference isn't nearly enough to explain why the projections were so good.
Here's what I think is the bigger factor -- actually, a combination of two factors.
First, by random chance, the better teams happened to undershoot their Pythagorean expectation, and the worse teams happened to exceed it.
The Cubs were the best team in the league, and also the team with the most bad luck, at -4.8 games. The Phillies were the worst team in the league with luck removed; you'd expect them to have won only 61.4 games, but they played +9.6 games above their Pythagorean projection to go 71-91.
Those two were the most obvious examples, but the pattern continued through the league. Overall, the correlation between first-order wins (which is an approximation of talent) and Pythagorean error was huge: +0.61. Normally, you'd expect it to be close to zero. (In the American League, it was, indeed, close to zero, at -0.06.)
Second, there was a similar, offsetting relationship in the predictions themselves.
It turns out that the forecast errors had a strong pattern this year. Instead of being random, they came out too "conservative" -- they underestimated the talent of the better teams, and overestimated the talent of the worse teams. Here's the distribution of FiveThirtyEight's forecast errors, with the teams sorted by their forecast:
Top 5 teams: average error -4 wins (underestimate)
Mid 5 teams: average error +4 wins (overestimate)
Btm 5 teams: average error +1 win (overestimate)
So, in summary:
-- FiveThirtyEight predicted teams too close to the mean
-- Teams' Pythagorean luck moved them closer to the mean
Those two things cancelled each other out to a significant extent. And that's why FiveThirtyEight was so accurate.
Next post: The American League, which is interesting for completely different reasons.
P.S. Baseball Prospectus also produces "second-order wins," which attempts to remove a second kind of luck, what I call "Runs Created luck" (and others call "cluster luck"), which is teams scoring more or fewer runs than would be expected by their batting line. I started to do that, but ... I stopped, because I found something weird.
When you remove luck from the standings, you expect to make them tighter, to bring teams closer together. (To see that better, imagine removing luck from coin tosses. Every team reverts to .500.)
Removing first-order (Pythagorean) luck does seem to reduce the SD of the standings. But, removing second-order (Cluster) luck seems to do the *opposite*.
I checked four seasons of BP data, and, in every case, the SD of second-order wins (for the set of all 30 teams) was higher than the SD of first-order wins:
       Actual  First-order  Second-order
2016    10.7      10.8         13.1
2015    10.4      10.1         11.8
2014     9.6       8.9          9.6
2013    12.2      12.2         12.8
So, either the good teams got lucky all four years, or there's something weird about how BP is computing second-order luck.
"My only note is that Phil's assuming implicitly in these examples that the two teams' "scoreboard scores" are statistically independent. Under that assumption, what he's written here is right (at least to first order).
However, the "latent score" that underlies the Bradley-Terry-Luce-log5 model is not necessarily the "scoreboard score" that you see on the scoreboard (hence why I'm calling it the "scoreboard score"). It is possible for log5 still to work (or, work better than these calculations suggest) even if scoreboard scores are not extreme-value distributed, if the scoreboard scores aren't independent.
But that is a fine point, because, in fact, the number of types of sport or game in which log5 works precisely is exactly zero. The number of them in which log5 works reasonably well seems to be quite large (depending on how much accuracy you demand)."
Six years ago, Tom Tango described a hypothetical sprinting tournament where log5 didn't seem to give accurate results. I think I have an understanding, finally, of why it doesn't work, thanks to
(a) Ted Turocy,
(b) a paper from Kristi Tsukida and Maya R. Gupta (.pdf; see section 3.4, keeping in mind that "Bradley-Terry model" basically means "log5"), and
(c) this excellent post by John Cook.
(You don't have to follow the links now; I'll give them again later, in context.)
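For reference, here's the log5 formula itself, in its usual probability form. (p_a and p_b are each team's chance of beating an average team; this is equivalent to the odds-ratio manipulation I'll use below:)

```python
def log5(p_a, p_b):
    """log5 estimate of P(A beats B), given each team's win
    probability against a .500 opponent."""
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))
```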
It turns out that the log5 formula makes a certain assumption about the sport, an assumption that makes the log5 formula work out perfectly. That assumption is: that the set of score differentials follows a logistic distribution.
What's the logistic distribution? It's a lot like the normal distribution -- a bell-shaped curve. The two can be so similar in shape that I'd have trouble telling them apart by eye. But the logistic distribution has fatter tails relative to the "bell." In other words: the logistic distribution predicts that rare events, like teams beating expectations by large amounts, will happen more often than the normal distribution would predict. And it predicts that certain common events, like close games, will happen less often.
The log5 formula works perfectly when score differentials are distributed logistically. That's been proven mathematically. But where they aren't actually logistic, the formula will fail, in proportion to how much real life varies from the particular logistic distribution the log5 formula assumes.
That's why the formula didn't work in the sprinting example. Tango explicitly made the results normally distributed. Then, he found that log5 started to break down when the competitors became seriously mismatched. That is: log5 started to fail in the tail of the distribution, exactly where normal and logistic differ the most.
Here's a rudimentary basketball game. Two teams each get 100 possessions, and have a certain probability (talent) of scoring on each possession. Defense doesn't matter, and all baskets are two points.
Suppose you have A, a 55 percent team (expected score: 110), and B, a 45 percent team (expected score: 90). We expect each team's score to be approximately normally distributed, with an SD of almost exactly 10 points for each team's individual score.*
(* This is just the normal approximation to binomial. To get the SD, calculate .45 multiplied by (1-.45) divided by 100. Take the square root. Multiply by 2 since each basket is 2 points. Then, multiply by 100 for the number of possessions. You get 9.95.)
Since the two teams are independent, the SD of the score difference is the square root of 2 times as big as the individual standard deviations. So, the SD of the differential is 14 points.
By talent, A is a 20-point favorite over underdog B. For B to win, it must beat the spread by 20 points. 20 divided by 14 equals 1.42 SD. Under the normal distribution, the probability of getting a result greater than 1.42 SD is 0.0778.
That means B has a 7.78 percent chance of winning, and A a 92.22 chance. The odds ratio for A is 92.22 divided by 7.78, which is 11.85.
So, that's the answer. Now, how well does log5 approximate it? To figure that out, we need to know how A and B each fare against a .500 team.
Let's say Team C is that .500 team, with an expected score of 100. Against C, team A is a 10-point favorite, which is 0.71 SD. The normal distribution table says that's a winning percentage of .7611, which is an odds ratio of 3.186 (.7611 divided by .2389). Similarly, team B is 0.71 SD worse than C, so it wins with probability (1 - .7611), an odds ratio of 1/3.186.
Using log5, what's the estimated odds ratio of A over B? It's A's odds ratio divided by B's -- 3.186 divided by (1/3.186), or 3.186 squared -- which is 10.15. That works out to a win probability of only .910, an underestimate of the .922 true probability.
Log5 estimate: .910, odds ratio 10.15
Correct answer: .922, odds ratio 11.85
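Here's the whole calculation in one place, as a check (a sketch using scipy's normal CDF; everything matches the text to rounding):

```python
from math import sqrt
from scipy.stats import norm

sd_team = 100 * 2 * sqrt(0.45 * 0.55 / 100)   # 9.95 points, per the footnote
sd_diff = sd_team * sqrt(2)                   # 14.07 points for the difference

p_true = norm.cdf(20 / sd_diff)               # .9222 -- the correct answer

p_vs_c = norm.cdf(10 / sd_diff)               # .7611 -- A's chances against C
odds_vs_c = p_vs_c / (1 - p_vs_c)             # 3.186

odds_log5 = odds_vs_c ** 2                    # 10.15 -- log5's estimate
p_log5 = odds_log5 / (1 + odds_log5)          # .910  -- the underestimate
```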
To recap what we just did: we started with the correct, theoretically-derived probabilities of A beating C, and of B beating C. When we plugged those exact probabilities into log5, we got the wrong answer.
Why the wrong answer? Because of the log5 formula's implicit assumption: that the distribution of the score difference between A and B is logistic, rather than normal.
What specific logistic distribution does log5 expect? The one where the mean is the log of the odds ratio and the SD is about 1.8. (Logistic distributions are normally described by a "scale parameter," which is the SD divided by about 1.8 -- pi divided by the square root of 3, to be exact. So, actually, the log5 formula assumes a logistic distribution with a scale parameter of exactly 1.)
So, in this case, the log5 formula treats the score distribution as if it's logistic, with a mean of 2.3 (the log of the log5-calculated odds ratio of 10.15) and an SD of 1.8 (scale parameter 1).
We can rescale that to be more intuitive, more basketball-like, and still get the same probability answer. We just multiply the mean, SD, and scale parameter by a constant. That's like taking a normal curve for height denominated in feet, and multiplying the mean and SD by 12 to convert to inches. The proportion of people under 5 feet in the first curve works out the same as the proportion under 60 inches in the second.
In this case, we want the mean to be 20 (basketball points) instead of 2.3. So, we can multiply both the mean and the scale by 20/2.3. That gives us a new logistic distribution with mean 20 and SD 15.6.
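As a sanity check (my arithmetic, not from the original discussion): that rescaled logistic really does reproduce the log5 answer.

```python
from math import exp, pi, sqrt

mu, sd = 20.0, 15.6                # the rescaled logistic from above
scale = sd * sqrt(3) / pi          # logistic scale parameter, about 8.6

p_a = 1 / (1 + exp(-mu / scale))   # about .91 -- the log5 estimate again
```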
That's reasonably close to the actual normal curve we know is correct:
Normal: mean 20, SD 14
Logistic: mean 20, SD 15.6
It's reasonably close, but still incorrect. In this case, log5 overestimates the underdog in two different ways. First, it assumes a logistic distribution instead of normal, which means fatter tails. Second, it assumes a higher SD than actually occurs, which again means fatter tails and more extreme performances.
Here, I'll give you some stolen visuals, swiped from that great John Cook post I mentioned, which compares and contrasts the two distributions. Here's a comparison of a normal and a logistic distribution with the same SD: