Friday, August 04, 2017

Deconstructing an NBA time-zone regression

Warning: for regression geeks only.

----

Recently, I came across an NBA study that found an implausibly huge effect of teams playing in other time zones. The study uses a fairly simple regression, so I started thinking about what could be happening. 

My point here isn't to call attention to the study, just to figure out the puzzle of how such a simple regression could come up with such a weird result. 

------

The authors looked at every NBA regular-season game from 1991-92 to 2001-02. They tried to predict which team won, using these variables:

-- indicator for home team / season
-- indicator for road team / season
-- time zones east for road team
-- time zones west for road team

The "time zones" variable was set to zero if the game was played in the road team's normal time zone, or if the travel was in the opposite direction (so only one of the two variables can be nonzero in a given game). So, if an east-coast team played on the west coast, the west variable would be 3, and the east variable would be 0.

The team indicators are meant to represent team quality. 

------

When the authors ran the regression, they found the "number of time zones" variable large and statistically significant. For each time zone moving east, teams played .084 better than expected (after controlling for teams). A team moving west played .077 worse than expected. 

That means a .500 road team on the West Coast would actually play .752 ball on the East Coast (three zones east, at .084 per zone). And that's regardless of how long the visiting team has been in the home team's time zone. It could be a week or more into a road trip, and the regression says it's still .752.

The authors attribute the effect to "large, biological effects of playing in different time zones discovered in medicine and physiology research." 

------

So, what's going on? I'm going to try to get to the answer, but I'll start with a couple of dead ends that nonetheless helped me figure out what the regression is actually doing. I should say in advance that I can't prove any of this, because I don't have their data and I didn't repeat their regression. This is just from my armchair.

Let's start with this. Suppose it were true, that for physiological reasons, teams always play worse going west, and teams always play better going east. If that were the case, how could you ever know? No matter what you see in the data, it would look EXACTLY like the West teams were just better quality than the East teams. (Which they have been, lately.)  

To see that argument more easily: suppose the teams on the West Coast are genuine NBA-caliber teams. The MST teams are minor-league AAA. The CST teams are AA. And the East Coast teams are minor league A ball. But all the leagues play against each other.

In that case, you'd see exactly the pattern the authors got: teams are .500 against each other in the same time zone, but worse when they travel west to play against better leagues, and better when they travel east to play against worse leagues.

No matter what results you get, there's no way to tell whether it's time zone difference, or team quality.

So is that the issue, that the regression is just measuring a quality difference between teams in different time zones? No, I don't think so. I believe the "time zone" coefficient of the regression is measuring something completely irrelevant (and, in fact, random). I'll get to that in a bit. 

------

Let's start by considering a slightly simpler version of this regression. Suppose we include all the team indicator variables, but, for now, we don't include the time-zone number. What happens?

Everything works, I think. We get decent estimates of team quality, both home and road, for every team/year in the study. So far, so good. 

Now, let's add a bit more complexity. Let's create a regression with two time zones, "West" and "East," and add a variable for the effect of that time zone change.

What happens now?

The regression will fail. There's an infinite number of possible solutions. (In technical terms, the regression matrix is "singular."  We have "collinearity" among the variables.)

How do we know? Because there's more than one set of coefficients that fits the data perfectly. 

(Technical note: a regression will always fail if you have an indicator variable for every team. To get around this, you'll usually omit one team (and the others will come out relative to the one you omitted). The collinearity I'm talking about is even *after* doing that.)

Suppose the regression spit out that the time-zone effect is actually .080, and it also spit out quality estimates for all the teams.

From that solution, we can find another solution that works just as well. Change the time-zone effect to zero. Then, add .080 to the quality estimate of every West team. 

Every head-to-head estimate will wind up working out exactly the same. Suppose, in the first result, the Raptors were .400 on the road, the Nuggets were .500 at home, and the time-zone effect is .080. In that case, the regression will estimate the Raptors at .320 against the Nuggets. (That's .400 - (.500 - .500) - .080.)

In the second result, the regression leaves the Raptors at .400, but moves the Nuggets to .580, and the time-zone effect to zero. The Raptors are still estimated at .320 against the Nuggets. (This time, it's .400 - (.580 - .500) - .000.)

You can create as many other solutions as you like that fit the data identically: just add any X to the time-zone estimate, and add the same X to every Western team.

The regression is able to figure out that the data doesn't give a unique solution, so it craps out, with a message that the regression matrix is singular.
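To make the collinearity concrete, here's a toy version in Python (my own sketch, not the authors' setup): a four-team, two-zone league where every ordered (road, home) pair plays once, with an intercept, team indicators (one reference team omitted, as the technical note describes), and the two travel variables. The design matrix comes up short of full rank, which is exactly the "singular matrix" failure.

```python
import numpy as np
from itertools import permutations

# Toy two-zone league: two East teams, two West teams, every ordered
# (road, home) pair playing once.
teams = ["East0", "East1", "West0", "West1"]
west = {"West0", "West1"}

rows = []
for road, home in permutations(teams, 2):
    row = {"intercept": 1.0}
    # Team indicators, with East0 omitted as the reference team.
    for t in teams[1:]:
        row["road_" + t] = 1.0 if road == t else 0.0
        row["home_" + t] = 1.0 if home == t else 0.0
    # The travel variables: did the road team cross the zone boundary?
    row["went_west"] = 1.0 if (road not in west and home in west) else 0.0
    row["went_east"] = 1.0 if (road in west and home not in west) else 0.0
    rows.append(row)

cols = list(rows[0].keys())
X = np.array([[r[c] for c in cols] for r in rows])

# The rank comes out smaller than the number of columns, so the matrix
# is singular and least squares has no unique solution.
print(X.shape, np.linalg.matrix_rank(X))
```

The offending combination is the one described in the text: shift every West team's road and home indicators one way, and the two travel variables the other way, and every row of the matrix cancels to zero.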

------

All that was for a regression with only two time zones. If we now expand to include all four zones, that gives six different effects in each direction (E moving to C, C to M, M to P, E to M, C to P, and E to P). What if we include six time-zone variables, one for each effect?

Again, we get an infinity of solutions. We can produce new solutions almost the same way as before. Just take any solution, subtract X from each E team quality, and add X to the E-C, E-M and E-P coefficients. You wind up with the same estimates.

------

But the authors' regression actually did have one unique best fit solution. That's because they did one more thing that we haven't done.

We can get to their regression in two steps.

First, we collapse the six variables into three -- one for "one time zone" (regardless of which zone it is), one for "two time zones," and one for "three time zones". 

Second, we collapse those three variables into one, "number of time zones," which implicitly forces the two-zone effect and three-zone effect to be double and triple, respectively, the value of the one-zone effect. I'll call that the "x/2x/3x rule" and we'll assume that it actually does hold.

So, with the new variable, we run the regression again. What happens?

In the ideal case, the regression fails again. 

By "ideal case," I mean one where all the error terms are zero, where every pair of teams plays exactly as expected. That is, if the estimates predict the Raptors will play .350 against the Nuggets, they actually *do* play .350 against the Nuggets. It will never happen that every pair will go perfectly in real life, but maybe assume that the dataset is trillions of games and the errors even out.

In that special "no errors" case, you still have an infinity of solutions. To get a second solution from a first, you can, for instance, double the time zone effects from x/2x/3x to 2x/4x/6x. Then, subtract x from each CST team, subtract 2x from each MST team, and subtract 3x from each PST team. You'll wind up with exactly the same estimates as before.
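To check that bookkeeping, here's a quick sketch with made-up ratings, collapsing the east and west effects into one signed "zones traveled" term: double the effect to 2x/4x/6x, subtract x, 2x, 3x from the CST, MST, and PST teams, and every prediction comes out untouched.

```python
# Made-up ratings for one road team and one home team in each zone.
# Zones are indexed eastward-to-westward: 0=EST, 1=CST, 2=MST, 3=PST.
road = {0: 0.400, 1: 0.450, 2: 0.500, 3: 0.550}
home = {0: 0.520, 1: 0.480, 2: 0.510, 3: 0.490}

def predict(road_rating, home_rating, k_road, k_home, x):
    # Road team's expected pct: its road rating, adjusted for home-team
    # strength, minus x per time zone traveled west (plus x per zone east).
    return road_rating - (home_rating - 0.500) - x * (k_home - k_road)

x = 0.080
for k_r in range(4):
    for k_h in range(4):
        p1 = predict(road[k_r], home[k_h], k_r, k_h, x)
        # Alternate solution: double the effect to 2x/4x/6x, and subtract
        # x, 2x, 3x from the ratings of CST, MST, PST teams respectively.
        p2 = predict(road[k_r] - x * k_r, home[k_h] - x * k_h, k_r, k_h, 2 * x)
        assert abs(p1 - p2) < 1e-12

print("every zone pairing gets the identical prediction either way")
```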

-------

For this particular regression to not crap out, there have to be errors. Which is not a problem for any real dataset. The Raptors certainly won't go the exact predicted .350 against the Nuggets, either because of luck, or because it's not mathematically possible (you'd need to go 7-for-20, and the Raptors aren't playing 20 games a season in Denver).

The errors make the regression work.

Why? Before, x/2x/3x fit all the observations perfectly. So you could create duplicate solutions by shifting the team estimates by X, 2X, or 3X depending on zone, and shifting the zone effects to compensate. Now, because of errors, not all the observed two-zone effects are exactly double the one-zone effects. So not everything cancels out, and different candidate solutions leave different residuals.

That means that this time there's a unique solution, and the regression spits it out.

-------

In this new, valid, regression, what's the expected value of the estimate for the time-zone effect?

I think it must be zero.

The estimate of the coefficient is a function of the observed error terms in the data. But the errors are, by definition, just as likely to be negative as positive. I believe (but won't prove) that if you reverse the signs of all the error terms, you also reverse the sign of the time zone coefficient estimate.

So, the coefficient is as likely to be negative as positive, which means by symmetry, its expected value must be zero.

In other words: the coefficient in the study, the one that looks like it's actually showing the physiological effects of changing time zone ... is actually completely random, with expected value zero.

It literally has nothing at all to do with anything basketball-related!

-------

So, that's one factor that's giving the weird result, that the regression is fitting the data to randomness. Another factor, and (I think) the bigger one, is that the model is wrong. 

There's an adage, "All models are wrong; some models are useful." My argument is that this model is much too wrong to be useful. 

Specifically, the "too wrong" part is the requirement that the time-zone effect must be proportional to the number of zones -- the "x/2x/3x" assumption.

It seems like a reasonable assumption, that the effect should be proportional to the time lag. But, if it's not, that can distort the results quite a bit. Here's a simplified example showing how that distortion can happen.

Suppose you were to run the regression without the time-zone coefficient, and you get talent estimates for the teams, and you look at the errors in predicted vs. actual. For East teams, you find the errors are

+.040 against Central
+.000 against Mountain
-.040 against Pacific

That means that East teams played .040 better than expected against Central teams (after adjusting for team quality). They played exactly as expected against Mountain Time teams, and .040 worse than expected against West Coast teams.

The average of those numbers is zero. Intuitively, you'd look at those numbers and think: "Hey, there's no appreciable time-zone effect. Sure, the East teams lost a little more than normal against the Pacific teams, but they won a little more than normal against the Central teams, so it's mostly a wash."

Also, you'd notice that it really doesn't look like the observed errors follow x/2x/3x. The closest fit seems to be when you make x equal to zero, to get 0/0/0.

So, does the regression see that and spit out 0/0/0, accepting the errors it found? No. It actually finds a way to make everything fit perfectly!

To do that, it increases its estimates of every Eastern team by .080. Now, every East team appears to underperform by .080 against each of the three other time zones. Which means the observed errors are now 

-.040 against Central
-.080 against Mountain
-.120 against Pacific

And that DOES follow the x/2x/3x model -- which means you can now fit the data perfectly. Using 0/0/0, the .500 Raptors were expected to be .500 against an average Central team (.500 minus 0), but they actually went .540. Using -.040/-.080/-.120, the .580 Raptors are expected to be .540 against an average Central team (.580 minus .040), and that's exactly what they did.

So the regression says, "Ha! That must be the effect of time zone! It follows the x/2x/3x requirement, and it fits the data perfectly, because all the errors now come out to zero!"
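Here's the contrived Raptors example reduced to a few lines of Python, showing both readings of the same observations:

```python
# Observed performance of the East team against an average team in each
# zone, from the contrived example above (zones west: 1=CST, 2=MST, 3=PST).
actual = {1: 0.540, 2: 0.500, 3: 0.460}

# Reading 1: rate the team .500 with no time-zone effect, and accept
# residuals of +.040 / .000 / -.040.
resid_flat = {z: actual[z] - 0.500 for z in actual}

# Reading 2: rate the team .580 and impose an x/2x/3x effect of -.040
# per zone, the way the regression does.
resid_x = {z: actual[z] - (0.580 - 0.040 * z) for z in actual}

print(resid_flat)  # +.040 / .000 / -.040 (within floating-point fuzz)
print(resid_x)     # all essentially zero: x/2x/3x "fits perfectly"
```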

So you conclude that 

(a) over a 20-year period, the East teams were .580 teams but played down to .500 because they suffered from a huge time-zone effect.

Well, do you really want to believe that? 

You have at least two other options you can justify: 

(b) over a 20-year period, the East teams were .500 teams and there was a time-zone effect of +40 points playing in CST, and -40 points playing in PST, but those effects weren't statistically significant.

(c) over a 20-year period, the East teams were .500 teams and due to lack of statistical significance and no obvious pattern, we conclude there's no real time-zone effect.

The only reason to choose (a) is if you are almost entirely convinced of two things: first, that x/2x/3x is the only reasonable model to consider, and, second, that 40/80/120 points is plausible enough to not assume that it's just random crap, despite the statistical significance.

You have to abandon your model at this point, don't you? I mean, I can see how, before running the regression, the x/2x/3x assumption seemed as reasonable as any. But, now, to maintain that it's plausible, you have to also believe it's plausible that an Eastern team loses .120 points of winning percentage when it plays on the West Coast. Actually, it's worse than that! The .120 was from this contrived example. The real data shows a drop of more than .200 when playing on the West Coast!

The results of the regression should change your mind about the model, and alert you that the x/2x/3x is not the right hypothesis for how time-zone effects work.

-------

Does this seem like cheating? We try a regression, we get statistically-significant estimates, but we don't like the result so we retroactively reject the model. Is that reasonable?

Yes, it is. Because you have to either reject the model, or accept its implications. IF we accept the model, then we're forced to accept that there's a 240-point West-to-East time-zone effect, and we're forced to accept that West Coast teams that play at a 41-41 level against other West Coast teams somehow raise their game to the 61-21 level against East Coast teams that are equal to them on paper.

Choosing the x/2x/3x model led you to an absurd conclusion. Better to acknowledge that your model, therefore, must be wrong.

Still think it's cheating? Here's an analogy:

Suppose I don't know how old my friend's son is. I guess he's around 4, because, hey, that's a reasonable guess, from my understanding of how old my friend is and how long he's been married. 

Then, I find out the son is six feet tall.

It would be wrong for me to keep my assumption, wouldn't it? I can't say, "Hey, on the reasonable model that my friend's son is four years old, the regression spit out a statistically significant estimate of 72 inches. So, I'm entitled to conclude my friend's son is the tallest four-year-old in human history."

That's exactly what this paper is doing.  

When your model spews out improbable estimates for your coefficients, the model is probably wrong. To check, try a different, still-plausible model. If the result doesn't hold up, you know the conclusions are the result of the specific model you chose. 

------

By the way, if the statistical significance is concerning you, consider this. When the authors repeated the analysis for a later group of years, the time-zone effect was much smaller. It was .012 going east and -.008 going west, which wasn't even close to statistical significance. 

If the study had combined both samples into one, it wouldn't have found significance at all.

Oh, and, by the way: it's a known result that when you have strong correlation in your regression variables (like here), you get wide confidence intervals and weird estimates (like here). I posted about that a few years ago.  

-------

The original question was: what's going on with the regression, that it winds up implying that a .500 team on the West Coast is a .752 team on the East Coast?

The summary is: there are three separate things going on, all of which contribute:

1.  there's no way to disentangle time zone effects from team quality effects.

2.  the regression only works because of random errors, and the estimate of the time-zone coefficient is only a function of random luck.

3.  the x/2x/3x model leads to conclusions that are too implausible to accept, given what we know about how the NBA works. 

-----

UPDATE, August 6/17: I got out of my armchair and built a simulation. The results were as I expected. The time-zone effect I built in wound up absorbed by the team constants, and the time-zone coefficient varied around zero in multiple runs.


Friday, June 23, 2017

Juiced baseballs, part II

Last post, I showed how MGL found the ball-to-ball variation (SD) of MLB baseballs to be about 7 feet of carry on a typical fly ball. I wondered whether that was truly the case, or whether some of it wasn't real, just imprecision due to measurement error.

After some Twitter conversations that led me to other sources, I'm leaning to the conclusion that the variance is real.

------

Two of the three measurements in MGL's study (co-authored with Ben Lindbergh) were the circumference of the baseball and its average seam height. For both of those factors, the higher the measure, the more air resistance, and therefore the shorter the distance travelled.

It occurred to me -- why not measure distance directly, if that's what you're interested in? MGL told me, on Twitter, that that's been done. I found one study via a Google search (a study that Kevin later linked to in a comment).

That study took two boxes of one dozen MLB balls each, fired the balls from a cannon one by one, and observed how far each travelled. Crucially, the authors adjusted that distance for the initial speed and angle, because the cannon itself produces variations in initial conditions. So, what remains is mostly about the ball.

For one of the two boxes, the balls varied (SD) by 8 feet. For the second box, the SD was only 3 feet.

It's still possible that some of that variation is due to initial conditions that weren't controlled for, like small fluctuations in temperature, or air movement within the flight path, or whatever. Fortunately, the authors repeated the procedure, but for a single ball fired multiple times. 

The SD for the single ball was 3 feet.

Using the usual method, we know

SD(different balls)^2 = SD(single ball)^2 + SD(ball differences)^2

That means for the first box, we estimate that the balls vary by 7 feet. For the second box, it's 0 feet. That's a big difference. Fortunately again, the authors repeated the procedure for different types of balls.

NCAA balls have higher seams and therefore less carry. The study found an overall SD of 11 feet, and single ball variation of 2 feet. That means different balls vary by an expected 10.8 feet, which I'll round to 11. 

For minor league balls, the study found an SD of 8 feet overall, but didn't test single balls. Taking 3 feet as a representative estimate for single-ball variation, we get that MiLB balls vary by 7 feet. (8 squared minus 3 squared equals 7 squared, roughly.)
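All of those back-outs are the same quadrature subtraction; here's a quick sketch in Python (taking 3 feet as the assumed single-ball SD for the MiLB case, as above):

```python
import math

def ball_to_ball_sd(overall_sd, single_ball_sd):
    """Back out the SD due to ball differences, assuming single-ball
    (repeat-firing) variation is independent of between-ball variation."""
    return math.sqrt(max(overall_sd ** 2 - single_ball_sd ** 2, 0.0))

print(round(ball_to_ball_sd(8, 3), 1))   # MLB box 1: ~7.4 feet
print(round(ball_to_ball_sd(3, 3), 1))   # MLB box 2:  0.0 feet
print(round(ball_to_ball_sd(11, 2), 1))  # NCAA:     ~10.8 feet
print(round(ball_to_ball_sd(8, 3), 1))   # MiLB (single-ball SD assumed 3)
```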

So we have:

-- MLB  balls vary  0 feet in air resistance
-- MLB  balls vary  7 feet in air resistance
-- MiLB balls vary  7 feet in air resistance
-- NCAA balls vary 11 feet in air resistance

In that light, the 7 feet found in MGL's study doesn't seem out of line. Actually, that 7 feet is a bit of an overestimate. It includes variation in COR (bounciness), which doesn't factor into air resistance, as far as I can tell. Limiting only to air resistance, MGL's study found an SD of only 6 feet.

-----

One thing I noticed in the MGL data is that even for balls within the same era, the COR "bounciness" measure correlates substantially with both circumference (-.46 overall) and seam height (-.35 overall). (For the 10 balls after the 2016 All-Star break, it's -.36 and -.56, respectively.)

I don't know if those measures are related on some kind of physics basis, or if it's just coincidence that they varied together that way. 

-----

One thing I wonder: are balls within the same batch (whether the definition of "batch" is a box, a case, or a day's production) more uniform than balls from different batches? I haven't found a study that tells us that. From MGL's data, and treating day of use as a "batch," my eyeballs say batches are slightly more uniform than expected, but not much. My eyeballs could be wrong.

If batches *are* more uniform, teams could get valuable information by grabbing a few balls from today's batch, and getting them tested in advance. They'd be more likely to know, then, if they were dealing with livelier or deader balls that night.

Even if there's no difference within batches compared to between batches, it's still worth the testing. I don't know if any teams actually did this, but if any of them were testing balls in 2016, they'd have had advance knowledge that the balls were getting livelier. 

I have no idea what a team would do with that information, that home runs were about to jump significantly over last year ... but you'd think it would be valuable some way.

-----

MGL tweeted, and I agreed, that it doesn't take much variation in a ball to make a huge difference to home run rates. He also thinks that any change in liveliness is likely to have been inadvertent on the part of the manufacturer, since it takes so little to make balls fly farther. I agree with that too.

But, why are MLB standards so lenient? As Lindbergh quotes from an earlier report,


" ... two baseballs could meet MLB specifications for construction but one ball could be theoretically hit 49.1 feet further."

Why doesn't MLB just put tighter control on the baseballs it uses? If the manufacturers can't make baseballs that precise, just put out a net at a standard distance, fire all the balls, and discard (or save for batting practice) all the balls that land outside the net. (That can't be so hard, can it? It can't be that the cannon would damage the balls too much, since MLB reuses balls that have been hit for line drives, which is a much more violent impact.)

You could even assign the balls to different liveliness groups, and require that different batches be stored at different humidor settings to equalize their bounciness.

Even if that's not practical, couldn't MLB, at least, test the balls regularly, so as to notice the variation before it shows up so obviously in the HR totals?

-----

Finally, one last thought I had. If a ball is hit for a deep fly ball, doesn't that suggest that, at least as a matter of probability, it's juicier than average? If I were the pitching team, I might not want to pitch that ball again. It might be an expected difference of only a foot or two, but every little bit helps.


Monday, June 19, 2017

Are some of today's baseballs twice as lively as others?

Over at The Ringer, Ben Lindbergh and Mitchel Lichtman (MGL) claim to have evidence of a juiced ball in MLB.

They got the evidence in the most direct way possible -- by obtaining actual balls, and having them tested. MLB sells some of their game-used balls directly to the public, with certificates of authenticity that include the date and play in which the ball was used. MGL bought 36 of those balls, and sent them to a lab for testing.

It never once occurred to me that you could do that ... so simple an idea, and so ingenious! Kudos to MGL. I wonder why mainstream sports journalists didn't think of it? It would be trivial for Sports Illustrated or ESPN to arrange for that.

Anyway ... it turned out that the 13 more recent balls -- the ones used in 2016 -- were indeed "juicier" than the 10 older balls used before the 2015 All-Star break. Differences in COR (Coefficient of Restitution, a measure of "bounciness"), seam height, and circumference were all in the expected "juicy" direction in favor of the newer baseballs. (The difference was statistically significant at 2.6 SD.)

The article says,


"While none of these attributes in isolation could explain the increase in home runs that we saw in the summer of 2015, in combination, they can."

If I read that right, it means the magnitude of the difference in the balls matches the magnitude of the increase in home runs. The sum of the three differences translated to the equivalent of 7.1 feet in fly ball distance.

The authors posted the results of the lab tests, for each of the 36 balls in the study; you can find their spreadsheet here.

-------

One thing I noticed: there sure is a lot of variation between balls, even within the same era, even used on the same day. Consider, for instance, the balls marked "MSCC0041" and "MSCC0043," both used on June 15, 2016.

The "43" ball had a COR of .497, compared to .486 for the "41" ball. That's a difference of 8 feet (I extrapolated from the chart in the article).

The "43" ball had a seam height of .032 inches, versus .046 for the other ball. That's a difference of *17 feet*.

The "43" ball had a circumference of 9.06 inches, compared to 9.08. That's another 0.5 feet.

Add those up, and you get that one ball, used the same day as another, was twenty-five feet livelier.

If 7.1 feet (what MGL observed between seasons) is worth, say, 30 percent more home runs, then the 25 foot difference means the "43" ball is worth DOUBLE the home runs of the "41" ball. And that's for two balls that look identical, feel identical, and were used in MLB game play on exactly the same day.
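The "double" conclusion just scales up the between-era relationship. Here's the arithmetic, under two different ways the scaling might work; both mappings from feet to home runs are my assumption, not anything measured:

```python
# If 7.1 extra feet of carry was worth ~30% more home runs (the
# between-era difference), scale that up to the 25-foot gap between
# the two same-day balls.
feet_per_step, hr_boost = 7.1, 0.30
gap = 25.0

linear = 1 + hr_boost * (gap / feet_per_step)         # proportional scaling
compounded = (1 + hr_boost) ** (gap / feet_per_step)  # multiplicative scaling

print(round(linear, 2))      # ~2.06x
print(round(compounded, 2))  # ~2.52x: either way, roughly double or more
```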

-----

That 25-foot difference is bigger than typical, because I chose a relative outlier for the example. But the average difference is still pretty significant. Even within eras, the SD of difference between balls (adding up the three factors) is 7 or 8 feet.

Which means, if you take two random balls used on the same day in MLB, on average, one of them is *40 percent likelier* to be hit for a home run.

Of course, you don't know which one. If it were possible to somehow figure it out in real time during a game, what would that mean for strategy?


-----

UPDATE: thinking further ... could it just be that the lab tests aren't that precise, and the observed differences between same-era balls are mostly random error? 

That would explain the unintuitive result that balls vary so hugely, and it would still preserve the observation that the eras are different.


Thursday, May 25, 2017

Pete Palmer on luck vs. skill

Pete Palmer has a new article on skill and luck in baseball, in which he crams a whole lot of results into five pages. 

It's called "Calculating Skill and Luck in Major League Baseball," and appears in the new issue of SABR's "Baseball Research Journal."  It's downloadable only by SABR members at the moment, but will be made publicly available when the next issue comes out this fall.

For most of the results, Pete uses what I used to call the "Tango method," which I should call the "Palmer method," because I think Pete was actually the first to use it in the context of sabermetrics, in the 2005 book "Baseball Hacks."  (The mathematical method is very old; Wikipedia says it's the "Bienaymé formula," discovered in 1853. But its use in sabermetrics is recent, as far as I can tell.)

Anyway, to go through the method yet one more time ... 

Pete found that the standard deviation (SD) of MLB season team wins, from 1981 to 1990, was 9.98. Mathematically, you can calculate that the expected SD of luck is 6.25 wins. Since a team's wins is the total of (a) its expected wins due to talent, and (b) deviation due to luck, the 1853 formula says

SD(actual)^2 = SD(talent)^2 + SD(luck)^2

Subbing in the numbers, we get

9.98 ^ 2 = SD(talent)^2 + 6.25^2 

Which means SD(talent) = 7.78.
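In Python, the whole calculation is a couple of lines (the 6.25 is Pete's binomial-luck figure, roughly the square root of 0.25 times games played):

```python
import math

sd_actual = 9.98  # SD of MLB team season wins, 1981 to 1990
sd_luck = 6.25    # expected SD of binomial luck over a season

# The 1853 formula, rearranged: talent spread is what's left over
# after subtracting luck in quadrature.
sd_talent = math.sqrt(sd_actual ** 2 - sd_luck ** 2)
print(round(sd_talent, 2))  # 7.78
```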

In terms of the variation in team wins for single seasons from 1981 to 1990, we can estimate that differences in skill were only slightly more important than differences in luck -- 7.8 games to 6.3 games.

------

That 7.8 is actually the narrowest spread of team talent for any decade. The spread in team skill had been narrowing since the beginning of baseball, but seems to have widened a bit since 1990. Here's part of Pete's table:

decade 
ending   SD(talent)
-------------------
 1880     9.93
 1890    14.44
 1900    14.72
 1910    15.33
 1920    13.06
 1930    12.51
 1940    13.66
 1950    12.99
 1960    11.95
 1970    11.17
 1980     9.75
 1990     7.78
 2000     8.46
 2010     9.87
 2016     8.91

Anyway, we've seen that many times, in various forms (although perhaps not by decade). But that's just the beginning of what Pete provides. I don't want to give away his entire article, but here some of the findings I hadn't seen before, at least not in this form:

1. For players who had at least 300 PA in a season, the spread in their batting averages is caused roughly equally by luck and by skill.

2. Switching from BA to NOPS (normalized on-base plus slugging), skill now surpasses luck, by an SD of 20 points to 15.

3. For pitchers with 150 IP or more, luck and skill are again roughly even.

In the article, these are broken down by decade. There's other stuff too, including comparisons with the NBA and NFL (OK, that's not new, but still). Check it out if you can.

-------

OK, one thing that surprised me. Pete used simulations to estimate the true talent of teams, based on their W-L record. For instance, teams who win 95-97 games are, on average, 5.6 games lucky -- they're probably 90 or 91-win talents rather than 96.

That makes sense, and is consistent with other studies that tried to figure out the same thing. But Pete went one step further: he found actual teams that won 95-97 games, and checked how they did next year.

For the year in question, you'd expect them to have been 91-win teams. For the following year, you'd expect them to be *worse* than 91 wins, though, because team talent tends to revert to .500 over the medium term, unless you're a Yankee dynasty or something.

But ... for those teams, the difference was only six-tenths of a win. Instead of being 91 wins (90.8), they finished with an average of 90.2.

I would have thought the difference would have been more than 0.6 wins. And it's not just this group. For teams who finished between 58 and 103 wins, no group regressed more than 1.8 wins beyond their luck estimate. 

I guess that makes sense, when you think about it. A 90-win team is really an 87-win talent. If they regress to 81-81 over the next five seasons, that's only about one win per year. It's my intuition that was off, and it took Pete's chart to make me see that.


Wednesday, May 17, 2017

The hot hand debate vs. the clutch hitting debate

In the "hot hand" debate between Guy Molyneux and Joshua Miller I posted about last time, I continue to accept Guy's position, that "the hot hand has a negligible impact on competitive sports outcomes."

Josh's counterargument is that some evidence for a hot hand has emerged, and it's big. That's true: after correcting for the error in the Gilovich paper, Miller and co-author Adam Sanjurjo did find evidence for a hot hand in the shooting data of Gilovich's experiment. They also found a significant hot hand in the NBA's three-point shooting contest.

I still don't believe that those necessarily suggest a similar hot hand "in the wild" (as Guy puts it), especially considering that to my knowledge, none has been found in actual games. 

As Guy says,


"Personally, I find it easy to believe that humans may get into (and out of) a rhythm for some extremely repetitive tasks – like shooting a large number of 3-point baskets. Perhaps this kind of “muscle memory” momentum exists, and is revealed in controlled experiments."

-------

Of course, I keep an open mind: maybe players *do* get "hot" in real game situations, and maybe we'll eventually see evidence for it. 

But ... that evidence will be hard to find. As I have written before, and as Josh acknowledges himself, it's hard to pinpoint when a "hot hand" actually occurs, because streaks happen randomly without the player actually being "hot."

I think I've used this example in the past: suppose you have a player who is a 50 percent shooter when he's normal, but turns into a 60 percent shooter when he's "hot," which is one-tenth of the time. His overall rate is 51 percent.

Suppose that player makes three consecutive shots. Does that mean he's in his "hot" state? Not necessarily. Even when he's "normal," he's going to have times where he makes three consecutive shots just by random luck. And since he's "normal" nine times as often as he's "hot," the normal streaks will outweigh the hot streaks.

Specifically, only about 16 percent of three-hit streaks will come when the player is hot. In other words, about five out of six streaks are false positives.

(Normally, he makes three consecutive shots one time in 8. Hot, he makes three consecutive shots one time in 4.63. In 100 three-shot sequences, he'll be "normal" 90 times, for an average of 11.25 streaks. In his 10 "hot" sequences, he'll average 2.16 streaks. That's about a 5:1 ratio of false positives to true ones.)

Averaging the real hotness with the fake hotness, the player will shoot about 51.6 percent after a streak. But his overall rate is 51.0 percent. It takes a huge sample size to notice the difference between 51 percent and 51.6 percent.
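That arithmetic is just Bayes' rule, and it's easy to check directly. Here's a quick sketch, using the hypothetical numbers from above (a 50 percent shooter who becomes a 60 percent shooter one-tenth of the time):

```python
# Hypothetical hot-hand model from the text: a 50 percent shooter
# who becomes a 60 percent shooter 10 percent of the time.
p_normal, p_hot = 0.50, 0.60
freq_hot = 0.10

# Probability of making three consecutive shots in each state
streak_normal = p_normal ** 3      # 1/8
streak_hot = p_hot ** 3            # about 1 in 4.63

# Bayes' rule: chance the player is actually "hot," given a streak
p_hot_given_streak = (freq_hot * streak_hot) / (
    freq_hot * streak_hot + (1 - freq_hot) * streak_normal)
print(round(p_hot_given_streak, 3))   # 0.161 -- about 5 of 6 streaks are false positives

# Expected shooting percentage on the shot after a streak
next_shot = p_hot_given_streak * p_hot + (1 - p_hot_given_streak) * p_normal
print(round(next_shot, 3))            # 0.516, versus an overall rate of 0.510
```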

Even if you do notice a difference, does it really make an impact on game decisions? Are you really going to give the player the ball more because his expectation is 0.6 points higher, for an indeterminate amount of time?

-------

And that's my main disagreement with Josh's argument. I do acknowledge his finding that there's evidence of a "muscle memory" hot hand, and it does seem reasonable to think that if there's a hot hand in one circumstance, there's probably one in real games. After all, *part* of basketball is muscle memory ... maybe it fades when you don't take shots in quick succession, but it still seems plausible that maybe, some days you're more calibrated than others. If your muscles and brain are slightly different each day, or even each quarter, it's easy to imagine that some days, the mean of your instinctive shooting motion is right on the money, but, other days, it's a bit short.

But the argument isn't really about the *existence* of a hot hand -- it's about the *size* of the hot hand, whether it makes a real difference in games. And I think Guy is right that the effect has to be negligible. Because, even if you have a very large change in talent, from 50 percent to 60 percent -- and a significant frequency of "hotness," 10 percent of the time -- you still only wind up with about a 0.6-point increased expectation after a streak of three hits.

You could argue that, well, maybe 50 to 60 percent understates the true effect ... and you could get a stronger signal by looking at longer streaks.

That's true. But, to me, that argument actually *hurts* the case for the hot hand. Because, with so much data available, and so many examples of long streaks, a signal of high-enough strength should have been found by now, no? 


-------

This debate, it seems to me, echoes the clutch hitting debate almost perfectly.

For years, we framed the state of the evidence as "clutch hitting doesn't exist," because we couldn't find any evidence of signal in the noise. Then, a decade ago, Bill James published his famous "Underestimating the Fog" essay, in which he argued (and I agree) that you can't prove a negative, and the "fog" is so thick that there could, in fact, be a true clutch hitting talent, that we have been unable to notice.

That's true -- clutch hitting talent may, in fact, exist. But ... while we can't prove it doesn't exist, we CAN prove that if it does exist, it's very small. My study (.pdf) showed the spread (SD) among hitters would have to be less than 10 points of batting average (.010). "The Book" found it to be even smaller, .008 of wOBA (a metric that includes all offensive components, but is scaled to look like on-base percentage). 

In my experience, a sizable part of the fan community seizes on the "clutch hitting could be real" finding, but ignores the "clutch hitting can't be any more than tiny" finding. 

The implicit logic goes something like, 

1. Bill James thinks clutch hitting exists!
2. My favorite player came through in the clutch a lot more than normal!
3. Therefore, my favorite player is a clutch hitter who's much better than normal when it counts!

But that doesn't follow. Most strong clutch hitting performances will happen because of luck. Your great clutch hitting performance is probably a false positive. Sure, a strong clutch performance is more likely to happen given that a player is truly clutch, but, even then, with an SD of 10 points, there's no way your .250 hitter who hit .320 in the clutch is anything near a .320 clutch hitter. If you did the math, maybe you'd find that you should expect him to be .253, or something. 
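The shrinkage arithmetic works like this. Here's a rough sketch; the 150 clutch at-bats is my own illustrative assumption, and the .010 talent SD is the figure from my study above:

```python
# Hypothetical example: a .250 hitter who hit .320 over (say) 150 clutch at-bats
true_avg, clutch_avg, clutch_ab = 0.250, 0.320, 150

var_talent = 0.010 ** 2        # spread of clutch talent: SD of 10 points of average
var_luck = 0.25 / clutch_ab    # binomial noise, approximating p(1-p) as 0.25

# Shrink the observed clutch performance toward the player's normal level,
# weighting by how much of the observed spread is real talent
weight = var_talent / (var_talent + var_luck)
estimate = true_avg + weight * (clutch_avg - true_avg)
print(round(estimate, 3))      # 0.254 -- barely above his normal .250
```

Almost all of the 70-point clutch gap gets attributed to luck, because the talent spread is so small relative to the noise.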

Well, it's the same here, with the hot hand:

1. Miller and Sanjurjo found a real hot hand!
2. Therefore, hot hand is not a myth!
3. My favorite player just hit his last five three-point attempts!
4. Therefore, my player is hot and they should give him the ball more!

Same bad logic. Most streaks happen because of luck. The streak you just saw is probably a false positive. Sure, streaks will happen given that a player truly has a hot hand, but, even then, given how small the effect must be, there's no way your usual 50-percent-guy is anything near superstar level when hot. If you had the evidence and did the math, maybe you'd find that you should expect him to be 52 percent, or something.

-------

For some reason, fans do care about whether clutch hitting and the hot hand actually happen, but *don't* care how big the effect is. I bet psychologists have a cognitive fallacy for this, the "Zero Shades of Grey" fallacy or the "Give Them an Inch" fallacy or the "God Exists Therefore My Religion is Correct" fallacy or something, where people are unwilling to believe something into existence -- but, once given license to believe, are willing to assign it whatever properties their intuition comes up with.

So until someone shows us evidence of an observable, strong hot hand in real games, I would have to agree with Guy:


"... fans’ belief in the hot hand (in real games) is a cognitive error."  

The error is not in believing the hot hand exists, but in believing the hot hand is big enough to matter. 

Science may say there's a strong likelihood that intelligent life exists on other planets -- but it's still a cognitive error to believe every unexplained light in the sky is an alien flying saucer.




Wednesday, April 26, 2017

Guy Molyneux and Joshua Miller debate the hot hand

Here's a good "hot hand" debate between Guy Molyneux and Joshua Miller, over at Andrew Gelman's blog.

A bit of background, if you like, before you go there.

-----

In 1985, Thomas Gilovich, Robert Vallone, and Amos Tversky published a study refuting the "hot hand" hypothesis, which is the assumption that after a player has recently performed exceptionally well, he is likely to be "hot" and continue to perform exceptionally well.

The Gilovich [et al] study showed three results:

1. NBA players were actually *worse* after recent field goal successes than after recent failures;

2. NBA players showed no significant correlation between their first free throw and second free throw; and

3. In an experiment set up by Gilovich, which involved long series of repeated shots by college basketball players, there was no significant improvement after a series of hits.

-----

Then, in 2015-2016, Joshua Miller and Adam Sanjurjo found a flaw in Gilovich's reasoning. 

The most intuitive way to describe the flaw is this:

Gilovich assumed that if a player shot (say) 50 percent over the full sequence of 100 shots, you'd expect him to shoot 50 percent after a hit, and 50 percent after a miss.

But this is clearly incorrect. If a player hit 50 out of 100, then, if he made his (or her) first shot, what's left is 49 out of 99. You wouldn't expect 50%, then, but only about 49.5%. And, similarly, you'd expect 50.5% after a miss.

By assuming 50%, the Gilovich study set the benchmark too high, and would call a player cold or neutral when he was actually neutral or hot.

(That's a special case of the flaw Miller and Sanjurjo found, which applies only to the "after one hit" case. For what happens after a streak of two or more consecutive hits, it's more complicated. Coincidentally, the flaw is actually identical to one that Steven Landsburg posted for a similar problem, which I wrote about back in 2010. See my post here, or check out the Miller paper linked to above.)
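The "after one hit" special case is easy to verify by simulation (a quick sketch of my own, not code from the Miller paper): shuffle a 50-for-100 shooting record many times, and the average hit rate on shots immediately following a hit comes out near 49/99, not 50 percent.

```python
import random

def rate_after_hit(shots):
    """Hit rate on attempts that immediately follow a made shot."""
    follows = [shots[i + 1] for i in range(len(shots) - 1) if shots[i] == 1]
    return sum(follows) / len(follows) if follows else None

random.seed(0)
record = [1] * 50 + [0] * 50   # a shooter who went exactly 50-for-100
rates = []
for _ in range(20000):
    random.shuffle(record)     # a random ordering of those 100 shots
    r = rate_after_hit(record)
    if r is not None:
        rates.append(r)

print(sum(rates) / len(rates))   # about 0.495 (49/99), not 0.500
```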

------

The Miller [and Sanjurjo] paper corrected the flaw, and found that in Gilovich's experiment, there was indeed a hot hand, and a large one. In the Gilovich paper, shooters and observers were allowed to bet on whether the next shot would be made. The hit rate was actually seven percentage points higher when they decided to bet high, compared to when they decided to bet low (for example, 60 percent compared to 53 percent).

That suggests that the true hot hand effect must be higher than that -- because, if seven percentage points was what the participants observed in advance, who knows what they didn't observe? Maybe they only started betting when a streak got long, so they missed out on the part of the "hot hand" effect at the beginning of the streak.

However, there was no evidence of a hot hand in the other two parts of the Gilovich paper. In one part, players seemed to hit field goals *worse* after a hit than after a miss -- but, corrected for the flaw, the effect looks (to my eye) to be around zero. And the "second free throw after the first" analysis doesn't feature the flaw, so those results stand.

------

In addition, in a separate paper, Miller and Sanjurjo analyzed the results of the NBA's three-point contest, and found a hot hand there, too. I wrote about that in two posts in 2015. 

-------

From that, Miller argues that the hot hand *does* exist, and we now have evidence for it, and we need to take it seriously, and it's not a cognitive error to believe the hot hand represents something real, rather than just random occurrences in random sequences. 

Moreover, he argues that teams and players might actually benefit from taking a "hot hand" into account when formulating strategy -- not in any specific way, but, rather, that, in theory, there could be a benefit to be found somewhere.

He also uses an "absence of evidence is not evidence of absence"-type argument, pointing out that if all you have is binary data, of hits and misses, there could be a substantial hot hand effect in real life, but one that you'd be unable to find in the data unless you had a very large sample. I consider that argument a parallel to Bill James' "Underestimating the Fog" argument for clutch hitting -- that the methods we're using are too weak to find it even if it were there.

------

And that's where Guy comes in. 

Here's that link again. Be sure to check the comments ... most of the real debate resides there, where Miller and Guy engage each other's arguments directly.







Friday, March 24, 2017

Career run support for starting pitchers

For the little study I did last post, I used Retrosheet data to compile run support stats for every starting pitcher in recent history (specifically, pitchers whose starts all came in 1950 or later).

Comparing every pitcher to his teammates, and totalling up everything for a career ... the biggest "hard luck" starter, in terms of total runs, is Greg Maddux. In Maddux's 740 starts, his offense scored 238 fewer runs than they did for his teammates those same seasons. That's a shortfall of 0.32 runs per game.

Here are the top six:

Runs   GS   R/GS  
--------------------------------
-238  740  -0.32  Greg Maddux
-199  773  -0.26  Nolan Ryan
-192  707  -0.27  Roger Clemens
-168  430  -0.39  A.J. Burnett
-167  690  -0.24  Gaylord Perry
-164  393  -0.42  Steve Rogers

Three of the top five are in the Hall of Fame. You might expect that to be the case, since, to accumulate a big deficiency in run support, you have to pitch a lot of games ... and guys who pitch a lot of games tend to be good. But, on the flip side, the "good luck" starters, whose teams scored more for them than for their teammates, aren't nearly as good:

Runs   GS   R/GS  
--------------------------------
+238  364  +0.65  Vern Law
+188  458  +0.41  Mike Torrez
+170  254  +0.67  Bryn Smith
+151  297  +0.51  Ramon Martinez
+147  355  +0.41  Mike Krukow
+143  682  +0.21  Tom Glavine

The only explanation for the difference, that I can think of, is that to have a long career despite bad run support, you have to be a REALLY good pitcher. To have the same length career, with good run support, you can just be PRETTY good.

But, that assumes that teams pay a lot of attention to W-L record, which would be the biggest statistical reflection of run support. And, we're only talking about a difference of around half a run per game. 

Another possibility: pitchers who are the ace of the staff usually start on opening day, where they face the other team's ace. So, that game, against a star pitcher, they get below-average support. Maybe, because of the way rotations work, they face better pitchers more often, and that's what accounts for the difference. Did Bill James study this once?

In any event, just taking the opening day game ... if those games are one run below average for the team, and Nolan Ryan got 20 of those starts, there's 20 of his 199 runs right there.

--------

UPDATE: see the comments for suggestions from Tango and GuyM.  The biggest one: GuyM points out that good pitchers lead to more leads, which means fewer bottom-of-the-ninth runs when they pitch at home.  Back of the envelope estimate: suppose a great pitcher means the team goes 24-8 in his starts, instead of 16-16.  That's 8 extra wins, which is 4 extra wins at home, which is 2 runs over a season, which is 30 runs over 15 good seasons like that.
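GuyM's back-of-the-envelope estimate, as arithmetic (the 24-8 record and the half-run value of a skipped bottom of the ninth are the assumptions from above):

```python
# A great pitcher creates extra wins, half of them at home, and each extra
# home win means one fewer bottom-of-the-ninth inning for his own offense.
extra_wins = 24 - 16            # going 24-8 in his starts instead of 16-16
extra_home_wins = extra_wins / 2
runs_per_skipped_ninth = 0.5    # roughly half a run per unplayed half-inning

per_season = extra_home_wins * runs_per_skipped_ninth
career = per_season * 15        # over 15 seasons like that
print(per_season, career)       # 2.0 30.0
```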
--------

Here are the career highs and lows on a per-game basis, minimum 100 starts:

Runs   GS   R/GS  
--------------------------------
- 85  106  -0.80  Ryan Franklin
- 94  134  -0.70  Shawn Chacon
-135  203  -0.66  Ron Kline
- 72  116  -0.62  Shelby Miller
-154  249  -0.62  Denny Lemaster
- 68  115  -0.59  Trevor Wilson

Runs   GS   R/GS  
--------------------------------
+127  164  +0.77  Bill Krueger
+ 82  108  +0.76  Rob Bell
+ 89  118  +0.76  Jeff Ballard
+ 81  110  +0.73  Mike Minor
+170  254  +0.67  Bryn Smith
+106  161  +0.66  Jake Arrieta
+238  364  +0.65  Vern Law

These look fairly random to me.

-------

Here's what happens if we go down to a minimum of 10 starts:

Runs   GS   R/GS  
---------------------------------
- 29   12  -2.40  Angel Moreno
- 30   13  -2.29  Jim Converse
- 23   11  -2.25  Mike Walker
- 20   11  -1.86  Tony Mounce
- 25   14  -1.81  John Gabler

Runs   GS   R/GS  
---------------------------------
+ 32   11  +2.91  J.D. Durbin
+ 43   17  +2.56  John Strohmayer
+ 23   11  +2.33  John Rauch
+ 58   25  +2.30  Colin Rea
+ 61   28  +2.16  Bob Wickman

-------

It seems weird that, for instance, Bob Wickman would get such good run support in as many as 28 starts, his team scoring more than two extra runs a game for him. But, with 2,169 pitchers in the list, you're going to get these kinds of things happening just randomly.

The SD of team runs in a game is around 3. Over 36 starts, the SD of average support is 3 divided by the square root of 36, which works out to 0.5. Over Wickman's 28 starts, it's 0.57. So, Wickman was about 3.8 SDs from zero.

But that's not quite right ... the support his teammates got is a random variable, too. Accounting for that, I get that Wickman was 3.7 SDs from zero. Not that big a deal, but still worth correcting for.
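The naive version of that calculation (before the teammate adjustment, which also needs the teammates' much larger sample) is just:

```python
from math import sqrt

SD_RUNS = 3.0         # SD of a team's runs scored in a single game
starts = 28           # Bob Wickman's starts
support_diff = 2.16   # runs per game above his teammates' support

se = SD_RUNS / sqrt(starts)   # standard error of average support over 28 starts
z = support_diff / se
print(round(se, 2), round(z, 1))   # 0.57 3.8
```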

I'll call that "3.7" figure the "Z-score."  Here are the top and bottom career Z-scores, minimum 72 starts:


    Z   GS   R/GS  
--------------------------------
-3.06   72  -1.16  Kevin Gausman
-2.94  203  -0.66  Ron Kline
-2.89  249  -0.62  Denny Lemaster
-2.57  134  -0.70  Shawn Chacon
-2.57  740  -0.32  Greg Maddux

    Z   GS   R/GS  
--------------------------------
+3.79  364  +0.65  Vern Law
+3.24  254  +0.67  Bryn Smith
+3.16  164  +0.77  Bill Krueger
+3.12   93  +1.02  Roy Smith
+2.73  247  +0.56  Tony Cloninger

The SD of the overall Z-scores is 1.045, pretty close to the 1.000 we'd expect if everything were just random. But, that still leaves enough room that something else could be going on.

-------

I chose a cutoff 72 starts to include Kevin Gausman, who is still active. Last year, the Orioles starter went 9-12 despite an ERA of only 3.61. 

Not only does Gausman have the most extreme negative Z-score among pitchers with 72 or more starts, he has the most extreme negative Z-score among pitchers with as few as 10 starts!

Of the forty-two starters more extreme than Gausman's support shortfall of 1.16 runs per game, none of them have more than 41 starts. 

Gausman is a historical outlier, in terms of poor run support -- the hardluckest starting pitcher ever.

------

I've posted the full spreadsheet at my website, here.


UPDATE, 3/31: New spreadsheet (Excel format), updated to account for innings of run support, to correct for the bottom-of-the-ninth issue (as per GuyM's suggestion).  Actually, both methods are in separate tabs.

