Tuesday, July 21, 2015

A "hot hand" is found in the NBA three-point contest

A recent paper provides what I think is rare, persuasive evidence of a "hot hand" in a sporting event.

The NBA Three-Point Contest has been held annually since 1986 (with the exception of 1999), as part of the NBA All-Star Game event. A pair of academic economists, Joshua Miller and Adam Sanjurjo, found video recordings of those contests, and analyzed the results. (.pdf)

They found that players were significantly more likely to make a shot after a series of three hits than otherwise. Among the 33 shooters who had at least 100 shots in their careers, the average player hit 54 percent overall, but 58 percent after three consecutive hits ("HHH").  

(UPDATE: the 58 percent figure is approximate: the study reports an increase of four percentage points after HHH compared with other sequences. Because the authors left out some of the shots in some of their calculations (as discussed later in this post), it might be more like 59% vs. 55%, or some such. None of the discussion to follow depends on the exact number.)

The authors corrected for two biases. I'll get to those in detail in a future post, but I'll quickly describe the most obvious one. And that is: after HHH, you'd expect a *lower than normal* hit rate -- that is, an apparent "mean-reverting hand" -- even if results were completely random. 

Why? Because, if a player hit exactly 54 of 100 shots, then, after HHH, the next shot must come out of what remains -- which is 51 remaining hits out of 97 remaining shots. That's only 52.6 percent. In other words, the hit rate not including the "HHH" must obviously be lower than the hit rate including "HHH". 

That might be easier to see if you imagine that the player hit only 3 out of 100 shots overall. In that case, the expectation following HHH must be 0 percent, not 3 percent, since there aren't enough hits to form HHHH!
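
Here's a rough illustration of that bias -- a quick simulation sketch of my own, not anything from the paper. It assumes independent 54-percent shots grouped into 25-shot rounds (both numbers are just for illustration), and computes, round by round, the proportion of shots made immediately after three straight hits:

import random

# My own sketch of the selection bias described above (not the authors' code).
# Assumption: every shot is an independent 54% proposition, 25 shots per round.
def rate_after_hhh(shots):
    # proportion of shots made immediately after three straight hits
    followers = [shots[i] for i in range(3, len(shots))
                 if shots[i-3] == shots[i-2] == shots[i-1] == 1]
    return sum(followers) / len(followers) if followers else None

random.seed(2015)
round_rates = []
for _ in range(100000):
    shots = [1 if random.random() < 0.54 else 0 for _ in range(25)]
    rate = rate_after_hhh(shots)
    if rate is not None:        # skip rounds that never produced an HHH
        round_rates.append(rate)

print(sum(round_rates) / len(round_rates))   # comes out below .540

The average winds up below .540, even though every simulated shot was a true 54 percent shot -- which is the apparent "mean-reverting hand" you'd see if you didn't correct for the bias.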

After the authors corrected for this, and for the other bias they noted, the "hot hand" effect jumped from 4 percentage points to 6. 

------

UPDATE: Joshua Miller has replied to some of what follows, in the comments.  I have updated the post in a couple of places to reflect some of his responses.

------

That's a big effect, a difference of 6 percentage points. Maybe it's easier to picture this way:

Of the 33 players, 25 of them shot better after HHH than their overall rate. 

In other words, the "hot hand" beat the "mean-reverting hand" with a W-L record of 25-8. With the adjustments included, the hot hand jumps to 28-5.

------

Could the result be due to something other than a hot hand? Well, to some extent, it could be selective sampling of players.

In the contest, players shoot 25 attempts per round. To get to 100 attempts, and be included in the study, a shooter has to play at least four rounds in his career.  (By the way, here's a YouTube video of the 2013 competition.)

In any given contest, to survive to the next round, a player needs to do well in the current round. That means that players who got enough attempts to be included were probably lucky early. That would tend to select players whose hits were concentrated in their early rounds, and that kind of round-to-round variation could create a bit of a "hot hand" effect all by itself, since streaks would cluster in the rounds where the player happened to shoot better.

And I bet that's part of it ... but a very small part. Even if a player shot 60/60/50 in successive rounds, just by luck, that alone wouldn't be nearly enough to show an overall effect of 6 percentage points, or even 4, or (I think) even 1.

UPDATE: The authors control for this by stratifying by rounds, Dr. Miller replies.

------

One reason I believe the effect is real is that it makes much more intuitive sense to expect a hot hand in this kind of competition than in normal NBA play.

In each round of the contest, players shoot five balls in immediate succession from the same spot on the court. That seems like the kind of task that would easily show an effect. It seems to me that a large part of this would be muscle memory -- once you figure out the shot, you just want to do exactly the same thing four more times (or however many balls you have left once you figure it out). 

After those five balls, you move to another spot on the arc for another five balls, and so on, and the round ends after you've thrown five balls from each of five locations. However, even though the locations move, the distances are not that much different, so some of the experience gained earlier might extend to the next set of five, making the hot hand even more pronounced.

There's one piece of evidence that supports the "muscle memory" hypothesis. It turns out that the first two shots in each round were awful. The authors report that the first shot was made only 26 percent of the time, and the second shot only 39 percent. For the remaining twenty-three shots, the average success rate was 56 percent.

That "warm up" time is very consistent with a "muscle memory" hot hand.

-----

In fact, those first two shots were so miserable that the authors actually removed them from the dataset! If I understand the authors correctly, a player listed with 100 shots was analyzed for only 92 of those shots.

UPDATE: originally, I thought that rounds were stitched together, so removing those shots would increase observed streakiness from one round to the next. But Dr. Miller notes, in the comments, that they considered streaks within a single round only. In that case, as he notes, removing the first two shots has the effect of reducing "cold hand" streakiness, making the results more conservative.  

Here was my original concern: the removal of those shots, it seemed to me, would be likely to overstate the findings a bit. I had thought the authors strung rounds together as if they were just one long series of attempts (even if they spanned different years; that seemed a bit weird, that you'd say a player had a "hot hand" if he continued a 2004 streak in 2005, but never mind).

If that were the case, stringing the last five shots of one round together with the first five of the next would mean that, instead of something like


MHHHH MMHMH


you'd get 


MHHHH   HMH


which would tend to create more streaks, since you'd be taking out shots that tend to be mostly misses, in the midst of a series of shots that tend to be mostly hits. ("M" represents a miss, as you probably gathered.)


I wondered whether the significant effect the authors found would still have shown up without those omitted shots, and suspected it would have been, at least, significantly weaker. But I may have been wrong -- the authors showed streakiness both for hits and misses, so maybe the extra "MM" shots would just have shown up in their "cold hand" numbers.


------

I bet you'd find a hot hand if you tried the equivalent contest yourself. Position a wastebasket somewhere in the room, a few feet away. Then, stay in one spot, and try to throw wads of paper into the basket. I'm guessing your first one will miss, and you'll adjust your shot, and then you'll get a bit better, and, eventually, you'll be sinking 80 to 90 percent of them. Which means, you have a "hot hand" -- once you get the hang of it, you'll be able to just repeat what you learned, which means hits will tend to follow hits.

Here's a more extreme analogy. Instead of throwing paper into a basket, you're shown a picture of a random member of the Kansas City Royals, and asked to guess his age exactly. After your guess, you're told how far you were off. And then you get another random player (which might be a repeat).

Your first time through the roster, you might get, say, 1/3 of them right. The second time through, you'll get at least 2/3 of them right -- the 1/3 from last time, and at least half the rest (now that you know how much you were off by, you only have to guess which direction). The third time through, you'll get 100%.

So, your list of attempts will look something like this (H for hit, M for miss):

MMMHMHMHMMHHHMMHHMHMMMHMHHHHMHHHMMHHHHHHHHMHHHHHHHH...

Which clearly demonstrates a hot hand.

And that's similar to what I think is happening here. 
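
If you want to see that in numbers, here's a rough simulation along the same lines -- my own sketch, with made-up parameters, not anything from the study. It assumes a "learner" whose chance of a hit climbs steadily from 30 percent to 95 percent over 60 attempts, and compares the overall hit rate to the hit rate immediately after HHH:

import random

# My own sketch of the "improvement looks like a hot hand" point above.
# Made-up assumption: hit probability climbs from 30% to 95% over 60 attempts.
def simulate_learner(n=60, p_start=0.30, p_end=0.95):
    probs = [p_start + (p_end - p_start) * i / (n - 1) for i in range(n)]
    return [1 if random.random() < p else 0 for p in probs]

random.seed(7)
overall_hits = overall_shots = hhh_hits = hhh_shots = 0
for _ in range(50000):
    shots = simulate_learner()
    overall_hits += sum(shots)
    overall_shots += len(shots)
    for i in range(3, len(shots)):
        if shots[i-3] == shots[i-2] == shots[i-1] == 1:
            hhh_shots += 1
            hhh_hits += shots[i]

print("overall rate:  ", overall_hits / overall_shots)
print("rate after HHH:", hhh_hits / hhh_shots)

The rate after HHH comes out well above the overall rate, because the HHH streaks cluster in the later attempts, after the "player" has learned the task.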

------

The popular belief, among sportswriters and broadcasters, is that the hot hand -- aka "momentum" or "streakiness" -- is real: that a team that has been successful should be expected to continue that way. But almost every study that has looked for such an effect has failed to find one.

That led to the coining of the term "hot hand fallacy" -- the belief that a momentum effect exists, when it does not. Hence the title of this paper: "Is it a Fallacy to Believe in the Hot Hand in the NBA Three Point Contest?"

So, does this study actually refute the hot hand fallacy? 

Well, it refutes it in its strongest form, which is the position that there NEVER exists a hot hand of ANY magnitude, in ANY situation. That's obviously wrong. You can prove it with the Kansas City Royals example, or ... well, you can prove it in your own life. If you score every word you misspelled as a miss, and the rest as a hit ... most of your misses are clustered early in life, when you were learning to read and write, so there's your hot hand right there.

The real "fallacy," as I see it, is not the idea that a hot hand exists at all, but the idea that it is a significant factor in predicting what's going to happen next. In most aspects of sports, the hot hand, when it does exist, is so small as to have almost no predictive value. 

Suppose a player has two kinds of days, equally likely and occurring at random -- "on" days, where he hits 60%, and "off" days, where he hits only 50%. That would give rise to a hot hand, obviously. But how big a hot hand? What should you predict as the chance of the player making his next shot?

Before the game, you'd guess 55% -- maybe he's on, or maybe he's off. But, now, he hits three straight shots. He has a hot hand! What do you expect now?

If my math is right, you should now expect him to shoot ... 56.3%. Not much different!
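
(For anyone who wants to check that: before any shots, "on" and "off" are equally likely. The chance of three straight hits is .6 x .6 x .6 = .216 if he's "on", and .5 x .5 x .5 = .125 if he's "off". So, after HHH, the chance he's "on" is .216 / (.216 + .125), or about 63 percent -- and his expected rate on the next shot is about .63 x .60 + .37 x .50, which works out to roughly 56.3 percent.)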

The "50/60 on/off" actually represents a huge variation in talent. The problem is that streaks are a weak indicator of whether the player is actually "on," versus whether he just had a lucky three shots. In real life, it's even weaker than a 1.3 percent indicator, because, for one thing, how do you know how long a player is "on" and how long he's "off"? I assumed a full game, but that's wildly unrealistic.

You can probably think of many reasons streakiness is a weak indicator. Here's just one more. 

The "56.3%" illustration was assuming that all shots were identical. In real life, if it's not a special case of a three-point contest ... well, when a player hits HHH, it might be evidence of a hot hand, but it also just could be that those shots were taken in easier conditions, that they were 60% shots instead of 50% shots because the defense didn't cover the shooter very well.

Real games are much more complicated and random than a three-point shooting contest. That's why I don't like the phrasing that the authors of this NBA study found evidence of "THE hot hand effect." They found evidence of "A hot hand effect" -- one particular effect, large enough to show up in the contrived environment of a muscle-memory-based All-Star novelty event. It doesn't necessarily translate to a regular NBA game, at least not unless you dilute it enough that it becomes irrelevant.

------

The "hot hand" issue reminds me of the "clutch hitting" issue. Both effects probably exist, but are so tiny that they're pretty much useless for any practical purposes. Academic studies fail to find statistically significant evidence, and imply that "absence of evidence" implies that no effect exists. We sabermetricians cheat a little bit, saving effort by saying there's "no effect" instead of "no effect big enough to measure."

So "no effect" becomes the consensus. Then, someone comes up with a finding that actually measures an effect -- this study for the hot hand, and "The Book" for clutch hitting. And those who never disbelieved in it jump on the news, and say, "Aha! See, I told you it exists!"  

But they still ignore effect size. 

People will still declare that their favorite hitter is certainly creating at least a win or two by driving in runs when it really counts. But now, they can add, "Because, clutch hitting exists, it's been proven!" In reality, there's still no way of knowing who the best clutch hitters are, and even if you could, you'd find their clutch contribution to be marginal.

And, now, I suspect, when the Yankees win five games in a row, the sportscasters will still say, "They have momentum! They're probably going to win tonight!" But now, they can add, "Because, the hot hand exists, it's been proven!" In reality, the effect is so attenuated that their "hotness" probably makes them a .501 expectation instead of .500 -- and, probably, even that one point is an exaggeration.  

My bet is: the "hot hand" narrative won't change, but now it will claim to have science on its side.





Comments:

At Tuesday, July 21, 2015 8:18:00 PM, Anonymous rob said...

The situation here, and the analogies you describe, aren't really "hot hand" situations, but rather ones where you improve over time as you get familiar with the task you're being asked to perform. To put it another way: we're talkin' 'bout PRACTICE.

 
At Tuesday, July 21, 2015 8:27:00 PM, Blogger Tybalt said...

My biggest concern is that the three point contest is a talent show, not a sporting contest, and you can't extrapolate anything from the first to the second. You'd find the same thing for the Home Run Derby probably... there is a "ranging" effect at play. But it's entirely different when there are other players trying to stop you.

I've never doubted for a second that just shooting a basketball is a "hot hand" phenomenon, because I've experienced the feeling myself on several occasions.

Actually, that makes the finding potentially interesting, because it may (subject to the obvious concerns about samples) demarcate clearly the boundary between solo performance and true competition...

 
At Wednesday, July 22, 2015 3:46:00 AM, Anonymous Joshua B. Miller said...

ran out of space, here is the last bit:

Are hot hands detectably big? That's another question. We do have evidence that players know who has a tendency to get hot and who does not, based on previous game and practice experience (our 2014 study). That is some evidence of detectability.

My question to hot hand skeptics: would you have been a hot hand skeptic without the 1985 Gilovich, Vallone and Tversky study? Based on what information are you now a skeptic? Thanks!

 
At Wednesday, July 22, 2015 11:09:00 AM, Anonymous Joshua B. Miller said...

Dear Phil Birnbaum,

It is nice to see your blog post on our recent work.

We are planning to put out a FAQ, but until that time the comment section on Andrew Gelman's blog has a lot of folks kicking the tires, with replies, here (warning: in the spirit of a blog comments section, typos are not corrected): http://andrewgelman.com/2015/07/09/hey-guess-what-there-really-is-a-hot-hand/

We aren't monitoring Gelman's comment section much anymore, but we'd probably respond at some point if someone leaves a comment.

Note that near the bottom of the comments section we get into effect size, and why with game data you can only really do hypothesis tests (if you are willing to assume conditional independence), and effect size estimates will always be severe underestimates.

Now, some comments on your comments:

First point, some context. GVT's paper was the paper that switched people from believing in the hot hand, to not believing in the hot hand. Koehler & Conley's Three-Point study has been considered a nice replication. Now it has been shown that if you analyze these two data sets correctly, the evidence for the hot hand is strong and effect size is considerable.

Now, if you want to believe the hot hand is a small effect in games, you will need some empirical evidence to justify that belief, because the best evidence now available is that players can have considerable variation in their probability of success, and the action is going on for hits and not misses. I'd like to see the justification (please see the comments on Gelman's blog for why game data is not justifiable evidence -- it has to do with measurement error, omitted variable bias, and the pooling of heterogeneous responses).

1. With regard to your argument that the variation could potentially be attributable to regression to the mean due to early-round performance: we actually control for that in the hypothesis test -- we stratify our permutations by rounds.

2. With regard to stitching data sets: this isn't an accurate description of what we do. A streak can only exist within a round; same for a run. We do calculate the estimate P(H|HHH) based on all the data for a player, but again, in the null reference distribution we do the same thing, and we stratify by rounds.

3. If you add the first two shots, the results are stronger, as there is more actual variation than there is in the benchmark distribution. We did that to be conservative. We noted that this warm-up effect holds in every controlled shooting study out there.

4. The five-shot muscle memory story is certainly plausible in the Three-Point context, but in the original GVT study the shooters were moving after each shot, and the effect is still there -- bigger, even. In our study of Spanish semi-pros we keep them in the same spot, and they still exhibit strong hot hand effects. If they are in the same spot, muscle memory and calibration aren't really the issue (once they warm up). Anyway, it is not clear how big muscle memory is; a jump shot isn't really a precise thing, and the release point is always changing a bit because there is variation in the jump. The waste-basket story is more analogous to free throws: fewer variables, easier to calibrate.

5. On predictive value: if you are talking only about 3 made shots, agreed, there is little predictive value -- for the statistician (in many plausible regime-switching models, 3 made shots predicts the "hot state" with far less than 50% accuracy). Please look at the measurement error comment on Gelman's blog. For the teammate and coach, they see more than 3 made shots. They see body language, shooting mechanics, facial expressions, etc. They know the player well. This is a different matter. Someone should study this. No one has.

The available evidence shows big effect sizes. Should we infer the same effect in games, given we have no known way to measure them? It is certainly a justifiable inference. Thanks!

 
