Can a "hot hand" turn average players into superstars?
Last post, I reviewed a study by Joshua Miller and Adam Sanjurjo that found a "hot hand" effect in the NBA's Three-Point Contest. In addition to finding a hot hand, the authors also showed how some influential previous studies had understated the evidence for streakiness, because they compared observed performance after streaks to an expectation they had calculated incorrectly.
I agreed that the previous studies were biased, and accepted that the authors found evidence of a hot hand in the three-point contest. But I was dubious that you can use that evidence to assume a hot hand in anything other than a "muscle memory" situation.
Dr. Miller, in comments on my post and follow-up e-mails, disagreed. In the comments, he wrote,
"The available evidence shows big effect sizes. Should we infer the same effect in games, given we have no known way to measure them? It is certainly a justifiable inference."
Paraphrasing Dr. Miller's argument: Since (a) the original "no hot hand" studies were based on incorrect calculations, and (b) we now have evidence of an actual hot hand in real life ... then, (c) we should shift our prior for real NBA games from "probably no hot hand" to "probably a significant hot hand."
That's a reasonable argument, but I still disagree.
There are two ways you can define a "hot hand":
1. Sometimes, players have higher talent ("talent" means expected future performance) than other times. In other words, some days they're destined to be "hot," better than their normal selves.
2. When players have just completed a streak of good performance, they are more likely to follow it with continued good performance than you'd otherwise expect.
Call (1) the "hot hand talent" hypothesis, and (2) the "streakiness" hypothesis. Each implies the other -- if you have "good days," your successes will be concentrated among those good days, so you'll look streaky. Conversely, if your expectation is to exhibit streakiness, you must be "better in talent" after a streak than after a non-streak.
I think the two definitions are the same thing, under certain other reasonable assumptions. At worst, they're *almost* the same thing.
However, we can observe (2), but not (1). That's why "hot hand" studies, like Miller/Sanjurjo, have to concentrate on streaks.
The problem is: it takes a *lot* of variation in talent (1) to produce just a *tiny bit* of observed streakiness (2).
Observed streakiness is a very, very weak indicator of variation in talent. That's because players also go on streaks for many reasons other than actually being "hot" -- most importantly, luck.
In the three-point contest study, the authors found an average six percentage point increase in hit rate after a sequence of three consecutive hits, from about 53 percent to 59 percent. As Dr. Miller points out, the actual increase in talent when "hot" must be significantly higher -- because not all players who go HHH are necessarily having a hot hand. Some are average, or even "cold," and wind up on a streak out of random luck.
If only half of "HHH" streaks are from players truly hot at the time, the true "hot hand" effect would have to be double what's observed -- 12 percentage points -- so that the hot half's +12 and the lucky half's +0 average out to the observed +6.
Well, 12 points is huge, by normal NBA standards. I can see it, maybe, in the context of muscle memory, like the uncontested, repeated shots in the Miller/Sanjurjo study -- but not in real life NBA action.
What if there were a 12-point "hot hand" effect in, say, field goal percentage in regular NBA games? Well, for all NBA positions, as far as I can tell, the difference between average and best is much less than 12 points. That would mean that when an average player is +12 points "hot," he'd be better than the best player in the league.
Hence my skepticism. I'm willing to believe that a hot hand exists, but NOT that it's big enough to turn an average player into a superstar. That's just not plausible.
Suppose you discover that a certain player shoots 60% when he's on a three-hit streak, and 50% other times. How good is he when he's actually hot? Again, he's not "hot" every time he's on a streak, because streaks happen often just by random chance. So, the answer depends on *how often* he's hot. You need to estimate that before you can answer the question.
Let's suppose we think he's hot, say, 10 percent of the time.
So, to restate the question as a math problem:
"Joe Average is normally a 50 percent shooter, but, one time in ten, he is 'hot', with a talent of P percent. You observe that he hits 60% after three consecutive successes. What's your best estimate of P?"
The answer: about 81 percent.
An 81 percent shooter will make HHH about 4.25 times as often as a 50 percent shooter (that's (81/50) cubed). That means that Joe will hit 4.25 streaks per "hot" game for every one streak per "normal" game.
However: Joe is hot only 1/9 as often as he is normal (10% vs. 90%). Therefore, instead of 425 "hot" HHH for every 100 "regular" HHH, he'll have 425 "hot" HHH for every *900* "regular" HHH.
Over 1325 shots, he'll be taking 425 shots with an expectation of 81 percent, and 900 shots with an expectation of 50 percent.
Combined, that works out to 794-for-1325, which is the observed 60%.
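As a quick sanity check on that arithmetic, here's a short Python sketch using only the figures above:

```python
# Joe: 50% talent normally, 81% talent during the 10% of the time he's "hot".
hot_p, normal_p, hot_frac = 0.81, 0.50, 0.10

# The rate of HHH streaks scales with talent cubed times share of time.
hot_weight = hot_frac * hot_p ** 3               # "hot" HHH streaks
normal_weight = (1 - hot_frac) * normal_p ** 3   # "regular" HHH streaks
ratio = hot_weight / normal_weight               # ~425 : 900, as above

# Expected hit rate on the shot following an HHH streak:
after_streak = (hot_weight * hot_p + normal_weight * normal_p) \
               / (hot_weight + normal_weight)
print(round(after_streak, 3))                    # ~0.599, i.e. the observed 60 percent
```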
Do you really want to accept that the "hot hand" effect turns an ordinary player into an 81-percent shooter? EIGHTY-ONE PERCENT?
But that's what the assumptions imply. If you argue that:
-- player X is 50% normally;
-- player X is "hot" 10 percent of the time;
-- player X is expected to hit 60% after HHH
Then, it MUST FOLLOW that
-- player X is 81% when "hot".
To which I say: no way. I say, nobody is an 81% shooter, ever -- not Michael Jordan, not LeBron James, nobody.
To posit that the increase from 50% to 60% is reasonable, you have to assume that an average player turns into an otherworldly Superman one day in ten, due to some ineffable psychological state called "hotness."
You can try tweaking the numbers a bit, if you like. What if a player is "hot" 25 percent of the time, instead of 10 percent? In that case,
-- player X is 71% when "hot".
That's not as absurd as 80%, but still not very plausible. What if a player is "hot" fully half the time? Now,
-- player X is 64.6% when "hot".
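For any assumed "hot" fraction, the implied "hot" talent can be backed out numerically. Here's an illustrative solver -- the model (streak frequency proportional to talent cubed, weighted by time share) follows the reasoning above, but the function name and the bisection approach are mine:

```python
def implied_hot_talent(normal_p=0.50, hot_frac=0.10, observed=0.60):
    """Find the 'hot' talent P that makes the mixture produce the
    observed hit rate after an HHH streak."""
    def after_streak(p):
        # HHH streak frequency scales with talent cubed times time share.
        w_hot = hot_frac * p ** 3
        w_norm = (1 - hot_frac) * normal_p ** 3
        return (w_hot * p + w_norm * normal_p) / (w_hot + w_norm)

    lo, hi = normal_p, 1.0                # bracket the root and bisect
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if after_streak(mid) < observed else (lo, mid)
    return (lo + hi) / 2

for frac in (0.10, 0.25, 0.50):
    print(frac, round(implied_hot_talent(hot_frac=frac), 3))
# roughly 0.81, 0.71, and 0.65 -- the figures quoted above
```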
That's *still* not plausible. Fifteen points is still superstar territory. Do you really want to argue that half the time a player is ordinary, but the other half he's Michael Jordan? And that nobody would notice without analyzing streaks?
Do you really want to assume that the variation in talent within a single player is wider than the variation of talent among all players?
Let's go the other way, and start with an intuitive prior for what it might mean to be "hot." My gut says, at most, maybe half an SD of league talent. You can go from 50th to 70th percentile when everything is lined up for you -- say, from the 15th best power forward in the league, to the 9th best. Does that sound reasonable?
In the NBA context, let's call that ... I have no idea, but let's guess five percentage points.* And let's say a player is "hot" one time in five.
(* A reader wrote me that five percentage points is a LOT more than half an SD of talent. He's right; my bad. Still, that just makes this part of the argument even stronger.)
So: if one game in five, you were a 55% shooter instead of 50%, what would you hit after streaks?
-- For 1000 "hot" shots, you'd achieve HHH 166 times, and hit 91.3 of the subsequent shots.
-- For 4000 "regular" shots, you'd achieve HHH 500 times, and hit 250 of the subsequent shots.
Overall, you'd be 341.3 out of 666, or 51.25%.
In other words: a hot hand hypothesis that posits a reasonable (but still significant) five-point talent differential expects you're only 1.25 percentage points better after a streak.
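Here's that five-point scenario as a quick computation, with the figures straight from the text:

```python
# A 55% shooter one game in five, a 50% shooter otherwise.
hot_p, normal_p = 0.55, 0.50
shots_hot, shots_norm = 1000, 4000            # the 1-in-5 split above

streaks_hot = shots_hot * hot_p ** 3          # ~166 HHH streaks
streaks_norm = shots_norm * normal_p ** 3     # 500 HHH streaks
hits_after = streaks_hot * hot_p + streaks_norm * normal_p
rate_after = hits_after / (streaks_hot + streaks_norm)
print(round(rate_after, 4))                   # ~0.5125: 1.25 points over baseline
```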
Well, you need a pretty big dataset to make 1.25 points statistically significant. 30,000 attempts would do it: 6000 when "hot" and 24,000 when not hot.*
(* That's using binomial approximation, which underestimates the randomness, because the number of attempts isn't fixed or independent of success rate. But never mind for now.)
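Treating it, as the footnote does, as a simple binomial comparison between the two groups of attempts (the two-sample z-test framing here is mine):

```python
import math

# Back-of-envelope significance check for a 1.25-point gap, using the
# binomial approximation the footnote warns understates the randomness.
n_hot, n_norm = 6_000, 24_000   # 30,000 attempts, one in five "hot"
gap = 0.0125                    # 1.25 percentage points
p = 0.5                         # approximate baseline hit rate
se = math.sqrt(p * (1 - p) * (1 / n_hot + 1 / n_norm))
z = gap / se
print(round(z, 2))              # ~1.73 standard errors
```

That's right around the one-tailed 5 percent cutoff (1.645), which is why a sample that size is just barely enough.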
And even if you had a sample size that big, and you found significance ... well, how can you prove it's a "hot hand"? It's only 1.25 points, which could be an artifact of ... well, a lot of things other than streakiness.
Maybe you didn't properly control for home/road, or you used a linear adjustment for opponent quality instead of quadratic. Maybe the 1.25 doesn't come from a player being hot one game in five, but, rather, the coach using him in different situations one game in five. Or, maybe, those 20 percent of games, the opposition chose to defend him in a way that gave him better shooting opportunities.
So, it's going to be really, really hard to prove a "hot hand" effect by studying performance after streaks.
But maybe there are other ways to analyze the data.
1. Perhaps you could look at player streaks in general, instead of just what happens in the one particular shot after a streak. That would measure roughly the same thing, but might provide more statistical power, since you'd be looking at what happens during a streak instead of just the events at the end.
Would that work? I think it would at least give you a little more power. Dr. Miller actually does something similar in his three-point paper, with a "composite statistic" that measures other aspects of a player's sequences.
2. Instead of just a "yes/no" for whether to count a certain shot, you could weight it by the recent success rate, or the length of the streak, or something. Because, intuitively, wouldn't you expect a player to be "hotter" after HHHHHH than HHH? Or, even, wouldn't you expect him to be hotter after HHMHMHHMHHHMMHH than HHH?
I'm pretty sure that kind of thing has been done before, that there are studies that try to estimate the next shot from the success rate in the X previous shots, or some such.
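As a purely hypothetical illustration of the weighting idea -- not anything from an actual study, and the decay rate is arbitrary -- here's one way to score "recent form" with an exponentially weighted average, so that HHHHHH counts as hotter than HHH:

```python
def hotness_scores(shots, decay=0.7):
    """shots: sequence of 1 (hit) / 0 (miss).
    Returns each shot's pre-shot 'form' score: an exponentially
    weighted average of the makes that came before it."""
    scores, form = [], 0.0
    for s in shots:
        scores.append(form)               # form *before* this shot
        form = decay * form + (1 - decay) * s
    return scores

seq = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1]   # HHMHMHHMHHHMMHH
print(hotness_scores(seq)[-1])            # form entering the final shot
```

One could then regress each shot's outcome on its pre-shot score, instead of on a binary "was he on a streak" flag.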
But, you can't fight the math: no matter what, it still takes ten haystacks of "hot hand talent" variation to produce a single needle of "streakiness." There just isn't enough data available to make the approach work.
Having said that ... there's a non-statistical approach that theoretically could work to prove the existence of a real-life hot hand.
In his e-mails to me, Dr. Miller said that basketball players believe that some of them are intrinsically streakier than others -- and that they even "know" which players those are. In an experiment in one of his papers, he found that the players named as "streaky" did indeed show a larger "hot hand" effect in a subsequent controlled shooting test.
If that's the case (I haven't read that paper yet), that would certainly be evidence that something real, and observable, is happening.
And, actually, you don't need a laboratory experiment for this. Dr. Miller believes that coaches and teammates can sense variations in talent from body language and experience. If that's the case, there must be sportswriters, analysts, and fans who can do this too.
So, here's what you do: get some funding, set up a website, and let people log on while watching live games to predict, in real time, which players are currently exhibiting a hot hand. If even one single forecaster proves to be able to consistently choose players who outperform their averages, you have your evidence.
I'd be surprised, frankly, if anyone was able to predict significant overachievement in the long run. And, I'd be shocked -- like, heart attack shocked -- if the identified "hot" players actually did perform with "Superman" increases in accuracy.
As always, I could be wrong. If you think I *am* wrong, that the "hot hand" is even half as significant a factor in real life as it is in the three-point contest, I think this would easily be your best route to proving it.