Does a "hot hand" improve a team's March Madness chances?
Alan Reifman, of "The Hot Hand," sent me this Andy Glockner article on the March Madness selection criteria.
When seeding the teams, NCAA organizers include a measure called "L12," which represents the team's record in the previous twelve games. It's just one of many factors that go into the rankings. The idea is that if a team has played well in the recent past, it might be on a roll, and more worthy of inclusion or an improved ranking.
Glockner thinks the L12 rating should be dropped because it doesn’t have any predictive validity. This is in keeping, I think (and Alan can correct me) with current "hot hand" thinking, which holds that streaks are mostly just random. So a .600 team that got hot lately is probably no better than a .600 team with a more even sequence of wins and losses.
But the evidence that Glockner uses to prove his point is, I think, not relevant. Not just because the sample is very small (as Glockner acknowledges), but because his conclusion doesn't follow from the evidence.
Glockner compared low-ranked teams that won their first March Madness game to teams that lost. He then looked at the L12 record of the two sets of teams, and found they were pretty much identical:
7.7-4.3 -- L12 record of teams that won
7.6-4.4 -- L12 record of teams that lost
Since the records are roughly equal, Glockner argues that L12 is "non-predictive."
But these results are exactly what you'd expect to see if L12 is a legitimate factor! Remember, L12 is one of many criteria used to create the rankings. So a team that gets in with a good L12 record is probably worse in other ways than a team that gets in with a poor one. (That is, a team with a *bad* L12 record has to be a better team overall, or it wouldn't have made it in against the teams with good L12 records.)
So if L12 were useless, you'd expect the winners to be the better teams, the ones that got in with worse L12 records. And so you'd see winning teams with a *worse* L12 record than losing teams, NOT an identical record.
(An easier way to see why this is true: imagine that "amount of money used to bribe the NCAA" were also one of the criteria. Only bad teams would need to hand out big bribes to get in, so you'd expect losing teams to have given disproportionately large bribes.)
So I think Glockner has it backwards. The fact that the winners are equal to the losers in terms of L12 is evidence (very weak evidence, but evidence nonetheless) that the committee got it right.
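To make the selection effect concrete, here's a minimal simulation sketch (Python, with entirely made-up weights and team counts; nothing below is real tournament data). Suppose L12 is pure noise, but the committee still gives it weight when admitting bubble teams. Conditioning on admission then makes strength and L12 negatively correlated among the teams that get in:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # candidate bubble teams across many simulated seasons

# Underlying strength, and an L12 record that here is pure noise --
# i.e., we assume L12 is "useless" and independent of strength.
strength = rng.normal(0, 1, N)
l12 = rng.normal(0, 1, N)  # standardized last-12 record

# The committee ranks teams on a composite that gives L12 some weight,
# and admits the top 10% of the bubble.
committee_score = strength + 0.5 * l12
admitted = committee_score >= np.quantile(committee_score, 0.90)

# Admitted teams play a first-round game decided by strength plus luck.
s_adm, l12_adm = strength[admitted], l12[admitted]
won = s_adm + rng.normal(0, 1, s_adm.size) > 0

print(f"mean L12 of winners: {l12_adm[won].mean():+.3f}")
print(f"mean L12 of losers:  {l12_adm[~won].mean():+.3f}")

In this setup the winners' mean L12 comes out clearly below the losers'. So equal L12 records for winners and losers is exactly what you would NOT see if L12 were useless, which is the point above.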
Labels: basketball, hot hand, streakiness
2 Comments:
I actually just wrote about a similar topic at the PFR blog (link in my name), except I looked at the top 7 seeds and at the last 10 games. I wasn't aware the committee used L12, or I would have just used that instead; I'm sure it wouldn't have changed much.
I came to a completely different conclusion based on the earlier seeds. I agree with your assessment that he misinterprets the data: if he were correct, the teams with poorer L12 records would advance at a higher rate than those with stronger L12 records, not at the same rate.
Yes, Phil, your general take on hot-hand research accurately represents the dominant thinking among statisticians, as best I can tell.
The clearest way, it seems to me, to test the predictive validity of each of the possible selection criteria (e.g., L12, RPI, win-loss record vs. RPI top 50) would be via multiple-regression analyses (with number of games won in the NCAA tournament as the dependent variable). That would give us the relation of each predictor to the outcome, statistically holding constant the roles of the other co-predictors.
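For concreteness, a minimal sketch of that regression in Python with statsmodels, assuming a hypothetical teams.csv with one row per tournament team; the file and column names are invented, not a real dataset:

import pandas as pd
import statsmodels.api as sm

teams = pd.read_csv("teams.csv")  # hypothetical file, one row per team

# Predictors: L12 winning pct, RPI, and winning pct vs. the RPI top 50.
X = sm.add_constant(teams[["l12_pct", "rpi", "top50_pct"]])
y = teams["tourney_wins"]  # games won in the NCAA tournament

# OLS estimates each predictor's relation to tournament wins while
# statistically holding the other co-predictors constant.
model = sm.OLS(y, X).fit()
print(model.summary())

A near-zero coefficient on l12_pct, with the other predictors held constant, would be the kind of result that supports dropping L12.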