### Does previous playoff experience matter in the NBA?

Conventional wisdom says that playoff experience matters. All else being equal, players who have been in the post-season before are more capable of adapting to the playoff environment -- the pressure, the intensity, the refereeing, and so on.

FiveThirtyEight now has a study they say confirms this in the NBA:

"In the NBA postseason since 1980, the team with the higher initial Elo rating has won 74 percent of playoff series. But if a team has both a higher Elo rating and much more playoff experience, that win percentage shoots up to 86 percent. Conversely, teams with the higher Elo rating but much less playoff experience have won just 52 percent of playoff series. These differences are highly statistically and practically significant."

I don't dispute their results, but I disagree that they have truly found evidence that playoff experience matters.

------

There's a certain amount of random luck in the outcomes of games. NBA results are less luck-driven than those of most other sports, but there's significant randomness nonetheless.

So, if you have two teams that finish with identical records and identical Elo ratings, they're not necessarily equal in talent. One team is probably better than the other, but just had worse luck in the regular season. That's the team from which you would expect better playoff performance.

But how do you tell them apart? If you have two teams, each of which finishes 55-27, with an Elo of (say) 1600, how can you tell which one is better?

One way is to look at their previous season records. If team A was .500 last year, while team B was .650, it's more likely that B is better. Sure, maybe not: it could be that team B lost a hall-of-famer in the off-season, while team A got a superstar back from injury. But, most of the time, that didn't happen, and you're going to find that team B is still the better team.

If you're looking at last season, the team with the better record is probably the team that got farther in the playoffs. And the team that got farther in the playoffs is probably the one whose players have more playoff experience.

So, when FiveThirtyEight notices that teams with playoff experience tend to outperform Elo expectations, it's not necessarily the playoff experience itself that's the cause. It could be, instead, that those teams are better than their ratings -- a situation that correlates with playoff experience.

And, of course, "team being better" is a much more plausible explanation for good performance than "team has more playoff experience."

------

Here's a possible counterargument.

The study doesn't just look at players' *last year's* playoff experience -- it looks at their *career* playoff experience. You'd think that would dilute the effect, somewhat. But, still. Teams tend to stay good or bad for a few years before you could say their talent has changed significantly. Also, players with a lot of playoff experience, even with other teams, are more likely to be good players, and good players tend to play for more talented teams (even if all that makes the team "more talented" comes from them).

------

Another counterargument might be: well, the previous season's performance is already baked into Elo. If a team did well last season, it starts out with a higher rating than a team that didn't. So, checking again what a team did last season shouldn't make any difference. It would be like checking which team played better on Mondays: that shouldn't matter, because Elo's conclusion that the teams are equal has already used the results of Monday games.

That's a strong argument, and it would hold up if Elo did, in fact, give last season the appropriate consideration. But I don't think it does. When it calculates the rating, Elo gives previous seasons a very low weighting.

I did a little simulation (which I'll describe next post), and found that, when two NBA teams start with different Elo ratings, but perform identically, half the difference is wiped out after about 27 games.

So, team A starts out with a rating of 1500 (projection of 41-41). Team B starts out with 1600 (52-30). After 27 games playing identically against identical opponents, the Elo difference drops from 100 points to 50.

After a second 27 games, the difference gets cut in half again, and the teams are now only 25 points apart. After a third 27 games, the difference cuts in half again, to around 12 points. That takes us to 81 games, roughly an NBA season.

So, at the beginning of the season, Elo thought team B was 100 points better than team A. Now, because both teams had equal seasons, Elo thinks B is only 12 points better than A.
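For anyone who wants to check the ballpark, a stripped-down version of that simulation takes only a few lines. It's a sketch, not FiveThirtyEight's actual code: I'm assuming a flat K-factor of 20 and a higher-rated team that performs exactly like an average, 1500-rated club against average opponents. Under those assumptions the half-life lands in the mid-20s, in the same neighborhood as the 27 games my fuller simulation found.

```python
def elo_expected(diff):
    # standard Elo win probability for a rating gap of `diff` points
    return 1 / (1 + 10 ** (-diff / 400))

def half_life(k=20, start_gap=100):
    # games until the rating gap is cut in half, when the higher-rated team
    # keeps performing exactly like a 1500 team against 1500 opponents
    rating, games = 1500 + start_gap, 0
    while rating - 1500 > start_gap / 2:
        # credit the team with an average result (S = 0.5) each game
        rating += k * (0.5 - elo_expected(rating - 1500))
        games += 1
    return games

print(half_life())  # mid-20s under these assumptions
```

With a margin-of-victory multiplier and real schedules, as in FiveThirtyEight's version, the effective decay will differ a bit, but the order of magnitude is the same.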

And that understates the convergence, because the starting ratings already account for personnel changes between seasons. If the two ratings started out 100 points apart, the teams' performance last season was actually 133 points apart, because FiveThirtyEight regresses ratings a quarter of the way to the mean during the off-season, to account for player aging and roster changes.

133 points is about 15 games out of 82. So, last year B was 56-26, while A was 41-41. They now have identical 49-33 seasons, and Elo thinks B is only 1.4 games better than A.

In other words, after a combined 162 games this season, the system thinks A was less than two games luckier than B.

That seems like too much convergence.
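The arithmetic above is easy to verify. The conversion I'm using comes from the earlier example: 1500 projects to 41-41 and 1600 to 52-30, so 100 Elo points is worth about 11 wins over 82 games.

```python
start_gap = 100                      # B's head start at the opening tip
last_season_gap = start_gap / 0.75   # undo the quarter-regression: ~133 points

points_per_win = 100 / 11            # 1500 -> 41-41, 1600 -> 52-30
games_apart_last_year = last_season_gap / points_per_win   # ~15 games

end_gap = start_gap / 2 ** 3         # gap halved three times over 81 games: ~12.5
games_apart_now = end_gap / points_per_win                 # ~1.4 games

print(last_season_gap, games_apart_last_year, games_apart_now)
```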

------

Under the FiveThirtyEight system, previous years' performance contributes only 12 percent of the rating; this year's performance is the remaining 88 percent. And that's *in addition* to adjusting for personnel changes off-season.

That's far lower than traditional sabermetric standards. For baseball player performance, Bill James (and Tango, and others) have traditionally put previous seasons at 50 percent, using a "1/2/3" weighting. By contrast, FiveThirtyEight's NBA system is effectively using "1/5/44".
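Both figures, the 12 percent and the "1/5/44", fall straight out of the 27-game half-life. Here's a quick back-of-the-envelope check (my own sketch, not FiveThirtyEight's code), weighting each past game by a factor that halves every 27 games:

```python
r = 0.5 ** (1 / 27)   # per-game decay factor for a 27-game half-life

def season_weight(seasons_ago, games=82):
    # total weight of one 82-game season, `seasons_ago` seasons back
    start = seasons_ago * games
    return sum(r ** n for n in range(start, start + games))

total = sum(season_weight(s) for s in range(10))   # ten seasons back is plenty
shares = [season_weight(s) / total for s in range(3)]
print([round(s, 2) for s in shares])   # roughly [0.88, 0.11, 0.01]
```

Everything before this season comes to about 12 percent, and the season-by-season split is close to the "1/5/44" ratio.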

Of course, the "1/2/3" is for players; for teams, it should be lower, because of personnel changes. Especially in the NBA, where personnel changes make a much bigger difference because a superstar has such a big impact.

But, still, 12 percent is far too little weight to give to previous NBA seasons. That's why, when you want to know whose Elo ratings are unlucky, you actually do add valuable information about talent, by checking which teams played much better the last few years than they did this year.

And that's why playoff experience seems to matter. It correlates highly with teams that did well in the past.

------

I could be wrong; it shouldn't be too hard to test.

1. If this hypothesis is right, then playoff experience won't just predict playoff success; it will also predict late regular season success, because the same logic holds. That might be tricky to test because some teams might not give their stars as many minutes in those April games. But you could still check using teams that are fighting for a playoff spot.

2. Instead of breaking Elo ties by looking at previous playoff experience, look instead at the teams' start-of-season rating. I bet you find an even larger effect. And I bet that after you adjust for that, the apparent effect of playoff experience will be much smaller.

3. For teams whose Elo ratings at the end of the season are close to their ratings at the beginning of the season, I predict that the apparent effect of playoff experience will be much smaller. That's because those teams will tend to have been less lucky or unlucky, so you won't need to look as hard at previous performance (playoff experience) to counterbalance the luck.

4. Instead of using Elo as an estimate of team talent entering the playoffs, estimate talent from Vegas odds in April regular-season games (or late-March games, if you're worried about teams who bench their stars in April). I predict you'll find much less of a "playoff experience" effect.

Hat Tip and thanks: GuyM, for link and e-mail discussion

Labels: basketball, FiveThirtyEight, NBA, playoffs

## 9 Comments:

I think I'm ok with a rapid "decay" rate.

For example, in baseball, we know the regression point is around 70 games. So, playing 162 games means that 162/(162+70) = 70% of the weight is from the performance. The other 30% is regression, or if you had earlier performance, it can come from there too.

In basketball, the regression point is only 12 games IIRC. So, playing 82 games means that the weight comes 87% from the performance. There's just not much reason to go earlier than this season.
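In code, using those regression points (the 70 and the 12 are the estimates above, not exact constants):

```python
def performance_weight(games_played, regression_point):
    # share of the talent estimate that comes from observed performance;
    # the rest is regression toward the mean (or earlier performance)
    return games_played / (games_played + regression_point)

print(round(performance_weight(162, 70), 2))  # MLB: ~0.70
print(round(performance_weight(82, 12), 2))   # NBA: ~0.87
```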

I think.

There are two sources of error in estimating talent from outcomes:

1. Random luck going from talent to outcome; and

2. Changes in talent over the sample you're using to estimate.

When you decide to make your sample bigger, by, say, adding the previous year's outcomes, you're doing so because the improvement in accuracy by reducing random error (1) is greater than the loss of accuracy due to changes in talent that you can't see (2).

As you say, in the NBA, (1) is smaller than other sports. So, you get less improvement by adding data. Which means that if you use data from too far back, your losses from (2) more quickly overcome the gains from (1). And that's why, for the NBA, the decay rate has to be higher.

If you were flipping coins, where (2) is zero, the decay rate would be zero. And if you're measuring a child's height, where (1) is close to zero, the decay rate would be close to infinite.

Now, what's the appropriate decay rate for the NBA? You could be right that because (1) is low, (2) should be high, and a 9% weight for the previous year (after the adjustment for personnel) is pretty good.

You may be right, but I still think it's way too low.

But: I now realize that doesn't matter that much here. It's sufficient for my argument that 9% is too low, but it's not necessary.

Let me check on my dinner cooking and I'll be back.

OK, I take that back. I think the argument does indeed depend on the appropriate decay rate. I still think a decay rate that weights the previous season at 9% is too high. Tango, do you have any argument otherwise?

You could find out by running a regression to predict this season based on (1) last season, and (2) the season before. That should give you the relative weights. I don't have the data handy, but I'm guessing the ratio of the T-2 coefficient to the T-1 coefficient will come out better than 9:91.

Also: a 12 percent weighting implies that you give a 50% weighting to the last 27 games, and a 50% weighting to everything before that.

Specifically, 50% to the last 27 games, 38% to the previous 54 games, and 12% to everything before that.

So, each of the last 27 games gets about 2.5 times the weight of each of the first 54 games of the season.

That can't be right, can it? Can it be that team talent -- ON AVERAGE -- changes so much that the 70th game gives you 2.5 times as much information about the team's talent as the 28th game?

That's also easy to check. Try to predict the last third of the season from the first two thirds. Then, try to predict the last third of the season from the sum of (first third + 2 * second third). If the second way gives you a larger standard error than the first way, you know your weighting is wrong.

Even better: try to predict the last game of the season from (4 * the last 27) + (2 * the middle 27) + (the first 27). I bet you get a much weaker result than if you weight them equally and just use the results of the first 81.
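Those splits, and the per-game ratio, follow directly from the 27-game half-life; here's a quick check of the arithmetic (the exact ratio comes out closer to 2.7 than 2.5):

```python
r = 0.5 ** (1 / 27)            # per-game decay for a 27-game half-life

last_27  = 1 - r ** 27         # weight on the most recent 27 games: 0.50
prior_54 = r ** 27 - r ** 81   # games 28 through 81 back: 0.375
earlier  = r ** 81             # everything before that: 0.125

# weight of one recent game vs. one game from the first 54
per_game_ratio = (last_27 / 27) / (prior_54 / 54)
print(last_27, prior_54, earlier, per_game_ratio)
```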

This is still an argument to your gut, for now.

Elo weirdness could be part of it, but I think playoff rotations play a role as well. Say the Cavaliers were a 1600 Elo in the regular season, but we know they have a lot of playoff experience in their top guys. When you get to the playoffs and they heavily/only play those top guys, they might really be a 1700 Elo team. If alternate reality Cavs were a 1600 team but without a lot of playoff experience, their best players probably aren't as good as "Cavs prime". So even when they shorten their bench in the playoffs, maybe they're only a 1650.

I think there is a problem with the experience variable, even if the decay rate is perfect. Even if Elo is perfectly calibrated, and has wrung all possible predictive power from the team's W-L and points scored/allowed, couldn't our predictions still be improved by adding a separate measure of *individual* player talent? If we had a perfect Elo for baseball, wouldn't knowing each player's career WAR still provide useful predictive power? And surely playoff minutes is highly correlated with player talent in the NBA.

Indeed, we don't have to guess about this. When 538 predicts game outcomes, it uses a hybrid measure combining Elo with their estimates of player talent ("CARM-Elo"). That does a better job than Elo alone. So I think we know that there is a need to control for players' individual talent, beyond Elo.

Guy makes a very good point regarding individual player talent vs. team talent. I believe that, for team sports like baseball, basketball, etc., any model based solely on individual player talent will perform much better than Elo models, or even a combination of Elo and player ratings.

So, let's say you have the ZiPS/Steamer projections for all the players in both teams of an MLB game. Is there *any* advantage in considering the team records when calculating the win probability? I cannot imagine how team record can add anything but noise here.

Basketball *is* somewhat different than baseball, in that team synergy may have a larger impact. But I am still willing to bet any model that completely ignores team records, but only considers player talent will be better.

OK, I have a better answer to Tango.

There are two issues when you use a formula to estimate team talent. First, is your estimate unbiased (as likely to be too high as too low)? Second, how accurate is it (how big is your confidence interval)?

In the NBA, as Tango mentions, you need only add 12 games of .500 ball to the record to get an estimate of talent. But that means only that adding 12 games is how you get *the unbiased estimate*. If you add more than 12 games, you're going to wind up with more estimates being too close to the mean. If you add fewer than 12 games, you're going to wind up with more estimates being too far from the mean.

But: the "add 12 games" has nothing to do with the *variance* of your estimate, with your accuracy, with the confidence interval around the estimate. That confidence interval is still based on binomial luck. If you use 82 games of performance, your confidence interval will be a certain size. If you use 164 games of performance, your confidence interval will be only .7 times as big (1 divided by root 2).

So, more games is always better, and always better in the SAME RATIO, if your estimate is unbiased. No matter what sport you use, you cut your confidence interval in half by using four times as many games.

So, the fact that the NBA adds only 12 games does not, in any way, mean that your confidence interval is narrower than in MLB, where you add 70 games.
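That square-root relationship is just the binomial standard error of a winning percentage:

```python
from math import sqrt

def se_win_pct(n, p=0.5):
    # binomial standard error of an observed winning percentage over n games
    return sqrt(p * (1 - p) / n)

print(round(se_win_pct(164) / se_win_pct(82), 3))  # 0.707, i.e. 1/sqrt(2)
```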

------

HOWEVER: the confidence interval for the NBA is already a bit lower than for MLB. That's because, in the NBA, you have lots of mismatched games. The SD of the outcome is proportional to the square root of p(1-p), where p is either team's probability of winning. In the NBA, it's common to have 4:1 favorites. Root [p(1-p)] equals 0.4 for those games. For an even game, it's 0.5.

If you assume that the "average" favorite in the NBA is a 3:1 shot, and in MLB it's a 1.5:1 shot, the SD difference is 0.43 to 0.49, so the NBA confidence interval is about 12 percent smaller. But, still, you gain the same proportion of accuracy by increasing the proportional number of games, in both sports.
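Spelling that arithmetic out (the 4:1, 3:1, and 1.5:1 odds are the assumed figures above, not measured averages):

```python
from math import sqrt

def sd_per_game(p):
    # SD of a single game outcome when the favorite wins with probability p
    return sqrt(p * (1 - p))

print(round(sd_per_game(0.80), 2))   # 4:1 favorite -> 0.40
print(round(sd_per_game(0.75), 3))   # 3:1 "average" NBA favorite -> 0.433
print(round(sd_per_game(0.60), 3))   # 1.5:1 "average" MLB favorite -> 0.49
print(round(1 - sd_per_game(0.75) / sd_per_game(0.60), 2))  # ~12 percent smaller
```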

So, the argument stands: when you look at previous playoff experience in the NBA, you're getting strong evidence of how the team did in the past. That would also apply to MLB. But, actually, it's much more valuable in the NBA, because how far a team went in the playoffs is much more correlated with skill (i.e., much less luck) than in MLB. So last year's playoff experience gives you a LOT of information on how good the team was last year. Probably not as good as their actual regular season record, but much closer than it would be in MLB.
