### Stumbling on Wins: Are NBA rebounds consistent because of talent or opportunities?

In David Berri and Martin Schmidt's "Stumbling on Wins," the authors paraphrase JC Bradbury on what makes a useful player-evaluation statistic. They write,

"First, one must look at how the measure connects to current outcomes. Then, one must look at the consistency of the measure over time."

Fair enough. But there's a third criterion that the authors need to add.

To see why, take, for instance, saves in baseball. By the first criterion, saves are obviously important -- that's why teams put their best reliever in the stopper role. By the second criterion, saves are very consistent -- for Yankee pitchers over the last 15 years, there's a very high correlation between saves last year and saves this year. The year-to-year correlation is much higher for saves than for any other measure -- ERA, WHIP, DIPS, even strikeouts.

Does that mean that saves are the most useful way to assign value to a reliever? Does it really mean that Mariano Rivera, with 30 saves, is fifteen times as talented at saving as some other guy in the bullpen with two saves? Of course not. The number of saves depends mostly on opportunities. And opportunities are not a characteristic of the player -- they're a characteristic of the manager, who decides how to assign the workload. Yankee pitchers are not consistent because Mariano Rivera has ten or more times as much "save talent" as any other Yankee. Rather, they're consistent because Yankee managers are consistent in giving Mariano almost all the save opportunities.

So, I propose:

Third, one should look at how much the measure is a true reflection of the player's talent, and how much is a measure of ~~factors outside the player's control~~ other factors unrelated to talent, such as opportunities.

(Note: above update 3/10/10 after suggestion from Guy in the comments.)

The reason I bring this up is that Berri and Schmidt use the first two criteria to defend why they assign the value of rebounds to the player who grabbed the ball:

"When we look at consistency, ... we see that 90% of the variation in a player's per-minute rebounds is explained by a player's per-minute rebounds the previous season. There appear to be no statistics in baseball or football that are as consistent as rebounds in basketball."

But that doesn't mean that rebounds are a useful statistic. They could be like saves -- it could be that the consistency is due to consistency of *opportunities*, not talent. And many people, myself included, have argued exactly that: certain players position themselves to compete for rebounds, and others do not. If player X is the designated "rebound guy" on the team, year after year, that would explain the consistency without providing evidence of talent.

If Berri and Schmidt are using the high r-squared to defend their hypothesis that rebounds are talent, then they don't succeed. Indeed, I think the high r-squared shows the opposite. Given that there's a certain amount of binomial randomness in who gets any particular rebound, there's a limit to how much consistency you'd be able to see if everyone had the same number of opportunities. The exceedingly high r-squared is an indication that the cause is probably more than just talent.

I should explain that better. Here's a baseball example. Suppose you computed the year-to-year correlation in hits among players who had at least 400 AB. The r-squared wouldn't be 1, because players don't hit the same every year. Someone who got 150 hits last year might get 160 next year, and vice-versa. Almost everyone would be in the 100 to 200 range, clustering maybe around 150. And you'd get an r-squared of maybe 0.2 (I'm guessing).

Now, suppose you include *every* player, not just those with 400 AB. Now, players are much more likely to have results similar to last year's. You get your typical regular who has 150 hits this year and 160 next year. Then you have your utility player who has 40 one year and 27 the next. And you have your pitchers, who have 8 hits last year and 11 this year.

And so you have an r-squared that's much higher, maybe .7 or more. But the jump in r-squared is measuring consistency of *opportunity*, not talent.
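This thought experiment is easy to simulate. Here's a minimal sketch -- the talent spreads and at-bat counts are made-up illustrative numbers, not real data. It computes the year-to-year r-squared of hit totals for regulars only, then again with utility players and pitchers mixed in:

```python
import random

def season_hits(avg, at_bats, rng):
    """Simulate one season's hit total as a sum of Bernoulli at-bats."""
    return sum(rng.random() < avg for _ in range(at_bats))

def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

rng = random.Random(2010)

# Hypothetical player pools (true average, at-bats per season):
# regulars with full-time opportunities, then bench players and
# pitchers with far fewer.
regulars = [(rng.uniform(0.240, 0.320), 500) for _ in range(150)]
utility = [(rng.uniform(0.220, 0.280), 120) for _ in range(75)]
pitchers = [(rng.uniform(0.100, 0.180), 60) for _ in range(75)]

def two_seasons(players):
    """Year-to-year r-squared for a pool with fixed talent and AB."""
    year1 = [season_hits(avg, ab, rng) for avg, ab in players]
    year2 = [season_hits(avg, ab, rng) for avg, ab in players]
    return r_squared(year1, year2)

r2_regulars = two_seasons(regulars)
r2_everyone = two_seasons(regulars + utility + pitchers)
print(r2_regulars, r2_everyone)
```

On a typical run, the everyone-included r-squared comes out far higher than the regulars-only one, even though no player's talent changed between years; the extra "consistency" is all opportunity.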

So when you have one argument that rebounds are almost all talent, and another argument that rebounds have a huge component in there that reflects opportunity -- and then you get a high r-squared -- that result better supports the second argument, not the first.

---------

Anyway, that's my main point. While I'm here, a couple of other smaller things I disagree with in that section of the book (pages 33 to 39):

1. The authors list the r-squareds for different measures in various sports; they find that their correlations for basketball are higher than other sports', and therefore argue that NBA statistics are more useful than others. But as I have pointed out before, you can't just use the raw r-squared or correlation coefficient as a measure of persistence of talent. The r-squared is dependent on many other factors -- most notably (as Tango has also pointed out many times), the length of a season. The authors found an r-squared of 24% for QB completion percentage, but a 90% r-squared for rebounding. That doesn't necessarily mean anything on its own. That's because the QB numbers are over 16 games and maybe a few hundred attempts, whereas the rebounding numbers are over 82 games and several thousand attempts. You just can't compare raw r-squared values that way, without first interpreting them.

2. When the authors say "there are no statistics in baseball as consistent as rebounds" ... well, they didn't include saves. I don't know for sure if saves have a higher r-squared or not, but I'd certainly be willing to bet they do.

3. The authors do indeed mention that the football season is shorter than the basketball season, but they don't seem to realize that that fact, in and of itself, affects the r-squareds. Instead, they have two alternative explanations. The first is that football statistics depend more on teammates than basketball statistics do -- which doesn't seem unreasonable, even without evidence backing it up.

But their second argument I'm not sure about. Berri and Schmidt argue that another reason professional football players are inconsistent is lack of experience. Why lack of experience? Because football players play only 16 games a season, so they're less experienced than basketball players, who play 82. Moreover, basketball players probably played pickup basketball every day as teenagers, while football players had to wait for organized leagues, because they couldn't just get a few friends together and play a real football game. So NBA players are more experienced because they've played a lot more basketball in their lives than NFL players have played football.

Well, it's probably true that NBA players have spent more time in games than NFL players, but I'm not sure why that's important. Why does playing fewer games (but still a lot of games -- a regular lot, rather than a huge lot) make you less consistent?

If I shoot foul shots for 15 minutes every day for a decade, and you shoot foul shots for 30 minutes every day for a decade, it would be expected that you'd be better than me. So suppose I have more talent, so that even with less practice, I'm as good as you. Now: why would you really be more consistent than me? We're both 70% shooters, say. For me to be less consistent, I'd have to have more 60% years and more 80% years, while you'd hover closer to 70% every year. Why would that be the case? I suppose it's possible, but it doesn't seem plausible to me. Where's the evidence? Why would it matter that we got to the same point with different amounts of practice time?

Would I be more variable day to day, too, so I'd wind up having more 60% games and more 80% games? If that were true -- if I'm sometimes 80% and sometimes 60% -- my shots would be clustered together more than average. That means I'd be more likely to make a shot after making my previous shot, and more likely to miss a shot after missing the previous one. That's the equivalent of saying that inexperienced players have a "hot hand" effect. But given that numerous "hot hand" studies have failed to find any effect, doesn't that suggest that all players are equally (binomially) consistent within their level of talent?
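Under the binomial model, a shooter's season-to-season spread depends only on his true percentage and his number of attempts -- not on how much practice it took him to get there. A quick sketch, with attempt counts made up for illustration:

```python
import random

def season_pct(true_pct, attempts, rng):
    """One season's observed FT% for a shooter with a fixed true talent."""
    makes = sum(rng.random() < true_pct for _ in range(attempts))
    return makes / attempts

rng = random.Random(7)

# Simulate many seasons for a true 70% shooter taking 500 attempts a year.
# A second 70% shooter with a totally different practice history would be
# modeled by exactly the same two parameters, so his spread is identical.
seasons = [season_pct(0.70, 500, rng) for _ in range(2000)]
mean = sum(seasons) / len(seasons)
sd = (sum((s - mean) ** 2 for s in seasons) / len(seasons)) ** 0.5

# Theoretical binomial season-to-season SD: sqrt(.7 * .3 / 500), about .0205
print(round(mean, 3), round(sd, 4))
```

The simulated spread matches the binomial formula; nothing in the model gives one 70% shooter more 60% and 80% seasons than another.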

Now, I suppose you can make the argument that because of inexperience, football players are more likely to still be learning their technique, so they might be continuously improving. In that case, you might see a QB go from 20% to 25% in some measure more often than a basketball player goes from 20% to 25% in a similar measure. But if that were true, wouldn't the QB be improving throughout his entire career, given that he plays only 16 games a season? In that case, he'd still be improving into his 30s, so his age-related dropoff would be mitigated, and he would look *more* consistent later in his career. So there would be a balance: young players appearing less consistent between seasons, and old players appearing more consistent. The result should be a wash.

So I just don't understand how any inconsistency caused by "inexperience" would happen.

------

It looks to me like the authors are looking at the raw r-squareds, and then coming up with possible explanations for why they differ. But, as I said, they miss what is by far the biggest explanation, which is simply sample size. It's just the nature of how correlations work: the smaller the sample, the more luck dominates the results, and the lower the season-to-season r-squareds. I bet if you looked more closely than just listing correlation coefficients, you'd discover that the difference in opportunities accounts for almost all of the gap right there.

We can do a quick calculation.

The authors found that NFL QB completion percentage had a year-to-year r-squared of .24. Suppose that's because you have 24 points of variance caused by talent, and 76 points of variance caused by luck.

Now, suppose you played 80 games in an NFL season instead of 16 -- five times as many games, and close to the 82 games that the NBA plays. Now you'd still have 24 points of variance caused by talent, but only one-fifth the original variance caused by luck, which works out to 15.2 points. That would give you an r-squared of (24/39.2), or .61. That fits right in to what you get for similar NBA year-to-year r-squareds:

.47 NBA field goal percentage

.59 NBA free throw percentage

.61 NBA turnovers per minute

.61 QB completion percentage (projected)

.68 NBA steals per minute

.75 NBA points per minute
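The projection above is a one-liner. Assuming talent variance stays fixed while luck variance shrinks in proportion to sample size, the adjusted r-squared is:

```python
def project_r2(observed_r2, sample_multiplier):
    """Project a year-to-year r-squared onto a season that is
    sample_multiplier times as long, holding the talent share of
    variance fixed and shrinking the luck share proportionally."""
    talent = observed_r2                           # points of variance from talent
    luck = (1 - observed_r2) / sample_multiplier   # shrunken luck variance
    return talent / (talent + luck)

# QB completion percentage: .24 over 16 games, projected to 80 games.
print(round(project_r2(0.24, 5), 2))  # ≈ .61, matching the figure above
```

The same function, run in reverse, shows why an 82-game r-squared of .90 would shrink drastically over a 16-game sample.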

See? It's just opportunities. Those other explanations, about teammates and inexperience, might be factors too. But they're minor factors at best, and, without evidence, they're just speculation.

In fairness, the authors may have evidence for them that they're not telling us about. They don't say that the apparent inconsistency "may" be caused by inexperience, or that they "suspect" or "wonder" if that's the cause. Rather, they say:

"The inconsistency with respect to football statistics can be *traced* to two issues: inexperience and teammate interactions." [emphasis mine.]

So they imply they traced the effect, but they don't say *how* they did the tracing. So while I'm currently very skeptical that the apparent "inconsistency" is anything more than just straight sample size, I'm still willing to look at the authors' evidence, when they choose to show it.

Labels: baseball, basketball, regression, statistics, Stumbling on Wins

## 9 Comments:

Outstanding post. You really have a talent for considering these sorts of things.

I've also wondered about the effect of sample sizes between sports, though with admittedly much less statistical insight.

Looking at the rebound issue, I suspect you're right. We really need only look at the correlation coefficient, right? After all, we're only doing a univariate analysis.

And we might compare the season-to-season correlation between number of rebounds and rebounds per minute played. If you're right, the per minute data should have a lower correlation...

Greg,

There's an easy technique that Tango came up with to figure out the spread of talent in a league. Figure out

T = variance of actual W-L percentage

Y = variance of what W-L percentage would be if all the teams were equal in talent (i.e., binomial approximation to the normal distribution with p=0.5)

If L is the variance due to talent, then

L = T - Y

Y can be computed as p(1-p)/n. So for the NFL, it's .015625. For MLB, it's .00154321.

If you do this for MLB, you'll find (this is from memory) that overall PCT has a variance of 11/162 squared, binomial luck has a variance of 6/162 squared, which means talent has a variance of 9/162 squared.

It's not hard to do all four major sports and compare the variance of talent.
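Here's that decomposition as a quick sketch, using the remembered MLB figure from this comment (an observed SD of about 11 wins over 162 games) as the input:

```python
def talent_sd_in_wins(observed_sd_wins, games, p=0.5):
    """Split observed W-L variance into binomial luck plus talent,
    following the L = T - Y decomposition above. Returns the talent
    spread expressed as an SD in wins per season."""
    obs_var = (observed_sd_wins / games) ** 2   # T, in winning-pct units
    luck_var = p * (1 - p) / games              # Y = p(1-p)/n
    talent_var = obs_var - luck_var             # L = T - Y
    return games * talent_var ** 0.5

# MLB: observed SD of about 11 wins over a 162-game season.
print(round(talent_sd_in_wins(11, 162), 1))  # ≈ 9.0 wins of talent spread

# Pure-luck SD for comparison, from Y = p(1-p)/n: about 6.4 wins.
print(round(162 * (0.25 / 162) ** 0.5, 1))
```

Plugging in each league's games-per-season and observed spread makes the four-sport comparison mechanical.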

Doc: The "opportunities" for rebounds aren't playing time opportunities, but rebounding opportunities. The idea is that every player has a different chance of being in position to handle the rebound.

GuyM came up with the perfect analogy: In baseball, first basemen make a lot more putouts than any other position. That's because the team has designated the guy to play the position where all the putouts go. In basketball, the team has tacitly designated certain players to be the rebounding guys, so their totals don't necessarily represent how good they are at rebounding, any more than the putouts of the 1B represent how good 1Bs are at generating putouts.

Phil:

I agree with the vast majority of your post. But I'd quibble with the way you formulate the 3rd criterion for a valid player evaluation statistic. You say:

"Third, one should look at how much the measure is a true reflection of the player's talent, and how much is a measure of factors outside the player's control."

I'm not sure that player's "control" is the most important factor here. Was the fact that Dennis Rodman took a huge share of his teams' defensive rebounds "outside of his control?" I suppose it literally was, in the sense that his coach had to tolerate it, but Rodman clearly wanted to play that role.

What matters more is that a marginal change in the statistic at the player level translates into the same marginal change in that statistic at the team level. (Or, if it doesn't, that this difference be adjusted for.) That's the problem with rebounds in Berri's metric -- each additional rebound for a player does not equal an additional rebound for the team (many are effectively taken from teammates). Baseball has fewer examples of this, but an analogy is a fielder who is a "ballhog" on easy-to-catch flyballs, pumping up his putouts with no real gain for the team.

Another way to say this is that "a good metric must properly take account of differences in opportunities" (which is really your argument). We can't evaluate a running back based on yards gained if we don't know how many carries he had. We can't evaluate a QB's passing yards without knowing his attempts. And we can't properly evaluate a rebounder without understanding how many effective rebound opportunities his team gave him (a statistic which isn't recorded).

Ironically, Berri sees this extremely clearly with regard to shooting. His big point about basketball is that players take shooting opportunities from other players -- so they need to use them efficiently. But he can't see that the same is true, to a considerable extent, for rebounds.

Yeah, you're right, Guy, I should rephrase that third point. I'd say "opportunities or other factors" because there could be other things involved, like garbage time and such.

And excellent point about Berri and points. You're right, it's exactly the same argument for rebounds, but he accepts one and not the other.

Rebounding isn't like saves or 1b putouts. You aren't placed in a role in basketball like you are in baseball or football.

Isn't the amount of talent the difference between the best and worst?

If I am the best in the world at calling coin tosses, and I am 52%, and you are the worst, and you are 47%, I'm sure you can come up with math showing it's luck, not talent or opportunity.

If the Yankee closer gets 35 saves, a replacement player given the same opportunities would get, say, 25, so the Yankee closer is that much better: 10 saves due to talent, 23 due to opportunity, versus a 2-save guy.

If a superstar 1B gets 800 putouts in a season, and a replacement 1B gets 750, then the superstar is 50 putouts better due to talent, and 600 better than an OF due to opportunity.

If I sort BB-Ref by Def Reb% for centers with over 1,000 minutes this year, I get Howard and Camby at 31% and 2 guys at 11%: 20% due to talent, and 4% over a 7% guard due to opportunity.

Anonymous: the idea is that, just as you need to be in the right position (at first base) to make lots of putouts, you need to be in the right position to make lots of rebounds.

And just like there's only one guy on the team chosen to be at first base (the first baseman), there's only one guy chosen to be in the best spot to pick up rebounds. So the fact that he makes lots of rebounds is a combination of talent, and the fact that he was somehow "chosen".

Someone who knows basketball better than I do can probably explain this better.

Getting to the right spot to make putouts at 1b is easy; even I could do that.

Getting to the right spot to get rebounds in the NBA is real hard. You've got people trying to prevent you from doing that.

I could probably make 90% of the putouts that the best 1b makes. I probably would get less than 33% of the rebounds the best rebounder gets.

And I am (was) a way better basketball player than baseball player.

Also, you're not "chosen" to be in the spot for rebounding, like you are at 1B -- you are primarily guarding a player. There might be an exception somewhere, especially if you are guarding someone who is not a threat.

You don't say to Dwight Howard we choose you to get rebounds. He guards his player first, and then tries to get the rebound.

The same as every center in the league. And they are all guarding the same set of players.

- and the variance for defensive rebounding between centers was between 11% and 31%.
