Saturday, January 20, 2007

Do NBA teams value consistency?

Last week, in honor of Martin Luther King, Jr. Day, David Berri posted a list of academic studies of racial discrimination in basketball. Some found discrimination, some didn’t – check out the link for a summary of all the findings.

I thought I’d take a look at some of these studies. So far I’ve looked at only one: "Do Employers Pay for Consistent Performance?" by Orn B. Bodvarsson and Raymond T. Brastow. I should state in advance that the copy I was able to download from my library was missing a lot of the math and tables, but I think I was able to get the gist of what the study was trying to do. An expensive download is here.

The idea is something like this. In any employer/employee relationship, it is difficult for the employer to figure out how good the employee’s output is, in quality, quantity, or both. Therefore, "costly monitoring" of the employee’s output is required.

In this study, the employer is the team, and the employee is the player. Assume the player’s MRP ("Marginal Revenue Product," which is basically the monetary value of his output) has an expected value of theta. (At least I think it’s theta – the crappy text version I have uses "0", which makes no sense.) The team initially doesn’t know what theta is, so it has to monitor the player. The team will spend c dollars per game in monitoring, until it has seen enough to be able to estimate theta within a narrow range. Then it will stop spending those c dollars.

Therefore, players who are more consistent will earn more money than players who are less consistent. That’s because inconsistent players have to be monitored for more games. Since teams pay for monitoring by the game, inconsistent players will cost more, and the team will factor that into its salary offers.
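
Here’s a toy version of that logic, as I understand it. All the numbers are made up – theta, the noise level, the per-game cost c, and the stopping tolerance are purely illustrative – but the point survives: the noisier the player, the more games the team has to watch before its estimate of theta settles down, and the bigger the monitoring bill.

```python
# A toy sketch of the monitoring model: watch the player game by game,
# at cost c per game, until the 95% confidence interval around the
# estimate of theta (his expected MRP) is narrower than some tolerance.
import numpy as np

def monitoring_cost(theta, sd, c=1000, tolerance=0.5, rng=None):
    """Games watched until the CI half-width (1.96 * sample_sd / sqrt(n))
    falls below tolerance, times the per-game cost c."""
    rng = rng or np.random.default_rng(0)
    n = 2  # need at least two games to estimate a variance
    observed = list(rng.normal(theta, sd, size=n))
    while 1.96 * np.std(observed, ddof=1) / np.sqrt(n) > tolerance:
        observed.append(rng.normal(theta, sd))
        n += 1
    return n * c

# Two players with identical expected output but different consistency:
print(monitoring_cost(theta=10, sd=1))  # consistent: few games, cheap
print(monitoring_cost(theta=10, sd=3))  # inconsistent: many games, costly
```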

With this model in mind, the authors run a regression of player salary on a whole bunch of variables. They use points per minute as the main performance statistic, and include the observed variance of the player’s PPM, since the model assumes that should be an important determinant of salary.
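
For readers who want the shape of the thing, here’s a bare-bones sketch of that kind of regression. The data are synthetic, the variable list is abbreviated, and the coefficients I bake in are arbitrary – the actual study has many more covariates – but it shows where the variance term sits.

```python
# Sketch: regress log salary on performance variables, including the
# observed variance of points per minute. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 300
ppm     = rng.normal(0.45, 0.10, n)       # points per minute
var_ppm = rng.gamma(2.0, 0.01, n)         # observed variance of PPM
seasons = rng.integers(1, 15, n)
minutes = rng.normal(25, 8, n).clip(5, 45)

# Synthetic salaries: better and more consistent players earn more.
log_salary = (13 + 3.0 * ppm - 8.0 * var_ppm + 0.03 * seasons
              + 0.02 * minutes + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), ppm, var_ppm, seasons, minutes])
coefs, *_ = np.linalg.lstsq(X, log_salary, rcond=None)
print(dict(zip(["const", "ppm", "var_ppm", "seasons", "minutes"], coefs)))
```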

I don’t have the full regression results in my copy of the study, but the authors kindly list the most important findings, which are:

-- variance is significant at the 5% level;
-- the more seasons played, the less the variance predicts salary;
-- there are no observed effects for race.

My main problem with this study is the applicability of its assumptions to the NBA. I have no doubt that in real life, as opposed to professional sports, knowing the capabilities of your employees is difficult, and monitoring costs are indeed high. Magazines are full of articles trying to help managers figure out who their best performers are, and how to know whom to hire. I’m a software developer, and where I used to work, there were programmers of all different ability levels, and some of them were literally ten times as productive as others. But management was generally oblivious to the differences. And if that’s the case for programmers – where you actually can measure output without too much difficulty if you choose to – I can imagine that for jobs that don’t have quantitative outputs, like customer service or management, the monitoring problem is indeed very significant.

But in basketball? There aren’t very many fields of human endeavor where it’s easier or cheaper to measure an employee’s output. If your metric is points per minute, like in this study, the cost is pretty much zero – USA Today, and the NBA itself, will do it for you, almost instantly.

And as for the variance of output affecting salaries, I doubt that the authors’ explanation is the correct one. If the cost of monitoring is zero, it’s hard to accept that the lower salaries are caused by that cost. It seems more likely that players who vary in terms of points per minute are inconsistent because of lesser playing time. Theoretically, the standard deviation of single-game PPMs is inversely proportional to the square root of minutes played per game. (Minutes played is included in the regression as a separate variable, but I don’t think there’s any interaction term between PPM and the inverse of the square root of minutes.) The fewer minutes played, the less skilled the player is likely to be, and that would explain the lower salary.
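
You can check that claim with a quick simulation. If you treat scoring as, say, a Poisson process (the 0.5 points-per-minute rate here is arbitrary), the SD of a player’s single-game PPM shrinks like one over the square root of his minutes:

```python
# SD of single-game PPM under a Poisson scoring model: it should
# roughly halve every time minutes quadruple.
import numpy as np

rng = np.random.default_rng(2)
rate = 0.5  # expected points per minute (illustrative)

for minutes in [10, 20, 40]:
    points = rng.poisson(rate * minutes, size=100_000)  # points per game
    ppm = points / minutes
    print(minutes, round(ppm.std(), 4))
```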

Even if I’m mistaken, and the study does actually adjust standard deviation for minutes played, there are other possible explanations. For instance, players with a high SD might be played in many different kinds of situations – with different teammates, or more often in garbage time, and so on. Those players are likely to be role players, since guys with lots of minutes pretty much play in all kinds of situations. And role players earn less than stars.

Even if all else were equal, I don’t see why teams should value consistent players more highly. For one thing, if a player’s performance varies by situation, he becomes more valuable, not less. If a player is twice as good at home as on the road, the team can play him only at home, as a kind of platoon player. Who’s more valuable in baseball: the consistent .240 hitter, or the guy who hits .300 against lefties but .210 against righties? The inconsistent one, obviously.
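
To put numbers on that – the 70/30 righty/lefty split is my assumption, just to make the comparison concrete:

```python
# Platoon arithmetic for the splitty hitter vs. the consistent .240 guy.
lefty_share = 0.30  # assumed fraction of plate appearances vs. lefties

consistent = 0.240
full_time  = (1 - lefty_share) * 0.210 + lefty_share * 0.300  # ~ .237
platooned  = 0.300  # used only against lefties

print(consistent, round(full_time, 3), platooned)
```

Played every day, the splitty hitter is actually a hair worse than the consistent guy; platooned, he’s sixty points better.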

Second, I’d bet that almost all the observed differences in SD are due to luck. If you look at just foul shooting, you can calculate the theoretical variance from the binomial distribution. Any player who has a higher per-game variance, and is thus “inconsistent,” must be having hot streaks and cold streaks. And, as numerous studies have told us, streakiness is almost always random. (See Alan Reifman’s "hot hand" blog for links and references.)

Finally, I’ve never heard any sports executive complain that a player was too streaky, except in the context of how badly he played during his off-days. Take, for example, baseball. In the 80s, both Bill James and the Elias people broke down every regular batter’s record by month. Do you remember any of them? Do you remember anyone ever complaining that a particular .300 hitter was less desirable than another because he got his .300 by hitting .250 in May but .350 in August? (It’s true that if a player hits .170 in September, you might wonder if something’s wrong with him, or if he’s washed up. But that goes to the question of whether his established ability has changed, rather than any inconsistency.)

It makes sense that consistency is desirable in the non-sports context, for exactly the reasons the authors give – it makes it less costly for the employer to evaluate the employee. But in the NBA context, I don’t think that’s really the case.

