Are black NBA coaches more likely to be fired?
Alan Reifman was kind enough to point me to a new study on whether racial discrimination affects the firing of NBA coaches. In case you only care about the answer, it's no – coaches appear to be fired independently of whether they're black or white.
The study is called "Race, Technical Efficiency, and Retention: The Case of NBA Coaches." The authors are Rodney Fort, Young Hoon Lee, and David Berri. Fort is a renowned sports economist, as is Berri (who is co-author of "The Wages of Wins"). Lee is an academic economist in Korea.
The study is here (PDF). A press release describing the research is here.
As far as I can gather, "Technical Efficiency" (TE) is an economics term that refers to a firm's ability to efficiently produce valuable goods from its inputs of labor and capital. A TE of 1.00 would signify a firm that would produce 100% of the best possible theoretical output, given the staff and technology available to it. (That might not be completely correct, but it's what I gather from the Wikipedia description.)
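If that reading is right, TE is just the ratio of observed output to the best-possible ("frontier") output from the same inputs. A minimal sketch, with made-up win totals:

```python
def technical_efficiency(actual_output, frontier_output):
    """Ratio of what a firm actually produced to the best it could have
    produced with the same inputs; 1.0 means fully efficient."""
    return actual_output / frontier_output

# Hypothetical team: suppose the frontier model says its roster could
# support 50 wins, and it actually won 38.
te = technical_efficiency(38, 50)
print(te)  # 0.76 -- which happens to equal the paper's league-average TE
```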
The bulk of this paper tries to figure out how much efficiency coaches achieved from their teams, where the inputs are players and the outputs are wins. To do that, the authors figure out the win values of various basketball statistics, which are the same as the ones in "The Wages of Wins" (for instance, a rebound is worth 0.034 wins). Then, they look at last year's statistics for the players on every team, and figure out what they should have been expected to produce.
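The stat-weighting step amounts to a weighted sum over each player's stat line. In the sketch below, only the rebound weight (0.034 wins) comes from the study; the other weights and the stat lines are invented placeholders, not the actual "Wages of Wins" values:

```python
# Win value per unit of each statistic. Only 0.034 wins/rebound is from
# the paper; the point and turnover weights are made up for illustration.
WIN_VALUES = {"rebounds": 0.034, "points": 0.03, "turnovers": -0.03}

def expected_wins(roster_stats):
    """Sum every player's statistics, each weighted by its win value."""
    return sum(WIN_VALUES[stat] * qty
               for player in roster_stats
               for stat, qty in player.items())

# Two hypothetical players' last-year stat lines:
roster = [{"rebounds": 500, "points": 1200, "turnovers": 150},
          {"rebounds": 300, "points": 800, "turnovers": 100}]
total = expected_wins(roster)
```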
At this point, you could just look at the expected wins from the players, and use that to figure the expected wins for the team. But the authors added extra variables. There's roster stability, for how much this team's personnel varies from last year. There's years of experience for the coach. There's the coach's career winning percentage. And a few more.
I don't fully understand the statistical technique the authors used; it's some kind of maximum likelihood estimation, rather than a straight regression. It seems like it's standard in the economics literature when estimating technical efficiency.
In any case, the authors estimate TE for each season for each coach. The average TE for all coaches was .760. But coaches that got fired averaged only .670 before being dismissed. This suggests, the authors say, that "firings tend to occur as if owners use TE in the decision."
Furthermore, fired black coaches had TEs of .676, while fired white coaches had TEs of .666. This is a very small difference, not statistically significant, which leads to the paper's main conclusion that there is no racial discrimination by team owners.
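It's easy to see why a .676 vs. .666 gap wouldn't come close to significance. The two means below are from the paper, but the standard deviations and group sizes are invented just to show the mechanics of a two-sample comparison:

```python
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Means from the study; the SDs (0.15) and group sizes (30) are
# hypothetical -- the paper doesn't report them in this context.
t = two_sample_t(0.676, 0.15, 30, 0.666, 0.15, 30)
# t comes out around 0.26, far below the ~2.0 needed for significance
```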
Despite the fact that I don't completely understand the authors' methodology, it seems to me that it's a very complicated way of trying to figure out by how much the teams exceeded (or fell short of) expectations. Suppose you want to figure out whether the coach got the most out of his team. A simple way would be to just look at how it did last year, mentally account for any personnel changes, regress to the mean as appropriate, and see if it met that standard this year. An even simpler way would be to look at the pre-season Vegas over/under for wins. I recall that Bill James created a really simple "expected wins" formula, and did a similar manager evaluation based on that. It worked pretty reasonably, as I recall.
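A naive baseline of the kind I'm describing fits in a few lines. The 30% regression-to-the-mean factor here is an assumption for illustration, not an estimated quantity:

```python
def naive_expected_wins(last_year_wins, league_avg=41.0, shrinkage=0.3):
    """Regress last year's win total partway toward the league mean.
    The 0.3 shrinkage factor is assumed, not estimated."""
    return last_year_wins + shrinkage * (league_avg - last_year_wins)

def coach_performance(actual_wins, last_year_wins):
    """Wins above or below the naive expectation."""
    return actual_wins - naive_expected_wins(last_year_wins)

# A team that won 51 last year is expected to win about 48; if it then
# wins 55, the coach gets credit for roughly +7 wins.
surplus = coach_performance(55, 51)
```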
In that light, the algorithm used in this study seems a lot more complicated than it needs to be. (To me, it seems hugely, absurdly complex, but I'm not an academic economist, and it's probably standard in that field.)
Admittedly, a complex methodology would be worth it if it led to greater accuracy. But the authors don't compare their estimates to simpler ones. They do say, at one point, that their model has a high correlation with actual wins – 0.983. It seems to me that this must be retroactive, because, with the extent of luck in basketball, there's no way to predict wins with anywhere near that accuracy. So there's still no test of how well their model predicts *future* wins, which, of course, is what it's actually trying to do.
Frankly, I doubt that the system is that much better than a more naive one, if only because of the flaws in the measures of player productivity (as pointed out in discussions of "The Wages of Wins," here and elsewhere – my original review is here).
Another thing that bothers me is the apparent significance of the coach's career winning percentage:
"For a team with TE=0.70 and a coach with a career winning percentage of 0.500, hiring a coach with a career winning percentage of 0.600 increases the TE to 0.79."
I'm not 100% sure what a difference of 0.09 in TE means in practical terms. But it seems pretty big. And, regardless, do you really want to base any conclusions on the coach's previous record without taking into account the talent level of his teams?
And perhaps the one thing that bothers me the most about this study is that it doesn't mention, anywhere, the effects of luck. A team expected to go 41-41 will vary from that with a standard deviation of 4.5 games. Actually, that's the binomial estimate – in real life, it might be a bit lower, so let's call it 4. What is the variance in wins based on the coach's talent at molding a team? Even if it's the same 4 games – which I think is a huge overestimate – that means luck accounts for half the variance in any "efficiency" discrepancy. Shouldn't the study discuss what this means? Because it seems to me that it's likely that many coaches – perhaps most – would be fired due to bad luck, rather than bad coaching.
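The arithmetic behind those numbers is straightforward. For an 82-game season and a true .500 team, binomial luck alone gives:

```python
import math

games = 82
sd_luck = math.sqrt(games * 0.5 * 0.5)  # binomial SD of wins
print(round(sd_luck, 2))  # 4.53

# If the spread in coaching skill were also worth an SD of 4 wins --
# which I'd call generous -- luck and skill contribute equal variance:
luck_share = 4.0**2 / (4.0**2 + 4.0**2)
print(luck_share)  # 0.5
```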
As for the finding that coaches aren't fired based on race, I do think the study supports the conclusion. It may have used a complex methodology, but it does seem to properly distinguish the teams that exceeded expectations from the teams that did not. That I think there are much simpler ways of doing that doesn't change the fact that theirs probably does the job too.
Still, I have reservations about the study's methodology, in terms of predicting wins. I'd bet that if the authors had run a table of expected versus actual wins, using their methodology, it wouldn't be hard to find a simpler one -- without all the logarithms and likelihood estimators and regressions -- that would be at least as good.