Saturday, December 16, 2006

NHL faceoff skill adjusted for strength of opposition

Last season, Yanic Perreault led the NHL in faceoff winning percentage with 62.2% (559-340). But might that be because of the quality of opposition he faced? Maybe Perreault took more faceoffs against inferior opponents, and that inflated his numbers.

Meanwhile, Sidney Crosby was one of the worst in the league, winning faceoffs at only a 45.5% rate. Was he really that bad, or were his numbers dragged down because he faced a lot of the opposition's skilled front-line centers?

In this study, Javageek tries to figure that out. He assumed that each player in the league has an intrinsic faceoff winning percentage. Then, he assumed that when one player faces another, his chance of winning is determined by a Pythagorean projection. (The log5 method, explained here, might have been a better choice, but I don't think it matters a whole lot.)
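
For reference, here's what the log5 calculation looks like in code. This is just a sketch of the formula, not Javageek's actual method, and it relies on the fact that the league-average faceoff percentage is exactly .500 (every faceoff has one winner and one loser):

    # Sketch of the log5 head-to-head formula; Javageek used a Pythagorean
    # projection instead, but the two give very similar answers.
    def log5(p_a, p_b):
        """Chance that a player with true percentage p_a beats a player
        with true percentage p_b on a single faceoff (league average .500)."""
        return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

    # Example: a .622 player against a .455 player projects to about .66.
    print(log5(0.622, 0.455))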

He then took 121 faceoff men, and looked at their records against each other. That's 7,260 possible faceoff pairs. Javageek figured out (confession: I didn't really read the algebra) that the question could be answered by solving 121 equations in 121 unknowns. He did that, and came up with an adjusted faceoff percentage for each player, corrected for the quality of opposition.
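
Since I didn't read the algebra, I can't show Javageek's actual system of equations, but here's a rough sketch of one way to get the same kind of answer: keep nudging each player's rating until his expected wins (summing log5 over the opponents he actually faced) match his actual wins. The data layout, the damping factor, and the sample numbers below are my own inventions for illustration; the sketch reuses the log5 function from above.

    # Rough iterative sketch (my own, not Javageek's algebra).
    # 'matchups' maps (player_a, player_b) -> (faceoffs, wins by player_a).
    def adjust_ratings(players, matchups, iterations=200):
        rating = {p: 0.5 for p in players}   # start everyone at league average
        for _ in range(iterations):
            for p in players:
                expected = actual = total = 0.0
                for (a, b), (n, wins_a) in matchups.items():
                    if a == p:
                        expected += n * log5(rating[a], rating[b])
                        actual += wins_a
                        total += n
                    elif b == p:
                        expected += n * log5(rating[b], rating[a])
                        actual += n - wins_a
                        total += n
                if total:
                    # nudge the rating toward closing the gap, damped so it settles down
                    step = 0.5 * (actual - expected) / total
                    rating[p] = min(0.99, max(0.01, rating[p] + step))
        return rating

    # Invented numbers, just to show the shape of the input:
    centers = ["Perreault", "Crosby", "Vermette"]
    data = {("Perreault", "Crosby"): (30, 20),
            ("Perreault", "Vermette"): (25, 14),
            ("Crosby", "Vermette"): (40, 17)}
    print(adjust_ratings(centers, data))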

Bottom line: the adjustment doesn't matter much. Javageek didn't give any metrics comparing actual vs. theoretical, but a look at the two charts shows that in most cases, they're almost the same. Find any player in the left (theoretical) column, and he can be found not too far away in the right (actual) column. Yanic Perreault still leads, with an adjusted 63.8%, and Sidney Crosby drops to 43.8%.

It's important to note that these are still not the most appropriate estimators of the players' actual faceoff skills – you still have to regress to the mean to get an estimate of their true talent. You'd probably want to apply that regression to these adjusted numbers.
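
For what it's worth, the simplest way to do that regression is to pad each player's record with some number of league-average faceoffs before computing the percentage. The right amount of padding depends on how much true talent varies, which I haven't estimated here, so the 300 below is strictly a placeholder:

    # Sketch of regression to the mean; 'padding' (the number of
    # league-average faceoffs to add) is a placeholder, not a real estimate.
    def regressed(wins, faceoffs, league_avg=0.5, padding=300):
        return (wins + league_avg * padding) / (faceoffs + padding)

    # Perreault's raw 559-340 (.622) shrinks toward .500:
    print(regressed(559, 559 + 340))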

One thing that bothers me a bit about the results is that the top players become less extreme after the adjustment, but the bottom players become more extreme. You'd expect both halves of the data to be less extreme -- closer to the mean -- after you've adjusted out some of the luck. We don't see that with the bottom players, and I'm not sure what to make of that.

4 Comments:

At Sunday, December 17, 2006 10:38:00 AM, Blogger Ted said...

A technical nitpick: the resulting estimates are in fact "unbiased" estimates. Unbiasedness is a property of a statistic, i.e. the formula that converts a sample into a number, and not the resulting number.

For example, if we take a collection of coins and toss them each 100 times and count the proportion of heads for each coin, that proportion is an unbiased estimator of the true probability the coin will land heads. If we have two coins, one which lands heads 53 times and the other 47 times, then .53 and .47 are the unbiased estimates of the true probability for the coins. This is true even if both coins are, in reality, fair coins.

Unbiasedness simply means that there's no systematic error in one direction or another from the "true" probability.

I'm not arguing that we shouldn't regress to means -- we should -- but the reason isn't because the estimators are biased.

 
At Sunday, December 17, 2006 11:00:00 AM, Blogger Phil Birnbaum said...

Right, that's true. Thanks.

But if the statistic under consideration isn't the proportion of faceoffs won for Yanic Perreault, but the proportion of faceoffs won for the player *who's first in the league*, isn't that estimate now biased? That is, *the first order statistic* (i.e., the highest) of the observed proportions of the 121 players is a biased estimator of the actual probability for the associated player.
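
To make that concrete: if you simulated 121 centers who were all exactly .500 in true talent and gave each a few hundred faceoffs, the league "leader" would come out well above .500 every time, purely by luck. Something like this toy simulation, with made-up sample sizes:

    import random

    # Toy simulation: 121 identical .500 players, 500 faceoffs each.
    # The best observed percentage is consistently above .500, so the
    # observed maximum is a biased estimate of the leader's true talent.
    random.seed(1)
    trials = []
    for _ in range(100):
        best = max(sum(random.random() < 0.5 for _ in range(500)) / 500.0
                   for _ in range(121))
        trials.append(best)
    print(sum(trials) / len(trials))   # comes out around .55, not .500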

Trying to wiggle out on a technicality. Did it work? :)

Regardless, I will change "unbiased" to "accurate".

 
At Sunday, December 17, 2006 11:21:00 AM, Blogger Ted said...

Yes, that version's correct, and more precise in some sense, since it captures the essential point: the variance in observed performance is greater than the variance in actual talent.

(Now if only we could get that through the heads of 95% of the people who play sports simulations...)

 
At Sunday, December 17, 2006 5:51:00 PM, Blogger JavaGeek said...

There's actually a pretty big problem with this, as I have recently discovered that the face-off winning percentage for the team on the power play is 55%.

So if you take a lot of short-handed face-offs your percentage will be lower, and if you take a lot of power-play face-offs it will be higher.

I recalculated the theoretical values using only even-strength face-offs and got this table: Vermette got a higher rating than Perreault.

 
