Sunday, July 18, 2010

A Pythagorean formula to predict overtime winners

There's a study in the latest JQAS where we learn something concrete about teams' probabilities of winning overtime games. It's called "Predicting Overtime with the Pythagorean Formula," by Jason W. Rosenfeld, Jake I. Fisher, Daniel Adler, and Carl Morris.

The "usual" pythagorean formula looks like

winning percentage = RS^a / (RS^a + RA^a)

where RS = runs scored (or points scored, or goals scored), RA = runs allowed, and "a" is an exponent that makes the equation work for the particular sport.

The original formula for baseball used an exponent of 2; it was later found that 1.83 was better, and there's a version called "PythagenPat" that varies the exponent according to league-average runs scoring.
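In Python, the formula looks something like this (my own sketch, not code from the study; the 0.287 constant in the PythagenPat version is just one commonly cited value):

    def pythag_win_pct(rs, ra, exponent):
        # expected winning percentage from runs (or points, or goals) scored and allowed
        return rs ** exponent / (rs ** exponent + ra ** exponent)

    def pythagenpat_exponent(total_runs_per_game, k=0.287):
        # PythagenPat: let the exponent float with the scoring environment
        return total_runs_per_game ** k

    # a team that scores 5 runs a game and allows 4
    print(round(pythag_win_pct(5, 4, 2.00), 3))  # ~0.610 with the original exponent of 2
    print(round(pythag_win_pct(5, 4, 1.83), 3))  # ~0.601 with the refined exponent
    print(round(pythagenpat_exponent(9), 2))     # ~1.88 in a 9-total-runs-per-game environment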


Anyway, the authors start by taking a bunch of real-life games in three of the four major sports, and finding the best-fit exponent. They come up with:

NBA: 14.05
MLB: 1.94
NFL: 2.59

Those are the best exponents for full games. Now, what about overtime?

From the title of the paper, you'd think that the authors would do the same thing, but for overtime games: look at overtime runs scored and allowed, and find the best fit exponent.

But that's not actually what they have in mind. Their actual question is something like: suppose a baseball team scores 5 runs per game and allows 4 runs per game in general. Using the usual exponent of 2, that team should have a winning percentage of .610. But: what should that team's winning percentage be if we limit the sample to extra-inning games?

That is: forget how many runs that team *actually* scored in extra innings. Assume only its *expectation* in extra innings, that it will, on average, continue to score 5 runs to its opponents' 4 (after adjusting for the effects of walk-offs). What percentage of tie games do you now expect it to win in extra innings?

To figure that out, the authors use a database of actual tie games, and try to find the exponent on "regular" RS and RA that best matches the team's actual extra-inning record. But, for extra-inning games, you can't assume the opposing teams average out to .500. So the authors made an assumption: that you can adjust the team's RS/RA ratio by dividing it by its opposition's.

So, again, suppose our team outscores its opponents 5 to 4, but now it's in the 10th inning against a team that also outscores its opponents, 4.75 to 4.25. Our team has an RS/RA ratio of 1.25. The other team's ratio is 4.75/4.25, or about 1.12. Dividing the two gives 1.25/1.12, which (coincidentally) is also about 1.12. And so, for this particular game, our team will go into the database with a ratio of about 1.12 -- as if it scores about 4.47 runs for every 4.00 runs it gives up.
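Here's the arithmetic spelled out (my own check of the adjustment as I understand it, not anything from the paper):

    our_ratio = 5.0 / 4.0            # 1.25
    opp_ratio = 4.75 / 4.25          # ~1.12
    adjusted = our_ratio / opp_ratio
    print(round(adjusted, 2))        # ~1.12
    print(round(4.0 * adjusted, 2))  # ~4.47 runs scored per 4.00 allowed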

Having made that adjustment for each of the overtime games in the database -- 1,012 NBA, 269 NFL, and 1,775 MLB -- the authors computed what pythagorean exponent gave the best predictions. Here's what they found:

NBA: 9.22 overtime
MLB: 0.94 overtime
NFL: 1.18 overtime

So, suppose our baseball team scores 5 and allows 4, for a .610 winning percentage overall. What would its winning percentage be in extra-inning games? To get the answer, just rerun the pythag formula with an exponent of 0.94. The result: .552. That seems quite reasonable.
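In code (again, my sketch of the calculation, not the authors'):

    def pythag_win_pct(rs, ra, exponent):
        return rs ** exponent / (rs ** exponent + ra ** exponent)

    print(round(pythag_win_pct(5, 4, 2.00), 3))  # ~0.610 -- full-game expectation
    print(round(pythag_win_pct(5, 4, 0.94), 3))  # ~0.552 -- extra-inning expectation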

The authors do a similar calculation for teams that are .750 overall (which isn't that realistic for baseball, but never mind). The results:

NBA: a .750 team is .673 in overtime
NFL: a .750 team is .616 in overtime
MLB: a .750 team is .630 in extra innings
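If I understand the mechanics right, you can roughly reproduce those numbers by backing out the scoring ratio implied by a .750 record under the full-game exponent, then re-applying the overtime exponent. Something like this (my assumption about the method, not the paper's code):

    def overtime_pct(full_pct, full_exp, ot_exp):
        ratio = (full_pct / (1 - full_pct)) ** (1 / full_exp)  # implied RS/RA ratio
        return ratio ** ot_exp / (ratio ** ot_exp + 1)

    print(round(overtime_pct(0.750, 14.05, 9.22), 3))  # NBA: ~0.673
    print(round(overtime_pct(0.750, 2.59, 1.18), 3))   # NFL: ~0.62 (the study reports .616)
    print(round(overtime_pct(0.750, 1.94, 0.94), 3))   # MLB: ~0.630

(The NFL figure comes out a shade higher than the study's, so the authors are presumably folding in some extra detail there, maybe the coin flip.)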

------

This is good stuff and new knowledge -- an empirical result that we didn't have before. Because of that, I'm even willing to forgive the authors' not including the NHL. Actually, for hockey, you don't need empirical data to figure it out -- at least if you assume the playoff sudden-death format, where they play indefinitely until one team scores. In that case, the exponent must be very close to 1.00.

Why? Imagine that team A scores 4 goals a game, and team B scores 3. If you mix those 7 goals up in random order, and pick the first one to be the overtime winner, the probability is obviously 4/(4+3) that team A wins, and 3/(4+3) that team B wins. In other words, the pythagorean formula with an exponent of exactly 1.
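You can convince yourself with a quick simulation (my sketch of that argument, not anything from the paper): shuffle team A's 4 goals and team B's 3 into a random order and count how often an A goal comes up first.

    import random

    def first_goal_share(goals_a=4, goals_b=3, trials=200_000):
        wins_a = 0
        for _ in range(trials):
            goals = ["A"] * goals_a + ["B"] * goals_b
            random.shuffle(goals)
            if goals[0] == "A":   # the first goal in the random order wins the game
                wins_a += 1
        return wins_a / trials

    print(round(first_goal_share(), 3))  # ~0.571
    print(round(4 / (4 + 3), 3))         # 0.571 -- the exponent-1 pythagorean prediction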


It might actually be a little less -- better teams have more empty-net goals, which really should be eliminated from the overtime calculation. Also, better teams might have slightly better power plays, and, because there are fewer penalties in overtime, that might reduce their chances a bit. Still, an exponent of 1.00 is probably close.

------

The study gives us a pretty good empirical answer to the question it asks. But, to be picky, we could argue that there's not necessarily a reason that Pythagoras should work, theoretically. That's because, in MLB and the NFL, the overtime game is scored differently from the regular game.

Take football first. In an NFL overtime (ignoring this year's rule change), the first team to score wins, and it doesn't matter whether it's a field goal or a touchdown. So it's no longer a game of *points* -- it's a game of *scoring plays*. To properly predict who wins, you want to know about the teams' probabilities of scoring, unweighted by points. For instance, suppose team Q scores 28 points a game, and team R scores 20 points a game. But Q does it with four touchdowns, and R does it with two TDs and two field goals. In that case, you'd expect the two teams to have exactly equal chances of winning in overtime, since they score the *same number of times per game*. There's no real reason to expect a points-based pythagorean formula to work here -- you'd want a pythagorean formula based on scoring plays.
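To see the difference, compare a points-based prediction to a scoring-plays-based one for those two hypothetical teams (my illustration; the 1.18 exponent is the study's NFL overtime figure, everything else is made up):

    def pythag(a, b, exponent):
        return a ** exponent / (a ** exponent + b ** exponent)

    # points-based: Q's 28 points a game against R's 20
    print(round(pythag(28, 20, 1.18), 3))  # ~0.60 -- Q looks like a clear favorite

    # scoring-plays-based: four scores a game for each team
    print(round(pythag(4, 4, 1.18), 3))    # 0.5 -- an even game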

(Except for one complication: you'd need to somehow estimate how many extra field goals the touchdown team would score if it never went for a touchdown while in field-goal range. That is: sometimes a team will be at its opponent's 20-yard line, but they'll go for the TD instead, and, as a result, they'll sometimes lose the ball without getting any score at all. So a TD might really be the equivalent of (say) 1.2 field goals in terms of scoring plays, because, for every TD scored, there are maybe 0.2 potential FGs lost on other drives by going for the TD and failing. If the ratio were 2.33, of course, then points would work perfectly, since 2.33 equals 7 divided by 3. But I suspect it's not nearly that high.)

A similar argument applies to baseball. In the NFL, an overtime "big inning" (TD) is worth exactly the same as a "one-run inning" (FG). That's not quite the case in MLB -- it's still better to score four runs in the top of the 10th than to score just one run. But, not *much* better. In extra innings, what matters more is *your chance of scoring* in an inning, and not the average number of runs (which matters more in a full game).

Having said that, I should say that I'm not sure it matters very much at all. I'm sure that, in baseball, big-inning and small-inning teams have different pythagorean exponents too, but we don't worry about that too much. So why should we really worry about it here?

------

Finally, two questions I don't know the answers to.

First, the authors write,

"One might expect the relative alphas [overtime exponent divided by regular exponent] in these three sports to equate roughly to the square root of the relative lengths of overtime, compared to a full-length game. If so, that would suggest hat the NBA overtime alpha would be 5, not the NBA's estimated 9.22."


Is there a reason the square root argument should be true? I mean, yeah, when you talk about standard deviations, you use the square root of sample size, but, here, we're talking about the value of an exponent. So is there a mathematical argument for why it should be true anyway?
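For what it's worth, here's where the 5 seems to come from, as best I can tell (my arithmetic, not the authors'): an NBA overtime is 5 minutes against a 48-minute game, so the square-root rule scales the full-game exponent of 14.05 by the square root of 5/48.

    import math

    print(round(14.05 * math.sqrt(5 / 48), 2))  # ~4.53, call it 5, versus the fitted 9.22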

Second: on page 13 of the study, in the paragraph before the section break, the authors give a calculation showing that, if you eliminate the NFL overtime coin flip, a .750 team's probability of winning in overtime would rise from .6116 to .6160. I don't understand the calculation, and the change in winning percentage seems low. Can anyone explain?




3 Comments:

At Tuesday, July 27, 2010 3:03:00 AM, Blogger Hawerchuk said...

Phil

I ran the numbers for OT in the NHL:

http://www.behindthenethockey.com/2009/11/24/1109937/overtime

 
At Tuesday, July 27, 2010 11:14:00 AM, Blogger Phil Birnbaum said...

Cool. There's a lot more regression to the mean there than I thought.

Suppose a team scores 4 goals and allows 3. That means it has a winning percentage of .640 (with exponent 2 -- what is it for hockey?).

In overtime, you'd expect it to score the first goal 4/7 of the time, which is .571. Your chart shows about .530.

I guess it's .571 in games that actually have an overtime goal, and .500 in other games. That's .536, which is pretty close.

 
At Tuesday, July 27, 2010 11:15:00 AM, Blogger Phil Birnbaum said...

Oops, that assumed that 50% of overtime games have a goal. I shouldn't have made that assumption.

 
