Friday, May 27, 2011

A fun Markov lineup study

"Reconsideration of the Best Batting Order in Baseball: Is the Order to Maximize the Expected Number of Runs Really the Best?" by Nobuyoshi Hirotsu. JQAS, 7.2.13


When you compare two baseball teams, the one expected to score the most runs isn't necessarily the one that will win more often. For example, suppose team A scores 4 runs per game, every time. Team B scores 3 runs eight games out of ten, and scores 10 runs two games out of ten.

Even though team A scores 4 runs per game, and team B scores an average of 4.4 runs per game, A is going to have an .800 winning percentage against B.
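You can verify the arithmetic in a couple of lines. This sketch just encodes B's runs-per-game distribution and checks B's average and A's winning percentage:

```python
# Sanity check of the contrived example: team A scores exactly 4 runs
# every game; team B scores 3 runs 80% of the time and 10 runs 20% of
# the time.  B averages more runs, but A wins whenever B scores 3.

b_dist = {3: 0.8, 10: 0.2}   # B's runs-per-game distribution
a_runs = 4

b_mean = sum(runs * p for runs, p in b_dist.items())
a_win = sum(p for runs, p in b_dist.items() if runs < a_runs)

print(round(b_mean, 2), round(a_win, 2))   # → 4.4 0.8
```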

In a recent study in JQAS, Nobuyoshi Hirotsu asks the question: does this happen in real life, with batting orders? For the same nine players on a team, is there a batting order that produces fewer runs than another, but still wins 50% of games or more?

To answer the question, you need a lot of computing power. Unlike the contrived example above, any effects using a real batting order are likely to be very, very small. And there are 362,880 different ways of arranging a batting order.

But, Hirotsu was able to do it, thanks to a supercomputer at the University of Tokyo. For all 30 MLB teams in 2007, he took their most frequently used lineups, and did a Markov chain calculation to figure out how many runs they would score.

(He used a very simplified model of a baseball game -- hits, walks, and outs only. Outs do not advance baserunners. No double plays. Runners advance 1-2 and 2-H on a single, 1-3 on a double.
Because this is a Markov study and not a simulation, all figures quoted in the study are *exact* -- subject, of course, to the caveat that the model is much simpler than real baseball.)
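To give a feel for what an exact Markov calculation looks like, here's a sketch of one half-inning under advancement rules like the paper's. Two simplifications are mine, not Hirotsu's: every batter gets the same (made-up) event probabilities, rather than cycling through nine hitters, and the expected-runs equations are solved by value iteration rather than the paper's method.

```python
import itertools

# Made-up event probabilities, identical for every batter.
P = {"out": 0.68, "walk": 0.09, "single": 0.15,
     "double": 0.05, "triple": 0.01, "hr": 0.02}

def advance(bases, event):
    """Return (new_bases, runs_scored) for base state (b1, b2, b3),
    using simplified advancement: singles send 1->2 and 2->home,
    doubles send 1->3; outs never advance anyone."""
    b1, b2, b3 = bases
    if event == "walk":                      # forced advances only
        if b1 and b2 and b3: return (1, 1, 1), 1
        if b1 and b2:        return (1, 1, 1), 0
        if b1:               return (1, 1, b3), 0
        return (1, b2, b3), 0
    if event == "single":                    # 1->2, 2->H, 3->H
        return (1, b1, 0), b2 + b3
    if event == "double":                    # 1->3, 2->H, 3->H
        return (0, 1, b1), b2 + b3
    if event == "triple":
        return (0, 0, 1), b1 + b2 + b3
    if event == "hr":
        return (0, 0, 0), 1 + b1 + b2 + b3
    raise ValueError(event)

# 24 transient states: (outs, bases); 3 outs is absorbing, worth 0 runs.
states = [(o, b) for o in range(3)
          for b in itertools.product((0, 1), repeat=3)]

# Value iteration: E[s] converges to the exact expected runs from s.
E = {s: 0.0 for s in states}
for _ in range(500):
    new = {}
    for outs, bases in states:
        v = P["out"] * E.get((outs + 1, bases), 0.0)
        for event in ("walk", "single", "double", "triple", "hr"):
            nb, runs = advance(bases, event)
            v += P[event] * (runs + E.get((outs, nb), 0.0))
        new[(outs, bases)] = v
    E = new

print(round(E[(0, (0, 0, 0))], 3))   # expected runs per half-inning
```

The real study does this for all 24 base-out states, nine distinct hitters, and nine innings, for each of 362,880 orderings, which is where the supercomputer comes in.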

After calculating all 362,880 results, he took the single top-scoring lineup, and compared its single-game results against each of the best of the other lineups (10,000 to 30,000 of them, depending on the team). Any lineup that beat the top one more than 50% of the time, despite scoring fewer runs, he marked down.

So, about 600,000 pairs of lineups were compared (say, 20,000 per team times 30 teams). After all that work, how many sets of lineups do you think he found that met the criteria?
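The head-to-head comparison boils down to: given each lineup's runs-per-game distribution, what fraction of decisive games does one lineup win? Here's a sketch with made-up distributions; how the study actually treats tied games isn't spelled out here, so this version just conditions on the game not ending level (as if ties were replayed):

```python
def win_pct(dist_a, dist_b):
    """P(A outscores B), given independent runs-per-game distributions
    (dicts mapping runs -> probability), with tied games thrown out."""
    p_a = sum(pa * pb for ra, pa in dist_a.items()
                      for rb, pb in dist_b.items() if ra > rb)
    p_b = sum(pa * pb for ra, pa in dist_a.items()
                      for rb, pb in dist_b.items() if ra < rb)
    return p_a / (p_a + p_b)

# The contrived example from the top of the post:
team_a = {4: 1.0}
team_b = {3: 0.8, 10: 0.2}
print(round(win_pct(team_a, team_b), 3))   # → 0.8
```

With the full Markov model, each lineup's run distribution is exact, so the resulting winning percentages (like the .50014 below) are exact too.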


There were 13 sets of lineups where the team with more runs scored was not the team with the better record. Those 13 were distributed among only 7 different MLB teams.

For instance, the White Sox. The most frequent lineup actually used by the manager was: Owens, Iguchi, Thome, Konerko, Pierzynski, Dye, Mackowiak, Fields, Uribe. Call that one "123456789".

The lineup that maximized runs scored was "347865921," with 4.85966 runs per game. However, the lineup "348675921" beat it with a .50014 winning percentage, despite scoring only 4.85931 runs per game.

As a general rule, it seems, the lower the standard deviation of runs per game, the more likely a lower-scoring lineup can beat a higher-scoring lineup. In the White Sox case, the SDs were 3.2649 and 3.25817, respectively.

If you studied it, you could probably look at the 13 cases and try to figure out what it is about the players and lineups that makes this possible -- that is, how to create a lineup that's almost as good as the best, but has an SD low enough to compensate. Do you need lots of power hitters, few, or a mix? Do you have to cluster them, split them up, or go half/half? I have no idea.


Anyway, even though the study really has no practical significance, I really like it. Recreational sabermetrics, I guess you could call it.


