Monday, April 14, 2014

Accurate prediction and the speed of light III

There's a natural "speed of light" limitation on how accurate pre-season predictions can be.  For a baseball season, that lower bound is 6.4 games out of 162.  That is, even if you were to know everything that can possibly be known about a team's talent -- including future injuries -- the expected SD of your prediction errors could never be less than 6.4.   (Of course, you could beat 6.4 by plain luck.)

Some commenters at FiveThirtyEight disagree with that position.  One explicitly argued that the limit is zero -- which implies that, if you had enough information, you could be expected to get every team exactly right.  That opinion isn't an outlier  -- other commenters agreed, and the original comment got five "likes," more than any other comment on the post where it appeared.


Suppose it *were* possible to get the win total exactly right. By studying the teams and players intently, you could figure out, for instance, that the 2014 Los Angeles Dodgers would definitely go 92-70.

Now, after 161 games, the Dodgers would have to be 91-70 or 92-69.  Either way, for them to finish 92-70, you would have to *know*, before the last game, whether it would be a win or a loss.  If there were any doubt at all, there would be a chance the prediction wouldn't be right.

Therefore, if you believe there is no natural limit to how accurate you can get in predicting a season, you have to believe that it is also possible to predict game 162 with 100% accuracy.

Do you really want to bite that bullet?  

And, is there something special about the number 162?  If you also think there's no limit to how accurate you can be for the first 161 games ... well, then, you have the same situation.  For your prediction to be perfect in expectation, you have to know the outcome of the 161st game in advance.

And so on for the 160th game, and 159th, and so on.  A zero "speed of light" means that you have to know the result of every game before it happens.  


From what I've seen, when readers reject the idea that the lowest error SD is 6.4, they're reacting to the analogy of coin flipping.  They think something like, "sure, the SD is 6.4 if you think every baseball game is like a coin flip, or a series of dice rolls like in an APBA game.  But in real life, there are no dice. There are flesh-and-blood pitchers and hitters.  The results aren't random, so, in principle, they must be predictable."

I don't think they are predictable at all.  I think the results of real games truly *are* as random as coin tosses.  

As I've argued before, humans have only so much control of their bodies. Justin Verlander might want to put the ball in a certain spot X, at a certain speed Y ... but he can't.  He can just come as close to X and Y as he can, and those discrepancies are random.  Will it be a fraction of a millimeter higher than X, or a fraction lower?  Who knows?  It depends on which neurons fire in his brain at which times.  It depends on whether he's distracted for a millionth of a second by crowd noise, or his glove slipping a bit.  It probably depends on where the seam of the baseball is touching his finger. (And we haven't even talked about the hitter yet.)

It's like the "chaos theory" example of how a butterfly flapping its wings in Brazil can cause a hurricane in Texas. Even if you believe it's all deterministic in theory, it's indistinguishable from random in practice.  I'd bet there aren't enough atoms in the universe to build a computer capable of predicting the final position of the ball from the state of Justin Verlander's mind while he goes into his stretch -- especially, to an extent where you can predict how Mike Trout will hit it, assuming you have a second computer for *his* mind.

What I suspect is, people think of the dice rolls as substitutes for identifiable flesh-and-blood variation. But they aren't. The dice rolls are substitutes for randomness that's already there. The flesh-and-blood variation goes *on top of that*.

APBA games *don't* consider the flesh-and-blood variation, which is why it's much easier to predict an APBA team's wins than a real-life team's wins.  In a game, you know the exact probabilities before every plate appearance.  In real life, you don't know that the batter is expecting fastballs today, but the pitcher is throwing more change-ups.  

The "speed of light" is *higher* in real-life than in simulations, not lower.


Now, perhaps I'm attacking a straw man here.  Maybe nobody *really* believes that it's possible to predict with an error of zero.  Maybe it's just a figure of speech, and what they're really saying is that the natural limit is much lower than 6.4.

Still, there are some pretty good arguments that it can't be all that much lower.

The 6.4 figure comes from the mathematics of flipping a fair coin.  Suppose you try to predict the outcome of a single flip. Your best guess is, "0.5 heads, 0.5 tails".  No matter the eventual outcome of the flip, your error will be 0.5.  That also happens to be the SD, in this case.  

(If you want, you can guess 1-0 or 0-1 instead ... your average error will still be 0.5.  That's a special case that works out for a single fair coin.)  

It's a statistical fact that the SD of a sum of independent flips grows with the square root of the number of flips.  The square root of 162 is around 12.7. Multiply that by the single-flip SD of 0.5, and you get 6.364, which rounds to 6.4. That's it!
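That arithmetic is easy to verify; here's a minimal sketch in Python:

```python
import math

# SD of a single fair coin flip (outcome 0 or 1) is sqrt(0.5 * 0.5) = 0.5
per_game_sd = math.sqrt(0.5 * 0.5)

# For independent flips, the SD of the season total scales with sqrt(games)
speed_of_light = per_game_sd * math.sqrt(162)

print(round(speed_of_light, 3))  # 6.364
```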

Skeptics will say ... well, that's all very well and good, but baseball games aren't independent, and the odds aren't 50/50!  

They're right ... but, it turns out, fixing that problem doesn't change the answer very much.

Let's check what happens if one team is a favorite.  What if the home team wins 60% of the time instead of 50%?

Well, in that case, your best bet is to guess the home team will have a one-game record of 0.6 wins and 0.4 losses.  Six times out of 10, your error will be 0.4.  Four times out of ten, your error will be 0.6.  The root mean square of (0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.6, 0.6, 0.6, 0.6) is around 0.4899. Multiply that by the square root of 162, and you get 6.235.  

That, indeed, is smaller than 6.364.  But not by much.  Still, the critics are right ... the assumption that all games are 50/50 does make for slightly too much pessimism.

Mathematically, the SD is always equal to the square root of (chance of team A winning)*(chance of team B winning)*(162).  That has its maximum value when each game is a 50/50 toss-up.  If there's a clear favorite, the SD drops;  the more lopsided the matchup, the lower the SD.  

But the SD drops very slowly for normal, baseball levels of competitive balance.  As we saw, a season of 60/40 games (60/40 corresponds to Vegas odds of +150/-150 before vigorish) only drops the speed of light from 6.36 to 6.23.  If you go two-thirds/one-third -- which is roughly the equivalent of a first-place team against a last-place team -- the SD drops to exactly 6.000, still pretty high.
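The general formula handles any fixed per-game probability; a quick sketch tabulating the three cases discussed here:

```python
import math

def season_sd(p, games=162):
    """Error SD over a season where one team wins each game with probability p."""
    return math.sqrt(p * (1 - p) * games)

for p in (0.5, 0.6, 2 / 3):
    print(f"p = {p:.3f}: SD = {season_sd(p):.3f}")
# p = 0.500: SD = 6.364
# p = 0.600: SD = 6.235
# p = 0.667: SD = 6.000
```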

In real life, every game is different; the odds depend on the teams, the starting pitchers, the park, injuries, and all kinds of other things. Still, that doesn't affect the logic much. With perfect information, you'll know the odds for each individual game, and you can just use the "average" in some sense.  

Looking at the betting lines for tomorrow (April 15, 2014) ... the average favorite has an implied expected winning percentage of .568 (or -132 before vigorish).  Let's be conservative, and say that the average is really .600.  In that case, the "speed of light" comes out to 6.2 instead of 6.4.  
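Converting a moneyline to an implied probability is a standard calculation (the helper function below is mine, not from the post):

```python
import math

def implied_prob(moneyline):
    """Implied win probability from an American moneyline, ignoring vigorish."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

print(round(implied_prob(-132), 3))          # 0.569, close to the .568 average quoted
print(round(math.sqrt(0.6 * 0.4 * 162), 1))  # the conservative .600 case: 6.2
```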

For an analogy, if you like, think of 6.4 as the speed of light in a vacuum, and 6.2 as the speed of light in air.  


What about independence?  Coin flips are independent, but baseball games might not be.  

That's a fair point.  If games are positively correlated with each other, the SD will increase; if they're negatively correlated, the SD will decrease.  

To see why: imagine that games are so positively correlated that every result is the same as the previous one.  Then, every team goes 162-0 or 0-162 ... your best bet is to predict each team at their actual talent level, close to .500. Your error SD will be around 81 games, which is much higher than 6.2.

More importantly: imagine that games are negatively correlated so that every second game is the opposite of the game before.  Then, every team goes 81-81, and you *can* predict perfectly.

But ... these are very extreme, unrealistic assumptions.  And, as before, the SD drops very, very slowly for less implausible levels of negative correlation.

Suppose that every second game, your .500 team has only a 40% chance of the same result as the previous game.  That would still be unrealistically huge ... it would mean an 81-81 team is a 97-65 talent after a loss, and a 65-97 talent after a win.  

Even then, the "speed of light" error SD drops only from 6.364 to about 5.7 -- a modest drop, for such a large implausibility.
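One way to make this concrete is to read "every second game" as 81 independent pairs, where the second game of each pair repeats the first's result only 40% of the time. Under that model (my reading; the exact figure depends on how the correlation is modeled), the season SD can be computed directly:

```python
import math

p_same = 0.4                          # chance the second game repeats the first's result
var_game = 0.25                       # variance of a single 50/50 game
cov = 0.5 * p_same - 0.25             # within-pair covariance: P(both wins) - 0.25 = -0.05
var_pair = 2 * var_game + 2 * cov     # variance of a pair's combined wins: 0.4
season_sd = math.sqrt(81 * var_pair)  # 81 independent pairs
print(round(season_sd, 2))  # 5.69
```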

But, yes, the point stands, in theory.  If the games are sufficiently "non-independent" of each other, you can indeed get from 6.4 to zero.  But, for anything remotely realistic, you won't even be able to drop your error by a tenth of a game. For that reason, I think it's reasonable to just go ahead and do the analysis as if games are independent.  


Also, yes, it is indeed theoretically possible to get the limit down if you have very, very good information.  To get to zero, you might need to read player neurons.  But what about a more modest goal -- say, reducing your error by half?  How good would you need to be to get from 6.4 down to 3.2?  

*Really* good.  You'd need to be able to predict the winner 93.3% of the time ... roughly 15 out of 16 games.
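Where does 93.3% come from? Invert the SD formula: we need p(1-p) × 162 to equal (6.364/2)², and solve the quadratic for p (a quick sketch):

```python
import math

target_sd = 6.364 / 2        # half the fair-coin "speed of light"
q = target_sd ** 2 / 162     # required value of p * (1 - p)

# p(1-p) = q  =>  p = (1 + sqrt(1 - 4q)) / 2, taking the favorite's root
p = (1 + math.sqrt(1 - 4 * q)) / 2
print(round(p, 3))  # 0.933
```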


You have to be able to say something like, "well, it looks like the Dodgers are 60/40 favorites on opening day, but I know they're actually 90/10 favorites because Clayton Kershaw is going to be in a really good mood, and his curve will be working really well."  And you have to repeat that 161 times more.  And 90/10 isn't actually enough, overall ... you need to average around 93/7. 

Put another way: when the Dodgers win, you'd need to have been able to predict that 15/16 of the time.  And when the Dodgers lose, you'd need to have been able to predict that 15/16 of the time.  

That's got to be impossible.  

And, of course, you have to remember: bookies' odds on individual games are not particularly extreme.  If you can regularly predict winners with even 65% accuracy, you can get very, very rich.  This suggests that, as a practical matter, 15 out of 16 is completely out of reach.  

As a theoretical possibility ... if you think it can be done in principle, what kind of information would you actually need in order to forecast a winner 93% of the time?  What the starter ate for breakfast?  Which pitches are working and which ones aren't?  The algorithm by which the starter chooses his pitches, and by which each batter guesses?  

My gut says, all the information in the world wouldn't get you anywhere close to 93%.

Take a bunch of games where Vegas says the odds are 60/40.  What I suspect is: even if you had a team of a thousand investigative reporters, who could hack into any computer system, spy on the players 24 hours a day, ask the manager anything they wanted, and do full blood tests on every player every day ... you still wouldn't have enough information to pick a winner even 65 percent of the time.  

There's just too much invisible coin-flipping going on.



At Monday, April 14, 2014 5:47:00 PM, Anonymous Voros McCracken said...

I actually agree that the lowest possible error is '0' games. If you knew enough relevant information, you could know exactly which team was going to win every game.

Where people slip up is in assuming this has any bearing on the applicability of probability theory. Because it matters not whether it's 'random numbers' generating an outcome or flesh and blood human beings. What matters is our lack of sufficient knowledge to be able to predict things with any certainty. IOW the observer is the key to probability theory, not the observed phenomenon.

At Monday, April 14, 2014 6:39:00 PM, Blogger Phil Birnbaum said...


By your definition, the distinction I'm trying to make is: (a) there's stuff you can predict only if you have a computer so powerful that it can't be constructed from all the atoms in the universe (or at least, from atoms we have access to). And there's (b), stuff that a large enough group of people can predict using their brain and materials they have lying around their houses and businesses.

In normal English, (a) is "random like a coin flip", and (b) is not.

The critics think that everything in baseball is (b). I'm arguing that, no, lots of it is (a), and here's how much -- specifically, 6.4 games per 162.

At Monday, April 14, 2014 11:07:00 PM, Blogger Don Coffin said...

Actually, I disagree with Voros. If you have all the relevant information *before* a game begins, and then something unforeseen (and perhaps unforeseeable) happens *during the game,* your forecast can easily be wrong.

For example, suppose we forecast that the Angels will beat the Astros, and then Josh Hamilton breaks his thumb in the second inning and can't play the rest of the game. And LAA, which was a cinch to win *before* that happened, loses the game.

In effect, Voros's position is that we can forecast not just the game outcomes, but the *in-game events* that could change an a priori forecast. *Or* our forecasts become conditional: "The Angels will beat the Astros so long as Hamilton doesn't break his thumb in the second inning...then, the Angels may lose (will certainly lose?)"

The point is it's not having all the relevant information a priori (before the game), it's being able to *predict* (not know a priori) any relevant in-game events.

I'll go further than that. Some in-game events are (I would argue) *inherently* unpredictable. Is the ground ball that leapt up and incapacitated Tony Kubek (from Kubek's Wikipedia page: "In Game 7 of the 1960 World Series, Kubek was injured by a bad-hop ground ball that struck him in the throat; Kubek was badly hurt and the batter, Bill Virdon, reached first base, enabling the Pittsburgh Pirates to rally in a game they eventually won 10–9 on a ninth-inning homer by Bill Mazeroski.") predictable? Assuming that the conclusion (that the Pirates won *because* the grounder hit Kubek in the throat) is accurate, is there *any* state of pre-game knowledge that could have predicted that the ground ball would have traveled its actual path and not 2" on either side of that path? I would argue that the path of that ground ball in that circumstance was inherently unknowable before the game *and* at the moment Virdon hit the ball (and probably milliseconds before it hit the pebble).

So I think even the computer from the Hitch-hiker's Guide to the Galaxy would have been inherently unable to predict that outcome--it was truly a random event. And I suspect there's a large amount of the truly unpredictable in any individual game, or any individual play.

At Monday, April 14, 2014 11:14:00 PM, Blogger Tangotiger said...

I disagree with Voros. In order to get "0" as the lowest possible error, you not only have to know the property of every single entity, but you need to know the behaviour of each entity at time T and space S. You have to know what the player will actually choose, before the player himself even knows what to choose. And this is IN-GAME! Forget knowing any of that pre-game.

We're talking about fate here.

At Tuesday, April 15, 2014 2:28:00 AM, Anonymous Voros McCracken said...

It's actually easy to know all of that information, you just have to be God. If something unforeseen happens, you obviously didn't have all the relevant information. God would have all that information.

Here's the point: if you take a normal deck of cards and ask everyone what the chances of the top card being the eight of clubs is, everyone will answer "one in fifty-two." If you then flip the card face up and ask the same question, the answer people will give will either be 0% or 100%.

But the card itself hasn't changed at all; it's the same card. The only thing that's changed is our knowledge about it. And so probability theory is about our own lack of knowledge about the outcome of an event, not the _actual_ chances of the event happening independent of anyone's knowledge.

The point is that we have to establish exactly to what extent we don't know the outcome of this season's games, before we can say for certain what the "best possible" error could be.

At Tuesday, April 15, 2014 9:43:00 AM, Blogger Hans said...

I think you are getting into quantum mechanics now. There is no way to know the position and velocity of a single atom, much less every atom within a baseball game. Getting to 0 error is literally impossible in our universe.

I'm not even going to touch the whole "God" angle.

At Tuesday, April 15, 2014 9:46:00 AM, Blogger Zach said...

At Tuesday, April 15, 2014 10:38:00 AM, Blogger Phil Birnbaum said...

Right, lurking in the background is the whole "determinism vs. true randomness" debate.

Assuming determinism -- that is, assuming that if you know the state of every particle in the universe at any given time, you can calculate the future -- there's enough information there to predict anything. I assume that was Voros' point about it being possible to get to zero games error.

But, so what? That's impossible to have happen, so let's talk about what we can do with the additional information we CAN get. To which my answer is: almost nothing, you're stuck close to an error of 6ish.

If you believe you CAN'T predict the future universe from the state of the present one -- I assume that's where quantum mechanics comes in -- then, you can just argue outright that the unpredictable 6 games of variation is truly random, and unpredictable by definition.

At Tuesday, April 15, 2014 11:03:00 AM, Anonymous Voros McCracken said...

It's not impossible if you're God (i.e., omniscient). I think you're missing the point, so I'll use non-metaphysical examples:

Before the series started, what were the chances of the 1919 Chicago White Sox winning the World Series? Or to bring the 162 game season back into play, what's the lowest possible standard error for predicting the outcome of the next 162 non-drawn WWE wrestling matches?

Can we agree the answer to the latter question is '0' or at least damned near close to it considering the apparent rarity of authentic 'shoot' matches? As far as probability theory is concerned, an MLB baseball season is no different. And probability theory is equally applicable to the outcome of WWE wrestling matches as it is MLB baseball games.

Because the key isn't whether the matches themselves are "random" or "pre-determined", the key is our prior knowledge about them. If we don't know whether Rowdy Roddy Piper will beat the Iron Sheik, then it's a question that can be analyzed through probability even if both Piper and the Sheik do already know. Even if God knows exactly who will win all the MLB games this year, those outcomes can still be analyzed through probability theory.

At Tuesday, April 15, 2014 10:31:00 PM, Blogger Don Coffin said...

But what if (as I believe) there is no god?

At Tuesday, April 15, 2014 10:36:00 PM, Blogger Don Coffin said...

"Before the series started, what were the chances of the 1919 Chicago White Sox winning the World Series?"

Voros, I assume you would answer 0 or 1, but we don't know which one yet. I fundamentally disagree that the world is that deterministic, even at a (more-or-less) macro level.

But, as Phil said, that's sort of irrelevant. *Regardless* of whether the world is fully deterministic, or probabilistic, we don't (and can't) know enough to reduce the standard error of estimate below Phil's calculations.

I think it's irreducible randomness, you think it's radically incomplete knowledge. But I don't think there's any practical significance to our disagreement.

At Tuesday, April 15, 2014 10:41:00 PM, Blogger Don Coffin said...

Zach wins the thread...

