Sunday, September 27, 2009

A game theory study on pitch selection

Commenter "Eddy" was kind enough to send me this link to what looks like a press release on a new study by Kenneth Kovash and Steve Levitt (of Freakonomics fame). The link is to a summary only; to get the actual study, I had to pay $5.

The study is in two parts: one baseball, and one football. I'll talk about the baseball results here; I'll send the football portion of the study to Brian Burke (of "Advanced NFL Stats") in case he wants to review it himself. I hope that's allowed under fair use and I don't have to pay another $5.

In the baseball half, the authors claim that pitchers throw too many fastballs. They would do better -- much better, in fact -- if they threw other kinds of pitches more often.

How can you tell, using game theory, whether fastballs are being overused? Simple: you just check the outcomes. If opposition hitters bat for an OPS of .850 when you throw a first-pitch fastball, but they OPS (can you use OPS as a verb?) .800 when you don't, then obviously you should cut back on the fastballs. At first glance, it might look like you should get rid of them entirely, because, that way, you could shave .050 off the opposition's OPS. But it's not that simple: as soon as the opposition realizes that you're not throwing fastballs, they'll be able to predict your pitches more accurately, and they'll wind up OPSing higher than .800 -- probably even higher than the original .850. Game theory can't tell you the right proportion, at least not without making assumptions that would probably be wrong. But it *can* tell you that you should adjust your strategy until the OPS-after-fastball is exactly equal to the OPS-after-non-fastball.
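To make that concrete, here's a toy sketch of the logic in Python. The two response curves are completely made up -- they just encode the idea that the more often you throw the fastball, the better hitters do against it and the worse they do against everything else -- and the "right" mix is wherever the two OPS curves cross:

def ops_vs_fastball(freq):
    # made-up: hitters sit on the fastball more as it gets more common
    return 0.700 + 0.200 * freq

def ops_vs_other(freq):
    # made-up: other pitches get harder to sit on as fastballs get more common
    return 0.900 - 0.150 * freq

# search fastball frequencies 0%..100% for the point where the two OPS values match
freq = min(range(101), key=lambda f: abs(ops_vs_fastball(f / 100) - ops_vs_other(f / 100)))
print(freq / 100, round(ops_vs_fastball(freq / 100), 3))   # about 0.57, both curves around .814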

If that's what the Kovash/Levitt study did, it would be great. But it didn't. Instead, it did something that doesn't make sense, and makes almost all its conclusions invalid.

What did it do? It considered outcomes only for pitches that ended the "at bat". (The authors say "at bat", but I think they mean "plate appearance". I'll also use "at bat" to mean "plate appearance" for consistency with the paper.)

That's a huge selective sampling issue. It means that when a pitch on a 3-0 count is a ball, you count it; when it's put in play, you count it; but when it's a strike, you don't include it. That doesn't work. I can make up some data to show you why. Suppose:

-- Fastballs are 50% put in play, for an OPS of 1.000
-- Fastballs are 50% strikes, for an OPS of .800 after the 3-1 count.

-- Non-fastballs are 25% put in play, for an OPS of .900
-- Non-fastballs are 25% strikes, for an OPS of .800 after the 3-1 count
-- Non-fastballs are 50% balls, for an OPS of 1.000.


That summarizes to:

0.900 OPS for fastball
0.925 OPS for non-fastball


Clearly, you should throw a fastball, right?

But if you consider only the last pitch of the at-bat, you have to ignore those 3-1 counts. Then you get:

1.000 OPS for fastball
0.967 OPS for non-fastball


And it looks like you should throw *fewer* fastballs, not more. And that's wrong.
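Here's the same arithmetic in a few lines of Python, just to make the selective-sampling problem concrete (the percentages and OPS figures are the made-up ones above):

# made-up pitch outcomes: (probability, ends the AB?, OPS that eventually results)
fastball     = [(0.50, True,  1.000),   # put in play
                (0.50, False, 0.800)]   # strike: count goes to 3-1, AB continues
non_fastball = [(0.25, True,  0.900),   # put in play
                (0.25, False, 0.800),   # strike: count goes to 3-1, AB continues
                (0.50, True,  1.000)]   # ball: walk

def full_ops(outcomes):
    # the right way: weight every pitch, whether or not it ended the AB
    return sum(p * ops for p, _, ops in outcomes)

def last_pitch_ops(outcomes):
    # the Kovash/Levitt way: keep only pitches that ended the AB
    kept = [(p, ops) for p, ends, ops in outcomes if ends]
    return sum(p * ops for p, ops in kept) / sum(p for p, _ in kept)

print(round(full_ops(fastball), 3), round(full_ops(non_fastball), 3))              # 0.900 vs 0.925
print(round(last_pitch_ops(fastball), 3), round(last_pitch_ops(non_fastball), 3))  # 1.000 vs 0.967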

This kind of thing is exactly what Kovash and Levitt have done. They think they've shown that the fastball is a worse pitch than the non-fastball. But what they've *really* shown is that the fastball looks worse only if you ignore the fact that, when the pitch doesn't end the at-bat, the fastball is more likely to move the count in the pitcher's favor.

So I don't think their main regression result, the one in Table 4, holds water, and I don't think there's a way for the reader to work around it. If the authors just reran that regression, but credited every pitch with the eventual outcome of the at-bat -- not just the pitch that ended it -- that would fix the problem. I'm not sure why they chose not to do that.

----

Still, there are some other aspects of the study that are interesting.

In Table 2, the authors show results for every count separately. On 3-2, every pitch is the last pitch of the AB (except for foul balls, which the authors did include in the study, but which don't affect the results). Therefore, the change in count isn't a consideration, and we can take the results at close to face value.

So what happens? There is indeed a big difference between fastballs and non-fastballs:

.769 OPS after a 3-2 fastball
.651 OPS after a 3-2 non-fastball.


This would certainly lead to a conclusion that pitchers are throwing too many 3-2 fastballs, and the results stunned me: I didn't expect this big a difference. But then it occurred to me: most of the OPS on 3-2 is walks. And walks are undervalued in OPS. If a 3-2 fastball results in more balls in play, but the 3-2 curveball (or whatever) results in more walks, the actual run values might be more even. That is: pitchers know that walks are "worse" than OPS says they are, so they're willing to tolerate a higher OPS for fastballs if it contains fewer walks. That seems quite reasonable.

Suppose walks form half of the OBP component of OPS for fastballs, but 60% for curveballs, and that works out to a difference of .100 in OPS due to walks. If you assume that .100 should "really" be .140, that closes the gap from 120 points down to 80.

That adjustment is still not enough to explain the entire gap between fastballs and non-fastballs, but it's certainly part of it. In studies like this, where you're looking for very small discrepancies, and you have non-traditional proportions of offensive events, you need to use something more accurate than OPS.
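To show what I mean, here's the adjustment arithmetic spelled out, with my own made-up walk split (these are illustrative numbers, not the study's data):

fastball_ops    = 0.769
nonfastball_ops = 0.651

extra_walk_ops  = 0.100   # assumed: extra walk contribution to the non-fastball OPS
walk_multiplier = 1.4     # assumed: a walk is "really" worth about 1.4x its OPS weight

adjusted_nonfastball = nonfastball_ops + extra_walk_ops * (walk_multiplier - 1)
gap_before = fastball_ops - nonfastball_ops       # about .118 ("120 points")
gap_after  = fastball_ops - adjusted_nonfastball  # about .078 ("80 points")
print(round(gap_before, 3), round(gap_after, 3))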

----

But here's something that makes me worry, and I wonder if there's a problem with the authors' database. Here are the overall OPS values for ABs ending on that pitch, from the authors' Table 1:

.753 fastball
.620 non-fastball

Do you see the problem? This data puts the average OPS at .709 (fastballs being twice as likely as non-fastballs). But the overall major-league OPS for the years of the study (2002-2006) was around .750. Why the discrepancy? The authors do say they left out about 6% of pitches, mostly "unknown", but with a few knuckleballs and screwballs. But there's no way 6% of the data could bring a .750 OPS down to .709. So I'm thinking something's wrong here.
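The weighted average is easy to check:

ops_fastball, ops_nonfastball = 0.753, 0.620
fastball_share = 2 / 3    # fastballs roughly twice as common as non-fastballs
print(round(fastball_share * ops_fastball + (1 - fastball_share) * ops_nonfastball, 3))   # 0.709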

There's no such problem with Table 2, which is broken down by count instead of pitch type. That table does average out to about .750.

UPDATE: in the comments, Guy reports that if you calculate SLG with a denominator of PA instead of AB, the numbers appear to work out OK. So the authors probably just miscalculated.
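For what it's worth, that kind of mistake would produce a discrepancy of about the right size. Here's a quick illustration, using rough league-ish totals of my own invention (not the study's data):

# illustrative totals only, roughly a team-season circa 2002-2006
tb, ab, pa, h, bb_hbp = 2300, 5500, 6200, 1450, 600

obp       = (h + bb_hbp) / pa
slg_right = tb / ab    # the standard definition
slg_wrong = tb / pa    # dividing by PA instead of AB deflates SLG

print(round(obp + slg_right, 3), round(obp + slg_wrong, 3))   # about .749 vs .702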

----

Finally, the authors argue that pitchers aren't randomizing enough. According to game theory, there should be no correlation between your choice of this pitch, and your choice of the next pitch. If you have a correlation, because you're choosing not to randomize properly, the opposition can pick up on that, guess pitches with more confidence, and take advantage.

Kovash and Levitt found that pitchers have negative correlation: after a fastball, they're more likely to throw a non-fastball, and vice-versa. They conclude that teams are not playing the optimal strategy, and it's costing them runs.
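For reference, the kind of test they're running is simple enough to sketch. Here's a toy version in Python, with made-up frequencies: a pitcher who randomizes properly shows a lag-one correlation near zero, while one who alternates too much shows a negative one.

import random

def lag1_corr(seq):
    # correlation between each pitch (1 = fastball, 0 = non-fastball) and the next one
    x, y = seq[:-1], seq[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# properly randomized: 60% fastballs regardless of the previous pitch
independent = [1 if random.random() < 0.60 else 0 for _ in range(10000)]
print(round(lag1_corr(independent), 3))    # close to 0

# alternating too much: 45% fastballs after a fastball, 75% after a non-fastball (made up)
alternating = [1]
for _ in range(9999):
    p = 0.45 if alternating[-1] == 1 else 0.75
    alternating.append(1 if random.random() < p else 0)
print(round(lag1_corr(alternating), 3))    # noticeably negative, around -0.3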

However: couldn't there be another factor making it beneficial to do that? It's conventional wisdom that, after seeing a fastball, it's harder to hit a breaking pitch, because your brain is still "tuned" to the trajectory of the fastball. If that's true -- and I think every pitcher and broadcaster would think it is, to some extent -- that would easily explain how the negative correlation observed in the study could actually be the optimal strategy. But the authors don't mention it at all.

----

So I don't think we learn much from this paper, but there's a tidbit I found interesting. Apparently Kovash and Levitt have access to MLB bigwigs, and did a little survey:

"Executives of Major League Baseball teams with whom we spoke estimated that there would be a .150 gap in OPS between a batter who knew for certain a fastball was coming versus the same batter who mistakenly thought that there was a 100 percent chance the next pitch would *not* be a fastball, but in fact was surprised and faced a fastball."


That's kind of interesting. I have no idea how accurate the estimate is ... anybody seen any other research on this topic?




Wednesday, September 23, 2009

How much does a "clubhouse cancer" cost his team?

This past weekend, the Cubs suspended outfielder Milton Bradley for the remainder of the 2009 season. Bradley had made some remarks to the press critical of the "negativity" he had received in Chicago. That, combined with his reputation as a complainer who apparently didn't get along with his teammates, prompted GM Jim Hendry to send him home for the rest of the year, with pay.

Do "clubhouse cancers" cost a team wins? In an excellent article at Baseball Analysts, Sky Andrecheck admits he doesn't know. But he looks at the anecdotal evidence of other oncoplayers to at least try to get a handle on how much a team is willing to pay to get rid of him.

This season, Bradley had accumulated 1.2 wins above replacement (WAR) up to the day of his suspension. Andrecheck suggests that he's probably a little better player than that, because he's having an off-year. So for Hendry to be willing to lose Bradley's contribution, he must think that his continued presence would cost the team wins at least at that rate. Otherwise, he'd bite the bullet and keep him around.

That figure is in line with another recent disgruntled clubhouse influence, Shea Hillenbrand, who was projected as a 1.4 WAR player when he was released by the Blue Jays in 2007.

Finally, Tom Tango adds a third anecdote. He notes that no team was willing to sign Barry Bonds in 2008, even at minimum salary, when Bonds was projected to be around 1.5 WAR.

As for other "cancers": Albert Belle and Barry Bonds had poor clubhouse reputations, but weren't released by their teams. Those guys were substantially better than 1.5 WAR per season. That strongly suggests that the cost of keeping a player around is less than the cost of losing an all-star. Andrecheck writes, "I can't think of even a 3 or 4 WAR all-star caliber player ever having been given away or released largely due to clubhouse attitude. Instead, teams learn to deal with these players, rather than oust them."

So, it would appear, poisoning the clubhouse is worth somewhere between 1.5 and 3 wins a year.

That's very cool stuff. But I'm still wondering about a related subject, one that the article doesn't try to answer. My question is: just *how* does a clubhouse cancer cause the 1.5 win dropoff? It's unlikely that the personalities of the players affect their team's Runs Created or Pythagorean estimates (unless clutch play is affected more than non-clutch), so the dropoff must come in the performance of the player's teammates. How does one player's negative attitude cause another player's performance to suffer? Do the disheartened fellow players not try as hard? Are they less motivated to receive coaching, or to stay in shape? Do they concentrate less on pitching strategy, maybe spending less time with the coach going over scouting reports on opposing batters?

And whatever it is, how do we gather evidence? I suppose we could check the performance records of pitchers while Bradley is on the team, and compare them to their records before he arrived and after he left. But 1.5 wins a year, spread over the equivalent of 18 full-time players (nine hitters and nine pitchers), is only about a 0.8 run shortfall per player. That's not much signal to find among all the noise, is it? I suppose you could check the records in the few weeks prior to the player being kicked out, on the premise that that's when the situation became most intolerable. But you might find that the situation reached the breaking point only because the team was losing, so you might mix up cause and effect.
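That 0.8 figure is just back-of-the-envelope arithmetic, assuming the usual ten runs per win:

runs_lost = 1.5 * 10             # 1.5 wins at roughly 10 runs per win
print(round(runs_lost / 18, 2))  # about 0.8 runs per full-time player per season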

Or maybe it's that one guy who's on the cusp of breaking out, or having a comeback season, just gets discouraged and flames out: some 23-year-old prospect winds up a little less hungry, and gives up a bit too early. That doesn't seem like it could be 1.4 wins, but I guess it's possible.

Any suggestions? I'd even be interested in hearing plausible suggestions for how the 1.4 wins (14 runs) are lost. At least if we have some reasonable hypotheses, maybe we can think of some ways to test them.

----

My suspicion, though, is that the Milton Bradleys don't actually cost their teams 1.4 wins that way. I think there are other reasons that the Cubs might have for releasing Bradley than just a sober calculation of his effect on the team's on-field performance.

First, there's deterrence. There has to be some mechanism by which teams prevent their players from going off half-cocked and ruining team chemistry. There has to be a threat, explicit or implicit, that if a player is disrespectful towards the team, he will pay a price. For most players, who want their time with the team to be as pleasant as possible, the desire to get along with their teammates might be enough incentive. But when an anti-social player crosses the line, the punishment may have to be imposed even at a cost to the team.

For instance, suppose a world-famous surgeon commits murder. Putting him in prison might cost the hundreds of lives his skills would save over the years. But society has to jail him anyway; otherwise, they give every surgeon a license to kill.

The same thing might be happening here. Even if keeping Milton Bradley on the team would cost only a small fraction of a win, the Cubs would have to get rid of him anyway, just to make sure the other 24 players don't get similar ideas.

Second, Bradley's presence might cost the team wins in other ways than just on the field. If the clubhouse atmosphere is poisoned, the other players are unhappy. If they are unhappy, they are less likely to want to stay on the team. And so, the Cubs would have to offer them more money to stick around as free agents. Indeed, they'd have to pay *all* free agents more money than they would otherwise. If Chicago is a crappy team to play for, but Boston is wonderful, why would anyone sign with the Cubs? (You might also get an increase in disgusted players demanding to be traded.)

If word gets out around the league that Cubs' management is not willing to enforce normal standards of civility from their players, it could cost them a lot more than 1.4 wins per year.

Third, and thinking out loud: is it not possible that while a poisonous Milton Bradley costs his team 1.4 wins a year, a poisonous player of higher ability might cost the team nothing? Whatever mechanism it is that has Bradley hurting the team on the field, there's no doubt it's because of the reaction and chemistry among the other players, right?

Now, people get upset when social norms are violated: I'm going to be more upset if you steal $20 from me than if my taxes go up $20. Is it possible that putting up with an arrogant superstar is a social norm, but putting up with a marginal player is not?

Isn't it possible that when a superstar acts like a disagreeable moron, the other players kind of shrug and accept it? If the social norm is that some superstars are a**holes, and you just have to get used to it if you want to win, then it might cause no harm at all. Where I used to work, if the manager was being a big jerk, the rest of us would talk about it over coffee, and we'd grin and bear it and get back to work. But if one of our fellow grunts was acting like an idiot, that would be different: that would upset us a lot more, because he was one of us.

Could it be the same thing happening here? When Barry Bonds was a jerk, maybe management took the players aside and said, "yeah, we know he's acting like that, but he's our best chance of winning, so try to deal with it?" That wouldn't work with Milton Bradley, and so the players are less likely to put up with it, and management would have to get rid of him.

Anyway, as I said, just thinking out loud on this one.

Finally, could it be just money? If Milton Bradley is pissing off the other players, and the fans find out, and they start booing Bradley, and the team does nothing about it ... might that not get in the way of the fans' long-term loyalty to the team? The fans are loyal and rabid. They're proud to be Cubs supporters, and many have spent their whole lives dreaming of a World Series win. Then Milton Bradley comes along, winds up in the absolute dream job of Chicago Cub outfielder, but doesn't appreciate what he's got, and starts insulting the Cubs and the fans and the tradition.

Doesn't getting rid of Bradley fulfill an obligation to those fans? Doesn't that build the brand and cement the relationship and lead to fan loyalty and revenues?

Bradley is only going to miss two weeks, and may get a chance to reform. Those two weeks are worth, what, maybe .1 wins? That's less than $1 million -- and, considering that the Cubs are out of playoff contention, it may be only a few hundred thousand. The suspension could pay for itself in no time at all.


Wednesday, September 16, 2009

You can't forecast outcomes that are random

Predictions are often wrong. In an article in the Wall Street Journal last month, "Numbers Guy" Carl Bialik points out a few that went awry. Two years ago, for instance, a government energy agency predicted that the price of oil would be between $75 and $85 in 2008. In reality, it started out the year close to $100, ran up past $140 in July, and dropped back below $40 by the end of the year. Bialik writes, "winging darts at numbers on a board might have been more accurate."

It's easy to make fun of prognosticators when they get this stuff wrong. But let's not be too hasty. The fact is, the things that are most worth predicting are the things that are most unpredictable. If you want a prediction of what time the sun will rise tomorrow morning, you can get a 100% accurate answer from any competent astronomer. But what would be the point?

The price of oil varies so much because there are so many factors that influence it: wars, foreign government policies, consumer behavior, US election results, technological advances, natural disasters, and so on. These things are random. And they are very, very complex, most of them being the result of human thought and action.

Still, shouldn't some people be better skilled at making those predictions than others? Absolutely. Tancred Lidderdale, the economist quoted in Bialik's article, has an excellent understanding of the factors that impact the price of oil, much better than mine. So what's wrong with evaluating his predictions after the fact, to see if he's any good?

The problem is that no matter how much you know about the price of oil, it's random enough that the spread of outcomes is really, really wide: much wider than the effects of any knowledge you bring to the problem.

Suppose that on the basis of Miguel Tejada's career, everyone thinks he should hit .290 next year. But suppose Bob, who's a big fan of Tejada, and follows his plate appearances closely, has noticed something about his performance and thinks differently. Maybe it's some detailed observation that he swings a certain way, and other players with the same swing have declined more in their thirties than average. So Tejada should be only about .286.

That may be absolutely right, and figuring that out was an act of staggering sabermetric genius. Bob's estimate of .286 is correct, and the .290 estimates are all wrong. Bob is literally the only one in the world whose estimate is correct.

But in practice, how do you prove that? The standard deviation of batting average over 500 AB is about 20 points: so even with .286 being correct, there's still a 46% chance that Tejada will hit closer to .290 than to .286 next year. There's actually about a 1 in 3 chance that Tejada's average will be below .266 or above .306. For practical purposes, it's impossible to evaluate the two predictions on this one single sample. Even if Bob is omniscient, knowing everything possible about Tejada's talent, health, and diet, it's going to take a lot of evidence to prove that he's a better estimator than the mob, so long as the results of individual at-bats are random.

The problem is the small sample size: over 1,000 predictions, or 1,000,000, Bob is going to have a better record than everyone else. But who makes a million predictions, and who keeps track of them to evaluate them afterwards? And even if we do this a reasonable number of times, like 100, Bob still isn't assured of beating me. If his chance of beating me is 54% on each prediction, then, if we predict 100 times each, I still have about a 21% chance of coming out the winner.

That is: an omniscient expert can beat a reasonably-informed layman only about 79% of the time. And that's after 100 trials each, 100 trials where the predictor actually has a significant edge in knowledge or analysis. In real life, if you get only one trial, and you're not even close to omniscient, and the prediction you're making may not be the one in which you have the most confidence, the public's expectations of you shouldn't be very high. Not because you're ignorant, but because life is just too random.
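If you want to check those percentages, here's a sketch of the arithmetic: the 54% comes from a normal approximation, and the roughly-20% comes from a binomial count over 100 predictions (exact handling of ties will move that last number a point or two):

from math import erf, sqrt, comb

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# true talent .286 over 500 AB; Bob predicts .286, everyone else predicts .290
sd = sqrt(0.286 * (1 - 0.286) / 500)          # about .020
p_bob_wins = norm_cdf((0.288 - 0.286) / sd)   # Bob wins when the average lands below the .288 midpoint
print(round(sd, 3), round(p_bob_wins, 2))     # about 0.020 and 0.54

# over 100 independent predictions, chance the layman still wins the majority
p = p_bob_wins
p_layman_wins = sum(comb(100, k) * (1 - p) ** k * p ** (100 - k) for k in range(51, 101))
print(round(p_layman_wins, 2))                # roughly 0.2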

Of course, this is an arbitrary example, with more randomness (20 points) than knowledge (4 points). But isn't it roughly the same situation for the price of oil? The randomness in the economy is just huge. Part of the reason oil went down last year is the recession. The recession happened because of the credit crisis. And very few people foresaw the credit crisis, including people who had thousands, or millions, or billions of dollars on the line. For a government economist to be omniscient, he has to be omniscient about mortgage finance, and about the government's and public's reaction to every crisis that might possibly occur. That's asking a lot, isn't it? To an energy economist, the state of mortgage finance has to be taken as random.

Because life is random, and the price of oil is very sensitive to the randomness of human-caused shocks, you can't expect a single point estimate of the price of oil to be 95% accurate within $1, or even $5. An estimate that precise is impossible, beyond the scope of human capability, and probably beyond the scope of the most powerful computers that could be imagined. An honest and competent forecaster will tell you that the best he can do is give you a *distribution* for the future price of oil: maybe there's a 60% chance it will be between $60 and $110, a 10% chance it will be below $60, and maybe a 5% chance it'll go over $200 (if there's a major war in the Middle East, say), and so on. That's not something the newspapers are keen to report on -- it's hard to put in a headline, and it's harder for readers to understand.

What Mr. Lidderdale's agency was probably saying was, "we have our best guess at a probability distribution for what the price of oil will be next year. Its mean is in the $75 to $85 range." If that phrasing makes journalists uncomfortable, fine. But that doesn't change the fact that it's the best anybody can do. And it doesn't change the fact that you can't decide how good a predictor is on the basis of one, two, or even a hundred point estimates. You need a LOT of data. And if an outlier happens, all evaluations are off. I'd bet that anyone who predicted, back in 2007, that oil would jump to $140 and then drop back to $37, is a kook, not an expert. What happened in 2008 was something of an outlier: random, unpredictable, and unknowable. Anyone who came close was probably just drop-dead lucky.




Sunday, September 13, 2009

SABR journal looking for sabermetrics submissions

SABR's "Baseball Research Journal" is looking for submissions.

BRJ is a large format paperback book, published twice a year by SABR and sent to all several thousand of its members. It used to have crappy statistical articles in it -- stuff that wasn't peer reviewed, from authors who may never have read Bill James. I am happy to report that, recently, under former editor Jim Charlton, and current editor Nicholas Frankovich, the quality is much higher. I may be biased, because they've run a few articles of mine, but it really is getting a lot better. BRJ is also the place where Bill first ran his "Underestimating the Fog" article (pdf).

But Nick Frankovich is getting more aggressive about pursuing even better stuff, and he asked me to post this bleg. SABR needs your research, and he's asking you to consider submitting an article to BRJ.

It doesn't matter if you're a member of SABR or not. It doesn't matter if you've already published your research on a website. All that matters is if it's a good article, suitable for readers who may not know a whole lot of sabermetrics. That doesn't necessarily mean it has to be dumbed down; it does mean you may have to explain all of your acronyms and start at the beginning rather than the middle.

Nick is especially interested in articles that explain the current state of a topic in sabermetrics. He (actually, someone in SABR) suggested an article summarizing the current state of the DIPS theory, which I think would be a very good idea. I've always been looking for articles that explain something in sabermetrics from the bottom up, because that way I have somewhere to refer people who contact me or submit articles to "By the Numbers". DIPS would be a very good candidate.

Anyway, any reasonable topic will do, and any submission would be appreciated. If you're accepted, you don't get paid, but you get three copies of the book, and you get full rights to do whatever you want with the article afterwards (although you grant SABR the right to use it too). You also help improve the quality of the sabermetric research in SABR, which, perhaps surprisingly, is something that's really needed.

You can contact Nick at frankovich@sabr.org. Or, feel free to e-mail me with any questions.


Monday, September 07, 2009

Matt Swartz on home field advantage

Baseball Prospectus's Matt Swartz has completed a nice five-part series on home-field advantage (HFA) in major league baseball. I've always thought HFA was one of the biggest unresolved issues in sabermetrics. So does Swartz, and he said it better than I could:

"[HFA] should surprise us as analysts more than it does. Nearly every study of psychology with respect to baseball has come up revealing either small effects or no effect. We all know that players are human, but the numbers do not seem to indicate many obvious psychological aspects. Hundreds of researchers have tried to discover clutch hitting, but few have found any evidence of its being a repeatable skill. ... We have attempted all kinds of ways to splice the data to reveal a large psychological effect within baseball to show that baseball players don’t behave like statistical models, and there seems to be little evidence of any strong, detectable effects, even if we know they exist and occasionally can discover smaller ones. ...

"However, home-field advantage is perhaps the most obvious area where we see something resembling a psychological effect, or at least an effect that is not captured by our typical models of baseball players and ballgames. It is clear that something about being the home team trumps talent in a way that is mathematically equivalent to benching an average player on the road team."


Swartz proceeds to look at various aspects of HFA. Many of the findings are unremarkable, but there are a couple that are kind of interesting.

First, let me quickly summarize the other stuff that Matt found in each of his five parts.

Part 1: HFA has been very steady over the decades, at around 40 points (.540 to .460). It shows up in almost every statistical category for hitters and pitchers, except those related to errors.

Part 2: There doesn't seem to be a team-specific HFA, except for the Rockies, whose HFA is an outlier and much higher than most.

Part 3: There appears to be a "familiarity" effect. HFA is highest for interleague games, next highest for games between teams in different divisions, and lowest for intradivisional games (where presumably the teams face each other most often). Also, the farther apart the teams, the higher the HFA.

Part 4: The second-last game of a series seems to have a larger HFA than any other game. This apparently only holds for teams who are geographically close together. Lots of other breakdowns show no significant effect.

Part 5: Individual players do appear to show stable HFAs from year to year, suggesting that they can be more or less suited to their home park.

Most of this is roughly in line with what we knew already. But here's the thing I found most interesting: a lot more of HFA comes in the first three innings than in the rest. Here's Swartz's chart; for each inning, the percentages are the difference in runs scored for the home team vs. the visiting team:

1 16.2%
2 9.3%
3 10.1%
4 6.0%
5 7.8%
6 8.1%
7 8.7%
8 6.5%

The overall difference appears to be about 8%. By Pythagoras, if a team scores 8% more runs than their opponents, they'll win a little over 16% more games, which works out to about a .540 winning percentage, exactly as observed (.540 divided by .460 equals 1.17). But the first inning number is huge! If the home team outscored the visiting team by 16.2% overall, its winning percentage would be .575 (Pythagoras with exponent 2).
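Here's that Pythagorean arithmetic, for anyone who wants to plug in other innings:

def pythag_wpct(run_ratio, exponent=2):
    # Pythagorean expectation from the ratio of runs scored to runs allowed
    return run_ratio ** exponent / (run_ratio ** exponent + 1)

print(round(pythag_wpct(1.080), 3))   # overall 8% edge: roughly .54
print(round(pythag_wpct(1.162), 3))   # first-inning 16.2% edge: about .575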

What could cause this? It could just be that the first inning is higher-scoring overall, and the effect isn't linear. But the difference is still huge. Could this be a real finding, that HFA diminishes later in the game? If it's a question of familiarity, that might make sense, except: why would the visiting team be less familiar with the park in the first inning of Game 3 than in the eighth inning of Game 2?

Still, this is something I haven't seen before, and I wonder if you'd find the same thing if you looked at other sports.

---

One thing that might be good is to break down HFA into its component parts. The articles show us the HFA appears in almost every statistical category, but they overlap. For instance, the home team strikes out less and walks more. This indicates that the visiting pitchers are throwing fewer strikes and more balls. Is that enough to be the entire effect? That is, if the road pitchers are getting behind in the count, the batters will do better, even if batting skill is completely unaffected by HFA. On 2-0, the batters will be seeing juicier pitches, and that alone could account for their extra doubles, triples, and home runs.

Does it? What you'd want to do to find out is compare batting lines based on count (and controlling for pitcher, if you really wanted to be thorough). As it stands now, we still don't really know where HFA comes from, whether it's evenly balanced between batter and pitcher, or what.

The home team scores, on average, about 0.4 runs per game more than the visiting team. Using Swartz's numbers and assuming 40 PA per game per team, the home team gets about 0.4 fewer strikeouts and 0.25 more walks. That adds up to about .18 runs. That's roughly half the entire effect. Is it possible that just the different (favorable) counts account for the home team's remaining .22 run advantage? Seems possible to me.

Or, looking at it another way: a study I did a few years ago (.pdf, page 4) came up with the figure that turning a ball into a strike is worth about .14 runs. That's a three pitch per game difference between the two teams. Would a three pitch difference (three extra strikes and three fewer balls) be consistent with 0.4 extra strikeouts and 0.25 fewer walks? I don't know, but you could try looking at it that way.
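That is, the consistency check I have in mind is just this (the .14 run value is from my old study; the 0.4 runs per game is the overall HFA figure above):

run_value_ball_to_strike = 0.14   # runs gained by turning one ball into one strike
hfa_runs_per_game = 0.4           # the home team's overall scoring edge per game
print(round(hfa_runs_per_game / run_value_ball_to_strike, 1))   # about 2.9 pitches per game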

If you went about it that way, you might wind up with a breakdown of HFA something like:

30% pitchers throwing more strikes
15% batters putting the ball in play more often
10% batters hitting a different LD/GB/FB mix
20% higher BABIP on a given type of ball in play
15% more HRs

I'm making these numbers up, of course. And for some of this stuff, you wouldn't be able to tell if it was the pitcher or the hitter; for instance, fewer strikes might just mean that the batter makes contact better, as opposed to the pitcher improving. And for a higher BABIP (which Swartz found), is it the hitters doing better, or the defense doing worse? We don't know. But still, a breakdown like that would be a start.

---

Another thing I'd like to see is just raw performance data. Do pitchers throw harder at home than on the road? Do their pitches have more break or movement, all else being equal? That might be hard to study, because all else is never equal, and Pitch F/X recorders might be different at different parks. Although, if the Braves' pitchers show 2 MPH more than their opponents at home, but 1 MPH less on the road ... that does indeed tell you something, although the caliber of the opposition might not even out in your two samples.

My guess is that you'd find that HFA goes right down to the most base level imaginable: the home team would have higher bat speeds and pitch velocities. Their players would run faster at home, and they'd have faster reaction times. I suspect that HFA is something universal, and both psychological and physiological. I'd bet that within a few years, evolutionary psychologists will be studying this stuff and have some theories about how we evolved to be physically more competent in familiar surroundings.

But I'm just guessing.




Tuesday, September 01, 2009

Re-estimating an NHL team's Picasso value

Sports franchises are different from "regular" businesses in one important way -- they're a lot more fun. If you own a team, you get any profit it makes, but you get lots of perks in addition to that. You get to be on TV a lot. You get the best seat in the house. You get to hire and fire staff. You get quoted in the paper any time you want. You get to be a hero in your local community. And so on.

Because of this, you'd expect team owners to be willing to pay more for a team than its future earnings are worth; they want the "consumption value" in addition to the investment value. You can call this the "Picasso effect," because owning an expensive sports team is a bit like owning an expensive painting; you do it partly for the pride of ownership.

In the previous post, I tried to estimate the Picasso value this way: I ran a regression to predict team market value from team earnings (both values as estimated by Forbes). The equation came out

Market Value = 4 * annual earnings + $200 million

From that, I suggested that Picasso value was $200 million: that is, since the $200MM term didn't have anything to do with the success of the business, it must be the value that owners are willing to pay just to own the team.

But, following a post by Dackle over at "The Book" blog, I realized that isn't quite right.

The problem is that team value -- at least the portion that has to do with earnings -- is based on *future* earnings. And future earnings don't correlate 100% with current earnings. In effect, some of today's earnings are random noise -- the economy might be good in that particular city, or a promotion might work well, or the team might just be having a good year.

The more random noise, the higher the Picasso estimate. For instance, suppose that profits were completely random, and had nothing to do with any particular attribute of the team. Then, all teams would be valued equally, and the equation would be

Market Value = 0 * annual earnings + $220 million

And it would look like the entire value was Picasso, when, in reality, it could be that the value is driven entirely by earnings.

So to do the calculation right, you have to remove the noise from the earnings.

To try to figure out how to do that, I started by running a regression on Forbes 2008 earnings vs. 2007 earnings. If earnings were completely random, the correlation coefficient would be zero. Of course, it wasn't zero; the Leafs were profitable not because they were lucky that year, but because there are millions of loyal idiots like me who worship the team even though it continues to suck. The correlation coefficient was actually a very high .93. I'll put that in courier font:

One-year earnings correlation coefficient = .93

An r of .93 doesn't suggest a lot of noise, so it won't change things much. But maybe the .93 is still too high. Remember, the economic value of the team is the present value of *all* future earnings, not just next year. And earnings might change more in future years. For instance, between one year and the next, team performance is usually similar. Good teams stay good teams, and poor teams stay poor teams. Maybe that all evens out after, say, five years.

If we take .93 to the fifth power, in effect "compounding" the regression to the mean, we get about .70. This seems reasonably generous to me; a correlation of .7 is an r-squared of .5, which implies that the "fixed" component of a team's earnings has the same variance as the "variable" component.

That means that to get a team's "true" earnings from its 2008 earnings, we regress the number 30% towards the mean. To take one example: the Rangers had earnings of $30.7MM in 2008. The mean is $4.7MM. Regressing $30.7MM thirty percent towards $4.7MM gives $22.9 MM. So we assume that the expected value of the Rangers' "real" earnings was $22.9MM, and the remaining $7.8MM was due to random factors specific to that season.
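In code form, the whole regression-to-the-mean step is just this (the $4.7MM mean, the .70 correlation, and the Rangers' $30.7MM are the figures above):

def regress_toward_mean(earnings, mean=4.7, r=0.70):
    # shrink observed earnings (1 - r) = 30% of the way toward the league mean (in $MM)
    return mean + r * (earnings - mean)

print(round(0.93 ** 5, 2))                  # the "compounded" year-to-year correlation, about 0.70
print(round(regress_toward_mean(30.7), 1))  # Rangers: $30.7MM observed -> about $22.9MM "true"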

If we do that for all 30 teams, and rerun the analysis using our regressed estimates of earnings, we now get

Market Value = 5.6 * annual earnings + $193 million

Not much different ... but better, I think. I'm more comfortable with a higher earnings multiple (5.6, in this case, rather than 4.0), since, for publicly traded securities, price-to-earnings ratios (I think) tend to range between 7 and 11.

So this reduces our estimate of "Picasso value" from $200 million to $193 million. Not much. And it's easy to see why not much: according to the Forbes data, the money-losing teams are worth an average of about $160 million. If you believe these teams will continue to lose money, then, obviously, the Picasso value must be at least $160MM, since they're worth zero as a going concern.

I believe some of the $193 million is Picasso value, and some of it is hopes that the team will eventually be profitable: either by moving it to a city where it can make money, or by making more money in other ways (like a better TV deal).

Anyway, getting back to the Zimbalist/Balsillie question of how much more a team is worth in Hamilton ... if we run the revised numbers, we get an even bigger difference -- which makes sense, since the more profits matter, the more a team is worth in a money-making city as compared to a money-losing city.

The regressed estimate for Phoenix earnings is a loss of $5.4MM. For Hamilton, we continue to use Balsillie's own estimate of $11MM (we don't regress that since it's an estimate and not an actual observation).

That means, by this method,

$163MM market value for Phoenix
$255MM market value for Hamilton

Still, about the same as in the previous analysis. The benefit to the move is around $90MM, and three-quarters of the value of the Hamilton franchise is Picasso value.

(Thanks again to Dackle for the comment that led to this post.)
