Monday, August 29, 2011

More on "psychic value"

In Malcolm Gladwell's piece on the "psychic value" or "Picasso value" of owning a sports team (which I talked about here), there was a reference to an academic study (.pdf) that tried to find the psychic value of owning a painting. That study found the psychic value to be around 28% of the value of the painting.

Well, that makes no sense. Paintings don't return a stream of income (unless you charge admission to see them, which isn't the case here). The only benefit to owning the painting is the intrinsic, subjective value you get from owning it. So the "psychic value" of owning the painting can't be 28% of its value. It must be 100%.

The confusion, I think, comes from the fact that, sometimes, you can sell a painting at a profit. That makes it seem like there are two benefits to the painting -- the psychic benefit of ownership, and the potential capital gain at the end. But, really, there's one benefit: the psychic one. Sure, the *value* of that psychic benefit will likely rise over the years, and, when it does, you can sell that benefit to another buyer at a higher price. But you're still selling only joy.

The "profit" is actually something you can expect, and it's built into the price of the painting. Suppose owning a Picasso is worth $100K a year in "psychic value" to the person who likes it best. And that value rises every year by the rate of inflation -- say, 5%. And suppose interest rates are 10%.

The buyer then expects:

$100,000 worth of psychic value the first year
$105,000 worth of psychic value the second year
$110,250 the third year
$115,763 the fourth year
... and so on.

How much is he willing to pay for the painting? Well, at a discount rate of 10%, it works out to $2 million. By buying the painting for $2 million, the buyer forgoes $200,000 in interest that he would get otherwise. In exchange, he gets $100,000 in psychic value the first year, and the painting appreciates by $100,000.
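
That $2 million is just the present value of the growing stream of psychic value. Here's a minimal sketch of the arithmetic in Python (the $100K starting value, 5% growth, and 10% discount rate are the hypothetical numbers above; the function name is mine):

```python
def present_value(first_year, growth, discount, years=500):
    """Approximate PV of an annual psychic-value stream that starts at
    first_year and grows at `growth`, discounted at `discount`."""
    return sum(first_year * (1 + growth) ** t / (1 + discount) ** (t + 1)
               for t in range(years))

# Hypothetical numbers from the example above
pv = present_value(100_000, growth=0.05, discount=0.10)
print(round(pv))   # ~2000000, matching the closed form 100000 / (0.10 - 0.05)
```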

But if you were to look at the fact that the psychic value equals the appreciation, and conclude that only 50% of the value of the painting was psychic value, you'd be incorrect. Psychic value accounts for 100% of the value of the painting. The appreciation comes from the increase, over time, in the annual flow of psychic value.

What you CAN say is that, of the first year's forgone interest on the value of the painting, 50% of that represents the psychic value consumed that year, while 50% represents appreciation of the remainder of the psychic value. But that's not that brilliant an insight. It's true for everything you buy: the "psychic value" must be at least the forgone interest minus the appreciation (or plus the depreciation, which is negative appreciation). If you buy a TV for $1000 at 10% interest, and it loses 20% of its value every year, the first year's "psychic value" must be at least $300, or you wouldn't buy it.
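
Here's that break-even condition as a tiny sketch (the function name is mine; the Picasso and TV numbers are the hypothetical ones above):

```python
def min_first_year_psychic_value(price, interest_rate, appreciation_rate):
    """First-year psychic value needed to justify a purchase:
    forgone interest minus appreciation (depreciation is negative appreciation)."""
    return price * interest_rate - price * appreciation_rate

# Picasso: $200K forgone interest minus $100K appreciation
print(round(min_first_year_psychic_value(2_000_000, 0.10, 0.05)))   # 100000
# TV: $100 forgone interest plus $200 depreciation
print(round(min_first_year_psychic_value(1_000, 0.10, -0.20)))      # 300
```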

------

Another thing that's confusing is that sometimes paintings appreciate a lot more than inflation, which makes them look like a good investment. But that's got to be random. If it were known in advance that the painting would appreciate more than stocks, buyers would bid the price up immediately. Those stories you hear about buyers paying $500 and selling for $1,000,000 ... well, those are outliers, like winning lottery tickets. In a reasonably efficient market, the sum of the psychic value and the appreciation must be close to the return you can get from other (similarly risky) investments.

But life is random, and it's possible that values increased much more than expected in the past. The art world may have thought that psychic value would increase only with inflation, but, as more and more billionaires were created, the psychic value rose even faster. That would certainly have caused prices to rise faster than expected, and would make paintings look like a good "investment". But the market would adjust to the new expectations. Indeed, as it did, prices would rise even faster! They'd rise once for the fact that psychic values are now higher, and they'd rise again for the fact that psychic values are accelerating over time.

In retrospect, that may have made paintings look like they were a better than average investment (which I guess they would have been). But that's not because paintings have two benefits -- psychic, and non-psychic. It's because they have one benefit, psychic, and the value of that benefit increased sharply. If you buy a painting as an "investment," you are speculating in the value of its psychic benefits. And you are betting against the market. Unless you have much, much better speculative skills than anyone else, you're probably going to break even in the long run, before taking into account auction fees, and such. And "breaking even" includes psychic benefits. If you don't like art, the expectation for your overall experience is strongly negative, compared to other investments.

------

Another way to look at it: suppose you hate paintings, and the fame that comes with owning them. You buy a Picasso for a million dollars as an investment, but laws are passed that prevent you from ever, ever selling it or renting it. Now, the value of the Picasso to you is zero. You might as well have never bought it. You have a loss of a million dollars, as if you spent your money on a big bag of manure.

Now, suppose you instead buy a million dollars worth of McDonald's stock. And, again, suppose you are prevented from ever selling it. You won't care that much. Because, McDonald's is going to keep making profits, and sending you ever-increasing dividend checks every quarter. The present value of all those dividend checks works out to a million dollars.

When you buy a painting, you're buying a stream of quarterly "dividend checks" of 100% psychic value. When you buy a stock, you're buying a stream of quarterly dividend checks in 100% cash.

------

Now, a sports team is a combination of a Picasso and a McDonald's. It produces psychic benefits, but it also produces profit (perhaps negative profit). So, *now*, it's a real question to ask what percentage of a sports team's value is psychic value, and what percentage is investment value. The answers are no longer 100% and 0%, like they were for a painting.

But it depends on the team. For a big-market profitable team, the investment value might be 70% or more, and the Picasso value 30% or less. For a team that loses money, the Picasso value might be greater than 150% or 200% of the total value.

But, again, that percentage doesn't mean much in the real world. What matters is the ratio of annual cash losses to Picasso value. Because, if that goes over 100%, it means the owner is losing more money than he's prepared to lose. It means the owner is not bluffing when he says he's bleeding too much money.

If an NBA team loses $25 million, is that a problem big enough that the players should have to take a pay cut? It depends. If the owner's Picasso value is more than $25 million, then the players can say, "no way". If the Picasso value is less than $25 million, the players need to at least consider that the owners are in a financial situation that they don't consider sustainable.

Because, suppose nobody in the world is willing to pay more than $25 million a year for the thrill of owning a team. And suppose that team is perpetually losing $30 million a year. Then, the team becomes, literally, valueless. It becomes in the owner's interest to fold the team entirely. It is in the interests of the players to figure out if NBA teams are approaching that point, and, if so, what should be done about it.












Thursday, August 25, 2011

Gladwell on the Picasso Theory

It looks like the "Picasso Theory" is catching on.

A few years ago, I argued that owning a sports team is like owning a Van Gogh: you don't own it to make a profit, but, rather, for the ego gratification of ownership. Normally, an owner doesn't mind that much if he doesn't make money, or even loses a small amount, because that's just the cost of the hobby of owning the team.

There's some evidence that supports that. For instance, teams that are making lots of money are more likely to be owned by corporations, which get no psychic pleasure from ownership and so insist on making money. On the other hand, teams losing money are more likely to be owned by individuals, who are willing to pay the price to be real-life fantasy owners. (Read my previous post for the full argument. And, other posts on this topic here.)

Later, Tango dubbed that the "Picasso theory."

Now, it looks like David Berri and Malcolm Gladwell agree. Gladwell wrote about the "psychic value" of NBA team ownership on "Grantland" last week, and, today, Berri cites it approvingly.

Gladwell writes in the context of a possible NBA lockout, where the owners are complaining that they can't make money under the current salary structure. Read the whole thing, but here are two key paragraphs:

"The best illustration of psychic benefits is the art market. Art collectors buy paintings for two reasons. They are interested in the painting as an investment — the same way they would view buying stock in General Motors. And they are interested in the painting as a painting — as a beautiful object. In a recent paper in Economics Bulletin, the economists Erdal Atukeren and Aylin Seçkin used a variety of clever ways to figure out just how large the second psychic benefit is, and they put it at 28 percent.7 In other words, if you pay $100 million for a Van Gogh, $28 million of that is for the joy of looking at it every morning. If that seems like a lot, it shouldn't. There aren't many Van Goghs out there, and they are very beautiful. If you care passionately about art, paying that kind of premium makes perfect sense ... Pro sports teams are a lot like works of art ...

"The big difference between art and sports, of course, is that art collectors are honest about psychic benefits. They do not wake up one day, pretend that looking at a Van Gogh leaves them cold, and demand a $27 million refund from their art dealer. But that is exactly what the NBA owners are doing. They are indulging in the fantasy that what they run are ordinary businesses — when they never were. And they are asking us to believe that these "businesses" lose money. But of course an owner is only losing money if he values the psychic benefits of owning an NBA franchise at zero — and if you value psychic benefits at zero, then you shouldn't own an NBA franchise in the first place. You should sell your "business" — at what is sure to be a healthy premium — to someone who actually likes basketball."

Anyway, here are my comments, which mostly agree with Gladwell, with a couple of exceptions:

1. I'm not sure I'd phrase it the way Gladwell did, that you have to add "psychic benefits" to financial profit to get true profit. I think you have to talk about them separately. I'd say that, yes, the owners are losing money, but, that's because they're willing to accept losses to get the "psychic benefit".

There are psychic benefits to a lot of transactions. I like Tim Hortons coffee, and, a few years ago, I bought their stock. I get a very small amount of psychic benefit out of owning the stock, because I like the product so much. But if I sell the stock, would anyone expect me to adjust the amount of my gain or loss by my perceived Picasso value? I don't think so. That would be weird. Better to say, "yes, I lost $X, but I got some small pride from being able to say I owned it, so I'm not all that upset."

In the case of the NBA, I wouldn't phrase it quite the way Gladwell does. Rather, I'd say, yes, some of the NBA owners are indeed losing money, but, so what? That's the price they pay for all the fun and fame of owning a team. If they don't like the cost, they can always sell the team, perhaps for more than they paid.

2. Even if teams broke even, they'd still be a bad investment. Right now, the way the stock market is valued, you can buy good businesses at 15x earnings, or even less ... which means you'll make close to 7% on your investment, or more. Safer investments make less, of course ... let's suppose an NBA team is more predictable than (say) Coca-Cola, and a 4% return is more appropriate.

If the Detroit Pistons recently sold for $420 million, and they only broke even, the owner would be forgoing 4% of $420 million, which is $16.8 million. That's the opportunity cost of owning a team that merely breaks even.
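
For what it's worth, here's that arithmetic in a couple of lines (the $420 million price and the assumed 4% return are the figures above):

```python
price = 420_000_000          # reported sale price
required_return = 0.04       # assumed fair return for a business this predictable

opportunity_cost = price * required_return
print(round(opportunity_cost))   # 16800000 -- the annual cost of merely breaking even
```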

According to the NBA, "11 teams lost more than $20 million each." If your team lost $25 million, that's not as big a deal as it looks, since you're already choosing to give up $17 million to own the team.

In fairness, you have to take into account that the owners may not have the cash flow to flush $25 million in cash down the drain ... the $16.8 million is a paper loss, while the $25 million is cash that needs to be found somewhere.

3. Even with heavy cash losses, you might argue that the team is still a good investment.

The argument might go like this. Suppose owning a team currently costs its owner something like $40 million a year. If there are enough billionaires around who love basketball, they might be willing to pay that, and more. Suppose the 30th most rabid basketball billionaire is willing to pay $50 million a year for the privilege of being an NBA mogul. Then the price of the team gets bid up until his annual cost reaches $50 million -- which works out to a price of $625 million. (That's because a $625 million purchase costs him $25 million a year in opportunity cost, at the 4% from before, plus $25 million a year in cash losses.)
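
Here's a minimal sketch of that valuation, under the same assumed numbers (a 4% opportunity-cost rate, $25 million a year in cash losses, and a marginal buyer who values ownership at $50 million a year; the function name is mine):

```python
def team_value(annual_psychic_value, annual_cash_loss, opportunity_rate):
    """Price at which the marginal buyer's total annual cost (opportunity cost
    on the purchase price plus cash losses) just equals his psychic value."""
    surplus = annual_psychic_value - annual_cash_loss
    return max(surplus, 0) / opportunity_rate

print(round(team_value(50_000_000, 25_000_000, 0.04)))   # 625000000
print(round(team_value(50_000_000, 55_000_000, 0.04)))   # 0 -- cash losses exceed psychic value
```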


So, even losing money every year, an NBA team can still be a good investment. But only if the Picasso Value -- the consumption, "psychic" value of ownership -- rises.

But, wait! It's not enough that the psychic value rises. It has to rise *more than expected*. Because expectations for future value are already built in to the purchase price. If owner X realizes that the psychic value will be $100 million ten years from now, he'll be willing to pay extra for the team now, knowing that in ten years, he can sell it at a hefty profit.

As well, the value of the team goes down if the actual cash losses are higher than expected. If the 30th most rabid billionaire is willing to lose $50 million a year, and losses are $55 million a year ... then the team is valueless. Well, not really, because someone might buy the team in anticipation of locking out the players to restructure league finances. But, still, the amount of cash losses is very important. Owners are willing to bear a certain amount of loss, but not an *unlimited* amount of loss.

And that's why I don't see any reason that NBA owners shouldn't want to restructure their agreement with the players to keep their losses down. Even if an owner is actually getting good value while losing $25 million a year ... well, at the same time, the NBA's superstar players are getting good value for their talents even if they make $8 million a year instead of $12 million.

The point is not that the owners are wrong in wanting to pay the players less. The point should only be that it is not unacceptable for the owners to be losing money. The question is: *how much* money is it reasonable to ask the owners to lose? And *how much* money is it reasonable to ask the players to give up? I don't have answers to those questions.

And that is one point where I disagree with Malcolm Gladwell. He says,

"But of course an owner is only losing money if he values the psychic benefits of owning an NBA franchise at zero."

That's not true. The psychic benefits might be quite high, but the losses, both in cash and opportunity cost, might be even higher. If the owners are really willing to lock out the players, isn't that some evidence that we're getting close to the owners' "reservation price"?


And,
"if you value psychic benefits at zero, then you shouldn't own an NBA franchise in the first place. You should sell your "business" — at what is sure to be a healthy premium — to someone who actually likes basketball."

But the owners don't value the psychic benefits at zero. If they did, they wouldn't have bought the team in the first place, as they would have made more money in other investments. They're just getting to the point where the financial losses are starting to come close to the psychic benefits.


If I had to summarize everything in one paragraph:

It is absolutely true that there are substantial psychic benefits to owning an NBA franchise. But those psychic benefits aren't infinite. Unless we can figure out the value of those benefits, we have no idea whether

(a) league finances are truly unsustainable, as losses are regularly higher than Picasso values;
(b) league finances aren't unsustainable yet, but headed that way, as the losses owners are being asked to bear are getting close to their Picasso value;
(c) league finances are still in favor of the owners, just not as much as they used to be;
(d) league finances are WAY in favor of the owners, who are pretending otherwise and blowing smoke.

You have to come up with some way of estimating "psychic values" to figure out where we actually are.



Tuesday, August 16, 2011

The Tango method of regression to the mean -- a proof

Warning: technical mathy post.

-----

To go from a record of performance to an estimate of a team's talent, you have to regress its winning percentage towards the mean. How do you figure out how much to regress?


Tango has often given these instructions:

-----

1. First, figure out the standard deviation of team performance. For MLB, for all teams playing at least 160 games up until 2009, that figure is 0.070 (about 11.34 wins per 162 games).

Second, figure out the theoretical standard deviation of luck over a season, using the binomial approximation to normal. That's estimated by the formula

Square root of (p(1-p)/g)

For baseball, p = .500 (since the average team must be .500), and g = 162. So the SD of luck works out to about 0.039 (6.36 games per season).


So SD(performance) = 0.070, and SD(luck) = 0.039. Square those numbers to get var(performance) and var(luck). Then, if luck is independent of talent, we get

var(performance) = var(talent) + var(luck)

That means var(talent) equals 0.058 squared, so SD(talent) = 0.058.

2. Now, find the number of games for which SD(luck) equals SD(talent), or 0.058. It turns out that's about 74 games, because the square root of (p(1-p)/74) is approximately equal to 0.058.

3. That number, 74, is your "answer". So, now, any time you want to regress a team's record to the mean, take 74 games of .500 ball (37-37), and add them to the actual performance. The result is your best estimate of the team's talent.

For instance, suppose your team goes 100-62. What's its expected talent? Adjust the record to 137-99. That gives an estimated talent of .581, or 94-68.

Or, suppose your team starts 2-6. Adjust it to 39-43. That's an estimated talent of .476, or 77-85.

-----
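
If it helps to see the whole recipe in one place, here's a short Python sketch of the method as described above (the function names are mine; 0.070 is the observed SD quoted earlier):

```python
def regression_games(sd_performance, games, p=0.500):
    """Games of league-average ball to add, per the method above."""
    var_luck = p * (1 - p) / games                 # binomial approximation
    var_talent = sd_performance ** 2 - var_luck    # var(perf) = var(talent) + var(luck)
    return p * (1 - p) / var_talent                # games where SD(luck) = SD(talent)

def regress_record(wins, losses, games_to_add):
    """Add games_to_add games of .500 ball and return the talent estimate."""
    return (wins + games_to_add / 2) / (wins + losses + games_to_add)

g = regression_games(0.070, 162)
print(round(g))                                # 74
print(round(regress_record(100, 62, 74), 3))   # 0.581
print(round(regress_record(2, 6, 74), 3))      # 0.476
```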

Those estimates seemed reasonable to me, but I often wondered: does this really work? Is it really true that you can add 74 games to a 162 game season, and it'll work, but you can also add 74 games to an 8 game season, and that'll work too? Surely you want to add fewer .500 games when your original sample is smaller, no?

And why always add the exact number of games that makes the talent SD equal to the luck SD? Is it a rule of thumb? Is it a guess? Again, that can't be the mathematically best way, can it?

It can, actually. I spent a couple of hours doing some algebra, and it turns out that Tango's method is exactly right. I was very surprised. Also, I don't know how Tango figured it out ... maybe he used an easier, more intuitive way to figure out that it works than going through a bunch of algebra.

But I can't find one, so let me take you through the algebra, if you care. Tango, is there an obvious explanation for why this works, more obvious than what I've done?

-------

As I wrote a few paragraphs ago,

var(overall) = var(talent) + var(luck). [Call this "equation 1" for later.]

Let v^2 =var(overall), and let t^2 = var(talent). Also, let "g" be the number of games.

From the binomial approximation to normal, we know var(luck) = (.25/g). So

v = SD(overall)
t = SD(talent)
sqr(.25/g) = SD(luck)

Suppose you run a regression on overall outcome vs. talent. The variance of talent is t^2. The variance of overall outcome is v^2. Therefore, we know that talent will explain t^2/v^2 of the variance of outcome, so the r-squared we get out of the regression will be t^2/v^2. That means the correlation coefficient, "r", will be equal to the square root of that, or t/v.

There's a general property of regression we can use here: if we want to predict talent from outcome, and the outcome X is y standard deviations from its mean, then our estimate of talent will be y times r, or y(t/v), standard deviations from its mean. That's true for any regression of two variables.

So:

Expected talent = average + (number of SDs outcome is away from the mean) (t/v) * (SD of talent)

Expected talent = average + [(outcome - mean)/SD of outcome] [t/v] * (SD of talent)

Expected talent = average + (X - mean)/v * (t/v) * t

Expected talent = average + t^2/v^2 (X - mean)

That last equation means that when we look at how far the observation is from average, we "keep" t^2/v^2 of the difference, and regress to the mean by the rest. In other words, we regress to the mean by (1 - t^2/v^2), or "(100 * (1 - t^2/v^2)) percent".

Now, if we regress to the mean by (1 - t^2/v^2), that's exactly the same as averaging

-- (1 - t^2/v^2) parts average performance, and
-- (t^2/v^2) parts observed performance.

For instance, if you're regressing one-third of the way to the mean, you can do it two ways. You can (a) move from the average to the observation, and then move the other way by 1/3 of the difference, or (b) you can just take an average of two parts original and one part mean.

But how does that translate, in practical terms, into how many games of average performance we need to add?

From above, we know that:

For every t^2/v^2 games of observed performance, we want (1 - t^2/v^2) games of average performance.

And now a little algebra:

For every 1 game of observed performance, we want (1 - t^2/v^2)/(t^2/v^2) games of average performance.

Simplifying gives,

For every game of observed performance, we want (v^2-t^2)/t^2 games of average performance.

Multiply by g:

For every "g" games of observed performance, we want g(v^2-t^2)/t^2 games of average performance.

But, from equation 1, we know that (v^2-t^2) is just the squared SD of luck, which is .25/g. So,

For every "g" games of observed performance, we want g(.25/g)/t^2 games of average performance.

The "g"s cancel, and we get,

For every "g" games of observed performance, we want .25/t^2 games of average performance.

And that doesn't depend on g! So no matter whether you're regressing a team over 1 game, or 10 games, or 20 games, or 162 games, you can always add *the same number of average games* and get the right answer! I wouldn't have guessed that.

--------

But how many games? Well, it's (.25/t^2) games.

For baseball, we calculated earlier that t = 0.058. So .25/t^2 equals ... 74 games. Exactly as Tango said, the number of games we're adding is the number of games for which SD(luck) equals SD(talent)!
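
If you'd rather check the algebra numerically, here's a quick sketch (the win-loss records are made up for the test) showing that "regress by (1 - t^2/v^2)" and "add .25/t^2 games of .500 ball" give the same answer no matter how many games you start with:

```python
t2 = 0.058 ** 2             # var(talent), from above
games_to_add = 0.25 / t2    # about 74, independent of g

for g, wins in [(8, 6), (40, 25), (162, 100)]:    # arbitrary test records
    observed = wins / g
    v2 = t2 + 0.25 / g                            # var(overall) = var(talent) + var(luck)
    regressed = 0.500 + (t2 / v2) * (observed - 0.500)
    added = (wins + games_to_add / 2) / (g + games_to_add)
    print(g, round(regressed, 4), round(added, 4))   # the two estimates match for every g
```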


Is that a coincidence? No, it's not. It's the way it has to be. Why? Here's a semi-intuitive explanation.

As we saw above, the number of games we have to add does NOT depend on the number of games we started with in the observed W-L record. So, we can pick any number of games. Suppose we just happened to start with 74 games -- maybe a team that was 40-34, or something.

Now, for that team, the SD of its talent is 0.058. And, the SD of its luck is also 0.058. Therefore, if we were to do a regression of talent vs. observed, we would necessarily come up with an r-squared of 0.5 -- since the variances of talent and luck are exactly equal, talent explains half of the total variance.

That means the correlation coefficient, r, is the square root of 0.5, or 1 divided by the square root of 2. For every SD change in performance, we predict 1/sqr(2) SD change in talent. But the SD of talent is exactly 1/sqr(2) times the SD of performance. Multiply those two 1/sqr(2)'s together and you get 1/2, which means for every win change in performance, we predict 1/2 win change in talent.

That's another way of saying that we want to regress exactly halfway back to the mean. That, in turn, is the equivalent of averaging one part observation, and one part mean. Since we have 74 games of observation, we need to add 74 games of mean.

So, in the case of "starting with 74 games of observation," the answer is, "we need to add 74 games of .500 to properly regress to the mean."

However, we showed above that we want to add the *same* number of .500 games regardless of how many observed games we started with. Since this case works out to 74 games, *all* situations must work out to 74 games.

QED, I guess.

--------

And, of course, and again as Tango has pointed out, this works for *any* binomial variable, like batting average or hockey save percentage. The only thing you have to keep in mind is that the ".25" in the formula for luck is based on an average being .500. It's really p(1-p), which works out to .25 if your p equals .500. If your p doesn't equal .500, use p(1-p) instead. So, in hockey, where a typical save percentage is .880, use (.880)(.120) = .1056 instead.
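
Here's that generalization as a two-line function (the .880 save percentage comes from the paragraph above; the SD of talent is whatever you've estimated for your sport):

```python
def games_to_add(p, sd_talent):
    """Amount of league-average performance to add, for any binomial stat with mean p."""
    return p * (1 - p) / sd_talent ** 2

print(round(games_to_add(0.500, 0.058)))   # 74, the baseball case from above
# For hockey save percentage, use p = .880, so the numerator is .880 * .120 = .1056
```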

--------

Sorry this is so ugly to read in blog form. Maybe I'll make the equations nicer and rerun this in "By the Numbers." Let me know if I've done anything wrong, or if I've just duplicated Tango's proof. For all I know, Tango has already explained all this somewhere else.

But this is still kind of complicated. Tango, do you have a more intuitive explanation of why this works, one that doesn't need all this algebra?

--------

(Update, 11:30pm: part of the explanation above "QED" was wrong ... now fixed.)



Saturday, August 13, 2011

Top academic journals won't publish rebuttal replications

Recently, a top academic psychology journal, JPSP, decided to publish a study purporting to show the existence of ESP. An author had submitted a paper, which showed statistical significance at 2.5 SD, and the journal argued that it wasn't appropriate to reject the paper just because the effect was precognition rather than something more mainstream.

So the paper was accepted. The decision to publish was controversial, as you might expect. Almost immediately, several researchers replicated the experiment, and found no effect whatsoever. They submitted rebuttal papers to JPSP.

And JPSP refused to publish them! Not only JPSP, but "the other high-end psychology journals" (according to this Carl Shulman post from "Less Wrong", which I am paraphrasing here) also refused.

Their explanation: they don't publish straight replications.

So, let's get this straight. Someone writes a study that makes extremely bold claims with weak evidence. The journal decides to publish it. But when other researchers almost immediately rebut it in the strongest possible way -- by replicating the experiment exactly -- the journals decide they're not interested.

Shulman writes,

From the journals' point of view, this (common) policy makes sense: bold new claims will tend to be cited more and raise journal status (which depends on citations per article), even though this means most of the 'discoveries' they publish will be false despite their p-values. However, this means that overall the journals are giving career incentives for scientists to massage and mine their data for bogus results, but not to challenge bogus results by others. Alas.


As far as I'm concerned -- and, indeed, as far as the scientific method is concerned -- it isn't science unless you expose your findings to confirmation and challenge. By Shulman's argument, it looks like academia has sacrificed the pursuit of science to the pursuit of status-seeking, where journals want to be interesting and professors want to avoid being proven wrong.





Monday, August 08, 2011

Umpires' racial bias disappears for other years of data -- Part II

Hopefully this will be my last post on that Hamermesh umpire bias study ...

Two days before I was going to talk about the study at a presentation last week, I discovered that someone else was presenting a poster on it at the same conference.

Jeff Hamrick and John Rasp ran the data for 21 years of MLB, 1980-2010; the original Hamermesh study was for three years, 2004-2006.

Here's the chart they got. The numbers are, as usual, percentages of called pitches that were strikes:

Pitcher ------ White Hspnc Black
--------------------------------
White Umpire-- 30.70 30.60 29.30
Hspnc Umpire-- 31.70 31.30 28.80
Black Umpire-- 30.80 30.30 28.70
--------------------------------

(Their numbers are actually only to one decimal place, not two, but I added the zero to make the numbers line up with the previous charts.)

There is no evidence of bias here ... the diagonal entries are not really any bigger than they "should" be. Still, after controlling for a whole bunch of stuff, including the identity of the pitcher, batter, and umpire, they got a significance level of 0.075. That's above the standard .05 threshold, so it falls short of conventional statistical significance. And even if you accept the .075 as real, the amount of bias is very, very small.

As you may recall, the 3-year sample, which was significant at (I think) somewhere between .01 and .05, could have been caused by a mere 35 pitches per season. This study used seven times as much data, and found weaker statistical significance. The square root of 7 is about 2.6, so we divide 35 by 2.6 to get about 13 pitches per season. And, since the new study found only maybe 1.7 SDs instead of 2.5, we divide by another 1.5 to get maybe 9 pitches per season.

(That 9 pitches is the minimum, if maybe only one or two umpires are biased. It could be a lot more, for instance if it's calls on white pitchers, who throw most of the pitches, that are being affected. But there's no way to know for sure from the data.)

However, there's one thing we have to consider. The Hamermesh study found an effect only for low-attendance situations, while this new Hamrick/Rasp data is for all attendance situations. But when Hamrick and Rasp included attendance in their regression, they got a significance level of 0.977, which shows the effect is almost completely random with regard to attendance.

So it's probably safe to conclude that, when you extend the Hamermesh study from 3 seasons to 21, the effect goes away.

Thanks to Jeff and John for making the data available.




Monday, August 01, 2011

Do umpires discriminate in favor of veterans?

At the SABR convention last month, some evidence was unveiled that suggests that umpires give more favorable ball/strike calls to veterans.

Pat Kilgo gave a presentation, "Do Umpires Give Favorable Treatment to Some Players?" He and his colleagues -- Hillary Superak, Lisa Elon, Mark Katz, Paul Weiss, Jeff Switchenko, Brian Schmotzer, and Lance Waller -- looked at called pitches from 2009-10. They compared the call to the PitchF/X data, and, from that, decided if it was a correct call, a "false strike," or a "false ball".

They then created a matrix classifying both the batter and pitcher by years of experience. There were 16 classifications for each, from "0-1 year experience" to "more than 15 years experience". So the matrix had 256 cells. Each cell contained the percentage of "false strikes" for that situation.

As it turned out, there were many, many more false strikes when the pitcher had a lot of experience but the batter did not. And there were many *fewer* false strikes when the situation was reversed, with an experienced batter and rookie-ish pitcher.

Pat was kind enough to give me permission to post his PowerPoint slides, which are here. If you turn to slide 16, Pat and his colleagues color coded the cells, from dark green (lots of false strikes) to beige (few false strikes). Most of the green are at the bottom-left; most of the beige are at the top-right. There is no doubt that the distribution of colors is statistically significant.

On slide 22, the authors repeat the analysis for "false balls". This time, the pitcher's experience is significant (veterans don't get cheated out of a strike very often), but the batter's is not.

To summarize the authors' slide 33:

-- Umpires absolutely favor veterans with respect to false strikes
-- Umpires most likely favor veteran pitchers with respect to false balls
-- No evidence of benefit to veteran hitters [on false balls]


There are a couple of possible criticisms to the study. One is that PitchF/X might not be the best way to classify missed calls (I believe Mike Fast made this argument, but I don't have the link handy).

Another -- and I think this was raised by a questioner at the original presentation -- is that not all pitches are created equal. If veteran pitchers tend to throw down the middle, instead of trying to paint the corners, that would reduce their number of false balls (since their strikes are more obvious). I suppose you could check that out by controlling for pitch location.

Still, it seems to me that there's a good chance that Pat and his colleagues have found a real effect. Part of the reason is that the "umpires favor veterans" theory doesn't come out of the blue -- a lot of observers have long believed it to be true. That's unlike the "umpires have racial bias" hypothesis, which was (and still is) generally doubted by players and sportswriters.

I look forward to hearing what everyone else thinks. Thanks again to Pat for permission to post and link.

