Tuesday, July 21, 2015

A "hot hand" is found in the NBA three-point contest

A recent paper provides what I think is rare, persuasive evidence of a "hot hand" in a sporting event.

The NBA Three-Point Contest has been held annually since 1986 (with the exception of 1999), as part of the NBA All-Star Game event. A pair of academic economists, Joshua Miller and Adam Sanjurjo, found video recordings of those contests, and analyzed the results. (.pdf)

They found that players were significantly more likely to make a shot after a series of three hits than otherwise. Among the 33 shooters who had at least 100 shots in their careers, the average player hit 54 percent overall, but 58 percent after three consecutive hits ("HHH").  

(UPDATE: the 58 percent figure is approximate: the study reports an increase of four percentage points after HHH than after other sequences. Because the authors left out some of the shots in some of their calculations (as discussed later in this post), it might be more like 59% vs. 55%, or some such. None of the discussion to follow depends on the exact number.)

The authors corrected for two biases. I'll get to those in detail in a future post, but I'll quickly describe the most obvious one. And that is: after HHH, you'd expect a *lower than normal* hit rate -- that is, an apparent "mean-reverting hand" -- even if results were completely random. 

Why? Because, if a player hit exactly 54 of 100 shots, then, after HHH, the next shot must come out of what remains -- which is 51 remaining hits out of 97 remaining shots. That's only 52.6 percent. In other words, the hit rate not including the "HHH" must obviously be lower than the hit rate including "HHH". 

That might be easier to see if you imagine that the player hit only 3 out of 100 shots overall. In that case, the expectation following HHH must be 0 percent, not 3 percent, since there aren't enough hits to form HHHH!
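If you want to see that bias in action, here's a quick Python simulation (my own sketch, not the authors' code): shooters with a fixed 54 percent talent and no hot hand at all still come out below 54 percent on the shots right after an HHH.

```python
import random

def rate_after_hhh(p=0.54, n_shots=100, n_shooters=20000, seed=1):
    """Average, across simulated shooters, of the hit rate on shots
    that immediately follow three straight hits -- with no hot hand built in."""
    random.seed(seed)
    rates = []
    for _ in range(n_shooters):
        shots = [random.random() < p for _ in range(n_shots)]
        after = [shots[i] for i in range(3, n_shots)
                 if shots[i - 3] and shots[i - 2] and shots[i - 1]]
        if after:                      # ignore shooters who never strung together HHH
            rates.append(sum(after) / len(after))
    return sum(rates) / len(rates)

print(rate_after_hhh())   # comes out a few points below .540
```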

After the authors corrected for this, and for the other bias they noted, the "hot hand" effect jumped from 4 percentage points to 6. 

------

UPDATE: Joshua Miller has replied to some of what follows, in the comments.  I have updated the post in a couple of places to reflect some of his responses.

------

That's a big effect, a difference of 6 percentage points. Maybe it's easier to picture this way:

Of the 33 players, 25 of them shot better after HHH than their overall rate. 

In other words, the "hot hand" beat the "mean-reverting hand" with a W-L record of 25-8. With the adjustments included, the hot hand jumps to 28-5.

------

Could the result be due to something other than a hot hand? Well, to some extent, it could be selective sampling of players.

In the contest, players shoot 25 attempts per round. To get to 100 attempts, and be included in the study, a shooter has to play at least four rounds in his career.  (By the way, here's a YouTube video of the 2013 competition.)

In any given contest, to survive to the next round, a player needs to do well in the current round. That means that players who got enough attempts were probably lucky early. That might select players who concentrated their hits in early rounds, compared to the late rounds, and create a bit of a "hot hand" effect just from that.

And I bet that's part of it ... but a very small part. Even if a player shot 60/60/50 in successive rounds, just by luck, that alone wouldn't be nearly enough to show an overall effect of 6 percentage points, or even 4, or (I think) even 1.

UPDATE: The authors control for this by stratifying by rounds, Dr. Miller replies.

------

One reason I believe the effect is real is that it makes much more intuitive sense to expect a hot hand in this kind of competition than in normal NBA play.

In each round of the contest, players shoot five balls in immediate succession from the same spot on the court. That seems like the kind of task that would easily show an effect. It seems to me that a large part of this would be muscle memory -- once you figure out the shot, you just want to do exactly the same thing four more times (or however many balls you have left once you figure it out). 

After those five balls, you move to another spot on the arc for another five balls, and so on, and the round ends after you've thrown five balls from each of five locations. However, even though the locations move, the distances are not that much different, so some of the experience gained earlier might extend to the next set of five, making the hot hand even more pronounced.

There's one piece of the evidence that offers support for the "muscle memory" hypothesis. It turns out that the first two shots in each round were awful. The authors report that the first shot was made only 26 percent of the time, and the second shot only 39 percent. For the remaining twenty-three shots, the average success rate was 56 percent.

That "warm up" time is very consistent with a "muscle memory" hot hand.

-----

In fact, those first two shots were so miserable that the authors actually removed them from the dataset! If I understand the authors correctly, a player listed with 100 shots was analyzed for only 92 of those shots.

UPDATE: originally, I thought that rounds were stitched together, so removing those shots would increase observed streakiness from one round to the next. But Dr. Miller notes, in the comments, that they considered streaks within a single round only. In that case, as he notes, removing the first two shots has the effect of reducing "cold hand" streakiness, making the results more conservative.  

The removal of those shots, it seems to me, would be likely to overstate the findings a bit. The authors strung rounds together as if they were just one long series of attempts (even if they spanned different years; that seems a bit weird, that you'd say a player had a "hot hand" if he continued a 2004 streak in 2005, but never mind).

That means that when they string the last five shots of one round with the first five shots of the next, instead of something like


MHHHH MMHMH


they get 


MHHHH   HMH


which tends to create more streaks, since you're taking out shots that tend to be mostly misses, in the midst of a series of shots that tend to be mostly hits. ("M" represents a miss, as you probably gathered.)


I wonder if the significant effect the authors found would still have shown up had those first two shots been left in. I suspect the effect would, at least, have been significantly weaker. I may be wrong -- the authors showed streakiness both for hits and misses, so maybe the extra "MM" shots would have shown up in their "cold hand" numbers.


------

I bet you'd find a hot hand if you tried the equivalent contest yourself. Position a wastebasket somewhere in the room, a few feet away. Then, stay in one spot, and try to throw wads of paper into the basket. I'm guessing your first one will miss, and you'll adjust your shot, and then you'll get a bit better, and, eventually, you'll be sinking 80 to 90 percent of them. Which means, you have a "hot hand" -- once you get the hang of it, you'll be able to just repeat what you learned, which means hits will tend to follow hits.

Here's a more extreme analogy. Instead of throwing paper into a basket, you're shown a picture of a random member of the Kansas City Royals, and asked to guess his age exactly. After your guess, you're told how far you were off. And then you get another random player (which might be a repeat).

Your first time through the roster, you might get, say, 1/3 of them right. The second time through, you'll get at least 2/3 of them right -- the 1/3 from last time, and at least half the rest (now that you know how much you were off by, you only have to guess which direction). The third time through, you'll get 100%.

So, your list of attempts will look something like this (H for hit, M for miss):

MMMHMHMHMMHHHMMHHMHMMMHMHHHHMHHHMMHHHHHHHHMHHHHHHHH...

Which clearly demonstrates a hot hand.
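Here's a toy simulation of that age-guessing game (my own made-up version, with a hypothetical 25-man roster of random ages), just to show that plain learning produces exactly that kind of streaky-looking sequence:

```python
import random

def age_guessing_game(n_guesses=60, seed=2):
    """Each miss reveals the size of the error, so hits pile up later on."""
    random.seed(seed)
    ages = {i: random.randint(20, 38) for i in range(25)}   # hypothetical roster
    memory = {}            # player -> (last guess, size of last error)
    results = []
    for _ in range(n_guesses):
        player = random.choice(list(ages))
        if player not in memory:
            guess = random.randint(20, 38)                  # first look: blind guess
        elif memory[player][1] == 0:
            guess = memory[player][0]                       # already solved it
        else:
            last, err = memory[player]
            guess = last + random.choice([-err, err])       # know the size, guess the sign
        results.append('H' if guess == ages[player] else 'M')
        memory[player] = (guess, abs(guess - ages[player]))
    return ''.join(results)

print(age_guessing_game())   # misses cluster early, hits cluster late
```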

And that's similar to what I think is happening here. 

------

The popular belief, among sportswriters and broadcasters, is that the hot hand -- aka "momentum" or "streakiness" -- is real, that a team that has been successful should be expected to continue that way. But almost every study that has looked for such an effect has failed to find one.

That led to the coining of the term "hot hand fallacy" -- the belief that a momentum effect exists, when it does not. Hence the title of this paper: "Is it a Fallacy to Believe in the Hot Hand in the NBA Three Point Contest?"

So, does this study actually refute the hot hand fallacy? 

Well, it refutes it in its strongest form, which is the position that there NEVER exists a hot hand of ANY magnitude, in ANY situation. That's obviously wrong. You can prove it with the Kansas City Royals example, or ... well, you can prove it in your own life. If you score every word you misspelled as a miss, and the rest as a hit ... most of your misses are clustered early in life, when you were learning to read and write, so there's your hot hand right there.

The real "fallacy," as I see it, is not the idea that a hot hand exists at all, but the idea that it is a significant factor in predicting what's going to happen next. In most aspects of sports, the hot hand, when it does exist, is so small as to have almost no predictive value. 

Suppose a player has two kinds of days, equally likely and occurring at random -- "on," where he hits 60%, and "off," where he hits only 50%. That would give rise to a hot hand, obviously. But how big a hot hand? What should you predict as the chance of the player making his next shot?

Before the game, you'd guess 55% -- maybe he's on, or maybe he's off. But, now, he hits three straight shots. He has a hot hand! What do you expect now?

If my math is right, you should now expect him to shoot ... 56.3%. Not much different!
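Here's the arithmetic behind that 56.3 percent, if you want to check it -- just Bayes' rule on the two kinds of days (my own quick sketch, using the 50/60 numbers above):

```python
# Equal chance of an "on" day (60%) or an "off" day (50%), then we see HHH.
p_on, p_off, prior_on = 0.60, 0.50, 0.50

like_on  = p_on  ** 3          # P(three straight hits | on day)
like_off = p_off ** 3          # P(three straight hits | off day)

post_on = prior_on * like_on / (prior_on * like_on + (1 - prior_on) * like_off)
next_shot = post_on * p_on + (1 - post_on) * p_off

print(round(post_on, 3), round(next_shot, 3))   # about 0.633 and 0.563
```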

The "50/60 on/off" actually represents a huge variation in talent. The problem is that streaks are a weak indicator of whether the player is actually "on," versus whether he just had a lucky three shots. In real life, it's even weaker than a 1.3 percent indicator, because, for one thing, how do you know how long a player is "on" and how long he's "off"? I assumed a full game, but that's wildly unrealistic.

You can probably think of many reasons streakiness is a weak indicator. Here's just one more. 

The "56.3%" illustration was assuming that all shots were identical. In real life, if it's not a special case of a three-point contest ... well, when a player hits HHH, it might be evidence of a hot hand, but it also just could be that those shots were taken in easier conditions, that they were 60% shots instead of 50% shots because the defense didn't cover the shooter very well.

Real games are much more complicated and random than a three-point shooting contest. That's why I don't like the phrasing, that the authors of this NBA study found evidence of "THE hot hand effect". They found evidence of "A hot hand effect", one particular one that's large enough to show up in the contrived environment of a muscle-memory based All-Star novelty event. It doesn't necessarily translate to a regular NBA game, at least not unless you dilute it enough that it becomes irrelevant.

------

The "hot hand" issue reminds me of the "clutch hitting" issue. Both effects probably exist, but are so tiny that they're pretty much useless for any practical purposes. Academic studies fail to find statistically significant evidence, and imply that "absence of evidence" implies that no effect exists. We sabermetricians cheat a little bit, saving effort by saying there's "no effect" instead of "no effect big enough to measure."

So "no effect" becomes the consensus. Then, someone comes up with a finding that actually measures an effect -- this study for the hot hand, and "The Book" for clutch hitting. And those who never disbelieved in it jump on the news, and say, "Aha! See, I told you it exists!"  

But they still ignore effect size. 

People will still declare that their favorite hitter is certainly creating at least a win or two by driving in runs when it really counts. But now, they can add, "Because, clutch hitting exists, it's been proven!" In reality, there's still no way of knowing who the best clutch hitters are, and even if you could, you'd find their clutch contribution to be marginal.

And, now, I suspect, when the Yankees win five games in a row, the sportscasters will still say, "They have momentum! They're probably going to win tonight!" But now, they can add, "Because, the hot hand exists, it's been proven!" In reality, the effect is so attenuated that their "hotness" probably makes them a .501 expectation instead of .500 -- and, probably, even that one point is an exaggeration.  

My bet is: the "hot hand" narrative won't change, but now it will claim to have science on its side.





Wednesday, July 01, 2015

Do stock buybacks enrich CEOs at the expense of the economy?

Are share buybacks hurting the economy and increasing income inequality? 

Some pundits seem to think so. There was an article in Harvard Business Review, a while ago, which might have been an editorial (I can't find a byline). That followed a similar article from FiveThirtyEight, which concentrated on the economic effects. When I Googled, I came across another article from The Atlantic. I think it's a common argument ... I'm pretty sure I've seen it lots of other places, including blogs and Facebook.

They think it's a big deal, at least going by the headlines: 


-- "How stock options lead CEOs to put their own interests first" (Washington Post)

-- "Stock Buybacks Are Killing the American Economy" (The Atlantic)

-- "Profits Without Prosperity" (Harvard Business Review)

-- "Corporate America Is Enriching Shareholders at the Expense of the Economy" (FiveThirtyEight)

But ... it seems to me that neither the "hurt the economy" argument nor the "increase inequality" argument actually makes sense.

Before I start, here's a summary, in my own words, of what these articles seem to be saying. You can check them out and see if I've captured them fairly.


"Corporations have always paid out some of their earnings in dividends to shareholders. But lately, they've been dispersing even more of their profits, by buying back their own shares on the open market. 

"This is problematic in several ways. For one, it takes money that companies would normally devote to research and expansion, and just pays it out, reducing their ability to expand the economy to benefit everyone. In addition, it artificially boosts the market price of the stock. That benefits CEOs unfairly, since their compensation includes shares of the company, and provides a perverse incentive to funnel cash to buybacks instead of expanding the business.

"Finally, it makes the rich richer, boosting the stock values for CEOs and other shareholders at the expense of the lower and middle classes."

As I said, I don't think any part of this argument actually works. The reasons are fairly straightforward, not requiring any intricate macroeconomics.

-----

1. Buybacks don't increase the value of the shares

At first consideration, it seems obvious that buybacks must increase the value of your stockholdings. With fewer shares outstanding, the value of the company has to be split fewer ways, so your piece of the pie is bigger.

But, no. Your *fraction* of the pie is bigger, but the pie is reduced in size by exactly the same fraction. You break even. That *has* to be the case, otherwise it would be a way to generate free money!

Acme has one million (1 MM) shares outstanding. The company's business assets are worth $2 MM, and it has $1 MM in cash in the bank with no debt. So the company is worth $3 a share.

Now, Acme buys back 100,000 shares, 10 percent of the total. It spends $300,000 to do that. Then, it cancels the shares, leaving only 900,000.

After the buyback, the company still owns a business worth $2 million, but now only has $700,000 in the bank. Its total value is $2.7 million. Divide that by the 900,000 remaining shares, and you get ... the same $3 a share as when you started.

It's got to be that way. You can't create wealth out of thin air by market-value transactions. 

The HBR author might realize this: he or she hints that buybacks increase stock prices "in the short term," and "even if only temporarily."  I'm not sure how that would happen -- for very liquid shares, the extra demand isn't going to change the price very much. Maybe the *announcements* of buybacks could boost the shares, by signalling that the company has confidence in its future. But that's also the case for announcements of dividend increases.

One caveat: it's true the share price is higher after a buyback than a dividend, but that's not because the buyback raises the price: it's because the dividend lowers it. If the company spends the $300,000 on dividends instead of buybacks, the value of a share drops to $2.70. The shareholders still have $3 worth of value: $2.70 for the share, and 30 cents in cash from the dividend. (It's well known, and easily observed, that the change in share price actually does happen in real life.)

If the CEO chooses to spend the cash on buybacks, then, yes, the stock price will be higher than if he chose to spend it on dividends. It won't just be higher in the short term, but in the long term too. 
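For anyone who wants the Acme arithmetic in one place, here's a quick sketch in Python (just the toy numbers from above), showing the buyback case and the equivalent dividend case:

```python
# Acme: a $2MM business, $1MM cash, one million shares -- $3.00 per share.
business, cash, shares = 2_000_000, 1_000_000, 1_000_000
payout = 300_000

# Case 1: buy back and cancel 100,000 shares at $3.00 each.
price_after_buyback = (business + cash - payout) / (shares - payout / 3.00)

# Case 2: pay the same $300,000 out as a 30-cent dividend instead.
price_after_dividend = (business + cash - payout) / shares
dividend_per_share   = payout / shares

print(price_after_buyback)                         # 3.00 -- unchanged
print(price_after_dividend, dividend_per_share)    # 2.70 plus 0.30 cash = the same 3.00
```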

Are buybacks actually replacing dividends? The FiveThirtyEight article shows that both dividends and buybacks are increasing, so it's not obviously CEOs choosing to replace one with the other. 

But, sure, if the company does replace expected dividends with buybacks, the share price will indeed sit higher, and the CEO's stock options will be more valuable.

To avoid conflicts of interest, it seems like CEOs should be compensated in options that adjust for dividends paid. (As should all stock options, including the ones civilians buy. But they don't do that, probably because it's too complicated to keep track of.)  But, again: the source of the conflict is not that buybacks boost the share price, but that dividends reduce it. If you believe CEOs are enriching themselves by shady dealing, you should be demanding more dividends, not decrying buybacks.


2. The buyback money is still invested

The narratives claim that the money paid out in share buybacks is lost, that it's money that won't be used to grow the economy.

But it's NOT lost. It's just transferred from the company to the shareholders who sell their stock. 

Suppose I own 10 shares of Apple, and they do a buyback, and I sell my shares for $600 of Apple's money. That's $600 that Apple no longer has to spend on R&D, or advertising, or whatever. But, now, *I have that $600*. And I'm probably going to invest it somewhere else. 

Now, I might just buy stock in another company -- Coca-Cola, say -- from another shareholder. That just transfers money from me to the other guy -- the Coca-Cola Corporation doesn't get any of that to invest. But, then, the other guy will buy some other stock from another guy, and so on, and so on, until you finally hit one last someone who doesn't use it to buy another stock.

What will he do? Maybe he'll use the $600 to buy a computer, or something, in which case that helps the economy that way. Or, he'll donate the $600 to raise awareness of sexism, to shame bloggers who assume all CEOs and investors are "he". Or, he'll use it to pay for his kids' tuition, which is effectively an investment in human capital. 

Who's to say that these expenditures don't help the economy at least as much as Apple's would?

In fact, the investor might use the $600 to actually invest in a business, by buying into an IPO. In 2013, Twitter raised $1.8 billion in fresh money, to use to build its business. It's quite possible that my $600, which came out of Apple's bank account, eventually found its way into Twitter's.

Is that a bad thing? No, it's a very good thing. The market judged, albeit in a roundabout way, that there was more profit potential for that $600 in Twitter than in Apple. The market could be wrong, of course, but, in general, it's pretty efficient. You'd have a tough time convincing me that, at the margin, that $600 would be more profitable in Apple than in Twitter.

The economy grows the best when the R&D money goes where it will do the most good. If Consolidated Buggy Whip has a billion dollars in the bank, do you really want it to build a research laboratory where it can spend it on figuring out how to synthesize a more flexible whip handle? 

At the margin, that's probably where Apple is coming from. It makes huge, huge amounts of profit, around $43 billion last year. It spent about $7 billion on R&D. Do we really want Apple to spend six times as much on research as it spends now? It seems to me that the world is much better off if that money is given back to investors to put elsewhere into the economy.

That might be part of why buyback announcements boost the stock price, if indeed they do. When Apple says it's going to buy back stock, shareholders are relieved to find out they're not going to waste that cash trying to create the iToilet or something.


3. Successful companies are not restrained by cash in the bank

According to the FiveThirtyEight article, Coca-Cola spent around $5 billion in share repurchases in 2013. But their long-term debt is almost $20 billion.

For a company like Coca-Cola, $20 billion is nothing. It's only twice their annual profit. Their credit is good -- I'm sure they could borrow another $20 billion tomorrow if they wanted to.

In other words: anytime the executives at Coke see an opportunity to expand the business, they will have no problem finding money to invest. 

If you don't believe that, if you still believe that the $5 billion buyback reduces their business options ... then, you should be equally outraged if they used that money to pay down their debt. Either way, that's $5 billion cash they no longer have handy! The only difference is, when Coca-Cola pays down debt, the $5 billion goes to the bondholders instead of the shareholders. (In effect, paying off debt is a "bond buyback".)

The "good for the economy" argument isn't actually about buybacks -- it's about investment. If buybacks are bad, it's not because they're buybacks specifically; it's because they're something other than necessary investment.

It's as if people are buying Cadillac Escalades instead of saving for retirement. The problem isn't Escalades, specifically. The problem is that people aren't using the money for retirement savings. Banning Escalades won't help, if people just don't like saving. They'll just spend the money on Lexuses instead.

Is investment actually dropping? The FiveThirtyEight article thinks so -- it shows that companies' investment-to-payout ratio is dropping over time. But, so what? Why divide investment by payouts? Companies could just be getting rid of excess cash that they don't know what to do with (which they also get criticized for -- "sitting on cash"). Looking at Apple ... their capital expenditures went from 12 cents a share in 2007, to $1.55 in 2014 (adjusted for the change in shares outstanding). A thirteen-fold increase in capital spending doesn't suggest that they're scrimping on necessary investment.


4. Companies offset their buybacks by issuing new shares

As I mentioned, the FiveThirtyEight article notes that Coke bought back $5 billion in shares in 2013. But, looking at Value Line's report (.pdf), it looks like, between 2012 and 2013, outstanding shares only dropped by about half that amount.

Which means ... even while buying back and retiring $5 billion in old shares, Coca-Cola must have, at the same time, been issuing $2.5 billion in *new* shares. 

I don't know why or how. Maybe they issued them to award to employees as stock options. In that case, the money is part of employee compensation. Even if the shares went to the CEO, if they didn't issue those shares, they'd have to pay the equivalent in cash.

So if you're going to criticize Coca-Cola for wasting valuable cash buying shares, you also have to praise it, in an exactly offsetting way, for *saving* valuable cash by paying employees in shares instead. Don't you?

I suppose you could say, yes, they did the right thing by saving cash, but they could do more of the right thing by not buying back shares! But: the two are equal. If you're going to criticize Coca-Cola for buying back shares, you have to criticize other companies that pay their CEOs exclusively in cash. 

But the HBR article actually gets it backwards. It *criticizes* companies that pay their CEOs in shares!

Suppose Coca-Cola is buying back (say) a million shares for $40 MM, which is presumably bad. Then, they give those shares to the employees, which is also presumably bad. Instead, the Harvard article says, they should take the $40 MM, and give it to the employees directly. 

But that's exactly the same thing! Either way, Coca-Cola has the same amount of cash at the end. It's just that in one case, the original shareholders have shares and the CEO has cash. The other way, the original shareholders have the cash and the CEO has the shares.

What difference does that make to the economy or the company? Little to none.


5. Inequality is barely affected, if at all

Suppose a typical CEO makes about $40 million. And suppose half of that is in stock. And suppose, generously, that the CEO can increase the realized value of his shares by 5 percent by allegedly manipulating the price with share buybacks.

You're talking about $1 million in manipulation. 

How much does that affect inequality? Hardly at all. One percent of the United States population is around 3 million people. That includes children ... let's suppose the official statistics count only 2 million earners in the top 1%.

The Fortune 500 companies are, at most, 500 CEOs. Let's include other executives and companies, to get, say, 4,000 people. 

That's still only one-fifth of one percent of the "one percenters."

The average annual income of the top 1% is around $717,000. Multiply that by two million people, and you get total income of around $1.4 trillion.

After the CEOs finish manipulating the stock price, the 4,000 executives earn an extra $4 billion overall. So the income of the top 1% goes from

$1,400,000,000,000

to 

$1,404,000,000,000

That's an increase of less than one-third of one percent. Well, yes, technically, that does "contribute" to inequality, but by such a negligible amount that it's hardly worth mentioning. 
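Here's the back-of-envelope arithmetic, for the record (all the inputs are just the guesses above):

```python
executives     = 4_000         # CEOs plus other senior executives, generously
extra_per_exec = 1_000_000     # assumed gain per executive from the alleged manipulation
top1_earners   = 2_000_000
top1_income    = 717_000

extra_income = executives * extra_per_exec         # $4 billion
base_income  = top1_earners * top1_income          # about $1.4 trillion

print(extra_income / base_income)                  # about 0.003 -- under a third of a percent
```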

And that one-third of one percent is still probably an overstatement:

1. We used very generous assumptions about how CEOs capitalize on stock price changes. 

2. When the board offers the CEO stock options, both parties are aware of the benefits of the CEO being able to time the announcements. Without that benefit, pay would probably have to increase (for the same reason you have to pay a baseball player more if you don't give him a no-trade clause). So, much of this alleged benefit is not truly affecting overall compensation.

3. Price manipulation is a zero-sum game. If the executives win, someone loses. Who loses? The investors who buy the executives' shares when they sell. Who are those investors? Mostly the well-off. Some of the buyers might be pension funds for line workers, or some such, but I'd bet most of the buyers are upper middle class, at least. 

We know for sure it isn't the poorest who lose out, because they don't have pension funds or stocks. So it's probably the top 1 percent getting richer on the backs of the top 10 percent.

--------

Here's one argument that *does* hold up, in a way: the claim that buybacks increase earnings per share (EPS).

Let's go back to the Acme example. Suppose, originally, they have $200,000 in earnings: $190,000 from the business, and $10,000 from interest on the $1 MM in the bank. With a million shares outstanding, EPS is 20 cents.

Now, they spend $300K to buy back 100,000 shares. Afterwards, their earnings will be $197,000 instead of $200,000. With only 900,000 shares remaining outstanding, EPS will jump from 20 cents to 21.89 cents.
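The EPS arithmetic, as a quick sketch (same toy numbers, with the bank account earning 1 percent):

```python
business_earnings = 190_000
cash, interest    = 1_000_000, 0.01     # $1MM in the bank at 1 percent
shares            = 1_000_000
buyback_cost      = 300_000             # retires 100,000 shares at $3.00

eps_before = (business_earnings + cash * interest) / shares
eps_after  = (business_earnings + (cash - buyback_cost) * interest) / (shares - 100_000)

print(round(eps_before, 4), round(eps_after, 4))   # 0.20 versus about 0.2189
```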

Does that mean the CEO artificially increased EPS? I would argue: no. He did increase EPS, but not "artificially."

Before the buyback, Acme had a million dollars in the bank, earning only 1 percent interest. On the other hand, an investment in Acme itself would earn almost 7 percent (20 cents on the $3 share price). Why not switch the 1-percent investment for a 7-percent investment? 

It's a *real* improvement, not an artificial one. If Acme doesn't actually need the cash for business purposes, the buyback benefits all investors. It's the same logic that says that when you save for retirement, you get a better return in stocks than in cash. It might be right for Acme for the same reason it's right for you.

Does the improvement in EPS boost the share price? Probably not much -- the stock market is probably efficient enough that investors would have seen the cash in the bank, and adjusted their expectations (and stock price) accordingly. A small boost might arise if the buybacks are larger, or earlier, than expected, but hardly enough to make the CEO any more fabulously wealthy than he'd be without them.

------

There's another reason companies might buy back shares -- to defer tax for their shareholders.

Suppose Coca-Cola has money sitting around. They can pay $40 to me as a dividend. If they do, I pay tax on that -- say, $12. So, now, I have $12 less in value than before. The value of my stock dropped by $40, and I only have $28 in after-tax cash to compensate.

Instead of paying a dividend, Coke could use the $40 to buy back a share. In that case, I pay no tax, and the value of my account doesn't drop.  

Actually, the buybacks are just deferring my taxes, not eliminating them. When I sell my shares, my capital gain will be $40 more after the buyback than it would have been if Coke had issued a dividend instead. As one of the linked articles notes, the US tax rate on capital gains is roughly the same as on dividends. So, the total amount is a wash -- it's just the timing that changes.
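A tiny sketch of that, with a made-up 30 percent tax rate and a made-up ten-year holding period (both are just assumptions for illustration):

```python
payout, tax_rate = 40.00, 0.30

tax_on_dividend = payout * tax_rate     # $12, due right away
tax_on_buyback  = payout * tax_rate     # the same $12, but only when the holder eventually sells

# The totals match; the only difference is timing. A $12 bill due in ten years
# is worth less today (the 5 percent discount rate is just an assumption).
years, discount = 10, 0.05
print(tax_on_dividend, round(tax_on_buyback / (1 + discount) ** years, 2))   # 12.0 vs about 7.37
```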

Maybe that tax deferral bothers you. Maybe you think the companies are doing something unfair, and exploiting a loophole. I don't agree; for one thing, I think taxing corporate profits, and also dividends, is double taxation, a hidden, inefficient and sometimes unfair way to raise revenues. (Companies already have to pay corporate income tax on earnings, regardless of whether they use it for buybacks, dividends, reinvestment, or cash hoards.)

You might disagree with me on that point.  If you do, then why aren't you upset at companies who don't pay dividends at all? If share buybacks are a loophole because they defer taxes, then retained earnings must be a bigger loophole, because they defer even *more* taxes!

Keep in mind, though, the deferral from buybacks is not quite as big as it looks. When the company buys the shares, the sellers realize a capital gain immediately. If the stock has skyrocketed recently, the total tax the IRS collects after the buyback could, in theory, be a significant portion of the amount it would have collected off the dividend. (For instance, if all the selling shareholders had originally bought Coca-Cola stock for a penny, the entire buyback (less one cent) would be taxed, just as the entire dividend would have been.)

There's another benefit: when Coca-Cola buys shares, it buys them from willing sellers, who are in a position to accept their capital gains tax burden right now. That's the main advantage, as I see it: the immediate tax burden winds up falling on "volunteers," those who are able and willing to absorb it right now.

-------

In my view, buybacks have little to do with greedy CEOs trying to enrich themselves, and they have negligible effect on the economy compared to traditional dividends. They're just the most tax-efficient way for companies to return value to their owners.




UPDATE: Finance writer Michael Mauboussin explains buybacks in more detail in a FAQ here.  (Mauboussin has also written about sports and luck.)



Friday, June 19, 2015

Can fans evaluate fielding better than sabermetric statistics?

Team defenses differ in how well they turn batted balls into outs. How do you measure the various factors that influence the differences? The fielders obviously have a huge role, but do the pitchers and parks also have an influence?

Twelve years ago, in a group discussion, Erik Allen, Arvin Hsu, and Tom Tango broke down the variation in batting average on balls in play (BAbip). Their analysis was published in a summary called "Solving DIPS" (.pdf).

A couple of weeks ago, I independently repeated their analysis -- I had forgotten they had already done it -- and, reassuringly, got roughly the same result. In round numbers, it turns out that:

The SD of team BAbip fielding talent is roughly 30 runs over a season.

------

There are several competing systems for evaluating which players and teams are best in the field, and by how much. The Fangraphs stats pages list some of those stats, and let you compare.

I looked at those team stats for the 2014 season. Specifically, these three:

1. DRS, from The Fielding Bible -- specifically, the rPM column, runs above average from plays made. (That's the one we want, because it doesn't include outfielder/catcher arms, or double-play ability.)

2. The Fan Scouting Report (FSR), which is based on an annual fan survey run by Tom Tango.

3. Ultimate Zone Rating (UZR), a stat originally developed by Mitchel Lichtman, but which, as I understand it, is now public. I used the column "RngR," which is the range portion (again to leave out arms and other defensive skills).

All three stats are denominated in runs. Here are their team SDs for the 2014 season, rounded:

37 runs -- DRS (rPM column)
23 runs -- Fan Scouting Report (FSR)
29 runs -- UZR (RngR)
------------------------------------
30 runs -- team talent

The SD of DRS is much higher than the SD of team talent. Does that mean it's breaching the "speed of light" limit of forecasting, trying to (retrospectively) predict random luck as well as skill?

No, not necessarily. Because DRS isn't actually trying to evaluate talent.  It's trying to evaluate what actually happened on the field. That has a wider distribution than just talent, because there's luck involved.

A team with fielding talent of +30 runs might have actually saved +40 runs last year, just like a player with 30-home-run talent may have actually hit 40.

The thing is, though, that in the second case, we actually KNOW that the player hit 40 homers. For team fielding, we can only ESTIMATE that it saved 40 runs, because we don't have good enough data to know that the extra runs didn't just result from getting easier balls to field.

In defense, the luck of "made more good plays than average" is all mixed up with "had more easier balls to field than average."  The defensive statistics I've seen try their best to figure out which is which, but they can't, at least not very well.

What they do, basically, is classify every ball in play according to how difficult it was, based on location and trajectory. I found this post from 2003, which shows some of the classifications for UZR. For instance, a "hard" ground ball to the "56" zone (a specific portion of the field between third and short) gets turned into an out 43.5 percent of the time, and becomes a hit the other 56.5 percent. 

If it turns out a team had 100 of those balls to field, and converted them to outs at 45 percent instead of 43.5 percent, that's 1.5 extra outs it gets credited for, which is maybe 1.2 runs saved.
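In code, the credit calculation looks something like this (a minimal sketch of the idea, not any system's actual formula; the 0.8 runs-per-out conversion is a rough assumption):

```python
RUNS_PER_EXTRA_OUT = 0.8    # rough value of turning one extra hit into an out

def zone_credit(balls_in_zone, outs_made, league_out_rate):
    """Runs credited to the fielders for beating the league-wide rate in one zone."""
    extra_outs = outs_made - balls_in_zone * league_out_rate
    return extra_outs * RUNS_PER_EXTRA_OUT

# 100 hard grounders to zone "56", converted at 45% against a 43.5% baseline:
print(zone_credit(100, 45, 0.435))   # 1.5 extra outs, about 1.2 runs
```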

The problem with that is: the 43.5 percent is a very imprecise estimate of what the baseline should be. Because, even in the "hard-hit balls to zone 56" category, the opportunities aren't all the same. 

Some of them are hit close to the fielder, and those might be turned into outs 95 percent of the time, even for an average or bad-fielding team. Some are hit with a trajectory and location that makes them only 8 percent. And, of course, each individual case depends where the fielders are positioned, so the identical ball could be 80 percent in one case and 10 percent in another.

In a "Baseball Guts" thread at Tango's site, data from Sky Andrecheck and BIS suggested that only 20 percent of ground balls, and 10 percent of fly balls, are "in doubt", in the sense that if you were watching the game, you'd think it could have gone either way. In other words, at least 80% of balls in play are either "easy outs" or "sure hits."  ("In doubt" is my phrase, meaning BIPs in which it wasn't immediately at least 90 percent obvious to the observer whether it would be a hit or an out.)

That means that almost all the differences in talent and performance manifest themselves in just 10 to 20 percent of balls in play.

But, even the best fielding systems have few zones that are less than 20 percent or more than 80 percent. That means that there is still huge variation in difficulty *even accounting for zone*. 

So, when a team makes 40 extra plays over a season, it's a combination of:

(a) those 40 plays came from extra performance from the few "in doubt" balls;
(b) those 40 plays came from easier balls overall.

I think (b) is much more a factor than (a), and that you have to regress the +40 to the mean quite a bit to get a true estimate. 

Maybe when the zones get good enough to show large differences between teams -- like, say, 20% for a bad fielder and 80% for a good fielder -- well, at that point, you have a system that might work. But, without that, doesn't it almost have to be the case that most of the difference is just from what kinds of balls you get?

Tango made a very relevant point, indirectly, in a recent post. He asked, "Is it possible that Manny Ramirez never made an above-average play in the outfield?"  The consensus answer, which sounds right to me, was ... it would be very rare to see Manny make a play that an average outfielder wouldn't have made. (Leaving positioning out of the argument for now.)

Suppose BIPs to a certain difficult zone get caught 30% of the time by an average fielder, and Manny catches them 20% of the time. Since ANY outfielder would catch a ball that Manny gets to ... well, that zone must really be at least TWO zones: a "very easy" zone with a 100% catch rate, and a "harder" zone with a 10% catch rate for an average fielder, and a 0% catch rate for Manny.

In other words, if Manny makes 30% plays in that zone and a Gold Glove outfielder makes 25%, it's almost certain that Manny just got easier balls to catch. 
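Here's a small sketch of that hidden-sub-zone arithmetic (the easy/hard shares are numbers I made up to match the example):

```python
def observed_rate(share_easy, easy_rate, hard_rate):
    """Catch rate you see when a fielder's chances are a mix of easy and hard balls."""
    return share_easy * easy_rate + (1 - share_easy) * hard_rate

# League-wide: about 22% of this zone's balls are the sure-thing kind,
# so the zone as a whole looks like a ~30% zone.
print(observed_rate(0.22, 1.00, 0.10))   # about 0.30

# Give Manny a slightly easier mix, and he "beats" a genuinely better fielder:
print(observed_rate(0.30, 1.00, 0.00))   # 0.30 for Manny (catches only the easy ones)
print(observed_rate(0.17, 1.00, 0.10))   # about 0.25 for the Gold Glover
```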

The only way to eliminate that kind of luck is to classify the zones in enough micro detail that you get close to 0% for the worst, or close to 100% for the best.

And that's not what's happening. Which means, there's no way to tell how many runs a defense saved.

------

And this brings us back to the point I made last month, about figuring out how to split observed runs allowed into observed pitching and observed fielding. There's really no way to do it, because you can't tell a good fielding play from an average one with the numbers currently available. 

Which means: the DRS and UZR numbers in the Fangraphs tables are actually just estimates -- not estimates of talent, but estimates of *what happened in the field*. 

There's nothing wrong with that, in principle: but, I don't think it's generally realized that that's what those are, just estimates. They wind up in the same statistical summaries as pitching and hitting metrics, which themselves are reliable observations. 

At baseball-reference, for instance, you can see, on the hitting page, that Robinson Cano hit .302-28-118 (fact), which was worth 31 runs above average (close enough to be called fact).

On his fielding page, you can see that Cano had 323 putouts (fact) and 444 assists (fact), which, by Total Zone Rating, was worth 4 runs below average (uh-oh).

Unlike the other columns, that fielding-runs figure is an *estimate*. Maybe it really was -4 runs, but it could easily have been -10 runs, or -20 runs, or +6 runs. 

To the naked eye, the hitting and fielding numbers both look equally official and reliable, as accurate observations of what happened. But one is based on an observation of what happened, and the other is based on an estimate of what happened.

------

OK, that's a bit of an exaggeration, so let me backtrack and explain what I mean.

Cano had 28 home runs, and 444 assists. Those are "facts", in the sense that the error is zero, if the observations are recorded correctly.

Cano's offense was 31 runs above average. I'm saying that's accurate enough to be called a "fact."  But admittedly, it is, in fact, an estimate. Even if the Linear Weights formula (or whatever) is perfectly accurate, the "runs above average" number is after adjusting for park effects (which are imperfect estimates, albeit pretty good ones). Also, the +31 assumes Cano faced league-average pitching. That, again, is an estimate, but, again, it's a pretty strong one.

For defense, comparatively, the UZR of "-4" is a very, very, weak estimate. It carries an implicit assumption that Cano's "relative difficulty of balls in play" was zero. That's much less reliable than the estimate that his "relative difficulty of pitchers faced" was zero. If you wanted, you could do the math, and show how much weaker the one estimate is than the other; the difference is huge.

But, here's a thought experiment to make it clear. Suppose Cano faces the worst pitcher in the league, and hits a home run. In that case, he's at worst 1.3 runs above average for that plate appearance, instead of our estimate of 1.4. It's a real difference in how we evaluate his performance, but a small one.

On the other hand, suppose Cano faces a grounder in a 50% zone, but one of the easy ones, that almost any fielder would get to. Then, he's maybe +0.01 hits above average, but we're estimating +0.5. That is a HUGE difference. 

It's also completely at odds with our observation of what happens on the field. After an easy ground ball, even the most casual fan would say he observed Cano saving his team 0 runs over what another player would do. But we write it down as +0.4 runs, which is ... well, it's so big, you have to call it *wrong*. We are not accurately recording what happened on the field.

So, if you take "what happened on the field" in broad, intuitive terms, the home run matches: "he did a good thing on the field and created over a run" both to the observer and the statistic. But for the ground ball, the statistic lies. It says Cano "did a good thing on the field and saved almost half a run," but the observer says Cano "made a routine play." 

The batting statistics match what a human would say happened. The fielding stats do not.

------

How much random error is in those fielding statistics? When UZR gives an SD of 29 runs, how much of that is luck, and how much is talent? If we knew, we could at least regress to the mean. But we don't. 

That's because we don't know the idealized actual SD of observed performance, adjusted for the difficulty of the balls in play. It must be somewhere between 47 runs (the SD of observed performance without adjusting for difficulty), and 30 runs (the SD of talent). But where in between?
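If we did know the split, the regression itself would be trivial -- shrink by the ratio of talent variance to observed variance. Here's a sketch using the post's two bounding cases:

```python
def regress_to_mean(observed_runs, sd_talent, sd_observed):
    """Shrink an observed fielding figure by the share of variance that is talent."""
    return observed_runs * (sd_talent ** 2) / (sd_observed ** 2)

# A team that "saved" 40 runs by UZR-style accounting:
print(regress_to_mean(40, 30, 47))   # about 16 runs if no difficulty is adjusted away
print(regress_to_mean(40, 30, 30))   # the full 40 runs if the adjustment were perfect
```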

In addition: how sure are we that the estimates are even unbiased, in the sense that they're independently just as likely to be too high as too low? If they're truly unbiased, that makes them much easier to live with -- at the very least, you know they'll get more accurate as you average over multiple seasons. But if they inappropriately adjust for park effects, or pitcher talent, you might find some teams being consistently overestimated or underestimated. And that could really screw up your evaluations, especially if you're using those fielding estimates to rejig pitching numbers. 

-------

For now, the estimates I like best are the ones from Tango's "Fan Scouting Report" (FSR). As I understand it, those are actually estimates of talent, rather than estimates of what happened on the field. 

Team FSR has an SD of 23 runs. That's very reasonable. It's even more conservative than it looks. That 23 includes all the "other than range" stuff -- throwing arm, double plays, and so on. So the range portion of FSR is probably a bit lower than 23.

We know the true SD of talent is closer to 30, but there's no way for subjective judgments to be that precise. For one thing, the humans that respond to Tango's survey aren't perfect evaluators of what they see on the field. Second, even if they *were* perfect, a portion of what they're observing is random luck anyway. You have to temper your conclusions for the amount of noise that must be there. 

It might be a little bit apples-to-oranges to compare FSR to the other estimates, because FSR has much more information to work with. The survey respondents don't just use the ball-in-play stats for a single year -- they consider the individual players' entire careers, ages and trajectories; the opinions of their peers and the press; their personal understanding of how fielding works; and anything else they deem relevant.

But, that's OK. If your goal is to try to estimate the influence of team fielding, you might as well just use the best estimate you've got. 

For my part, I think FSR is the one I trust the most. When it comes to evaluating fielding, I think sabermetrics is still way behind the best subjective evaluations.








Friday, June 12, 2015

New issue of "By the Numbers"

The May, 2015 issue of "By the Numbers" is now available.  You can get it from the SABR website (.pdf), or from my website (.pdf).  Back issues can be found here or here.





Thursday, May 28, 2015

Pitchers influence BAbip more than the fielders behind them

It's generally believed that when pitchers' teams vary in their success rate in turning batted balls into outs, the fielders should get the credit or blame. That's because of the conventional wisdom that pitchers have little control over balls in play.

I ran some numbers, and ... well, I think that's not right. I think individual pitchers actually have as much influence on batting average on balls in play (BAbip) as the defense behind them, and maybe even a bit more.

------

UPDATE: turns out all the work I did is just confirming a result from 2003, in a document called "Solving DIPS" (.pdf).  It's by Erik Allen, Arvin Hsu, and Tom Tango. (I had read it, too, several years ago, and promptly forgot about it.)

It's striking how close their numbers are to these, even though I'm calculating things in a different way than they did. That suggests that we're all measuring the same thing with the same accuracy.

One advantage of their analysis over mine is they have good park effect numbers.  See the first comment in this post for Tango's links to "batting average on balls in play" park effect data.

------

For the first step, I'll run the usual "Tango method" to divide BAbip into talent and luck.

For all team-seasons from 2001 to 2011, I figured the SD of team BAbip, adjusted for the league average. That SD turned out to be .01032, which I'll refer to as "10.3 points", as in "points of batting average."  

The average SD of binomial luck for those seasons was 7.1 points. Since

SD(observed)^2 = SD(luck)^2 + SD(talent)^2

We can calculate that SD(talent) = 7.5 points.
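In code, the whole calculation is just a couple of lines (the "~4,000 balls in play" used for the luck line is a round-number assumption on my part):

```python
import math

# Binomial luck for a team-season: roughly sqrt(p * (1-p) / n), in points of BAbip.
p_babip, balls_in_play = 0.30, 4_000
print(round(1000 * math.sqrt(p_babip * (1 - p_babip) / balls_in_play), 1))   # about 7.2

# observed^2 = luck^2 + talent^2, so talent = sqrt(observed^2 - luck^2):
sd_observed, sd_luck = 10.32, 7.09
print(round(math.sqrt(sd_observed ** 2 - sd_luck ** 2), 1))                  # about 7.5
```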

"Talent," here, doesn't yet differentiate between pitcher and fielder talent. Actually, it's a conglomeration of everything other than luck -- fielders, pitchers, slight randomness of opposition batters, day/night effects, and park effects. (In this context, we're saying that Oakland's huge foul territory has the "talent" of reducing BAbip by producing foul pop-ups.)

So:

7.1 = SD(luck) 
7.5 = SD(talent) 

For a team-season from 2001 to 2011, talent was more important than luck, but not by much. 

I did the same calculation for other sets of seasons. Here's the summary:


            Obsrvd  Luck  Talent
--------------------------------
1960-1968    11.41  6.95   9.05
1969-1976    12.24  6.86  10.14
1977-1991    10.95  6.94   8.46
1992-2000    11.42  7.22   8.85
2001-2011    10.32  7.09   7.50
-------------------------------
"Average"    11.00  7.00   8.50

I've arbitrarily decided to "average" the eras out to round numbers:  7 points for luck, and 8.5 points for talent. Feel free to use actual averages if you like. 

It's interesting how close that breakdown is to the (rounded) one for team W-L records:

          Observed  Luck  Talent
--------------------------------
BABIP        11.00  7.00   8.50
Team Wins    11.00  6.50   9.00
--------------------------------

That's just coincidence, but still interesting and intuitively helpful.

-------

That works for separating BAbip into skill and luck, but we still need to break down the skill into pitching and fielding.

I found every pitcher-season from 1981 to 2011 where the pitcher faced at least 400 batters. I compared his BAbip allowed to that of the rest of his team. The comparison to teammates effectively controls for defense, since, presumably, the defense is the same no matter who's on the mound. 

Then, I took the player/rest-of-team difference, and calculated the Z-score: if the difference were all random, how many SDs of luck would it be? 

If BAbip was all luck, the SD of the Z-scores would be exactly 1.0000. It wasn't, of course. It was actually 1.0834. 

Using the "observed squared = talent squared plus luck squared", we can calculate that SD(talent) is 0.417 times as big as SD(luck). For the full dataset, the (geometric) average SD(luck) was 21.75 points. So, SD(talent) must be 0.417 times 21.75, which is 9.07 points.

We're not quite done. The 9.07 isn't an estimate of a single pitcher's talent SD; it's the estimate of the difference between that pitcher and his teammates. There's randomness in the teammates, too, which we have to remove.

I arbitrarily chose to assume the pitcher has 8 times the luck variance of the teammates (he probably pitched more than 1/8 of the innings, but there are more than 8 other pitchers to dilute the SD; I just figured maybe the two forces balance out). That would mean 8/9 of the total variance belongs to the individual pitcher, or the square root of 8/9 of the SD. That reduces the 9.07 points to 8.55 points.

8.55 = SD(single pitcher talent)
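Here's that chain of arithmetic in one place, as a sketch (the numbers are the ones quoted above):

```python
import math

sd_z    = 1.0834    # SD of pitcher-vs-teammates Z-scores, in units of luck SDs
sd_luck = 21.75     # average binomial luck SD for a 400+ batter pitcher, in points

ratio         = math.sqrt(sd_z ** 2 - 1)           # talent SD as a multiple of luck SD
sd_difference = ratio * sd_luck                    # talent SD of the pitcher-minus-teammates gap
sd_pitcher    = sd_difference * math.sqrt(8 / 9)   # strip out the teammates' share

print(round(ratio, 3), round(sd_difference, 2), round(sd_pitcher, 2))   # 0.417, 9.07, 8.55
```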

That's for individual pitchers. The SD for the talent of a *pitching staff* would be lower, of course, since the individual pitchers would even each other out. If there were nine pitchers on the team, each with an equal number of balls in play, we'd just divide that by the square root of 9, which would give 2.85. I'll drop that to 2.5, because real life is probably a bit more dilute than that.

So for a single team-season, we have

8.5 = SD(overall talent) 
-------------------------------------
2.5 = SD(pitching staff talent) 
8.1 = SD(fielding + all other talent)

------

What else is in that 8.1 other than fielding? Well, there's park effects. The only effect I have good data for, right now (I was too lazy to look hard), is foul outs. I searched for those because of all the times I've read about the huge foul territory in Oakland, and how big an effect it has.

Google found me a FanGraphs study by Eno Sarris, showing huge differences in foul outs among parks. The difference between top and bottom is more than double -- 398 outs in Oakland over two years, compared to only 139 in Colorado. 

The team SD from Sarris's chart was about 24 outs per year. Only half of those go to the home pitchers' BAbip, so that's 12 per year. Just to be conservative, I'll reduce that to 10.

Ten extra outs on a team-season's worth of BIP is around 2.5 points.

So: if 8.1 is the remaining unexplained talent SD, we can break it down as 2.5 points of foul territory, and 7.7 points of everything else (including fielding).

Our breakdown is now:

11.0 = SD(observed) 
--------------------------
 7.1 = SD(luck) 
 2.5 = SD(pitching staff)
 2.5 = SD(park foul outs)
 7.7 = SD(fielders + unexplained)

We can combine the first three lines of the breakdown to get this:

11.0 = SD(observed) 
--------------------------
 7.9 = SD(luck/pitchers/park) 
 7.7 = SD(fielders/unexplained)

Fielding and non-fielding are almost exactly equal. Which is why I think you have to regress BAbip around halfway to the mean to get an unbiased estimate for the contribution of fielding.

UPDATE: as mentioned, Tango has better park effect data, here.

------

Now, remember when I said that pitchers influence BAbip more than the fielders behind them? That holds not for a team, but for an individual pitcher:

8.5 = SD(individual pitcher)
7.7 = SD(fielders + unexplained)

The only reason fielding is more important than pitching for a *team*, is that the multiple pitchers on a staff tend to cancel each other out, reducing the 8.5 SD down to 2.5.

-------

Well, those last three charts are the main conclusions of this study. The rest of this post is just confirming the results from a couple of different angles.

-------

Let's try this, to start. Earlier, when we found that SD(pitchers) = 8.5, we did it by comparing a pitcher's BAbip to that of his teammates. What if we compare his BAbip to the rest of the pitchers in the league, the ones NOT on his team?

In that case, we should get a much higher SD(observed), since we're adding the effects of different teams' fielders.

We do. When I convert the pitchers to Z-scores, I get an SD of 1.149. That means SD(talent) is 0.57 times as big as SD(luck). With SD(luck) calculated to be about 20.54 points, based on the average number of BIPs in the two samples ... that makes SD(talent) equal to 11.6 points.

In the other study, we found SD(pitcher) was 8.5 points. Subtracting the square of 8.5 from the square of 11.6, as usual, gives

11.6 = SD(pitcher+fielders+park)
--------------------------------
 8.5 = SD(pitcher)
 7.9 = SD(fielding+park)

So, SD(fielding+park) works out to 7.9 by this method, 8.1 by the other method. Pretty good confirmation.

-------

Let's try another. This time, we'll look at pitchers' careers, rather than single seasons. 

For every player who pitched at least 4,000 outs (1333.1 innings) between 1980 and 2011, I looked at his career BAbip, compared to his teammates' weighted BAbip in those same seasons. 

And, again, I calculated the Z-scores for number of luck SDs he was off. The SD of those Z-scores was 1.655. That means talent was 1.32 times as important as luck (since 1.32 squared plus 1 squared equals 1.655 squared).

The SD of luck, averaged for all pitchers in the study, was 6.06 points. So SD(talent) was 1.32 times 6.06, or 8.0 points.

10.0 = SD(pitching+luck)
------------------------
 6.1 = SD(luck)
 8.0 = SD(pitching)

The 8.0 is pretty close to the 8.5 we got earlier. And, remember, we didn't include all pitchers in this study, just those with long careers. That probably accounts for some of the difference.

Here's the same thing, but for 1960-1979:

 9.3 = SD(pitching+luck)
------------------------
 6.0 = SD(luck)
 7.2 = SD(pitching)

It looks like variation in pitcher BAbip skill was lower in the olden times than it is now. Or, it's just random variation.

--------

I did the career study again, but compared each pitcher to OTHER teams' pitchers. Just like when we did this for single seasons, the SD should be higher, because now we're not controlling for differences in fielding talent. 

And, indeed, it jumps from 8.0 to 8.8. If we keep our estimate that 8.0 is pitching, the remainder must be fielding. Doing the breakdown:

10.5 = SD(pitching+fielding+luck)
---------------------------------
 5.8 = SD(luck)
 8.0 = SD(pitching)
 3.6 = SD(fielding)

That seems to work out. Fielding is smaller for a career than a season, because the quality of the defense behind the pitcher tends to even out over a career. I was surprised it was even that large, but, then, it does include park effects (and those even out less than fielders do). 

For 1960-1979:

10.2 = SD(pitching+fielding+luck)
---------------------------------
 5.7 = SD(luck)
 7.2 = SD(pitching)
 4.4 = SD(fielding)

Pretty much along the same lines.

------

Unless I've screwed up somewhere, I think we've got these as our best estimates for BAbip variation in talent:

8.5 = SD(individual pitcher BAbip talent)
2.5 = SD(team pitching staff BAbip talent)
7.7 = SD(team fielding staff BAbip talent)
2.5 = SD(park foul territory BAbip talent)

And, for a single team-season,

7.1 = SD(team season BAbip luck)

For a single team-season, it appears that luck, pitching, and park effects, combined, are about as big an influence on BAbip as fielding skill.  


