Monday, September 18, 2006

Study: "protection" and "clutch pitching" exist

Conventional wisdom says that a batter will see good pitches if he has the “protection” of a good on-deck hitter. With a superstar on deck, the pitcher is less likely to walk the current hitter, and more likely to give him a pitch he can drive. On the other hand, if the on-deck hitter is mediocre, the opposition should be more likely to walk the batter, or at least not give him anything good to hit.

Therefore, with Barry Bonds on deck, the current hitter should (a) have his walk rate reduced, and (b) have his batting average go up.

But in this academic study, J.C. Bradbury and Doug Drinen come up with a surprising result: with a good hitter due up, batting averages actually go down.

Bradbury and Drinen ran a regression on batting average against a whole bunch of other variables. There were a few variables for the quality of the batter and pitcher, the score, the number of outs, and so on. And there were at least 50 dummy variables, including nine variables for inning, thirty or so for park, and so on.
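
(To make that setup concrete, here's a rough sketch of what such a regression might look like. This is strictly my own illustration – the column names, the placeholder data file, and the linear-probability setup are made up, not the authors' actual specification.)

    # Sketch of a protection regression -- hypothetical plate-appearance data,
    # NOT Bradbury and Drinen's actual code or variable list.
    import pandas as pd
    import statsmodels.formula.api as smf

    pa = pd.read_csv("plate_appearances.csv")    # placeholder data source

    model = smf.ols(
        "hit ~ batter_avg + pitcher_avg_allowed + ondeck_ops + score_diff"
        " + C(outs) + C(inning) + C(park)",      # C() expands each factor into dummies
        data=pa,
    )
    result = model.fit()
    print(result.params["ondeck_ops"])           # effect of on-deck hitter quality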

What Bradbury and Drinen found is that, holding all those variables constant, a one-SD increase in the OPS of the on-deck batter was associated with a 1% drop (about .003) in the current hitter’s batting average. It also led to a 2.6% drop in walk rate, a 3.7% drop in extra base hit rate, and a 3% drop in home run rate. (From these rates, it looks like there may have been about a 5% drop in doubles and triples, but barely any drop in singles.)

Why does this happen? If there’s a drop in walks, shouldn’t we expect an increase in other offensive stats, since the batter is getting more good pitches to hit?

Bradbury and Drinen’s answer is that with Barry Bonds on deck, it becomes exceptionally important to keep the batter off base. Therefore, the pitcher puts extra “effort” into that particular at-bat, throwing his best stuff. He can’t do that every time, because he can only throw hard for so long, and he has to pace himself. But at this crucial point in the game, that’s the time that he needs to turn it on. So he does.

In effect, the authors are arguing that there is such a thing as “clutch pitching” (even if there’s no such thing as clutch hitting).

Could that be true? It does make sense to me.

First, clutch pitching is different from clutch hitting – it seems plausible that pitchers can throw harder or softer, can’t throw hard indefinitely, and would save their best pitching for the most important times. For hitters, though, how would you play more or less clutch if you wanted to? There’s no real physical limitation on the batter – unlike the pitcher, he can play at 100% every at-bat without wearing himself out.

Second, the magnitudes of the findings are about what I’d expect … just a little bit of an edge when it really matters. If the finding were very large – say, 25 points in batting average instead of three – I’d be skeptical, on the grounds that someone would have noticed how much better pitchers do in certain situations.

Having said that, it’s possible that the study wasn’t perfect, and found an effect where none existed because of the variables they chose and the structure of their study. And you could probably come up with certain nitpicks (for instance, the authors used separate variables for bases and outs – if there’s an interaction between them, it might appear as a small false protection effect). My gut feeling is that correcting for those kinds of objections wouldn’t change the results much.
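
(If someone wanted to check that bases/outs nitpick, it would be easy enough to add the interaction term and see whether the on-deck coefficient moves. Again, a hypothetical sketch with made-up column names, not the authors' actual model:)

    # Re-run the same kind of regression with a base-state x outs interaction.
    import pandas as pd
    import statsmodels.formula.api as smf

    pa = pd.read_csv("plate_appearances.csv")    # placeholder data source

    with_interaction = smf.ols(
        "hit ~ batter_avg + pitcher_avg_allowed + ondeck_ops"
        " + C(base_state) * C(outs) + C(inning) + C(park)",  # * = main effects + interaction
        data=pa,
    ).fit()
    print(with_interaction.params["ondeck_ops"])  # does the protection effect survive?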

But the most interesting thing to me is the idea that pitchers turn it on or off at various points during a game. It does make sense, but I’ve never seen it talked about. And how would you design a study to find it? There are so many different variables to consider, as Bradbury and Drinen noted, that it might not be easy. Maybe you’d have to start by looking at pitch selection and velocity. Does anyone know of any such study?
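
(If pitch-level data with velocities were available, a first pass might look something like this – purely hypothetical column names and pitch-type codes:)

    # Compare velocity and fastball share when a strong hitter is on deck.
    import pandas as pd

    pitches = pd.read_csv("pitches.csv")         # hypothetical pitch-level data

    pitches["strong_ondeck"] = (
        pitches["ondeck_ops"] > pitches["ondeck_ops"].quantile(0.9)
    )

    # Compare within pitcher, so we're not just comparing different pitchers.
    by_pitcher = pitches.groupby(["pitcher_id", "strong_ondeck"]).agg(
        avg_velo=("velocity", "mean"),
        fastball_share=("pitch_type", lambda s: (s == "FB").mean()),
    )
    print(by_pitcher.groupby(level="strong_ondeck").mean())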



5 Comments:

At Tuesday, September 19, 2006 3:23:00 PM, Anonymous Anonymous said...

I don't know if this is strictly related to pitcher effort levels, but I believe there is evidence that some do pitch differently according to the base/out situation. An extreme example would be Tom Glavine. His splits with runners on show significantly higher K and BB rates (even after removing IBB) and a lower HR rate compared to bases empty situations.
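
Something like this (made-up play-by-play column names, not real Retrosheet fields) is all it would take to pull that kind of split:

    # K / BB / HR rates with the bases empty vs. runners on, intentional walks removed.
    import pandas as pd

    pa = pd.read_csv("glavine_pa.csv")           # hypothetical PA-level data
    pa = pa[pa["event"] != "IBB"]                # drop intentional walks

    splits = pa.groupby("runners_on")["event"].value_counts(normalize=True)
    print(splits.unstack()[["K", "BB", "HR"]])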

I'm sure you could find others as well, but more interesting to me would be a study to determine whether such a change in approach is warranted.

 
At Tuesday, September 19, 2006 7:46:00 PM, Anonymous Anonymous said...

Their idea of the possibility of an effort effect is very clever. Of course, it's one of the truisms of the modern era that starters don't last as long because they have to bear down on every hitter, since almost any player is a power threat. If both are true, there would be less and less scope for a strong effort effect now, but the effect should be stronger in earlier seasons of baseball history.

I think the most direct approach would rely on studying pitch by pitch data itself rather than play by play outcomes. If pitchers are "turning it up a notch", you'd expect both fewer balls and more swinging strikes and/or more called strikes, without doing such complex regression analysis.
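
Something along these lines (hypothetical pitch-level columns again, nothing official) would be enough for a first look:

    # Called ball / called strike / swinging strike rates, split by on-deck quality.
    import pandas as pd

    pitches = pd.read_csv("pitches.csv")         # hypothetical pitch-level data
    strong = pitches["ondeck_ops"] > pitches["ondeck_ops"].median()

    rates = pitches.groupby(strong)["result"].value_counts(normalize=True).unstack()
    print(rates[["ball", "called_strike", "swinging_strike"]])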

There is one aspect of the study of outcomes which I don't see allowance for, since they study the impact on extra base hits separately. If a batter hits a stretchable single, he is most likely to risk stretching it if the following batter is weak and least likely if the following batter is strong. This happens after the ball is hit and doesn't have to do with the pitcher's strategy or the base/out situation per se, but it would be an externality caused by the strength of the following hitter.

 
At Wednesday, September 20, 2006 12:13:00 AM, Anonymous Anonymous said...

I don't think it's obvious that greater effort leads to fewer balls. Perhaps greater effort means pitching more carefully so that the pitch location tends to be near the edges of the strike zone. Maybe more pitches per PA would be the result.

I think it would be very difficult to detect different levels of pitching effort objectively, but we might be able to infer some insight by comparing starters vs. relievers, since the difference in ERA seems to be mostly attributable to differences in effort level.
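
One crude version of that comparison (hypothetical game-log columns, just to show the idea) would restrict it to pitchers who worked in both roles:

    # Compare ERA as a starter vs. as a reliever, for pitchers who did both.
    import pandas as pd

    logs = pd.read_csv("pitcher_game_logs.csv")  # hypothetical appearance-level data

    worked_both = logs.groupby("pitcher_id")["role"].transform("nunique") == 2
    swingmen = logs[worked_both]

    era = swingmen.groupby("role").agg(er=("er", "sum"), ip=("ip", "sum"))
    era["era"] = 9 * era["er"] / era["ip"]
    print(era["era"])                            # SP vs. RP ERA for the same pitchers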

 
At Wednesday, September 20, 2006 7:24:00 AM, Anonymous Anonymous said...

Jim is correct that it's not a foregone conclusion that the called ball rate will go down. On the other hand, my reading of Bradbury and Drinen's article is that they believe that the protection effect is real, but more than counterbalanced by the "effort" effect. They note that both theories predict a decrease in the walk rate. I suppose if you're getting more non-contact strikes, you can also give up more non-contact balls and still come out "ahead" in walk rate.
At any rate, either theory implies measurable changes in the direct pitch outcomes, and I think that using pitch by pitch data would be a more direct way to test the theories.

 
At Wednesday, September 20, 2006 10:15:00 AM, Blogger Tangotiger said...

In the batting order chapter, I show how a hitter does in each batting slot, relative to his overall average. It was a very quick study, but it sure looked like the pitcher was pitching worse the more good hitters he was going through, and pitching better the more bad hitters he was going through.
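
(For anyone who wants to take a crack at it, the basic calculation is simple – something like this, with made-up column names:)

    # Each hitter's performance by lineup slot, relative to his own overall average.
    import pandas as pd

    pa = pd.read_csv("plate_appearances.csv")    # hypothetical PA-level data

    overall = pa.groupby("batter_id")["on_base"].transform("mean")
    pa["vs_own_average"] = pa["on_base"] - overall

    print(pa.groupby("lineup_slot")["vs_own_average"].mean())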

Hopefully someone can expand that study greatly, and see if they can come up with something more definitive.

 
