Do players turn "clutch" when chasing a personal goal?
The better the level of performance, the harder it is to achieve. There should be more .270 hitters than .275 hitters, more .275 hitters than .285 hitters, and so on. "[Brooks Robinson] had a miserable year in 1963, and went into his last at bat of the season hitting exactly .250—147 for 588. If he made an out, he wound up the season hitting under .250—but he got a hit, and wound up at .251. He said it was the only hit he got all season in a pressure situation. ... "[P]layers WANT to wind up the season hitting .250, rather than in the .240s. They tend to make it happen."
But, surprisingly, there's an exception: significantly more players hit .300 to .304 in a season than hit .296 to .299.
That finding comes from Bill James' study, "The Targeting Phenomenon" (subscription required, but the essay is also on page 67 of "The Bill James Gold Mine.")
For pitcher wins, Bill found a similar exception that's even more striking. More pitchers win 0 games than 1. More pitchers win 1 game than 2. More pitchers win 2 games than 3. And so on, all the way up to 30 wins. But there's one exception – 20. Significantly more pitchers finish with 20 wins than with 19.
Why? Because, Bill argues, players care about hitting their "targets."
The implication is that there's a kind of clutch effect happening here, where the player somehow gets better when the target is near. But if that's true, wouldn't that point to baseball players as selfish? Studies have shown very little evidence for clutch hitting when the *game* is on the line. If players care more about hitting .300 than winning the game, that doesn't say much for their priorities.
(Although, in fairness, it should be acknowledged that the opposition is probably trying harder to stop Brooks Robinson from driving in the game-winning run than it is to keep him from getting to .250. For the record, Robinson's final 1963 hit drove in the third run in the ninth inning of a 7-3 loss to the Tigers.)
The study also finds that while this kind of targeting happens for batting average, RBIs, wins, and (pitcher) strikeouts, there's no evidence for targeting in SLG, OBP, OPS, saves, or runs scored. For ERA, there's some evidence of targeting, but not enough to say for sure.
Also, Bill finds that targeting seems to have started around 1940. He argues that's the same time as a jump in fan interest in players' statistical accomplishments.
These are very interesting findings, and I wouldn't have expected as much targeting as seems to have actually occurred. But I'm a bit skeptical about clutchness, and whether players really can boost their performance in target-near situations. I wondered if, instead of clutch performance, it might be something else. Maybe, if a player is close to his goal, he is given additional playing time in support of reaching the target.
That is, if a pitcher has 19 wins late in the season, perhaps the manager will squeeze in an extra start for him. Or if a player is hitting .298, maybe they'll let him play every day until he gets to .300, instead of resting him in favor of the September callup. If and when he reaches .300, then they could sit him (as, I think I remember reading, Bobby Mattick did for Alvis Woods in 1980).
To test the "extra start theory," I looked at pitchers since 1940, grouping them by number of wins. I then looked at their winning percentage, number of starts, and the number of seasons in the group:
Wins   W Pct   Starts   Seasons
 16    .606     32.3      311
 17    .613     33.0      221
 18    .648     33.3      185
 19    .650     34.4      123
 20    .667     34.9      144
 21    .673     34.7       92
 22    .691     35.9       54
 23    .705     35.9       34
 24    .707     38.5       23
So, reading one line of the chart, 20-win pitchers had a .667 winning percentage and an average of 34.9 starts that year. There were 144 seasons in the group.
Looking at the numbers, we do see a bit of an anomaly. More wins normally means more starts, except that pitchers with 20 wins had more starts than pitchers with 21 wins. And, there's a big jump between 18 and 19, more than you'd expect based on the other gaps in that win range.
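One way to see the anomaly is to compute the gap in average starts between consecutive win totals, straight from the table above:

```python
# Average starts by win total, copied from the table above (seasons since 1940).
avg_starts = {16: 32.3, 17: 33.0, 18: 33.3, 19: 34.4, 20: 34.9,
              21: 34.7, 22: 35.9, 23: 35.9, 24: 38.5}

wins = sorted(avg_starts)
for lo, hi in zip(wins, wins[1:]):
    print(f"{lo} -> {hi} wins: {avg_starts[hi] - avg_starts[lo]:+.1f} starts")
# The 18 -> 19 jump (+1.1) and the 20 -> 21 dip (-0.2) are the odd ones out.
```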
Suppose we wanted to smooth out the "number of starts" column. We might adjust it like this:

Wins   Starts
 18     33.3
 19     33.8  (was 34.4)
 20     34.3  (was 34.9)
 21     34.7

Now we have a smooth increasing trend. To get it, we had to remove 0.6 starts from each of the 19- and 20-win groups.
One possible interpretation: when a pitcher has 19 wins near the end of the season, they give him 1.2 extra starts. Half the time, that gives him an extra win, and he goes to 20 (which now shows 0.6 extra starts). The other half, he fails to get the win, and stays at 19 (which also shows 0.6 extra starts).
Another way to look at this is in the "win percentage" column: pitchers with 19 wins have almost the same winning percentage as the 18-win guys, which means more losses. And the 20-win guys, at .667, are only .006 away from the 21-win pitchers, which suggests more wins. That's exactly what happens if you take a bunch of 19-win guys, give them an extra start, and reclassify them.
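The reclassification story can be checked with toy arithmetic. The 19-10 baseline record below is my own round-number assumption, chosen to sit near the .648-.673 trend in the table:

```python
# Toy arithmetic for the reclassification story. Assume a pool of pitchers who
# would "normally" finish 19-10 (about .655, in line with the table's trend),
# then each gets one extra decision, which half the pool wins.
wins, losses = 19, 10

stayed_at_19 = (wins, losses + 1)   # lost the extra decision: 19-11
moved_to_20 = (wins + 1, losses)    # won the extra decision:  20-10

for w, l in (stayed_at_19, moved_to_20):
    print(f"{w}-{l}, pct {w / (w + l):.3f}")
# The 19-win group's percentage sags toward the 18-win level, while the
# 20-win group lands near .667 -- just the pattern in the table.
```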
So what do you think of this as an explanation? Does the *average* 19-win late-September pitcher really get 1.2 extra starts? That seems too high to me, although I don't really know. And, some of the effect might not be extra starts, but leaving the pitcher in the game longer when he's losing or tied, long enough for his offense to bail him out and give him the win.
Now look at the last column, the number of seasons. If we were to smooth out that column, we might do it this way:

Wins   Seasons
 18      185
 19      152  (was 123)
 20      115  (was 144)
 21       92

The difference is 29 pitchers in the 19-win row, and 29 pitchers in the 20-win row. Assume those 29 pitchers moved from 19 to 20 because of the extra start. If you figure that these pitchers generally win half their starts, that means about 58 pitchers were given that one extra shot.
58 pitchers in the 68 baseball seasons since 1940 means a little less than one pitcher a year getting that extra start. There are normally only about two 19-win pitchers a year, so about half of them would have to get the special treatment.
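Spelled out, the back-of-envelope arithmetic from the last two paragraphs looks like this (the 50% win rate per start is the same round-number assumption as above):

```python
# Back-of-envelope version of the paragraphs above.
moved = 29            # pitchers smoothed from the 19-win row into the 20-win row
win_prob = 0.5        # assume a pitcher wins about half of his starts
got_extra_start = moved / win_prob    # pitchers who needed the extra start
seasons = 68          # baseball seasons from 1940 through 2007

print(got_extra_start)            # 58.0
print(got_extra_start / seasons)  # ~0.85 -- a little less than one per year
```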
Again, that seems high. However, in support of this theory, the effect diminishes after 1980. In fact, there are now *fewer* pitchers winning 20 than 19.
There's still a bit of an effect, but not as much – in line with Bill's idea that, these days, managers are less likely to pitch an ace on short rest (or leave him in longer in a tie game) just to help him reach a personal goal.
There are probably other things that might be causing this, that I haven't thought of.
In any case, it wouldn't be too hard to figure out a decent answer: just head to Retrosheet and look at 19- and 20-game winners. See if their days of rest varied late in the season, which would mean the "extra start" theory is correct. Check whether they were left in the game longer than normal. And check whether they pitched better in late-season games, which would mean the "clutch" theory is correct.
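A sketch of what the days-of-rest part of that Retrosheet check might look like (the game-log format here is made up for illustration; real Retrosheet files would need parsing):

```python
from datetime import date

def rest_days(start_dates):
    """Days of rest between consecutive starts."""
    ds = sorted(start_dates)
    return [(later - earlier).days - 1 for earlier, later in zip(ds, ds[1:])]

# Toy season: a pitcher on a regular four days' rest who suddenly gets two
# late-September starts on two days' rest -- the "extra start" signature
# we'd look for in the game logs of 19- and 20-game winners.
starts = [date(1968, 9, 1), date(1968, 9, 6), date(1968, 9, 11),
          date(1968, 9, 14), date(1968, 9, 17)]
print(rest_days(starts))   # [4, 4, 2, 2]
```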
And you can do the same thing for hitting, for players around .300. Is it just a matter of opportunities, or is there some clutchness too? If the latter, that would be a very significant finding. It would suggest, perhaps, that
(a) clutch hitting does exist, and either
(b1) it only shows up for personal goals, or
(b2) it only shows up when the situation is not clutch for the other team.
Maybe I'll do this myself, if nobody else does ...