Friday, June 23, 2017

Juiced baseballs, part II

Last post, I showed how MGL found the ball-to-ball variation (SD) of MLB baseballs to be about 7 feet of distance on a typical fly ball. I wondered whether that was truly the case, or whether some of it wasn't real, just imprecision due to measurement error.

After some Twitter conversations that led me to other sources, I'm leaning to the conclusion that the variance is real.

------

Two of the three measurements in MGL's study (co-authored with Ben Lindbergh) were the circumference of the baseball and its average seam height. For both of those factors, the higher the measure, the more air resistance, and therefore the shorter the distance travelled.

It occurred to me -- why not measure distance directly, if that's what you're interested in? MGL told me, on Twitter, that that's been done. I found one study via a Google search (a study that Kevin later linked to in a comment).

That study took two boxes of one dozen MLB balls each, fired the balls from a cannon one by one, and observed how far each travelled. Crucially, the authors adjusted that distance for the initial speed and angle, because the cannon itself produces variations in initial conditions. So, what remains is mostly about the ball.

For one of the two boxes, the balls varied (SD) by 8 feet. For the second box, the SD was only 3 feet.

It's still possible that some of that variation is due to initial conditions that weren't controlled for, like small fluctuations in temperature, or air movement within the flight path, or whatever. Fortunately, the authors repeated the procedure, but for a single ball fired multiple times. 

The SD for the single ball was 3 feet.

Using the usual method -- independent sources of variation add in quadrature -- we know

SD(different balls)^2 = SD(single ball)^2 + SD(ball-to-ball variation)^2

That means for the first box, we estimate that the balls vary by 7 feet. For the second box, it's 0 feet. That's a big difference. Fortunately again, the authors repeated the procedure for different types of balls.

NCAA balls have higher seams and therefore less carry. The study found an overall SD of 11 feet, and single ball variation of 2 feet. That means different balls vary by an expected 10.8 feet, which I'll round to 11. 

For minor league balls, the study found an SD of 8 feet overall, but didn't test single balls. Taking 3 feet as a representative estimate for single-ball variation, we get that MiLB balls vary by 7 feet. (8 squared minus 3 squared equals 7 squared, roughly.)
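Here's all of that arithmetic in one place, as a quick Python sketch. (The 3-foot single-ball figure on the MiLB line is the assumption I just described, not something the study measured.)

    import math

    def ball_to_ball_sd(overall_sd, single_ball_sd):
        # Independent sources of variation add in quadrature, so subtract
        # the single-ball (repeat-firing) variance from the overall variance
        # to isolate the ball-to-ball component.
        return math.sqrt(overall_sd ** 2 - single_ball_sd ** 2)

    print(ball_to_ball_sd(8, 3))    # MLB, first box:   ~7.4 feet
    print(ball_to_ball_sd(3, 3))    # MLB, second box:   0.0 feet
    print(ball_to_ball_sd(11, 2))   # NCAA:             ~10.8 feet
    print(ball_to_ball_sd(8, 3))    # MiLB (assumed 3-foot single-ball noise): ~7.4 feet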

So we have:

-- MLB  balls vary  0 feet in air resistance (second box)
-- MLB  balls vary  7 feet in air resistance (first box)
-- MiLB balls vary  7 feet in air resistance
-- NCAA balls vary 11 feet in air resistance

In that light, the 7 feet found in MGL's study doesn't seem out of line. Actually, that 7 feet is a bit of an overestimate. It includes variation in COR (bounciness), which doesn't factor into air resistance, as far as I can tell. Limiting only to air resistance, MGL's study found an SD of only 6 feet.

-----

One thing I noticed in the MGL data is that even for balls within the same era, the COR "bounciness" measure correlates fairly strongly with both circumference (-.46 overall) and seam height (-.35 overall). (For the 10 balls after the 2016 All-Star break, it's -.36 and -.56, respectively.)
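For anyone who wants to verify those numbers against the posted spreadsheet, it's just two correlations per group. A minimal sketch, assuming the data is saved as a CSV; the file name and column names are my own placeholders, not the spreadsheet's actual headers:

    import pandas as pd

    # Placeholder file/column names -- adjust to match the real spreadsheet.
    df = pd.read_csv("mgl_ball_tests.csv")

    # Overall correlations, all 36 balls:
    print(df["cor"].corr(df["circumference"]))   # reported as about -.46
    print(df["cor"].corr(df["seam_height"]))     # reported as about -.35

    # Just the 10 balls after the 2016 All-Star break:
    recent = df[df["era"] == "post_2016_asb"]
    print(recent["cor"].corr(recent["circumference"]))  # about -.36
    print(recent["cor"].corr(recent["seam_height"]))    # about -.56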

I don't know if those measures are related on some kind of physics basis, or if it's just coincidence that they varied together that way. 

-----

One thing I wonder: are balls within the same batch (whether the definition of "batch" is a box, a case, or a day's production) more uniform than balls from different batches? I haven't found a study that tells us that. From MGL's data, and treating day of use as a "batch," my eyeballs say batches are slightly more uniform than expected, but not much. My eyeballs could be wrong.
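For what it's worth, here's a less eyeball-dependent version of that check: compare the average within-day SD to the overall SD. If batches really are more uniform, the within-day number should come in noticeably lower. (Same placeholder file and column names as before.)

    import pandas as pd

    df = pd.read_csv("mgl_ball_tests.csv")  # placeholder name

    # Treat day of use as the "batch." Days with only one ball produce a
    # NaN standard deviation and are skipped by the mean() by default.
    within_day_sd = df.groupby("date_used")["cor"].std().mean()
    overall_sd = df["cor"].std()
    print(within_day_sd, overall_sd)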

If batches *are* more uniform, teams could get valuable information by grabbing a few balls from today's batch, and getting them tested in advance. They'd be more likely to know, then, if they were dealing with livelier or deader balls that night.

Even if balls within a batch turn out to be no more uniform than balls from different batches, the testing would still be worth doing. I don't know if any teams actually did this, but if any of them were testing balls in 2016, they'd have had advance knowledge that the balls were getting livelier.

I have no idea what a team would do with that information, that home runs were about to jump significantly over last year ... but you'd think it would be valuable in some way.

-----

MGL tweeted, and I agreed, that it doesn't take much variation in a ball to make a huge difference to home run rates. He also thinks that any change in liveliness is likely to have been inadvertent on the part of the manufacturer, since it takes so little to make balls fly farther. I agree with that too.

But, why are MLB standards so lenient? As Lindbergh quotes from an earlier report,


" ... two baseballs could meet MLB specifications for construction but one ball could be theoretically hit 49.1 feet further."

Why doesn't MLB just put tighter control on the baseballs it uses? If the manufacturers can't make baseballs that precise, just put out a net at a standard distance, fire all the balls, and discard (or save for batting practice) all the balls that land outside the net. (That can't be so hard, can it? It can't be that the cannon would damage the balls too much, since MLB reuses balls that have been hit for line drives, which is a much more violent impact.)

You could even assign the balls to different liveliness groups, and require that different batches be stored at different humidor settings to equalize their bounciness.

Even if that's not practical, couldn't MLB, at least, test the balls regularly, so as to notice the variation before it shows up so obviously in the HR totals?

-----

Finally, one last thought I had. If a ball is hit for a deep fly ball, doesn't that suggest that, at least as a matter of probability, it's juicier than average? If I were the pitching team, I might not want to pitch that ball again. It might be an expected difference of only a foot or two, but every little bit helps.






Monday, June 19, 2017

Are some of today's baseballs twice as lively as others?

Over at The Ringer, Ben Lindbergh and Mitchel Lichtman (MGL) claim to have evidence of a juiced ball in MLB.

They got the evidence in the most direct way possible -- by obtaining actual balls, and having them tested. MLB sells some of their game-used balls directly to the public, with certificates of authenticity that include the date and play in which the ball was used. MGL bought 36 of those balls, and sent them to a lab for testing.

It never once occurred to me that you could do that ... so simple an idea, and so ingenious! Kudos to MGL. I wonder why mainstream sports journalists didn't think of it. It would be trivial for Sports Illustrated or ESPN to arrange for that.

Anyway ... it turned out that the 13 more recent balls -- the ones used in 2016 -- were indeed "juicier" than the 10 older balls used before the 2015 All-Star break. Differences in COR (Coefficient of Restitution, a measure of "bounciness"), seam height, and circumference were all in the expected "juicy" direction in favor of the newer baseballs. (The difference was statistically significant at 2.6 SD.)

The article says,


"While none of these attributes in isolation could explain the increase in home runs that we saw in the summer of 2015, in combination, they can."

If I read that right, it means the magnitude of the difference in the balls matches the magnitude of the increase in home runs. The sum of the three differences translated to the equivalent of 7.1 feet in fly ball distance.

The authors posted the results of the lab tests, for each of the 36 balls in the study; you can find their spreadsheet here.

-------

One thing I noticed: there sure is a lot of variation between balls, even within the same era, even used on the same day. Consider, for instance, the balls marked "MSCC0041" and "MSCC0043," both used on June 15, 2016.

The "43" ball had a COR of .497, compared to .486 for the "41" ball. That's a difference of 8 feet (I extrapolated from the chart in the article).

The "43" ball had a seam height of .032 inches, versus .046 for the other ball. That's a difference of *17 feet*.

The "43" ball had a circumference of 9.06 inches, compared to 9.08. That's another 0.5 feet.

Add those up, and you get that one ball, used the same day as another, was twenty-five feet livelier.

If 7.1 feet (what MGL observed between seasons) is worth, say, 30 percent more home runs, then the 25-foot difference means the "43" ball is worth DOUBLE the home runs of the "41" ball. And that's for two balls that look identical, feel identical, and were used in MLB game play on exactly the same day.
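Here's that back-of-envelope arithmetic spelled out, in case anyone wants to check it. The feet-per-unit conversions are the ones I extrapolated from the article's chart, and the 30-percent-per-7.1-feet rate is just the illustrative assumption above:

    # Distance differences between the "43" and "41" balls, in feet:
    cor_feet = 8.0     # COR of .497 vs .486
    seam_feet = 17.0   # seam height of .032 vs .046 inches
    circ_feet = 0.5    # circumference of 9.06 vs 9.08 inches

    total_feet = cor_feet + seam_feet + circ_feet   # 25.5 feet

    # Scale linearly: if 7.1 feet is worth ~30 percent more home runs,
    # 25.5 feet is worth ~108 percent more -- roughly double.
    print(total_feet, total_feet / 7.1 * 0.30)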

-----

That 25-foot difference is bigger than typical, because I chose a relative outlier for the example. But the average difference is still pretty significant. Even within eras, the SD of difference between balls (adding up the three factors) is 7 or 8 feet.

Which means, if you take two random balls used on the same day in MLB, on average, one of them is *40 percent likelier* to be hit for a home run.
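As a rough sanity check on that figure, you can simulate pairs of balls -- assuming liveliness is roughly normal with an SD of 7.5 feet, and that home run rates scale linearly at the same assumed 30 percent per 7.1 feet. It lands in the same general ballpark:

    import random

    random.seed(0)
    SD_FEET = 7.5   # within-era ball-to-ball SD, from above

    # Average absolute difference in carry between two random balls:
    gaps = [abs(random.gauss(0, SD_FEET) - random.gauss(0, SD_FEET))
            for _ in range(100_000)]
    avg_gap = sum(gaps) / len(gaps)   # ~8.5 feet

    # Convert to a home run advantage for the livelier ball:
    print(avg_gap, avg_gap / 7.1 * 0.30)   # roughly 35-40 percent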

Of course, you don't know which one. If it were possible to somehow figure it out in real time during a game, what would that mean for strategy?


-----

UPDATE: thinking further ... could it just be that the lab tests aren't that precise, and the observed differences between same-era balls are mostly random error? 

That would explain the unintuitive result that balls vary so hugely, and it would still preserve the observation that the eras are different.















