Wednesday, April 15, 2020

Regressing Park Factors (Part III)

I previously calculated that to estimate the true park factor (BPF) for a particular season, you have to take the "standard" one and regress it to the mean by 38 percent.

That's the generic estimate, for all parks combined. If you take Coors Field out of the pool of parks, you have to regress even more.

I ran the same study as in my other post, but this time I left out all the Rockies. Now, instead of 38 percent, you have to regress 50 percent. (It was actually 49-point-something, but I'm calling it 50 percent for simplicity.)

In effect, the old 38 percent estimate comes from a combination of 

1. Coors Field, which needs to be regressed virtually zero, and
2. The other parks, which need to be regressed 50 percent.
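The regression itself is just a linear shrink toward the league mean of 100. Here's a minimal sketch in Python; the 50 percent amount is the non-Coors estimate from this post, and the raw BPF of 108 is a made-up example:

```python
# Shrink a raw single-season park factor toward the league mean.
# regress_amount = 0.50 is the non-Coors estimate from this post;
# the raw BPF value below is hypothetical.

def regress_bpf(raw_bpf, regress_amount=0.50, mean=100.0):
    """Move raw_bpf toward `mean` by the fraction `regress_amount`."""
    return mean + (raw_bpf - mean) * (1.0 - regress_amount)

print(regress_bpf(108.0))  # a raw 108 becomes 104 after 50% regression
```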

For the 50-percent estimate, the 95% confidence interval is (41, 58), which is very wide. But the theoretical method from the last post, which I also repeated without Colorado, gave 51 percent, right in line with the observed number.

--------

I tried this method for the Rockies only, and the point estimate turns out to be that you have to regress slightly *away* from the mean of 100. But with so few team-seasons, the confidence interval is so huge that I'd just take the park factors at face value and not regress them at all. 

The proper method would probably be to regress the Rockies' park factor to the Coors Field mean, which is about 113. You could probably crunch the numbers and figure out how much to regress. 
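The same shrinkage formula works with a park-specific mean; only the target changes. A sketch, where both the 30 percent regression amount and the raw Coors BPF of 120 are made-up illustrative values (the post only establishes the ~113 long-term mean):

```python
# Regress a raw Coors BPF toward the Coors-specific long-term mean (~113)
# instead of the league's 100. The regression amount here is hypothetical.

def regress_to_park_mean(raw_bpf, park_mean, regress_amount):
    """Shrink raw_bpf toward park_mean by the fraction regress_amount."""
    return park_mean + (raw_bpf - park_mean) * (1.0 - regress_amount)

# e.g. a raw Coors BPF of 120, regressed 30% toward 113 (made-up amount):
print(regress_to_park_mean(120.0, 113.0, 0.30))  # roughly 117.9
```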

--------

The overall non-Coors value is 50 percent, but it turns out that every decade is different. *Very* different:

1960s:   regress 15 percent
1970s:   regress 27 percent
1980s:   regress 80 percent
1990s:   regress 84 percent
2000s:   regress 28 percent
2010-16: regress 28 percent 

Why do the values jump around so much? One possibility is that it's random variation in how teams are matched to parks. The method expects the batters in hitters' parks to be, on average, as good as the batters in pitchers' parks, but if (for instance) the Red Sox had a bad team in the 80s, this method would make the park effect appear smaller.

As soon as I wrote that, I realized I could check it. Here are the correlations between BPF and team talent in terms of RS-RA (per 162 games) for team-seasons, by decade. I'll include the regression-to-the-mean amount to make it easier to compare:

             r    RTM
---------------------
1960s:    +0.14   15% 
1970s:    +0.06   27%
1980s:    -0.14   80%
1990s:    +0.03   84%
2000s:    +0.16   28%
2010s:    +0.23   28%
---------------------
overall:  +0.05   50%

It does seem to work out that the more positive the correlation between hitting and BPF, the more you have to regress. The two lowest correlations were the ones with the two highest levels of regression to the mean.

(The 1990s does seem a little out of whack, though. Maybe it has something to do with the fact that we're leaving out the Rockies, so the NL BPFs are deflated for 1993-99, but the RS-RA are inflated because the Rockies were mediocre that decade. With the Rockies included, the 1990s correlation would turn negative.)
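The correlation check above is just a Pearson r between each team-season's BPF and its run differential per 162 games. A minimal sketch, with made-up numbers standing in for the actual team-seasons:

```python
# Correlate team-season BPF with run differential (RS-RA per 162 games).
# The data below is illustrative only, not the values from the study.

import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

bpf = [104, 98, 101, 95, 110, 97]          # made-up park factors
rs_minus_ra = [40, -25, 10, -60, 55, -20]  # made-up run differentials

print(round(pearson_r(bpf, rs_minus_ra), 3))  # positive for this sample
```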

The "regress 50 percent to the mean" estimate seems to be associated with an overall correlation of +.05. If we want an estimate that assumes zero correlation, we should probably bump it up a bit -- maybe to 60 percent or something.

I'd have to think about whether I wanted to do that, though. My gut seems more comfortable with the actual observed value of 50 percent. I can't justify that.




6 Comments:

At Wednesday, April 15, 2020 6:10:00 PM, Anonymous Guy said...

Super interesting work, Phil. A hypothesis on the changing amount of regression needed:

I think new ballparks became much more homogeneous in design in the 1970s and 1980s, in terms of many factors that create distinct park factors: foul territory, hitter backgrounds, symmetry in the OF distances. So it makes sense that true differences shrank. Why the increase in real differences in recent years? I would guess that there are still real differences in HR factors, and those became more salient as HR totals surged. (The other big change in the game is obviously more strikeouts, but it seems less likely that parks differ a lot on K propensity.)

 
At Wednesday, April 15, 2020 6:35:00 PM, Blogger Phil Birnbaum said...

I like that hypothesis, about ballparks being so similar in the 70s and 80s.

Another possibility that I didn't mention is that teams, especially those in hitters' parks, might be tailoring their lineups to take advantage of their park's peculiarities. That would possibly account for the +.05 correlation between run differential and hitterparkiness.

Another interesting thing, while I'm typing: we *noticed* that park factors were overstated, because we embraced the idea of averaging three years' worth of data. I don't think we would have done that if we hadn't seen some weird, random, implausible results taking one-year BPFs as calculated.

Anyway, I like your hypothesis best. Sorry to say one sentence about yours and then ramble on for another 200 words.

 
At Friday, April 17, 2020 11:27:00 PM, Anonymous Anonymous said...

Phil - This series has been very enlightening.

Note that over at the seamheads.com Ballparks Database, for the three-year park factor calculations, we add a "fourth" year using the park's long-term historical park factor as a regression component. That is just based on empirical data, and not on any good theoretical basis or math. I need to think more about whether what you've found means we may need to modify that calculation (while trying to avoid any "custom" calculations for Coors, temporary parks, etc.)

 
At Friday, April 17, 2020 11:28:00 PM, Blogger KJOK said...

THANKS,
KJOK

 
At Friday, April 17, 2020 11:36:00 PM, Blogger Phil Birnbaum said...

Geez, not sure how I'd go about that. I wonder ...

OK, this just occurred to me. Suppose you calculate the SD of team road runs, unadjusted for park. Then you calculate the SD of team home runs (runs at home, that is), adjusted for park.

If SD(home) << SD(road), you've probably overadjusted. You couldn't do that for individual teams, but for a large sample, like a decade, you'd have 300 datapoints and that could help you tell if your park factors are generally too exaggerated.

You could leave out the outliers like Coors and Astrodome.

Does that sound like it would work? It's off the top of my head.
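That check could be sketched as a simple ratio of standard deviations; the run totals below are made-up, and "well below 1" as the overadjustment signal is just the comment's intuition, not a calibrated threshold:

```python
# Compare the SD of park-adjusted home runs-scored to the SD of
# unadjusted road runs-scored across many team-seasons. If the adjusted
# home SD is much smaller, the park factors are probably over-adjusting.
# All numbers below are illustrative.

import statistics

def overadjustment_ratio(home_runs_adjusted, road_runs):
    """Ratio of SD(adjusted home runs) to SD(unadjusted road runs)."""
    return statistics.stdev(home_runs_adjusted) / statistics.stdev(road_runs)

home_adj = [690, 710, 700, 705, 695, 702]  # made-up adjusted home totals
road = [650, 760, 700, 720, 640, 730]      # made-up road totals

print(round(overadjustment_ratio(home_adj, road), 2))  # well below 1.0 here
```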

 
At Friday, April 17, 2020 11:50:00 PM, Blogger Phil Birnbaum said...

Hmmmm, not sure if that would work.

Try this: find SD(runs) over all individual games, for both teams combined. It'll probably be in the low 4s.

Then, multiply by 9 (the square root of 81), which should give you a theoretical SD(total runs over 81 games).

Then, for each park, after park adjustment, calculate SD[((n-1)/n) × opposition runs + (1/n) × home-team runs], so that every team is equally represented in the total.

Compare that set of park SDs to the theoretical. If those true SDs are too low, your park adjustments are too extreme.

Still off the top of my head, but I think that works. Is it sensitive enough, or will they be so close you can't tell? Hmmm.
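The two pieces of that check might look like this in code; the per-game SD of 4.2 ("in the low 4s") and the run totals are illustrative:

```python
# Two building blocks of the comment's proposed check, with made-up inputs.

import math

def theoretical_sd(per_game_sd, games=81):
    """SD of total runs over `games` independent games: per-game SD * sqrt(games)."""
    return per_game_sd * math.sqrt(games)

def weighted_park_total(opp_runs, home_runs, n_teams):
    """((n-1)/n) * opposition runs + (1/n) * home-team runs,
    so every team is equally represented in the total."""
    return ((n_teams - 1) / n_teams) * opp_runs + (1 / n_teams) * home_runs

# a per-game SD "in the low 4s" gives a theoretical 81-game SD near 38:
print(theoretical_sd(4.2))
print(weighted_park_total(700, 720, 10))  # made-up run totals, 10-team league
```

If the SDs of the weighted, park-adjusted totals come in below the theoretical value, the park adjustments are too extreme.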

 
