### A flawed argument that marginal offense and defense have equal value

Last post, I argued that a defensive run saved isn't necessarily as valuable as an extra offensive run scored.

But I didn't realize that was true right away. Originally, I thought that they had to be equal. My internal monologue went like this:

Imagine giving a team an extra run of offense over a season. You pick a random game, and add on a run, and see if that changes the result. Maybe it turns an extra-inning loss into a nine-inning win, or turns a one-run loss into an extra-inning game. Or, maybe it just turns an 8-3 blowout into a 9-3 blowout.

(It turns out that, about one time in ten, that run will turn a loss into a win ... see here. But that's not important right now.)

But, it will always be the same as giving them an extra run of defense, right? Because, it doesn't matter if you turn a 5-4 loss into a 5-5 tie, or into a 4-4 tie. And it doesn't matter if you turn an 8-3 blowout into a 9-3 blowout, or into an 8-2 blowout.

Any time one more run scored will change the result of a game, one less run allowed will change it in exactly the same way! So, how can the value of the run scored possibly be different from the value of the run allowed?

That argument is wrong. It's obvious to me now why, but it took me a long time to figure out the flaw.

Maybe you're faster than I was, and maybe you have an easier explanation than I do. Can you figure out what's wrong with this argument?

(I'll answer next post if nobody gets it. Also, it helps to think of runs (or goals, or points) as Poisson, even if they're not.)
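To make the hint concrete, here's a quick simulation sketch. All the numbers are hypothetical assumptions, not anything from the post: both teams score Poisson runs with a league-average mean of 4.5 per game, and the "extra run" is modeled as a full run per game, which is exaggerated so the effect stands out from simulation noise.

```python
import math
import random

N = 200_000      # simulated games per scenario
MEAN = 4.5       # assumed league-average runs per team per game (hypothetical)
SHIFT = 1.0      # a full run per game -- exaggerated to make the effect visible

def poisson(lam, rng):
    """Sample a Poisson variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def win_pct(mean_for, mean_against, n, seed):
    """Fraction of games won, counting a tie as half a win."""
    rng = random.Random(seed)
    wins = 0.0
    for _ in range(n):
        us = poisson(mean_for, rng)
        them = poisson(mean_against, rng)
        if us > them:
            wins += 1.0
        elif us == them:
            wins += 0.5
    return wins / n

extra_offense = win_pct(MEAN + SHIFT, MEAN, N, seed=1)
extra_defense = win_pct(MEAN, MEAN - SHIFT, N, seed=1)
print(f"+1 run/game of offense: {extra_offense:.3f}")
print(f"-1 run/game of defense: {extra_defense:.3f}")
```

Under these assumptions the defensive version comes out slightly higher: the run saved lowers the scoring environment, and in a lower-scoring environment each run is worth more wins.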

## 7 Comments:

The concept of measuring impact in terms of runs is meant to be applied to a full season, not uniformly assigned to individual games. One run is worth much less in a 10-9 game than it is in a 2-1 game. As an extreme example, let's say a team averaged allowing 1.0 runs per game. Then they improved the defense by 1.0 runs per game. Would they be expected to average allowing 0.0 runs per game (100% shutouts)? No.

Is it the zero lower bound for runs? You can always increase the number of offensive runs, but you can't hold an opponent to -1 runs.

The lower scoring the environment, the greater the value of a run. A goal in soccer is worth more (in terms of wins) than a run in baseball, is worth more than a point in basketball. A defensive run saved moves the games toward a lower scoring environment thus increasing the value of that run, whereas an offensive run scored moves it toward a higher scoring environment, comparatively decreasing the value of the run.

It's also evident in Pythagorean expectations:

100 R, 100 RA ==> 100^2 / (100^2 + 100^2) = 0.500000

101 R, 100 RA ==> 101^2 / (101^2 + 100^2) = 0.504975

100 R, 99 RA ==> 100^2 / (100^2 + 99^2) = 0.505025

The extra run saved is worth slightly more than the extra run scored.
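That arithmetic is easy to verify: a minimal sketch of the standard Pythagorean expectation formula, using the usual exponent of 2.

```python
def pythag(runs_scored, runs_allowed, exponent=2):
    """Pythagorean expected winning percentage."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

print(f"{pythag(100, 100):.6f}")  # 0.500000
print(f"{pythag(101, 100):.6f}")  # 0.504975
print(f"{pythag(100, 99):.6f}")   # 0.505025
```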

Matthew: You're on the right track. What you said was actually the hint I was going to give!

Scott: I completely agree, as I wrote in the previous post -- but that doesn't explain why the wrong argument is wrong.

OK... "You pick a random game, and add on a run, and see if that changes the result." You can do that to any game with equal probability. However, if you pick a random game and subtract a run, you can only do that in games where at least one run is allowed. Some games are shutouts and you can't subtract a run from those. So the random game must be one of the other non-shutout games. So in a given game that's not a shutout, the expected value of the added run is 1/162, whereas the expected value of the run saved is 1/(162 - ShO). Slightly greater. The shutout games are wins anyway and not affected by the added or subtracted run.
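Putting rough numbers on that comment's formula, with one hypothetical assumption: the opponent's runs are Poisson with a mean of 4.5 per game, so the expected number of shutouts in a 162-game season is 162 * e^(-4.5).

```python
import math

games = 162
mean_runs_allowed = 4.5              # hypothetical league-average figure

shutouts = games * math.exp(-mean_runs_allowed)   # expected shutouts under Poisson
share_added = 1 / games              # the added run can land in any game
share_saved = 1 / (games - shutouts) # the saved run only in non-shutout games

print(f"expected shutouts: {shutouts:.1f}")
print(f"added run share:  {share_added:.6f}")
print(f"saved run share:  {share_saved:.6f}")
```

That works out to roughly 1.8 expected shutouts, so the per-game share of the run saved is indeed slightly larger than 1/162.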

Agreed, as far as that goes. But the method also fails in sports where shutouts are basically non-existent. Imagine the NBA, but where the FG% runs around 35% instead of 50%. You wouldn't have shutouts, but the argument still fails.

In the real NBA, where FG% is around 50%, my answer doesn't work. Poisson is for "rare events" and 50% is not rare. So that's another hint, I guess.

I guess, technically, the shutout argument is a valid answer, because a single counterexample is enough to disprove a hypothesis. But there's a more general argument that applies to non-shutout sports too.

A different argument

Win probability is zero-sum. Any gain or loss to your own team exactly offsets a loss or gain to the other team. Since we could be performing this analysis from either perspective, the values must be equal.

This breaks down when you have knowledge of the team, depending on your assumptions. For instance, every team scoring a fixed amount more (each team gets x more runs) benefits below-average teams at the expense of above-average teams (scoring a fixed amount less has the opposite effect). However, league-wide scoring a percentage more or less does not affect the status quo in your model.

If you want to answer, for a specific team, whether one run of offense or defense helps more, that goes back to the question of whether winning by one or losing by one is more probable. The answer depends on whether the team is better or worse than average. Is increasing or decreasing the variance good? That depends on whether the team is worse or better than average, respectively.
