Wednesday, March 12, 2008

Do golfers give less effort when they're playing against Tiger?

According to this golf study by Jennifer Brown, the best professional golfers don't play as well in tournaments in which they're competing against Tiger Woods. Apparently, they have a pretty good idea that they won't beat Tiger, and so they somehow don't try as hard.

Brown ran a regression to predict a golfer's tournament scores. She considered course length, whether Tiger was playing, whether the golfer was "exempt" (golfers who were high achievers in the past are granted exemptions), whether the event was a major, and so forth. It turned out that when Tiger was playing, the exempt golfers' scores were about 0.8 strokes higher (over 72 holes) than when Tiger sat the week out. Non-exempt golfers, on the other hand, were much less affected by Tiger's participation – only 0.3 strokes. Brown suggests that the difference is that the non-exempt golfers know they can't beat Tiger anyway, so his presence doesn't deter them from playing their best.
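Just to make the setup concrete, here's a rough sketch of what a regression like that might look like in code. This is my own toy version, not Brown's actual specification, and all the column names (score72, tiger_playing, exempt, major, yardage, player, course) are made up:

```python
# A minimal sketch of a fixed-effects regression like the one the
# study describes. The dataset and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tournaments.csv")  # one row per golfer-tournament

model = smf.ols(
    "score72 ~ tiger_playing * exempt + major + yardage"
    " + C(player) + C(course)",
    data=df,
).fit()

# The tiger_playing:exempt interaction is the "superstar effect" of
# interest; C(player) and C(course) add a dummy for each golfer and
# each course.
print(model.params["major"])  # the puzzling major coefficient
```

The coefficient on the Tiger/exempt interaction is the "quitting" effect; the player and course dummies are what make the +17 puzzle below so strange.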

Also, when Tiger was on a hot streak – his scores in the previous month much better than the other exempt golfers' – the effect increased. Now, instead of being just 0.8 strokes worse than usual, the exempt golfers were 1.8 strokes worse. During a Tiger "slump," the exempt golfers were so pumped by their chance of winning that they were *better* than usual, instead of worse – by 0.4 strokes.

The result seems reasonable – the less chance you have of winning, the less effort you give. But 0.8 strokes seems like a lot. That's especially true when you consider that it's not *every* time that Tiger Woods runs away with the lead. It might be 0 strokes when Tiger is struggling, but 1.6 strokes when it's obvious that it's a lost cause this week. And how does Phil Mickelson lower his scores by 0.8 just by trying harder? Is it more practice? Is it setting up better? Is it spending more time reading the green? What actually is it?

(UPDATE: a couple of commenters noted that with Tiger in the field, certain opponents might change their style of play to take more risks to beat him, and that might cause the scoring difference. However, the study checked for that, by looking at the 72 individual holes, and it found no difference in the variance based on whether or not Tiger was playing.)
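(If you're curious what a check like that might look like, here's a sketch – with made-up column names, and not necessarily the same test the paper used – comparing the variance of hole-by-hole scores with and without Tiger in the field:)

```python
# A rough sketch of the variance check described above, assuming a
# hole-level dataset. Column names are hypothetical.
import pandas as pd
from scipy import stats

holes = pd.read_csv("hole_scores.csv")  # one row per golfer-hole

with_tiger = holes.loc[holes["tiger_playing"] == 1, "strokes_vs_par"]
without_tiger = holes.loc[holes["tiger_playing"] == 0, "strokes_vs_par"]

# Levene's test: are the two variances significantly different?
stat, p = stats.levene(with_tiger, without_tiger)
print(with_tiger.var(), without_tiger.var(), p)
```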

There's something else that's confusing me, and that's one of the actual regression results in the paper. Brown has a dummy variable for (among other things) every player, every golf course, and whether or not the tournament is a major. The coefficient for the major is huge: around 17 strokes.

That doesn't make sense to me, because that's after adjusting for the player and the course. It says that if Phil Mickelson plays Pinehurst #2 twice, with the same course length and the same wind and temperature, but one time he's playing a major and the other he's not, *his score will be 17 strokes higher in the major*. That just doesn't sound right to me. Why does calling a tournament a "major" make every golfer 17 strokes worse? (See tables 4 and 5 of the study.)

(Also, how can you divorce the course from the major? As far as I know, the Augusta National Golf Club, which hosts the Masters (a major), doesn't host any other PGA tournaments. So what happens when you run the regression, and the Augusta dummy is always the same as the Masters dummy? I'm no expert – I've taken only one statistics course – but doesn't that cause some kind of matrix problem in the regression?)
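Here's a toy version of the matrix problem I'm worried about. In the extreme case, where one course hosted every major and nothing else, its course dummy would be identical to the major dummy, and the design matrix would lose full rank – the regression couldn't separate the two effects. All data made up, obviously:

```python
# Toy demonstration of perfect collinearity between a course dummy
# and the major dummy. Five hypothetical tournaments.
import numpy as np

major   = np.array([1, 1, 0, 0, 1])  # tournament is a major
augusta = np.array([1, 1, 0, 0, 1])  # identical to 'major' -> redundant

# Design matrix: intercept, major dummy, course dummy
X = np.column_stack([np.ones(5), major, augusta])
print(np.linalg.matrix_rank(X), "of", X.shape[1])  # 2 of 3: rank-deficient
```

In the actual data, the US Open, British Open, and PGA Championship rotate among courses, so the overlap is only partial – but Augusta by itself still seems like a problem.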

I wouldn't expect a coefficient of +17 even if you didn't adjust for the course at all. Well, maybe if you look at last year's Masters. At Augusta in 2007, every golfer finished over par. But in the 2008 Buick Invitational (which is not a major), 34 players finished at or below par, and the winner (Tiger) was at –19.

But that's an exception. Eyeballing the four majors on Wikipedia, the Masters and PGA Championship usually feature a winner around –8. The British Open looks a little easier, and the US Open a little tougher. Eyeballing the non-majors in 2007, the typical winning score looks to be somewhere in the mid-teens.

So the difference is probably around 7 strokes or so. After correcting for the higher caliber of golfer in the majors, you might get to, what, 12 or 13? You're still short of the 17 the regression found. And, again – that's BEFORE adjusting for the difficulty of the course! So I just don't understand.

What's going on here? Am I figuring something wrong?

Hat Tip: The Sports Economist


12 Comments:

At Wednesday, March 12, 2008 6:53:00 PM, Blogger Michael Kelly said...

While I can't explain the 17-strokes-in-a-major thing, I can explain how Phil or anyone else can 'try less.' Say a player is approaching the 16th, a par 5. There's some water in front-left of the green, with the pin (naturally) in the back left. A player can choose to play aggressively, going for the green on their second shot. They could be even more aggressive, going for the left side of the green on their second shot. The third alternative, laying up within 50 yds or so, is obviously the conservative approach.

If a player is close to the lead, they are likely to take more chances, play more aggressively, and likely post a better score. However, if a player is in 2nd, but 4 strokes behind Tiger with 3 holes to play, he's likely to try to protect his 2nd place and go for more pars, since dropping a ball in the water could cost him multiple positions and lots of cash. The riskier strategy probably has a better expected final score (explaining the noted difference), but since the player may be especially risk-averse, he passes on that opportunity.
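(Toy numbers, entirely made up, just to make that tradeoff concrete – the aggressive line has the better expected score but a much wider spread:)

```python
# Expected score and spread for going for the green vs. laying up
# on the par 5 above. Probabilities are invented for illustration.
import math

def summarize(outcomes):  # outcomes: [(score, probability), ...]
    mean = sum(s * p for s, p in outcomes)
    var = sum(p * (s - mean) ** 2 for s, p in outcomes)
    return mean, math.sqrt(var)

go_for_it = [(3, 0.10), (4, 0.45), (5, 0.25), (6, 0.15), (7, 0.05)]
lay_up    = [(4, 0.30), (5, 0.60), (6, 0.10)]

print(summarize(go_for_it))  # ~ (4.60, 1.02): better mean, bigger spread
print(summarize(lay_up))     # ~ (4.80, 0.60): worse mean, safer
```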

 
At Wednesday, March 12, 2008 6:54:00 PM, Anonymous Anonymous said...

One of the main differences in golf courses from tournament to tournament is pin placement: where the hole is cut on the green. That can have a significant effect on scores. And if you think about it, 17 strokes is only about four strokes per round, or roughly one extra shot every four or five holes.

-Bryan

 
At Wednesday, March 12, 2008 8:45:00 PM, Blogger Phil Birnbaum said...

Michael: That's true. I should have mentioned that the author looked at the variance of per-hole scores to check for that possibility, and found no such effect. I should add that to the post.

 
At Wednesday, March 12, 2008 8:56:00 PM, Blogger Brian Burke said...

Mr. Kelly beat me to it. Competitive golf certainly has a lot to do with risk.

But my first instinct was the opposite of what Michael says (I think – correct me if I'm wrong). I agree that golfers playing behind Tiger would have to take more risks, but they'd therefore have worse, not better, expected scores.

It's a good bet that a pro golfer's normal state is at the optimum expected risk/reward balance. Increasing or decreasing risk moves him downward on the utility curve. If playing more aggressively increased expected performance, wouldn't he be playing that way from the outset, regardless of the competitive field?

If Tiger is in the mix, the best strategy (for an exempt player) would be to gamble a bit and increase risk. The expected performance level would decrease, but the variance of performance would increase. This gives the golfer at least a chance of challenging Woods.

In other words, assume Tiger usually shoots a -7 at a certain tournament. For a challenger, if playing at optimum risk/reward utility yields a 2SD range of outcomes from -5 to +10 strokes, then more aggressive play might yield a 2SD range of outcomes from -10 to +20. Aggressive play yields a worse expected score, but allows for a realistic possibility of beating Tiger.
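(To put rough numbers on that – assuming, as a big simplification, that scores are normally distributed with the means and SDs implied by those 2SD ranges:)

```python
# Back-of-the-envelope for the example above: chance of beating a
# Tiger who shoots exactly -7, under each strategy. Normality and
# the exact parameters are simplifying assumptions.
from scipy.stats import norm

tiger = -7
conservative = norm(loc=2.5, scale=3.75)  # 2SD range: -5 to +10
aggressive   = norm(loc=5.0, scale=7.5)   # 2SD range: -10 to +20

print(conservative.cdf(tiger))  # ~0.006: almost never beats Tiger
print(aggressive.cdf(tiger))    # ~0.055: worse mean, ~10x the chance
```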

I think it would be the non-exempt guys, the golfers scratching and clawing for tour cards, who would be conservative when Tiger is playing. Their primary goal is to stay on the tour rather than win the whole enchilada. In that case, it would be preferable to adopt the score optimization strategy.

(This all assumes the study's methodology is proper.)

 
At Thursday, March 13, 2008 12:26:00 AM, Blogger Phil Birnbaum said...

>"This all assumes the study's methodology is proper."

And I'm wondering about that methodology, because I don't see why you'd get such a huge coefficient for majors when you're already controlling for the course and the quality of the opposition. I won't feel comfortable about any of these results until someone explains to me why that 17-stroke difference makes sense.

 
At Thursday, March 13, 2008 4:44:00 AM, Blogger David Barry said...

The +17 thing is a puzzle. You don't have to eyeball the winning scores to guess what the average difference should be - Table 1 gives average scores for regular and major tournaments.

The difference jumps around from year to year, presumably due to the difficulty of each year's majors. The biggest difference is about 12 strokes relative to par, and the smallest is about 3.

 
At Thursday, March 13, 2008 11:36:00 AM, Anonymous Anonymous said...

I've only skimmed this, but couldn't some of the difference between Tiger and non-Tiger tournaments be the difference in score between Tiger and the last guy to make the cut? You're basically swapping Tiger for the worst score in the tournament – wouldn't that be something like 12 or 15 strokes? Even spread over 70 players (and fewer exempt players), that could be a factor.

 
At Thursday, March 13, 2008 1:38:00 PM, Blogger Phil Birnbaum said...

David: oops, good point, I should have just used Table 1.

Guy: nope, because the regression adjusts for which players are playing. So swapping Tiger in and another player out shouldn't affect the difference once you adjust for their quality.

 
At Saturday, March 15, 2008 3:28:00 AM, Blogger Unknown said...

I wonder if some of the coefficients are a bit bogus, particularly for the dummy variables.

Take major tournaments. It is very rare for a major and a regular PGA tournament to be played on the same course. This means that the course dummy variable is effectively redundant when it comes to assessing the stroke difference for majors.

So, Phil, this means your Pinehurst #2 example isn't grounded in reality. Add in the fact that major courses are set up to be tougher, and in general will be harder courses, and you can start to see some of the gap.

The 17-stroke difference does sound like a lot, but (and this is a hypothesis) although the winning score in a major may only be a few shots higher (on average) than in a regular PGA event (say –7 in a major vs. –12 in a regular event), you'll actually find, I suspect, that relatively few players post low scores in majors. In other words, the mean score will be higher (and I would imagine the std dev may be tighter).

-Beamer

 
At Thursday, March 27, 2008 5:53:00 PM, Anonymous Anonymous said...

The problem you're pointing out is a case of collinearity. It's not a fatal flaw.

All golf played at Augusta is also a major. However, not all golf played in a major is played at Augusta. This should generate variation for the "major" and "course" dummies.

If all golf played at the 4 majors were always played at the same (1) course, then there'd be a problem separating the effects of major from course.
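(To check that numerically, in the same spirit as the toy example in the post: move one major to a different course, and the rank problem disappears. Again, made-up data:)

```python
# Companion to the toy example above: once majors are played at more
# than one course, the 'major' and 'augusta' dummies are no longer
# identical, and the design matrix recovers full rank.
import numpy as np

major   = np.array([1, 1, 0, 0, 1])  # fifth tournament: a major elsewhere
augusta = np.array([1, 1, 0, 0, 0])  # no longer identical to 'major'

X = np.column_stack([np.ones(5), major, augusta])
print(np.linalg.matrix_rank(X), "of", X.shape[1])  # 3 of 3: full rank
```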

Even if you don't buy my quick explanation, only the coefficients on the course and major dummy variables are "contaminated," and the essential findings remain the same.

As far as explaining the +17: I think they let the rough grow deeper at Bethpage Black a few years ago. Pin placement has also been mentioned.

 
At Sunday, April 13, 2008 3:51:00 PM, Anonymous Anonymous said...

I believe that par is also reduced for the US Open, and maybe for some of the other majors. They take two of the par fives and make them par fours instead. So that would be 8 strokes right there (two holes × one stroke × four rounds).

 
At Sunday, April 13, 2008 4:05:00 PM, Blogger Phil Birnbaum said...

Daniel: Thanks, your explanation makes sense. Never thought of that.

Of course, it would have to happen EVERY time to be worth 8 strokes, not just in the US Open. And we're still trying to explain 17. But it's a start!

 
