Wednesday, February 24, 2010

Evaluating field-goal kickers

In the third quarter of the Super Bowl, the Colts, up by a point, sent in kicker Matt Stover to attempt a 51-yard field goal on fourth-and-11. He missed.

Was the field goal attempt bad strategy? As you would guess, there was a fair amount of second-guessing. Carl Bialik, the Wall Street Journal's "Numbers Guy," pointed out one of the more bizarre criticisms -- that Stover shouldn't have been sent out because he was too old.

That's bizarre. Jamie Moyer was 46 last year, four years older than Stover. He got blown out a few times, but, after a bad game, you didn't hear journalists and bloggers argue that he shouldn't have pitched because he was too old. Everyone understands that pitchers sometimes have bad games, and it's silly to harp on a single performance. Moreover, age is irrelevant -- what counts is performance. A bad pitcher or bad kicker is just a bad pitcher or bad kicker, whether he's 25 or 45.

My feeling is that the reaction to Stover's miss, the second-guessing and search for causes, is partly a result of the culture of NFL football. In hockey, a player fans on a goalmouth pass, or misses an empty net, and it's unfortunate, but it's part of the game, and there'll be other opportunities later. In baseball, strikeouts happen all the time, even (perhaps especially) to the best players, and they're considered routine.

But in my (limited) experience of watching football, it's different. A dropped pass or a missed block is taken very, very seriously; there's a presumption that every player is expected to make every play, and that every miss isn't just part of the game, but a failure of the player to do his job. A shortstop bobbles a grounder, and life goes on. A receiver lets a pass bounce off his chest, and the announcers are all over him for his lack of concentration and letting his team down.

If you look at the statistics, I'd bet that missed catches are much more common than errors on ground balls. But in baseball, it's just understood that even the best infielders are going to wind up with a few errors over the course of a season. In the NFL, at least in the heat of the moment, it seems like the only acceptable level of performance is 100%.

And that just seems wrong to me. League-wide, a 51-yard field goal attempt is successful only about 55% of the time. Admittedly, Stover has been worse than that, but still, when you try something that succeeds only about half the time, and then it winds up failing, all that second-guessing isn't really called for. Kicking a 51-yard field goal has about the same success rate as making two consecutive free throws. Has any player in the history of the NBA missed a free throw, and then been criticized for missing it because he was too old?
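
(If you want to check that comparison, here's a quick back-of-the-envelope calculation in Python. The 55% and 75% figures are just the rough league rates, nothing official.)

fg_51yd = 0.55                # rough league success rate on 51-yard attempts
ft_pct = 0.75                 # a typical NBA free-throw percentage, roughly
print(ft_pct ** 2)            # two free throws in a row: about 0.56
print(fg_51yd ** 0.5)         # the single-shot rate that squares to 55%: about 0.74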

I just don't get it. Is it just an NFL perfectionism thing?

-----

In any case, another thing that strikes me about the NFL is how kickers are evaluated on such small sample sizes. A 30-yard field goal is successful about 90 percent of the time. A kicker misses a couple of those, and suddenly he's a goat, and his job is in jeopardy. Why is that? The chance of a kicker missing two out of three such kicks is about 1 in 37. That's about the same as the chance of a .300 hitter going 0-for-10. If Albert Pujols did that, everyone would know, from his record, that he's just having a little slump. But if a field goal kicker misses a couple, where's the perspective? It seems like, fairly often, all it takes is one high-profile miss and a guy loses his job.
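
(Here's the arithmetic behind those two numbers, as a little Python sketch, using the 90% and .300 rates from the paragraph above.)

from math import comb

p_make = 0.90                                        # rough rate on 30-yard field goals
p_miss_two_of_three = comb(3, 2) * 0.10**2 * 0.90    # exactly two misses in three tries
print(p_miss_two_of_three, 1 / p_miss_two_of_three)  # about 0.027, or 1 in 37

print(0.700 ** 10)                                   # a .300 hitter going 0-for-10: about 0.028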

I suppose it's possible to think up a situation where that makes sense. If a kicker can go along for years and years and then suddenly lose his abilities overnight, it might indeed make sense to worry when he uncharacteristically misses a couple. You could easily think up a mathematical model where the team's best strategy is to fire any kicker who misses two out of three easy kicks. But that doesn't seem realistic to me, even though it's theoretically possible.

Maybe it's that we're not keeping the right statistics. If one kicker is 90%, and another is 85%, well, maybe the first guy kicked for shorter distances, which is why his success rate is higher. If the second kicker went 2-for-4 from 55 yards, that's actually an excellent performance, but it reduces his overall rate. So the success rate may mean very little, especially considering that kickers have fairly similar success rates and that the sample sizes are small.

Why not start by keeping a record of how the kicker does relative to average? If a player makes a 30-yarder, one that normally gets made 90% of the time, he beat the average by 0.1 kicks. If he misses, he's below average by 0.9. Add up all his kicks over all his seasons, and you'd get a reasonably accurate picture of how good he is, at least if he has a reasonably long career. What's Stover's record in that regard? I don't know.
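
(The bookkeeping would be something like this minimal sketch. The baseline rates by distance are placeholders I made up for illustration, not real league numbers.)

baseline = {20: 0.97, 30: 0.90, 40: 0.75, 50: 0.55}   # made-up success rates by distance bucket

def kicks_above_average(attempts):
    # attempts is a list of (distance_bucket, made) pairs
    total = 0.0
    for bucket, made in attempts:
        total += (1.0 if made else 0.0) - baseline[bucket]
    return total

# make a 30-yarder (+0.10), miss a 50-yarder (-0.55): net -0.45 kicks above average
print(kicks_above_average([(30, True), (50, False)]))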

But even so, there are more basic stats we can look at. I can go to his Pro Football Reference page, and see that Stover had a pretty good season, it looks like. He went 9 for 11. He missed one between 30 and 39 yards, and he missed his only attempt of 50+ yards. From 30 yards or less, he's missed only one kick in his last ten seasons. That seems like a reasonable record ... there's nothing there that suggests that there's something wrong when he misses a single 51-yard try in the Super Bowl.

And what I don't understand is why teams don't have a better idea how good a kicker is. If there are only 35 kicks a season on which to evaluate him ... well, how about evaluating him in practice? Get him to kick a few every week, and keep track of how he does. Sure, practice conditions aren't the same as game conditions, but they're better than nothing, aren't they? If you have one guy who makes 75% of 50-yarders on an empty field on a weekday afternoon, and another guy who only makes 50%, isn't that still useful information when you're trying to decide who you're going to sign for next season?

I don't know anything about actually playing the game of football, so this could be completely wrong. But wouldn't it be a good idea to audition kickers by making them kick and seeing how well they do? You could try them at various distances, with good holds and bad holds, with good blockers and bad blockers. I don't know how long it takes to recover after a kick, but if it takes, say, five minutes, you could still get a guy to kick a season's worth of balls in an afternoon. If you think your guy is losing it, can't you just test him out? Why wait until he misses an important kick, then decide he's washed up on such thin evidence?

And here's another idea: is there any predictive value to where the kick goes, rather than just whether it's good or not? I'd think that a boot right down the middle is better than one that just sneaks inside the goal post. If you had tapes of all the kicks, you could give every guy an accuracy score, and see if that has any correlation to future field-goal percentages.

There's gotta be more information out there than "he went 31 for 35 last year." And if you don't have it, it seems like it would be pretty cheap to get.


Thursday, February 18, 2010

An economist predicts the Olympic medal standings

Daniel Johnson is an economics professor who, according to Forbes magazine, makes "remarkably accurate" predictions on how many Olympic medals each country will win. But I'm not sure, based on the description given, that the predictions are all that remarkable.

From the article, it sounds like what Johnson is doing is running some kind of regression, on "per-capita income, the nation's population, its political structure, its climate and the home-field advantage for hosting the Games or living nearby." He doesn't consider anything specific about the sports or athletes.

How accurate are Johnson's predictions? I'm not really sure. Forbes says,

"Over the past five Olympics, from the 2000 Summer Games in Sydney through the 2008 Summer Games in Beijing, Johnson's model demonstrated 94% accuracy between predicted and actual national medal counts. For gold medal wins, the correlation is 87%."


What does that mean? From the word "correlation," my guess is that those numbers are the correlation coefficient, or "r". But an r of .94 doesn't mean that the predictions are 94% accurate. It just means that the *best fit straight line* through the predictions is 94% accurate. It's possible to be wrong, perhaps badly wrong, on every guess, but still have an r of 1, which means 100% correlation. For instance, if every estimate exaggerates a country's distance from the average by the same factor -- overpredicting the above-average countries and underpredicting the below-average ones -- you'll get a perfect correlation, but really crappy guesses. Here's an example of how that might happen:

Country A: estimate 80, actual 65
Country B: estimate 60, actual 55
Country C: estimate 40, actual 45
Country D: estimate 20, actual 35

Regressing estimate on actual (or is it actual on estimate? I forget which way the word "on" implies, but never mind) gives a 100% correlation, but the actual guesses aren't spectacular in the least.
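
(You can verify that with a couple of lines of Python, using the statistics module's correlation function.)

from statistics import correlation    # Python 3.10 or later

estimates = [80, 60, 40, 20]
actuals   = [65, 55, 45, 35]

# Every estimate doubles the country's distance from the mean of 50, so the
# correlation is perfect even though every single guess is off.
print(correlation(estimates, actuals))              # 1.0
print([e - a for e, a in zip(estimates, actuals)])  # errors of 15, 5, -5, -15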

Anyway, it might be some other method that Johnson uses to compute the accuracy percentage, but it's hard to evaluate the claims without an explanation.

(UPDATE: as this blog post was going to press, I found Johnson's website, which confirms that it *is* correlation. It still could be that it's some kind of method that doesn't have the flaw of my example above. The site contains a media release, but no actual copy of the paper, which was published in "Social Science Quarterly" in December, 2004.)

More importantly, you can't tell how impressive a set of predictions is without something to compare it to. At Forbes, commenter "Doubter" points this out, and tries using the results of the previous Olympics to predict the current one. For the top five countries, he gets an 85% accuracy rating, and correctly adds: "include a bunch of countries with stable medal counts (Jamaica, Japan, Nigeria, Kenya, most European countries) and I am sure it gets much better."

I'm pretty confident that if you were to just use a weighted average of previous Olympics results and adjust for home field advantage, you'd come pretty close to what Mr. Johnson was able to do. Forbes should have realized that the results probably aren't "remarkably accurate" -- just "accurate".
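
(For what it's worth, here's the kind of naive baseline I have in mind, as a Python sketch. The 70/30 weights and the size of the host bump are arbitrary assumptions on my part, not anything from Johnson's model.)

def naive_prediction(last_games, prior_games, is_host, host_bump=10):
    # weighted average of the last two Olympics, plus a bump if the country is hosting
    predicted = 0.7 * last_games + 0.3 * prior_games
    if is_host:
        predicted += host_bump
    return round(predicted)

# a country that won 24 and 17 medals in the last two Games, hosting this time:
print(naive_prediction(24, 17, is_host=True))   # 32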

Also confusing is the estimate of home field advantage. There were no results given for the Winter Olympics, but for the summer games, the host team "typically garners 25 additional medals compared with its expected performance, 12 of them gold." It doesn't really make sense that the home field advantage should be a fixed number of medals. Shouldn't it be a percentage increase? Canada won 11 medals in 1976 in Montreal, none of them gold. That was a few more medals than usual, probably because of home field. Should they really have been expected to win minus 14 medals in 1988 in Seoul, of which minus 12 would be gold? Or, on the other hand, were they just unlucky in Montreal, where they should have won about 30, when they were in the single digits in 1968 and 1972?

Or, if Pakistan were to host the Olympics, would you really expect them to jump from (say) 1 medal to 26?


Oh, and one more thing: in his 2010 predictions, Johnson has the top 13 countries winning 250 medals, but only 57 golds. Overall, golds are 33.3% of medals, but for those 13 countries, Johnson has them winning only 22.8% golds. How come? An eyeballing of the 2006 chart shows about 1/3 golds for those countries then ... I wonder why the drop?



Thursday, February 11, 2010

Dynamic ticket pricing and sabermetrician salaries

Over at Sabernomics, J.C. Bradbury defends "dynamic pricing," which basically means teams charging different prices for different games. It might cost a bit more to see the Yankees, or to see a weekend game, or (especially) to see the Yankees on the weekend. Conversely, a night game against the Royals on a cool night in April might be cheaper.

He's absolutely right that this is a good idea. A Yankees game and a Royals game are different products, so why should the price be the same? It's like if the Beatles play Shea Stadium, and charge $100 a ticket ... the next night, Bobby Vinton plays, and tickets are only $25. You get the same seat, in the same venue, listening to the same sound system, and let's even say it's the same stage. But, obviously, there's a reason the Beatles cost more.

I don't see that there's any issue at all. I think that Tango gets it right, when he says some of the complaints are due to the way the explanation is framed. The media are describing the situation as if Yankee games cost extra. The extra cost bothers people -- they think they're being ripped off. But, instead, if teams presented Yankee games as regular price, and Royals games as *discounted*, I think Tango is right that there would be a lot less resistance.

And, in any case, high prices for some games can be a good thing for fans overall. There are only a certain number of seats in the ballpark. What works out best is if those seats go to the fans who value them the highest. If ticket prices were absurdly low, say, five cents, you can imagine how difficult it would be to get a seat. There would be millions of people vying for those 40,000 seats, and entering a lottery for $4 season tickets. A lot of those tickets would go unused -- I can tell you for sure that if I won the lottery, I might only go to four or five games, and the other tickets would be wasted. Sure, I could give them away, but they might go to people who are as unenthusiastic about seeing the games as I am.

Have you ever gone to a store to get a particular item that you needed badly, but when you get there, you find out it's on sale this week, and as a result, the store is sold out? It's happened to me, and it drives me nuts. I really need a pound of ground beef, because I'm having friends over and making burgers. But the ground beef is on sale for $3, and so there isn't any left! But I was ready and willing to pay the $6 regular price, and so I'm frustrated.

Or ... imagine that you've waited all your life to see the Cubs make it to the World Series. Finally, it happens. You're ready to pay $1000 for a seat, because it's a once-in-a-lifetime experience, and maybe you're even one of the top 10 Cubs fans in the world. There is nothing you want more than to see that game. Alas, the Cubs, fearing bad press, sold that seat for $50, and some guy bought up your seat who doesn't even like baseball much, just to impress his friends that he was there.

That sucks. But thank God for the Coase Theorem, which says that when someone gets a seat for $50, but doesn't want to see the game that badly, he'll sell the ticket to you for something closer to $1000. But why should anyone complain about paying a higher price to the Cubs, instead of to the scalper? What good does the middleman do? And, at least if the Cubs are the ones making the money off the $1,000 ticket sales, they have an incentive to build a better team in the future.

------

Anyway, an interesting thing about all this is that a couple of days before Bradbury's post, the Cleveland Indians advertised a couple of jobs, one of which is for a "baseball analyst." That appears to be a combination programmer/statistician/sabermetrician job.

Over at Tango's blog, there's a lot of interest in the position, as you might expect. Eventually, the discussion turned to pay, and the consensus seems to be that there is a "baseball discount" when working for the front office of a baseball club. That is, posters expect the Indians to pay significantly less than what a similar job would pay elsewhere. A couple of commenters suggested that it might pay as little as half the market salary for similar positions outside of baseball. (The Indians' posting doesn't mention salary, so this is all speculation.)

Which, at first, sounds like a ripoff. Why should the Indians be able to underpay productive staff, just because of the glamour of working in baseball? But, just as Tango points out for the dynamic pricing of tickets, it makes more sense if you look at it the other way. It's not that the Indians are paying less because the job is so good -- it's that other places are paying *more* because they don't offer what the Indians do!

If the Indians do pay less, then whoever winds up with the job will actually have no complaint: if the Indians had offered more, that person probably would have had a lower chance of getting the job, because there would have been more qualified candidates applying. And, just as in the ticket situation, wealth (or happiness, or utility) is created this way.

Suppose Joe and Bob are exactly equally qualified and doing exactly the same work. If Joe has the Indians job at $40,000 a year, and Bob has a job at a fertilizer factory at $70,000 a year, and neither would swap with the other, then neither has any cause for complaint. Bob could say, "well, if the Indians had paid a fairer wage -- say, $50,000 -- I'd go work for them." But all that shows you is that Joe values the Indians' environment at $30,000 a year, while Bob values it at only $20,000. Just as it's better to put the Cubs tickets in the hands of whoever wants them the most, it's better, productivity being equal, to put the Indians' job in the hands of the applicant who appreciates it the most.


Thursday, February 04, 2010

Does it matter that the Yankees keep buying pennants?

As most baseball fans are aware, the New York Yankees have been spending more money on payroll than any other team in the major leagues, by a long shot. In 2009, for instance, the Yanks spent $201 million, about two-and-a-half times the average, and $76 million more than the next highest team (the Mets).

And so, as you would expect, the lavish-spending Yankees have been very successful. The Yankees have made the post-season every year but one since 1995. That's 14 out of 15.

In an excellent post in November, Joe Posnanski wondered why fans are willing to put up with this. He gave two reasons:

1. In baseball, unlike football and basketball, a truly dominant team still wins only about 60% of its games. This tends to hide the extent of the dominance:

"I would bet if the Indianapolis Colts played the Cleveland Browns 100 times, and the Colts were motivated, they would probably 95 of them — maybe even more than that. But if the New York Yankees played the Kansas City Royals 100 times, and the Yankees were motivated, I suspect the Royals would still win 25 or 30 times. That’s baseball.

"So you have this sport that tends to equalize teams. That helps blur the dominance of the Yankees. If the New England Patriots were allowed to spend $50 million more on players than any other team, they would go 15-1 or 16-0 every single year. And people would not stand for it. But in baseball, a great and dominant team might only win 95 out of 160, and it doesn’t seem so bad."


And, given that the Yankees should only be expected to win 97 games or so, there will likely be other teams that come close to them, so it winds up looking like the Yankees are one of many quality teams. Of course (and now this is me, not Posnanski), the Yankees are expected to do it every year, whereas whatever team challenges them is probably just a random team that got lucky. But you can't tell that just by watching, so the Yankees don't look all that special in any given season.

2. Under the new, post-1995 playoff system, a team has to win three rounds to win the World Series. But in a short series, anything can happen, and the better team will lose with pretty high frequency.

A team with a 60% chance of winning each game will only win a best-of-five series about 68 percent of the time, and a best-of-seven series 71 percent of the time. (If I've got the numbers right.) So the chance of winning three consecutive rounds, and the World Series, is .68 * .71 * .71, which is about 34 percent.
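
(Here's how I got those numbers, as a Python sketch, in case I've botched the binomial arithmetic.)

from math import comb

def series_win_prob(p, wins_needed):
    # sum over how many games the opponent wins before the favorite clinches
    return sum(comb(wins_needed - 1 + losses, losses) * p**wins_needed * (1 - p)**losses
               for losses in range(wins_needed))

p = 0.60
best_of_5 = series_win_prob(p, 3)
best_of_7 = series_win_prob(p, 4)
print(best_of_5, best_of_7)                # about 0.683 and 0.710
print(best_of_5 * best_of_7 * best_of_7)   # about 0.34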

So even if the Yankees are 60% favorites every game of the post-season -- the equivalent of 97-65 against three of the best other teams in baseball -- they'll win the World Series only about one year out of three. Posnanski:

"And in that way the expanded playoffs have been genius for baseball — not only because they are milking television for every dime but because the short series have been baseball’s one Yankee-proofing defense against the ludicrous unfairness of the New York Yankees. ... They are the best team with the best players every year — that sort of big money virtually guarantees it.

"So, you create a system where the best team doesn’t always win. In fact, you create a system where the best team often doesn’t win. For years the Yankees didn’t win. They lost to Florida. They lost Anaheim. They blew a 3-0 series lead against Boston. They lost to Anaheim again and Detroit and Cleveland — and how could you say that baseball is unfair? Look, the Yankees can’t win the World Series! See? Sure they spend $50 million more than any other team and $100 million more than most. But they haven’t won the World Series! Doesn’t that make you feel better?"


------

Last week, at the Sports Economist blog, Brian Goff agreed and disagreed with Posnanski's analysis. His agreement was that Posnanski got it right in terms of understanding why MLB did what it did with the expanded playoffs. His disagreement was that, while Posnanski thinks it's a bad thing for the fans, Goff thinks it's a *good* thing.

Why? Because Yankee-haters get a lot of satisfaction out of seeing the Yankees lose. And so MLB's strategy is win-win. Yankee fans get to see their team in contention every year, which creates a lot more revenue for the league and utility for fans (since the Yankees have the largest fan base in MLB). And then, Yankee-haters get to see their least-favorite team defeated two years out of three, which makes *them* feel good and open their wallets. MLB deliberately designed the system this way to squeeze more money out of its fans.

That may be true, but I'm not so sure the strategy is still in baseball's long-term interest. The sports economists I've read note that fans spend more money when their team is successful, and, from that, they conclude that it maximizes profit for the league to ensure the cities with the most fans win the most often.

I'm not convinced. That may work in the short run, when the fans still have memories of when payrolls were more even, and playoff berths were earned more by other means than money. But what happens longer term, when the Yankees make the playoffs for 28 of the next 30 years, and it becomes more and more obvious that the Pirates and Royals will seldom (if ever) be able to compete? And what happens when even Yankees fans start to get uncomfortable noticing that there's a lot less to be proud of when your management is just buying all the best players, and a playoff berth is just being purchased every year?

Maybe it's just me, just my personal taste, but I'd rather all teams have equal payrolls, and success on the field be "bought" with intelligence, strategy, and luck, rather than money. I've been a fan of the Toronto Maple Leafs all my life, but if the Leafs finally won the Stanley Cup again by spending three times as much as any other team ... well, I don't think I'd really care that much. And I'm sure there are many more like me. And so I wonder if a "we make more money when we rig the system so the Yankees win more often" strategy might backfire.

If you asked me a few months ago, I'd say for sure it would backfire, and fans would never put up with years and years of the Yankees buying pennants. But, after reading "Soccernomics," I'm not so sure. What I learned (pp. 48-49) was that, in the English Premier and Championship Leagues, there is a huge tendency to purchase wins. From 1998 to 2007, Manchester United had three times the average team payroll, and finished second, on average. That's second out of 58 teams, not second out of five teams in the AL East. Moreover, that's not second one year and then tenth the next -- it's an *average* of second, over ten years. They finished first five times, second twice, and third three times.

And they weren't even the highest-spending club ... that was Chelsea, which spent 3.5 times the league mean, and finished third, on average.

The flip side of Man U is a club called "Brighton & Hove Albion," which spent 1/7 the average payroll (and finished 42nd, on average). So, in English soccer, you have the biggest team spending 23 times as much as the smallest team. Compare that to MLB, where the ratio was only 6 times for 2009, and is probably a lot smaller than that when you average out 10 seasons.

Moreover: in baseball, the Yankees stand alone in payroll: last year, they spent almost 50% more than the second-highest-spending Mets. In English soccer, there were four teams at double the average (compared to one in MLB), and 13 teams at less than a quarter of the average (compared to none in MLB). And, again, these are ten-year trends in soccer, compared to a single year in baseball, which makes them even more shocking.

(One disclaimer: the soccer teams are, technically, divided into two leagues: the (first-tier) Premier League, and the (second-tier) Championship League. You'd expect that teams in the lower league would pay less. However, every season, as I understand it, the three best teams in the second tier swap places with the three worst teams in the first tier. So, theoretically, even the lowest paid second-tier team has a hope of being the overall champion two years from now. In that sense, it's really one league.)

But, despite the payroll and standings disparities, Man Utd still has a rabid fan base, and, as a result, the club is valued at $1.87 billion, even more than Forbes' appraisal of the Yankees at $1.5 billion.

So, what I'm thinking is: if British soccer fans can tolerate huge pay differences, and accept the fact that it's almost always going to be one of the richest teams that wins ... well, maybe baseball fans can accept that too, especially since it's on a much smaller scale. Maybe the New York Yankees can become baseball's Manchester United, the Red Sox can become Chelsea, and fans of the Marlins and Padres can hope to fluke into the postseason and engineer an upset.

Major League Baseball might very well lose me as a fan if they do that, but if they can make it up in revenues from everyone else, who am I to say they're wrong?


