Back in 1984, Bill James examined the value of an MLB draft pick. In a definitive 36-page issue of his newsletter, he looked at the draft history from 1965 to 1983, and found, among many other things, that:
-- players out of college make much better selections than high-school players, producing 84% more value after adjusting for draft position.
-- pitchers make slightly worse choices (about 12% worse than average). They were 44% worse than average in the first round (relative to all first-round choices), but about average in subsequent rounds.
-- players from California and the northern states delivered about 20% higher return than expected. Players from the South were poor choices, producing 35% less than average.
The study was done as a special issue of James' newsletter, and, to my knowledge, has not been reprinted since. That's unfortunate; it probably went out, originally, to only several hundred people, and it's on a topic of great interest to economists these days. And its regression-free methodology is completely understandable and convincing.
Anyway, the reason I bring this up is that I came across a 2006 study (probably from a mention somewhere on The Wages of Wins) that touches on some of the same issues that James did. It's called "Initial Public Offerings of Ballplayers," by John D. Burger, Richard D. Grayson, and Steven J. K. Walters (I'll call them BGW).
One of the most interesting things in the study is actually the authors quoting from another study by Jim Callis (of Baseball America). Callis looked at the first ten rounds of all the drafts from 1990 to 1997. Breaking down the data in various ways, he counted the number of players who became "stars," "good players," or "regular players."
Callis found that high-school draftees have closed much of the gap as compared to college players, at least in the first round:
High School draftees: 4.8% star, 10.3% good, 12.4% regular
Collegiate draftees: 4.0% star, 6.7% good, 16.1% regular
(BGW find that none of the differences were statistically significant.)
The results appear to be different from Bill James' results back in 1984. Why? The most obvious explanation is that general managers read Bill's study, and adjusted their drafting accordingly.
It's important to note that Bill's study didn’t show that high-school players are worse than college players – it showed, rather, that the high-schoolers underperform *relative to teams' expectations,* where the expectations are measured by how high they drafted the player. So, between 1984 and 1990, perhaps teams just learned to lower their expectations.
BGW write,
"In 1971, for example, *every* first-round draftee was a high-school player, but by 1981 the majority of first-round picks were collegians for the first time. Over the 1990-'97 sample period ... the proportions from each pool were rougly equal, though recent years have seen a greater predilection for college players."
The James study also shows some evidence of teams moving toward college players over time. 1967 apparently saw NO college players drafted in the first few rounds, and 1971 was second-worst, with high-school picks accounting for over 96% of the expected value of the draft. But if you remove 1967-71, years when very few college players were drafted, the share of draft value spent on collegians holds roughly level at about 25% from 1965 on.
BGW took Callis' numbers one step further, and looked at first-round pitchers specifically. They found that pitchers were moderately less likely to become regulars (as compared to position players), but FAR less likely to become above-average players:
Pitchers: 2.67% star, 2.67% good, 16.67% regular
Position: 5.96% star, 13.91% good, 11.26% regular
I couldn't find the sample sizes, but BGW report that only the difference in the "good" column is statistically significant.
----
These results are very interesting, but they're not really what the paper is about. What BGW are trying to do is look at bonus payments made to draftees, and to see if they match what the player actually did. That is, they're trying to do for baseball what Massey and Thaler did for football.
Here's how they did it. First, they ran a regression to estimate how much money every additional win is worth to a team. They then went through every draft choice to see how many wins each player contributed (WARP). This let them figure out how much a particular player contributed to the team's bottom line.
Then, they ran a second regression to estimate, on average, how much a player taken at a specific draft position can expect as a bonus. Comparing actual performance dollars to bonus dollars paid allowed BGW to see which draft positions are the best financially. Is it the early choices, which are expensive but produce the best players? Or the later choices, which are much cheaper?
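Here's a rough sketch, in Python, of that two-step calculation as I understand it. The numbers, function names, and the log-linear shape of the bonus curve are my own placeholders for illustration, not BGW's actual data or specification.

```python
import numpy as np

# Step 1 (placeholder data): regress team revenue on wins to estimate the
# marginal dollar value of one extra win.
wins = np.array([65, 72, 81, 88, 95])           # hypothetical team win totals
revenue_mm = np.array([70, 78, 90, 101, 112])   # hypothetical revenues, $MM
dollars_per_win, _ = np.polyfit(wins, revenue_mm, 1)

# A draftee's "performance dollars" are his career wins contributed (WARP)
# times the marginal value of a win.
def performance_dollars(warp):
    return warp * dollars_per_win

# Step 2 (placeholder data): regress signing bonus on draft position to get the
# expected bonus at each slot; the log-log shape is assumed purely for illustration.
pick = np.array([1, 5, 10, 30, 60, 100])
bonus_mm = np.array([2.0, 1.4, 1.0, 0.6, 0.3, 0.1])   # hypothetical bonuses, $MM
slope, intercept = np.polyfit(np.log(pick), np.log(bonus_mm), 1)

def expected_bonus(pick_number):
    return np.exp(intercept + slope * np.log(pick_number))

# Compare what a slot produces (performance dollars) to what it costs (bonus).
print(performance_dollars(10.0), expected_bonus(15))
```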
It's a good idea, but I think there's one fatal flaw. The authors compute "best buys" based on "internal rate of return." That's basically the effective annual interest rate the owners are getting on that player. (To use a simplified example: suppose the team pays $1 million in bonus to player X. Three years later he earns the team $4 million by his performance, and then suffers a career-ending injury. The team turned its $1MM into $4MM in three years, which is an internal rate of return of 59% -- $1, compounded at 59% over three years, returns $4.)
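Just to check the arithmetic in that simplified example: with a single outflow (the bonus) and a single inflow (the performance value three years later), the internal rate of return reduces to a compound annual growth rate.

```python
# One bonus payment in, one payoff out, three years apart.
bonus_mm, payoff_mm, years = 1.0, 4.0, 3
irr = (payoff_mm / bonus_mm) ** (1 / years) - 1
print(f"{irr:.0%}")   # about 59%
```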
But I don't think that's the right way to do it.
To see why, look at one of BGW's findings: drafted college players earn a 43% annual rate of return, while drafted high-school players earn only 27%. The authors conclude that college players are better choices than high-schoolers.
But that's not necessarily right. High-school players take longer to mature than collegians (if only because they're younger). So the rates of return are based on different time periods. Which would you rather have as a return on your investment: 43% for one year, or 27% a year for five years? I'd rather have the 27%. After five years at 27%, my $1 will have grown into $3.30. But after one year at 43%, I'll have only $1.43. Sure, I'll have four more years to invest my $1.43, but now the 27% option is no longer available, so I'm stuck investing at 7% or something, which brings my total to $1.87 – still much less than the $3.30 I got with the high-schooler.
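Here's that comparison worked out; the 7% reinvestment rate is just the illustrative figure from the paragraph above.

```python
# $1 at 43% for one year, then reinvested at an assumed 7% for four more years,
# vs. $1 at 27% a year for the full five years.
college_path = 1.00 * 1.43 * 1.07 ** 4
high_school_path = 1.00 * 1.27 ** 5
print(round(college_path, 2), round(high_school_path, 2))   # ~1.87 vs ~3.30
```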
Put another way: would you rather pay $1 to get "Rance," a decent but unspectacular player now (worth $2 next year), or pay $1 to get "Junior," who in five years will be an MVP candidate (eventually worth $20)? The first way gets you a 100% return on your investment. The second way gets you "only" 82% a year for five years. But no team would choose Rance over Junior under those circumstances.
In the economics of the draft, it's not capital that's scarce: it's the draft choices themselves. The important thing is not how much you make per dollar of bonuses, it's how much you make per player drafted. Rance is worth $1, while Junior is worth $19. To account for the fact that Junior's value doesn't surface for an additional four years, you might grow Rance's profit over those four extra years at an appropriate rate of interest – maybe 10%. That means that, after five years, Rance is worth $1 plus four years' interest on the $1 – for a total of $1.46. That's still much less than the $19 Junior is worth.
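A quick sketch of that per-pick comparison, with both profits measured at the five-year mark; the 10% rate is the assumption from the paragraph above.

```python
# Rance's $1 profit shows up after one year and is grown at an assumed 10% for
# the remaining four years; Junior's $19 profit shows up at year five.
rance = (2.00 - 1.00) * 1.10 ** 4    # ~$1.46
junior = 20.00 - 1.00                # $19.00
print(round(rance, 2), junior)
```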
So for this reason, I think the authors' conclusions – that teams irrationally overvalue high-school players compared to collegians – are not supported by their methodology.
----
As for how much money a draft choice is actually worth, the authors do have a bit of evidence that casts some light on the question:
"In 1996, four first-round draft picks took advantage of a loophole in the draft's rules, becoming free agents because their drafting teams had missed a specified deadline for making contract offers. In the frantic bidding that resulted, the four players [Travis Lee, John Patterson, Matt White, and Bobby Seay] received bonuses that *averaged* over $5 million, more than two-and-one-half times the amount paid to the first (and otherwise-highest-paid) selection in that year's draft."
Assuming what was paid to these four players is typical, the signing bonuses that draftees actually receive are about 40% of their value on the open market. Additionally, BGW calculated that the actual performance value of a first-round choice is $3.25 million; in that case, the $5 million paid to the loophole free agents is actually about 50% too high, while the implied $2 million bonus for the top pick is not – it's only about 60% of the actual performance value.
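For what it's worth, here's the arithmetic behind those percentages, using the figures quoted above; the $2 million figure for the top pick is the one implied by the "two-and-one-half times" in the quote, not a number reported directly.

```python
# Figures from the BGW discussion, all in $MM: average bonus paid to the four
# loophole free agents, the implied bonus for the top draft pick, and BGW's
# estimate of a first-round pick's performance value.
free_agent_bonus = 5.00
top_pick_bonus = 2.00
performance_value = 3.25

print(top_pick_bonus / free_agent_bonus)      # 0.4   -> draft bonus is ~40% of the open-market price
print(free_agent_bonus / performance_value)   # ~1.54 -> the $5MM is ~50% above performance value
print(top_pick_bonus / performance_value)     # ~0.62 -> the $2MM is ~60% of performance value
```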
Still, paying a draft choice 60% of what he'll eventually be worth is a pretty good chunk of money. So, similar to what Massey and Thaler found for football, draft choices in MLB aren't as lucrative for their teams as one might think.