Tuesday, October 30, 2012

Would Toyota sell fewer cars if they raised the price by a penny?

Last post (which I'm about to delete), I asked this question:

Suppose that for the last ten years, Toyota had priced a Camry one cent higher than they did -- so, for example, a $23,000 model would have been $23,000.01.  How many fewer cars would they have sold in that time?

I asked a couple of my friends, who answered zero -- Toyota wouldn't sell any fewer cars at all -- because nobody would change their mind about buying a Camry for a single penny. And in the comments to the blog post that led me to ask the question, many commenters also believed the answer must be zero.

But I don't think it can be.  

Suppose a typical price of a Camry is $25,000.  Now, suppose that Toyota had raised the price to $45,000 instead.  

You can be pretty sure that Toyota wouldn't sell a lot of Camrys at $45,000.  There's too much competition in the midsize sedan market.  Few people would pay an extra $20,000 to get a Camry instead of, say, an Accord or a Malibu.  Some Toyota loyalists might, but, let's suppose 95% of them would not.

From 2000 to 2009, it looks like Toyota sold close to 4 million Camrys in the USA and Canada.  With the $20,000 price hike, we're assuming they would have sold only 5 percent of that -- 200,000 cars.

So a $20,000 increase results in a sales drop of 3.8 million units.

In other words, when they increase the price by 2 million pennies, they lose almost 4 million sales.  That's an average drop of almost two cars per penny.

So a reasonable first estimate is that if Toyota had bumped the price by a single penny, they would have lost about two sales.
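Here's that arithmetic spelled out as a quick Python sketch -- the 4,000,000 and 200,000 figures are just the rough assumptions from above, not real Toyota numbers:

```python
# Rough assumptions from the argument above -- not real Toyota data.
base_sales   = 4_000_000   # Camrys sold 2000-2009 at roughly $25,000
sales_at_45k =   200_000   # assumed sales had the price been $45,000 (5% of buyers stay)

lost_sales = base_sales - sales_at_45k   # 3,800,000 lost sales
pennies    = 20_000 * 100                # a $20,000 hike is 2,000,000 pennies

print(lost_sales / pennies)              # 1.9 -- about two lost sales per penny, on average
```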

-----

Now, you may object.  You may say, why should I assume that *every* penny costs two sales?  Maybe sales only start dropping when the price gets very unreasonable, like, say, $35,000?

Well, yes, that's possible.  If you redo the argument on that assumption, you have to conclude that if Toyota raised the price from $35,000 to $35,000.01, they'd lose *four* sales.  

That's possible.  But ... do you have a basis for your assumption that consumers only start caring at $35,000?  There's a strong basis for assuming that's *not* the case.  If raising the price the first $10,000 had no effect on sales, Toyota would have done it!  Their revenues would have increased by $10,000 a car, or $40 billion over 10 years.  To make that argument stick, you have to assume that you're forty billion dollars smarter than the people at Toyota who actually study this stuff.

Furthermore ... it seems more likely, to me, that the early pennies are more important than the late pennies.  At $44,900, there are so few buyers left -- less than 6% of the original -- that a single penny can't eliminate two of them.  It seems more likely that, at the actual price of $25,000, a single penny might eliminate at least three buyers, and at $30,000 it might eliminate only one (since there are fewer left to eliminate).

In fact, my best guess would be around 3 fewer sales per penny.  Maybe even a bit higher.

-----

Another argument might be: consumers aren't actually responding to a single penny.  They might be responding to, say, $100 increments.  No buyer will change his mind over one cent, but maybe 20,000 buyers will change their mind over $100.  

That sounds reasonable when I write it, but, when I think about it, it doesn't hold up.  When do the 20,000 buyers change their mind?  If it's only at $100, then it's the single penny between $99.99 and $100.00 that makes the difference!  There's no way to plot a decrease in sales without SOME penny making a difference.

Picture a graph, with price on the horizontal axis and sales on the vertical axis.  Plot the two points we've settled on -- 4 million sales at $25,000, and 200,000 sales at $45,000 -- and connect them any way you want.  You'll find that there has to be *at least one* penny difference that makes a difference in sales.  And you'll find that the average penny has to cost around two cars, no matter how you draw the curve.
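If you'd rather not take my word for it, here's a small simulation sketch: it draws an arbitrary, made-up, non-increasing sales curve between those two endpoints and checks the per-penny drops.

```python
import numpy as np

# Prices in pennies, from $25,000.00 up to $45,000.00.
prices = np.arange(2_500_000, 4_500_001)

# Draw ANY non-increasing curve you like between the two known points:
# 4,000,000 sales at $25,000 and 200,000 sales at $45,000.  This one is
# just a noisy made-up curve, forced to be non-increasing.
rng = np.random.default_rng(0)
curve = np.linspace(4_000_000, 200_000, len(prices)) + rng.normal(0, 50_000, len(prices))
curve[0] = 4_000_000
sales = np.maximum(np.minimum.accumulate(curve), 200_000)

per_penny_drop = -np.diff(sales)     # sales lost at each one-cent step
print(per_penny_drop.mean())         # 1.9 -- about two cars per penny, however you draw it
print(per_penny_drop.max() > 0)      # True: at least one penny makes a difference
```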


-----

Now, maybe it's not every single penny that's the problem, but specific pennies.  Maybe pricing the car at $30,000 leads to many fewer sales than $29,999.99, for psychological reasons.  So, yes, that one penny makes the difference, but only because consumers are irrational.

But ... consumers are rational in not wanting to buy a $45,000 Camry, right?  So the total sum of the sales drops is rational.  Could it really just be the sum of a bunch of irrationalities?

And, we'd probably all agree that there would be fewer sales at $28,999 than at $27,999, even though neither of those crosses a round-number threshold.  So it can't just be the psychological pennies.

In any case, not every potential buyer sees the same number.  Many people have trade-ins, of differing values, so the actual bottom line is fairly random compared to the total price.  An irrational consumer might pay $18,999.99 after a $6,240.50 trade, but not $19,000.00 after the trade.  In terms of the price of the car, the difference is between $25,240.49 and $25,240.50.

Factor in sales taxes and fees (which vary by state/province), and the fact that there's also a "99" effect in monthly payments (which vary by trade-in, interest rate, term, and dealer extras), and it becomes hard to argue that only certain pennies can be important.

-----

What we have, I think, is a very strong argument that a penny makes a difference.  But, still, we can't imagine a consumer walking away on the basis of a single penny (I certainly can't).  So what's going on?  

This is my speculation, rather than a real part of the argument, but what I think is this: when a buyer is kind of on the verge of whether to buy or not, there's a certain amount of randomness involved, in terms of how the buyer feels at that moment.  He might be seesawing between the Toyota and the Buick, which both seem like good values for the money.  He's in the Toyota dealership, with his wife, looking at the final offer ... and, it might be 50-50 whether he goes for it or not.  

I think that, unconsciously, if the price is a penny higher, instead of 50-50, it might be 49.9999/50.0001.  I bet that's actually how it works.

I'd agree that for buyers who aren't on the fence, and know they want that car, a penny won't make a difference.  If you *really* want a Camry, even after looking at the competition, then, to you, it's worth substantially more than the retail price (you have a high "consumer surplus").  The penny won't bother you.  (But in that case, even $1000 may not bother you.)

------

So: anyway, that's the argument.  I think it's a strong argument, especially the part where you see you can't connect the two dots on that graph without individual pennies making a difference.  

But, let me know what you think.  

------

Hat tips:

The commenters weren't completely convinced by this post from Bryan Caplan, which makes a slightly different argument (and brings up a point that I think also applies to sabermetrics -- more on that here later).

Also, the book "Mathsemantics" had a similar example, about how moving an airport ten miles farther from the city will reduce air travel (by 10 percent!!!).


Tuesday, October 23, 2012

Yet another r vs. r-squared explanation

There's an election with one million voters, who randomly choose between Party A and Party B.  The margin of victory is important, not just who got the most votes.

If you run a regression to predict the margin of victory based on a single voter's vote, what will the r-squared be?  It will be 1/1,000,000.  That's because, if you knew all the votes, you'd know the margin of victory perfectly, and the r-squared would be 1.  Since no vote is more important than any other, each must contribute an equal share of r-squared.  And the r-squareds have to add up to 1, since the voters are independent.  So, 1/1,000,000 is the answer.

But, here's a different question: what is the impact of one vote on the margin of victory?  Well, that margin will be fairly small -- if you do the calculation, the SD of the difference between A and B will be 1,000 votes.  So, a single vote will be 1/1000 of the margin.  Not one in a million, but one in a thousand.

It's not that hard to see why.  When a million people vote, their votes will mostly cancel out, since they're all choosing randomly.  They cancel out down to the square root of the number of votes, since the SD of a sum grows only with the square root of the sample size.  So they'll cancel out to a difference of around 1,000 votes.  A single voter -- call him John Smith -- becomes one vote in a margin of 1,000, instead of one vote in 1,000,000.

That's the r: 1/1,000.  It's the square root of the r-squared of 1/1,000,000.

Effectively, a single vote looms larger because everyone else's votes mostly cancel out.
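If you want to check those numbers, here's a quick simulation sketch (fair-coin voters, nothing real):

```python
import numpy as np

# Simulate many elections where 1,000,000 voters each flip a fair coin.
rng = np.random.default_rng(1)
n_voters, n_elections = 1_000_000, 10_000

votes_for_a = rng.binomial(n_voters, 0.5, size=n_elections)
margins = 2 * votes_for_a - n_voters        # (votes for A) minus (votes for B)

print(margins.std())                        # about 1,000 -- the square root of 1,000,000

# One voter contributes +1 or -1 (an SD of 1), and all the other voters are
# independent of him, so his correlation with the margin is SD(vote) / SD(margin):
r = 1 / margins.std()
print(r, r ** 2)                            # about 1/1,000 and 1/1,000,000
```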

-----

This, I think, is a good analogy to visualize the difference between r-squared and r:

-- r-squared tells you how important the factor is relative to all the other factors.


-- r tells you how important the factor is relative *to the size of the final outcome*.

The size of the outcome -- in the sense of the difference from the mean -- is only the square root of the number of independent "others" (in units of a single factor), which is why this works out.

-----

This appears to lead to a contradiction: if there are a million voters, and they're all equal, how can they ALL be 1/1000 of the outcome?  That would add up to one thousand outcomes!

But that's OK.  Remember, we're talking about the *size* of the outcome, not the responsibility for it.  If all million voters voted for A, the outcome would have been a 1,000,000 vote margin, instead of just 1,000.  So, all the voters combined DO add up to 1,000 outcomes -- by size.

If you don't like that, here's a non-statistical analogy.  Suppose there's an election where party A wins by one vote.  Whose vote tipped the balance?  Everyone's!  That is, everyone who voted for party A.  There might have been 500,001 votes for A, and 500,000 votes for B.  If *any* of the A voters had voted the other way instead, B would have won.

That is: 500,001 voters can say that *they* made 100% of the difference in the election -- and they'd all be right!  That is, it's perfectly OK that the sum of the voters' effects add up to a huge number.  It's an illusion that it seems they shouldn't be able to.

-----

Moving to a sports example ... let's go back to payroll vs. wins in baseball.  Suppose you do a regression, like the ones in this Freakonomics post, and you find that the r-squared equals .1, like it was in 2008. 

What that means is: payroll explained about 1/10 of the variance of wins.  That means that, in a sense, there could be 9 other factors that are just as important as payroll.  (Or, one factor that's "nine times" as important.  Or one factor "five times" as important, and two other factors "two times" as important, or some combination like that.) 

That is: payroll gets "one vote out of 10" in determining wins.

OK, fair enough.

But: those 9 other factors can be treated as independent and random (that's an assumption of the regression -- and, also, if they were correlated with payroll, the regression would lump that in with payroll).  Therefore, they mostly cancel each other out, down to their square root.  So the SD of the 9 other factors combined is only 3 times the SD of payroll -- the square root of 9.

If you add payroll back in as the 10th factor, you get that the SD of the total is the square root of 10 times the SD of payroll (the square root of 3 squared from the other factors, plus 1 squared from payroll).  That's around 3.1.

If payroll represents a single vote, the margin of victory is 3.1 votes.  So, salary influences wins not by 1/10 (0.1), but by 1/3.1 (0.32), which is the square root.

Which is why we say, if you increase your payroll by 1 SD, you increase your wins by 0.32 SD.  If you move one inch up or down the normal curve for payroll, you'll move 0.32 inches up or down the normal curve for wins.

That's fairly large.
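Here's that chain of arithmetic as a tiny sketch, using the .1 figure and treating the "other factors" as one independent lump:

```python
import math

r_squared = 0.10          # payroll's share of the variance of wins
sd_payroll = 1.0          # measure everything in units of payroll's SD

# The other factors carry the remaining 90% of the variance, so their
# combined SD is sqrt(9) = 3 payroll-units.
sd_others = math.sqrt((1 - r_squared) / r_squared)        # 3.0
sd_wins   = math.sqrt(sd_payroll ** 2 + sd_others ** 2)   # sqrt(10), about 3.16

print(sd_others, sd_wins)
print(sd_payroll / sd_wins)     # about 0.32 -- payroll's share of the *size* of the outcome
print(math.sqrt(r_squared))     # same number: r is just the square root of r-squared
```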

-----

The moral of the story is:

1.  The r-squared tells you what percentage of the *votes* you got.

2.  The r tells you what percentage of the *result* was because of you.

For a cause-and-effect relationship, like payroll and wins, you almost always want number 2.


------


(I've written about r and r-squared numerous times in the past, such as here.)




Tuesday, October 16, 2012

Can money buy meat?

If you want to have meat in your diet, you have to spend money in the grocery store.  At least, that's the conventional wisdom.

But is that really true, or is it just a myth?  Let's look at the evidence.

I took a (made up) random sample of 30 shoppers in my local supermarket earlier this year.  I ran a regression to predict the total amount of meat they had, from the total amount of money they spent.  It did turn out that the York family, the one who spent the most money by far, did get the most meat.  And, that there was a positive slope, meaning that spending more money leads to more meat.

However, there was one very important issue: the link between meat and money was not statistically significant.  In other words, we can't argue that money spent and meat obtained are actually related to each other in 2012.

It's easy to understand why we got this result.  Some of the lowest-spending families wound up with a lot of meat -- one was stocking up for a BBQ, and one owned a cattle ranch.  And a few high-spending families barely had any meat in their houses at all -- they paid a lot for only a few ounces of filet mignon.

But 2012 isn't typical.  When I (pretended that I) did the same experiment for other years, I got statistically significant results.  But even for those years, explanatory power is quite low.  Only 17 percent of the variation in meat over the last 25 years is explained by variation in spending.  So much of the variation in meat obtained is not explained by how much money was spent.

And if you look at each year individually -- as the following table (with made-up numbers) illustrates -- the power of money buying meat seems to vary quite a bit:

2012: not significant
2011: r-squared = .17, p = .01
2010: r-squared = .13, p = .04
2009: r-squared = .21, p = .02
2008: r-squared = .10, p = .06
2007: r-squared = .25, p = .00
2006: r-squared = .29, p = .00
2005: r-squared = .24, p = .00
2004: r-squared = .29, p = .00
2003: r-squared = .18, p = .02
2002: r-squared = .20, p = .01
2001: r-squared = .10, p = .04
2000: r-squared = .10, p = .04
1999: r-squared = .50, p = .00
1998: r-squared = .47, p = .00
1997: r-squared = .22, p = .01
1996: r-squared = .34, p = .00
1995: not significant
1994: r-squared = .16, p = .07
1993: r-squared = .09, p = .09
1992: not significant
1991: not significant
1990: not significant
1989: not significant
1988: r-squared = .18, p = .00


From 1996 to 2001, supermarket spending and meat were statistically linked each and every year.  However, explanatory power varied.  If we look at shoppers before 1993, we see four years where the relationship was again not significant.

So here is the big question: Why is the relationship not stronger?  One would think that as shoppers spend more, they would wind up with more meat.  But, often, that's not what we see in the data.

One issue is that you can get meat at other places than the supermarket -- butchers, say, or gifts, or the slaughter of animals you own yourself.  Another issue is that it's hard to predict what shoppers will buy any given week. 

But, does the result from 2012 show that spending and meat will not be statistically related in future?  We don't know.  But what we *do* know is that spending does not guarantee a shopper more meat.

That's the nature of shopping.  Sometimes you don't get enough meat, and, it seems, no amount of spending can change that reality.

--------

So: do you believe me?  Do you believe that how much meat you have in 2012 doesn't depend on how much money you spend?  I hope not.

What, specifically, is wrong with the logic?  Lots of things, many of which I've written about before.

------

1.  Even if you don't get a statistically-significant relationship between spending and meat, that does NOT mean that you "can't argue that money spent and meat obtained are actually related to each other".  Of course you can!  Lack of significance just means that, in one specific, narrow, sense, you don't have enough grounds to assert a relationship *on this evidence alone*. 

But, of course, there's LOTS of other evidence that meat and spending are related.  For one thing, there's a big sign in front of the steaks, that says, "$7.99 per pound."  For another thing, millions of people will tell you that they have successfully exchanged money for meat.

You can only argue that there's no relationship if you choose to ignore all those things.  Which, I hope, you wouldn't.

2.  The implicit assumption in the argument is that every year is different.  That is: money bought meat in 1993 and 1994, but not in 1992 or 1995.  Why would you assume that -- that the nature of shopping changes so often and so much that you could buy meat in 1994, but not in 1995?  If we're going to assert that, we need some kind of explanation of how it could be plausible.

3.  Also, if you're interested in statistical significance, shouldn't you care about checking whether 1994 and 1995 are actually significantly different from each other?  What do you do if there's no statistically significant difference between them, as there probably isn't?  How can you say money bought meat one year, but not the other, when the p-value of the difference is very high?

You have a contradiction:

1994 is significantly different from zero
1995 is NOT significantly different from zero
1994 is NOT significantly different from 1995.


Isn't it just as reasonable to say there's no difference as to say that money bought meat in 1994 but not 1995?  Even if you're depending on statistical significance, you still have to make an argument.

4.  Why use "different from zero" as your significance criterion anyway?  In this particular situation, there is no real reason to think that zero is more likely than any other value -- and, in fact, there's very, very good reason to believe it's different, unless you have good reason to believe that big spenders don't buy more meat than the guy in the express lane with one item. 

In some cases, like whether prayer cures cancer, a default of zero makes sense.  But not here.  Saying, "we'll assume money can't buy meat until we see strong evidence otherwise" ... well, that's just privileging your hypothesis.

5.  If you get a value that's significant in the real world sense, but isn't statistically significant, you need more data.  You can say, "I don't have enough evidence."  You can say, "there isn't enough evidence HERE."  But you can't just assume that there's no relationship.  Otherwise, it would be easy to argue that smoking is harmless.  You just do a double-blind study that's really small.  And then you say, "even though 40 percent of the five smokers got lung cancer, and only 20 percent of the five non-smokers got cancer, we got an r-squared of only .1, and that's not statistically significant.  So, there's no evidence that smoking causes cancer."

Yes, the evidence of THAT study is weak.  But that's because the study is too small.  Twice the risk of cancer is plausible, and important, and you can't just dismiss it because you deliberately designed your study the way you did.  And there are lots and lots of other studies showing a link, and a biological mechanism by which it happens.
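Just to make the point concrete, here's a quick check of that made-up 2-of-5 versus 1-of-5 result, using Fisher's exact test as one standard way to get a p-value:

```python
from scipy.stats import fisher_exact

# 2 of 5 smokers got cancer; 1 of 5 non-smokers did.
table = [[2, 3],   # smokers:     cancer, no cancer
         [1, 4]]   # non-smokers: cancer, no cancer

odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)   # p is nowhere near .05 -- "not significant," despite double the risk
```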

If you did that study, and you deliberately ignore all the other evidence, then it's fair to say that YOU can't conclude that smoking causes cancer.  But WE can certainly conclude it. 

Similarly, if all you know is that within the dataset of your 30 individuals, the correlation between meat and spending is low ... YOU can conclude you don't have evidence that meat can be bought.  But WE cannot, because WE have other evidence: we've been to a supermarket.  We know something about how the market for meat works.

6.  Even noting that the r-squareds jump around a bit -- and that the jumping around is statistically significant -- that doesn't necessarily mean that the relationship between money and meat has changed.  The r-squared depends not just on the relationship, but on the scattering of the values in the actual dataset. 

So an increasing r-squared could simply indicate a larger variation in overall spending.  Think about it ... if some families spend $1, and some spend $1000, it should be easier to notice the relationship between spending and meat, which means a large r-squared.  But if everyone spends exactly $100, it's going to be harder -- a lower r-squared -- even if money buys the same amount of meat as always.

So when you see a changing r-squared, you can't really be sure what's going on.  It would be better to look at the coefficient estimate of the regression equation, rather than the r-squared.
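Here's a toy simulation of that point -- the dollars-to-meat relationship is identical in both runs (all numbers made up, as usual); only the spread of spending changes, and the r-squared moves anyway:

```python
import numpy as np

rng = np.random.default_rng(2)

def meat_r_squared(spending_sd, n=30, meat_per_dollar=0.05, noise_sd=2.0):
    """Same underlying dollars-to-meat relationship; only the spread of spending changes."""
    spending = rng.normal(500, spending_sd, n)                       # dollars spent
    meat = meat_per_dollar * spending + rng.normal(0, noise_sd, n)   # pounds of meat
    return np.corrcoef(spending, meat)[0, 1] ** 2

print(meat_r_squared(spending_sd=5))     # everyone spends about the same: low r-squared
print(meat_r_squared(spending_sd=100))   # spending varies a lot: much higher r-squared
```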

In fact, for any arbitrarily low r-squared, I can construct a dataset where the coefficient is as statistically significant as you like, and meat costs any amount per pound you like.  (I thought I wrote about this fact before, but I can't find it.)
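And here's a sketch of that claim: a tiny r-squared, but a coefficient as significant as you like, just from having enough (simulated) shoppers.  The "pounds per dollar" figure is arbitrary, not a real price.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n = 1_000_000                        # a lot of shoppers
pounds_per_dollar = 0.02             # pick whatever "price" you like
spending = rng.normal(100, 10, n)
meat = pounds_per_dollar * spending + rng.normal(0, 5, n)   # lots of other variation

fit = stats.linregress(spending, meat)
print(fit.rvalue ** 2)               # r-squared around .0016 -- tiny
print(fit.slope, fit.pvalue)         # slope is recovered (~.02), p-value essentially zero
```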

7.  Even though an r-squared less than .10 may look small intuitively, it probably isn't.  A low-looking r-squared can be very important in real life.  You can't just say ".10 is small".  You have to *argue* that, in context, it's small.

If you did a regression of suicide vs. life expectancy, the r-squared would be at least as small as the ones here.  But suicide and life expectancy are most definitely linked. 

You have to interpret the r-squared for what it is.  It's not really an indicator of how easily money buys meat.  It's a measure of how well you can predict meat from money, *relative to all the other things* that help you predict meat. 

If cancer kills a million people, and suicide kills 10, the r-squared between suicide and life expectancy will be low, because suicide is being compared to cancer.  That's true even though a single suicide has a bigger effect on life expectancy than a single case of cancer.

8.  You'll notice how large the relationship really is if you look at the r, instead of the r-squared.  The square root of .17 is .41.  That means that for every standard deviation difference in money spent, you get 41 percent of a standard deviation in meat obtained.  That's a pretty strong association: if you move one inch to the right on the supermarket-spending bell curve, you move about 2/5 of an inch to the right on the meat curve.

9.  The r-squared doesn't really tell you whether meat CAN be increased by increasing supermarket spending.  It tells you how much meat WAS increased with supermarket spending.  Obviously, you'd expect an imperfect correlation.  People use money on all kinds of things -- TVs, cars, tofu, vegetables.  They get meat from sources -- their own animals, gifts, butchers -- other than supermarkets.  And, they buy different kinds and forms of meat, at various prices: steaks, hamburger, spam, TV dinners, dog food, and so on.

Given all that variation, *of course* you're going to find a less-than-100% correlation between supermarket spending and meat purchased.  That doesn't mean that there's no cause-and-effect relationship of deliberately spending more money and getting more meat, at the margin.

This is easier to understand if we look at something other than meat -- say, hair. 

Hair CAN be bought for money.  If you're bald, and you want to have hair, you can write a check to Hair Club For Men, and they'll add hair to your head.  But if you look at whether hair HAS BEEN bought for money, very little of it has -- most of it we got free, from God.  The r-squared between "hairs on head" and "money spent" is low, because most hair is not bought for money, and most money is not spent on hair. 

But if you have money, and you choose to buy hair, you'll get it.  


Same for meat.

------

And so, the bottom line is: even if we get a legitimately small correlation, you CANNOT say that "no amount of spending can buy meat."  That's exactly like noting that the correlation between shooting yourself in the head and lifespan is small, and saying, "no amount of shooting yourself in the head can change your life expectancy." 

That's just not true, because it's just not what r-squared means.

-------







(Inspiration: this Freakonomics post.)




Tuesday, October 02, 2012

HOF selection and bicycle helmets

Whenever someone makes an argument, it's usually based on a reference to some broad principle. 

We should not discriminate against blacks, because -- broad principle -- all men are equal in rights and dignity.  We need to provide medical care for the poor because -- broad principle -- we should not let anyone die because of lack of money.  We should not put people in jail for burning the flag because -- broad principle -- freedom of speech must not be abridged.

We need those principles because, otherwise, we don't have a real debate.  I say, "we should not put people in jail for burning the flag because I say so," and you say, "we *should* put people in jail for burning the flag, because *I* say so."  That's not a rational argument.

But, if you use a principle for justification, you have to stick to it.  Some people, who don't believe in gay marriage, will say, "gay marriage shouldn't be allowed because marriage is designed only to recognize relationships with the potential for procreation."  But those same people don't think that sterile people, or women past menopause, should be prohibited from getting married.  And so, they look like hypocritical idiots -- stating a principle, but having no intention of abiding by it.

(And they should look like idiots to you even if you also oppose gay marriage ... a bad argument is a bad argument.  However, people seem to have a tendency to defend people who share their views -- who are "on their side" -- even if they're saying dumb things.  This is regrettable, but not within the scope of this post.)

The point is, if someone invokes a principle, they should be held to it.  Otherwise, people will invoke a principle when it suits them, and pretend it doesn't exist when it doesn't suit them.  One of the most beautiful things about the US Bill of Rights is that it just states broad principles, and then the courts make sure that government lives by them.  If we say we believe in freedom of speech, and then Congress passes a law that violates the principle, the courts say, "You can't do that.  You're contradicting yourself.  If you really want your new law, amend your principle first."

------

Recently, Tom Tango and Joe Posnanski applied this argument to Baseball Hall of Fame voting.  They use the word "framework" instead of "principle," which I like, because it sounds less political and less confrontational.

To those who want Jack Morris in the Hall, they say, "well, by most reasonable frameworks, it appears that Rick Reuschel is more qualified than Jack Morris.  Do you also want Reuschel in the Hall?  If not, tell us what your framework is, that puts Morris in and keeps Reuschel out.  And, be prepared to live by that principle once you declare it."

Seems reasonable, right?  But, geez, the commenters didn't get it.  One commenter put together a framework, with lists of players.  Then, Tango pointed out that it ranked Pedro Martinez too low -- and the commenter got mad!

Another commenter said things are too complicated for a "facile" framework.  So?  Come up with a less "facile" one.  If things are too complicated for that commenter, that's fine -- but that doesn't mean he gets to throw out the idea of a framework and just proceed arbitrarily.  Because, you *have* to have some kind of principle.  It's like saying, "you know, it's impossible to codify a free-speech principle perfectly -- you've got shouting 'fire' in a crowded theater, how do you work that in? -- so let's just forget about free speech as a framework."

That won't do.  If it's too complicated, do the best you can, but that's not an excuse for *eliminating* principles.  Otherwise, as I said, you can't have a debate at all.

Other commenters came forward with frameworks, some of which indeed put Morris ahead of Reuschel, but they didn't bother checking their frameworks to see if they were really willing to live with the results.  Again, that misses the point.  As Tango says, paraphrasing Bill James,

"The exercise here is to force you to be consistent and end the idea of starting with your opinion and then trying to justify it. That is, you should start with the evidence, and let that lead you to the conclusion, and not the other way around."

I find it amazing that people don't get that. 

------

So, I don't think anyone has answered the question yet.  I look forward to seeing some attempts.  The requirement, for answering the question honestly, is:

(a) state your framework
(b) show all the players in the HOF by your framework
(c) show all the players *not* in the HOF by your framework

Of course, no framework is perfect.  It's perfectly OK to say, "well, there are certain exceptions that I would invoke, but I haven't yet figured out the principle by which I believe that."  If you have, say, five or six exceptions, then, great.  If you have fifty, there's something wrong with your framework.  So, maybe add (d):

(d) explain where you and the framework disagree, and why you haven't changed the framework to make them agree.

That's what answering the question really entails.

I don't have a framework for the HOF question, personally ... the one I agree with the most, so far, is the Bill James HOF standards test.  It doesn't try to say who SHOULD be in the Hall, but, rather, who IS in the Hall.  Still, it's a pretty good framework, which, I guess, it has to be, since the reporters who do the voting are generally aligned with the fans, and so the "should" corresponds pretty well to the "is".

But if you can do better, show us.

------

The HOF situation isn't the best example of failure to think things through, for a couple of reasons.  First, any actual HOF framework is going to be complicated, with so many measures of performance out there.  Second, even the fairest-minded HOF analyzers among us probably can't perfectly articulate what we're doing.  And, third, the failure to defer to the framework is obvious to the sabermetric community, since we've been dealing with the issue and the Keltner- and Morris-advocates for a long time.

So, let me give you a real life issue.

I live in Ottawa.  We have a lot of nice recreational bike paths here, that actually go to decent places, like Parliament Hill.  I ride them without a helmet.  Some of my friends are OK with that.  Some of my friends are horrified.  Some are concerned.  Some think I'm nuts.  Some think there should be a law forcing me to wear a helmet.

Are you one of those who thinks I should wear a helmet, or even that there should be a law forcing me to?

If you are, I don't agree with you.  But, try to convince me, by telling me your framework for protective-gear-wearing. 

Just like a HOF framework should explain why Jack Morris should be in and Rick Reuschel should be out, your helmet framework should be able to explain:

1.  Why cyclists should wear helmets, but not drivers or pedestrians
2.  Why cyclists should wear helmets, but not elbow pads or knee pads or stomach pads or body armor
3.  Why recreational ball players don't necessarily need protective gear in the outfield
4.  And so on.

Don't answer those questions directly: just state a principle (or set of principles) that you're willing to live by -- such that if I point out that your principle requires you to wear a bulletproof vest while jogging, you'd say either, "yeah, my principle is wrong," or "you're right, I should go buy a bulletproof vest."

This should be a lot easier than the HOF one.  It doesn't have to be perfect.  You can change your principle if need be: it's hard to get things right the first time.  Just do the best you can.
 

Personally, I do not believe that the public's advocacy for bicycle helmets is principles-based or framework-based at all.  Here's what I think, just so you see where I'm coming from:

I think a big part of the reason (many) people advocate bicycle helmets is the "availability bias" -- it's easy to imagine horrific bicycle accidents that bash in the rider's head.  I think another part of the reason is that it's socially acceptable to wear bicycle helmets, but you get made fun of for wearing helmets for other activities that are just as risky.  I think people underestimate the diversity of human preferences, and think that if *they* don't mind wearing a helmet, other people shouldn't mind either, unless they're stupid.  I think people advocate helmets to show they're thoughtful people and not the kind of dumb rubes that don't know enough to keep their heads protected.  I think people who advocate helmets don't mind them for themselves, and therefore don't mind imposing them on other people, because it doesn't cost them anything.  And, I think people just have a strong intuitive feeling that helmets are appropriate for cyclists and not for drivers, and they don't question that feeling.  I think people are just trying to Jack Morris me into wearing a helmet.

You don't have to agree with me on those, and I don't want to debate you on those.  I'm just telling you what I think, so you understand my issue better, and why you're going to need a framework to convince me. 

Also, here's one framework I reject, that I saw on some blog a while ago: that protection should be required when it doesn't change the nature of the activity.  I'm going from memory here, but ... the idea is, that if you put on a helmet, it doesn't change the activity of cycling too much, so it should be required.  But, putting on a bulletproof vest DOES change the activity too much, so not required.

I reject that for these reasons, among others: (a) certain cyclists DO think it changes the activity, and that's why some of us don't want to wear a helmet.  You'd have to say, "if Joe Blow doesn't think it changes the activity," which is obviously arbitrary.  (b) wearing a helmet when driving would seem to be on the same order of magnitude in terms of changing the activity.  So, you need to add something to exempt drivers, if you believe they should be exempted.  (c) does wearing a condom change the nature of the activity?  I think it's roughly the same principle as a helmet, so you'd have to force that on people, too.

-----

So: what's your framework for helmet wear?  Remember: I'm not looking for an argument to tell me why I should wear a helmet.  I'm looking for a framework to tell me -- and you, and everyone -- about activities and protective gear, in general.