Monday, June 27, 2016

NHL teams strategize when to play for overtime

Here's an article I found a year ago in the Journal of Sports Economics, but didn't get around to writing about until now.

It's by Michael Lopez, and it's called "Inefficiencies in the National Hockey League Points System and the Teams That Take Advantage."

As is well-known, NHL teams have an incentive to get games to go to overtime. If a game is settled in regulation, the winning team gets two standings points, while the loser gets none. However, if a game goes to overtime or a shootout, the winning team gets the same two points -- but the losing team gets one point too.

An overtime game is better for teams, in general, because they get to split three points between them instead of just two. So it's no surprise that NHL teams respond to the incentive. In the first thirteen seasons after the "loser point" rule was adopted, the frequency of overtime games jumped from 20.2 percent to 23.6 percent. (Coincidentally, it's the same 23.6 percent before and after the shootout was adopted.)

Lopez's paper was able to quantify two additional findings:

1. Games are more likely to go to overtime later in the season than earlier; and

2. Games are more likely to go into overtime when teams are not in the same conference.

These make sense, intuitively. Later in the season, some teams are fighting desperately for a playoff spot, and the extra standings point is much more important for them in terms of leverage. And, whether a team makes the playoffs depends only on the other teams in its own conference, so sharing an extra point with an other-conference opponent doesn't cost anything at all. (Well, it might, rarely, cost home-ice advantage in the finals, but that's highly unlikely.)

As mentioned, 23.6 percent of games went to overtime in the shootout era. But the overtime percentage varies substantially by situation:

25.4% Inter-conference games
23.2% Within-conference games 

22.0% September-December games
23.8% January-February games
25.6% March games
29.3% April games

The conference difference is significant only at p=.08, but the month difference is significant at p=.001.

-------

But, Lopez found, the differences are actually larger than those raw percentages, because the two situations aren't independent. As it turns out, the NHL tends to schedule within-conference games late in the season. That's for drama, so that the most meaningful, high-leverage games are likely to be against historical rivals.

Because of that, the two effects partially cancel each other out. The late-season effect tends to increase overtimes, but those games tend to be within-conference, which decreases them.

Lopez separated out those factors with a regression. Calculating from his coefficients, and assuming teams of equal talent, I get:

23.5% within conference, early in season
26.2% different conference, early in season
31.8% within conference, April
35.0% different conference, April
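
Here's a quick sketch of the calculation, for anyone who wants to follow along. The coefficients are illustrative values I back-derived from the four percentages above -- not Lopez's published estimates -- but they show how the two effects combine on the log-odds scale:

```python
import math

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Illustrative coefficients, back-derived from the four percentages above --
# NOT Lopez's published estimates.
intercept = -1.180   # log-odds: within conference, early season
b_conf    =  0.145   # inter-conference bump
b_april   =  0.417   # April bump

for conf, april, label in [
        (0, 0, "within conference, early"),
        (1, 0, "different conference, early"),
        (0, 1, "within conference, April"),
        (1, 1, "different conference, April")]:
    p = inv_logit(intercept + b_conf * conf + b_april * april)
    print(f"{label}: {p:.1%}")
```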

So, the differences are a lot bigger than the raw numbers show.

---------

Something else that's interesting: the "within conference" effect is very recent.

The overall conference effect was 2.7 percentage points (26.2 versus 23.5). But, almost the entire effect came from the last two seasons in the study. For the study's first twelve years, there was almost no difference at all, on average. But in 2010-11 and 2011-12, the conference effects were 4.7 and 5.8 percentage points, respectively.

It's like teams suddenly caught on to the idea that they don't want to give points away to conference opponents.

But ... well, it seems to me that strategy doesn't really make a whole lot of sense.

Yes, it's true that you don't get an advantage against your rival by playing for three points instead of two. It almost seems like it's worse -- if you win in overtime, you only gain one point on your opponent (you get two, they get one). But if you win in regulation, you gain two points!  Except that it's symmetrical ... if *they* beat *you* in overtime, they only gain one point on you. 

The disadvantage comes not from any negative expectation -- it's symmetrical, after all -- but from the fact that the other-conference games come with a *positive* expectation. You share three points instead of two, but your opponent's gain is not your loss, so the more points to split, the better.

So, against that particular opponent, the inter-conference overtime game is much better for you, with 50 percent more points up for grabs, and no penalty for the points the other team takes, beyond your disappointment at not getting them yourself.

The problem, though, is: that's only true for the one team you're playing against. But, you're not just competing in the standings against this one particular opponent. You're also competing against the other 12 (West) or 14 (East) teams in the conference! If you can raise the expected payoff to 1.5 points each instead of 1.0, you break even against the one same-conference opponent, but gain an expected half point against at least 12 other teams!
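
Here's that arithmetic as a quick sketch, assuming evenly matched teams:

```python
# Expected standings points per team, assuming a 50/50 game.
p_win = 0.5

exp_regulation = p_win * 2 + (1 - p_win) * 0   # winner 2, loser 0 -> 1.0 each
exp_overtime   = p_win * 2 + (1 - p_win) * 1   # winner 2, loser 1 -> 1.5 each

# Relative to your OT opponent, the extra half point cancels out.
# Relative to everyone else in your conference, it's pure gain:
print(exp_overtime - exp_regulation)   # 0.5 expected points on each of them
```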

Sure, there's *a bit* less incentive within conference, because you stand to gain on only 12 teams, instead of 13 teams when you win an inter-conference game. But, that's so small a drop in incentive that you shouldn't even see it. 

To repeat an analogy I've used in the past: If you see a $2 coin in the street, you'll pick it up. If it's only a $1 coin, sure, you're less likely to pick it up, in theory. But, in practice? You'll still pick it up so often that nobody will be able to tell the difference. It's like a 99.99% chance compared to a 99.98% chance, or something.

-------

Also: why should there be a March/April effect?  Every game counts equally in the standings. A November game is just as important for making the playoffs, on average, as an April game.

Of course, in April you *know* how important the game is, whereas for a November game, it might, in retrospect, turn out to have been meaningless. But, since games count equally, the overall leverages have to be the same. If that's the case, then for every absolutely crucial April game, there must be an offsetting meaningless one, in order for the April average to equal the November average.

I wonder if the April effect applies only to the most important games. Maybe teams are thinking, "well, we feel a bit weird lowering our intensity to play for the regulation tie, so we're only going to do it when it's really, really important."  In other words, the probability of overtime doesn't increase smoothly with leverage -- instead, it takes a big jump when the pressure to gain points is exceptionally high. 

Maybe I'm 100% willing to steal food if I'm on the brink of starvation, but I'm not 50% willing to steal food if I'm only halfway to starving. In the latter case, the risk isn't worth it.

It could be the same thing here. Maybe teams aren't willing to play a less intense strategy (or whatever they do to play for overtime) when it's an ordinary, early-season game. But, when it's *really* important, that's when it's worth the trouble.



Thursday, June 16, 2016

Consumer Reports on Sunscreens

Consumer Reports' July, 2016 article on sunscreens promises to "expose startling truths about product claims and effectiveness."  Their most important claim is this:


"43% of sunscreens in our tests failed to meet the SPF claim on their labels."

Call me skeptical, but my first reaction was to suspect that their testing wasn't quite right. 

CR and I have quite different priors about how companies operate. I believe that companies are generally honest, that dishonest manufacturers and bad products get shunned by consumers quicker than CR thinks, and that there's no way 43 percent of manufacturers are outright lying and ripping us off.

But, sometimes, it seems like CR thinks the manufacturers would cheat us from head to toe if they thought they could get away with it. 

Here's how they tested:


"A standard amount of sunscreen is applied to small areas of our panelists' backs. Then they soak in a tub of water. Afterward, each of those areas is exposed to six intensities of UVB light from a sun simulator for a set time. About a day later, the six spots are examined for redness."

Aha! CR rated sunscreens only on how they performed after water exposure, like swimming and heavy rain. They didn't include "dry" protection, or even protection after sweating. Their accusation that "almost half" the products fail to "live up to the SPF claim on their labels" is based solely on performance after a soaking period. 

It took me a while to figure that out. The article does state it explicitly, but not in the main text. There are only two places that I can see where CR mentions it.

The first is in a full-page sidebar on page 27 -- but not in the sidebar's main text. There, CR touts their "troubling" findings and demands FDA action, but doesn't mention that they report and rate only post-immersion performance. You have to look down into the smaller print, beside a couple of pie charts, to find that out. (It's in larger print on their website, which includes only a part of the full magazine article.)

The second is in the ratings. Well, not the ratings themselves -- the charts themselves are headed "Tested SPF" with no mention of water. You have to look at the footnotes.

In fairness to CR, they do note that every product makes claims of water resistance, either for 40 or 80 minutes, which is why they feel justified in testing this way. But ... they never say that the manufacturers guarantee the exact same SPF value after immersion. Isn't it possible that the manufacturers are claiming a "dry" SPF, and the products just vary in how well they perform after swimming?

That was my first guess, but, probably not. I found a website that says the law *does* require the label SPF to be met after immersion.

But, maybe the immersion tests are weaker than real life, the same way that cars usually get lower mileage than their EPA estimates. According to the website, the test involves submerging an arm in a Jacuzzi, then measuring the SPF afterwards. It could easily be that arms are not the same as backs, or a Jacuzzi is not the same as the tub CR used. 

Who knows, maybe, in their 40 or 80 minutes of soaking, CR's participants leaned their backs against the sides of the tub, and some of the lotion rubbed off? I'm betting CR was too careful to let that happen, but ... in any case, it would be nice if CR would consider that there might be something going on other than fraud. 

If it's true that the FDA does require independent lab testing, it wouldn't just be that 43 percent of manufacturers are lying, it would be that 43 percent of manufacturers are falsifying results. That's a strong accusation to be making, and, to me, not a very plausible one.

------

Even if CR's testing is perfectly accurate, why don't they mention the products' performance in other contexts? Nowhere in the article or sidebars or footnotes does CR mention, even once, how the products did in dry or sweaty conditions. Again: the failures are all about performance after soaking.

I count at least five times where CR accuses the manufacturers of overreporting the SPF, but only one of those times does CR mention "after water immersion" at the same time.

And why don't they report on dry performance at all? In a second test, they report that they "smear sunscreen on plastic plates and pass UV light through and measure the amount of UVA and UVB rays that are absorbed."  So, they actually *do* have test results for UVB effectiveness when dry. 

They report on the dry UVA numbers -- those form the basis of their UVA rating -- but the dry UVB numbers are completely ignored.

(UVB is the part of the spectrum that's relevant to SPF; UVA is a more dangerous component of sunlight that doesn't cause burning, but still can lead to damage and cancer. CR chose to use a more strenuous test for UVA than the FDA requires. They didn't accuse any products of failing to meet their UVA claims, even those that rated "poor".)

Why don't they tell us how the products performed dry? Is it because, in non-immersive use, they do indeed live up to their claims? Wouldn't it be fair to mention that? 

Why bury the important qualifier that the failures are all related to post-immersion performance? 

-------

It must be that even for the best products, water resistance isn't perfect -- that the performance of all the sunscreens is reduced after a swim. 

In that case, if a product tests at SPF 50 after immersion, doesn't that mean it must have been higher than SPF 50 beforehand? That must be the case, right? Unless some products are so completely, perfectly unaffected by water that they don't degrade at all.

But if products didn't degrade in water ... well, first, wouldn't manufacturers claim more than 80 minutes of wet protection? Second, if natural friction with water isn't sufficient to degrade the layer of sunscreen even a little bit, wouldn't that make it really hard to rub off by other means, like a towel? And, third, if there's no degradation, why are we advised to reapply sunscreen after swimming?

My guess is, if Coppertone Water Babies SPF 50 "meets [its] claim" after 80 minutes of immersion, it must have been a lot more than SPF 50 beforehand. 

Why isn't that a problem for CR, that people are buying what they think is SPF 50, but it's actually SPF 100 when dry? Probably because CR thinks higher is always better. They rate the products by actual (wet) SPF, "not how close a sunscreen comes to meeting its SPF claim."

Which is OK, I guess. The difference between 50 and 100 isn't that big a deal. The SPF's reciprocal is the fraction of UVB the product allows through, so SPF 50 means you're exposed to only 1/50 of what you'd get without it, and SPF 100 means you're exposed to a 1/100 dose. Sure, you get twice as much UVB from SPF 50 as SPF 100, but it's twice a very small amount. The difference, after four hours in the sun, is the equivalent of 2.4 unprotected minutes.
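
Spelled out as a calculation -- just the article's own logic, nothing fancier:

```python
# SPF n lets through 1/n of the UVB you'd get unprotected.
minutes_in_sun = 4 * 60   # four hours

equiv_spf50  = minutes_in_sun / 50    # 4.8 unprotected-equivalent minutes
equiv_spf100 = minutes_in_sun / 100   # 2.4 unprotected-equivalent minutes

print(equiv_spf50 - equiv_spf100)     # 2.4 -- the whole difference, in minutes
```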

Fair enough. I can't see anyone saying, "I feel ripped off that my SPF 50 is actually SPF 100. I was counting on those 2.4 extra minutes to tan better."  If you wanted to get a tan, you'd probably use something lower, like SPF 4 or 8 or 15. (That's where you'd have a problem -- if your SPF 4 is actually SPF 10 when dry, that *is* a big difference.)

In any case ... why does CR not advise, explicitly, that higher is better? They imply it, by using a "higher is better" standard in their ratings, but don't actually say, "just get a big number over a small number."

-------

Instead, CR advises you to use only products that are SPF 30 or higher. That "30" doesn't come from any first-hand evidence or argument; CR just defers to what the American Academy of Dermatology recommends.

Accordingly, CR didn't test or rate any products labelled lower than SPF 30.

What's strange is ... they don't even *mention the existence* of those other products! More importantly, they don't explain why lower SPF products exist. I'm guessing it has something to do with tanning but not burning? Except that CR doesn't talk about the difference between a tan and a burn, except that "a burn and the tan are kind of the same response ... you've received an injury that can lead to cell mutations that trigger skin cancer."

Nowhere in the article does CR mention what to do if you want to tan as safely as possible.

CR's mission statement says they "empower consumers with the knowledge they need to make better and more informed choices." But, in this case, they don't seem to care about helping us make the choice of whether a tan is worth the risk. For pale Caucasians like me, "Stay pasty white in order to be safe!" seems to be the only valid choice in their eyes.

-------

When a product failed to live up to its SPF label after immersion, CR reports the actual SPF they found, like those two kids' sunscreens that tested at 8 instead of 50.

But when a product *did* meet its claim, CR doesn't tell you the actual number. In that column on the chart, they just say, "Meets Claim".

Well, that's strange, isn't it? If an SPF 50 product tested at SPF 60 after immersion, you'd expect CR to say, "hey, this is better than advertised!"

Why don't they? Maybe it's part of their "companies are always out to rip us off" attitude, so they don't want to promote the idea that sometimes we get more than we pay for.

Or, maybe they think that if the manufacturer is only promoting SPF 50, that's all they're obligated to deliver. So, maybe this batch turned out to be SPF 60, but the next batch might not.

That would actually have some logic behind it. But that can't be CR's rationale. If it were, then, for those products, they'd base their ratings just on the label claims. But, instead, they seem to be actually basing their ratings on the full, observed SPF. 

We know they're doing that because: in the lotions chart, "California Baby Super Sensitive SPF 30+" has a "Meets Claim" and a UVB rating of "Good". But, "California Kids #supersensitive SPF 30+" also has a "Meets Claim," but a UVB rating of "Very Good". 

Clearly, they can't be basing the rating on the label "30+" claims, because then the ratings would be identical. So, they must be using the actual, observed SPF.

So, the mystery remains. Why won't they tell us about the products that deliver more than they promise?

-------

A fair bit of the article isn't about specific products, but about sunscreen use in general. And a lot of it doesn't make sense to me. For instance:


"It's not true that sunscreens with higher SPF block double or triple the rays as those with lower ones. They really only provide slightly more protection," [Dr. Mina] Gohara [associate clinical professor of dermatology at Yale School of Medicine] warns. The breakdown: SPF 30 blocks 97 percent of UVB rays, SPF 50 blocks 98 percent, and SPF 100 blocks 100 percent."

OK, sure. But you can recast that like this:


"It IS true that sunscreens with lower SPF allow double or triple the rays as those with higher ones. The breakdown: SPF 30 admits 3 percent of UVB rays, SPF 30 admits 2 percent, and SPF 100 admits only 1 percent."

Put that way, do you still want to say that higher-SPF products only give "slightly more protection?"  

Which perspective makes more sense -- 97 to 100 percent, or 1 to 3 percent? Do you care how much UVB gets rejected, or how much gets through?

Well, you do, in fact, get three times as much harmful UVB using the SPF 30 than you do using the SPF 100. Sure, it's three times a small number, and, sure, you expose yourself to *one hundred* times the harm if you don't use sunscreen at all. So, on the one hand, who cares if the SPF 30 is triple the exposure, if that exposure is small in the first place? 

But, CR isn't about to take that tack, that 1 percent or 3 percent of UVB is nothing to worry about. If they did, they wouldn't stop at SPF 30. Because, they could easily have made the same argument for SPF 20:


"The breakdown: SPF 20 blocks 95 percent of UVB rays, SPF 30 blocks 97 percent, SPF 50 blocks 98 percent, and SPF 100 blocks 100 percent."

But CR won't even consider recommending an SPF 20, even though it takes you 95 percent of the way to safety. It seems kind of arbitrary to insist that (a) 97 percent is acceptable, (b) going up to 99 percent doesn't matter much, but (c) dropping to 95 percent is too dangerous to even talk about.

-------

It's not that hard to quantify the SPF difference in other terms. 

Suppose you spend 8 hours in the sun. With SPF 50, it's like spending 9.6 minutes unprotected, since 8 hours divided by 50 equals 9.6 minutes.

With SPF 100, it's like spending 4.8 minutes unprotected. 

Either way, you're not going to get a sunburn, right? But, maybe 4.8 minutes is still harmful, to a small extent. And maybe that 4.8 minutes worth of damage is cumulative, like smoking. Maybe you don't get cancer from a one-time 4.8-minute session, but expose yourself regularly and your odds get much worse. 

So, is the damage cumulative? CR doesn't tell us. They imply that it is, but their argument doesn't show what they think it shows. They hypothetically ask, "If I've never used sunscreen my whole life ... isn't the damage already done?"  And then they respond with a completely irrelevant factoid:


"Hardly. For years, it was believed that we got the majority of our lifetime dose of UVA and UVB exposure before the age of 18. But now experts know that it's cumulative over your lifetime. By age 40, you've received just 47 percent of your lifetime sun exposure. ... The upshot: It's never too late to start protecting your skin."

That sounds like it's answering the question, but it's not. The "cumulative" here talks only about exposure, not damage.

It's more obvious if you make the same "exposure" argument for car accidents: 


"By age 40, you've driven just 40 percent of your lifetime miles behind the wheel ... so it's never too late to start wearing a seat belt."  

That's true, but that obviously doesn't mean the miles you drove beltless before you were 40 are going to cause you damage in the future.

-------

From the main discussion of sunscreen use in general, here's something that didn't make sense to me:

"Not realizing [the small differences in higher SPF values] may lead people to think that if they use a higher PDF, they don't need to reapply or practice other sun-savvy behaviors, such as seeking the shade or covering up."

Well, I'm "led to think that" because it has to be somewhat true, doesn't it, that if you use a higher SPF, you probably don't have to reapply as much? Not because the higher SPF lotion is more durable, but as it breaks down, there's more protection left over.

The standard advice is to reapply every two hours. But it's not like the protection turns into a pumpkin right at the 2:00 mark. It drops gradually. Suppose, for the sake of argument, that after two hours, you've lost half your sunscreen to friction and sweat, and your SPF 50 is an SPF 25. (This seems reasonable, since CR tells us that if we use half the recommended amount of sunscreen, we get half the SPF.)

If you had used an SPF 100 instead, then after two hours, you're still at SPF 50. That's still pretty good. 

I'd be tempted to get an SPF 100 and reapply every four hours instead of getting an SPF 50 and reapplying every two hours, just for convenience. Would that work? CR doesn't care, or won't tell me. 
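
To make the hypothetical concrete, here's a toy decay model. The "half the sunscreen gone every two hours" schedule is purely my assumption, extrapolated from CR's half-dose-half-SPF claim -- nothing CR actually measured:

```python
# Toy model: effective SPF halves every two hours of wear (my assumption).
def effective_spf(initial_spf, hours, half_life_hours=2.0):
    return initial_spf * 0.5 ** (hours / half_life_hours)

for hours in [0, 2, 4]:
    print(hours, effective_spf(100, hours), effective_spf(50, hours))

# At hour 4, the SPF 100 has decayed to 25 -- the same protection the
# SPF 50 offered at hour 2 -- which is the intuition behind reapplying
# half as often with double the SPF.
```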

But maybe the protection *does* turn into a pumpkin after two hours, after all. Maybe sunscreen drops off bodies at a constant rate, like sand drops through an hourglass -- half is gone after one hour, and the other half after the second hour.* Maybe, when hit by a photon of UVB, each particle of sunscreen absorbs it like a human shield, dying in the attempt, and there are exactly enough particles to last exactly two hours.

(* Actually, I'm pretty sure the second half of the sand in the hourglass drops slower than the first half ... for water, I think I read that the flow rate is proportional to the depth. Maybe it's different for sand. Whatever.  :))

Maybe the deterioration is even faster than constant -- like human aging. If you hire an army of teenagers to protect you, you still have 90 percent of your protection after 40 years. But then your protection gets worse, and by 80 years, your army has almost all died away. 

Would be nice to know, wouldn't it? "Reapply every two hours" sounds like an arbitrary rule, not an attempt to maximize protection given constraints of cost and convenience.

Instead of "reapply every two hours," CR should tell us, "almost all the protection disappears between the second and third hour."  Give us useful information, instead of perplexing advice.

-------

"Won't getting a tan actually protect my skin?" the article asks. And CR answers:


"A tan provides the equivalent of up to an SPF 4, but any darkening is just a sign that your skin is defending itself against a solar assault and attempting to prevent further damage."

So, yes, CR admits, a tan *does* protect my skin. But, CR says, some damage was caused in the process of getting the tan in the first place.

What they don't say is: is the tradeoff worth it? Maybe it takes me 50 hours to tan, but, as a result, my next 4000 hours only count as 1000 hours. In that case, I've saved myself the equivalent of 2950 unprotected hours by doing the extra 50 in the first place!

Of course, it depends. The act of tanning might be especially damaging. I can't get a tan with an hour a week over 50 weeks; it has to be 50 hours over a short period. Maybe those hours are especially dangerous. Or, maybe in the 4000 hours that come later, I'd have gotten a tan anyway. Or, maybe if I didn't get a tan because it was only an hour a day, those hours wouldn't be dangerous at all.

In any case, once I have a tan, I may have an automatic SPF 4. Does that mean if I use an SPF 15, I really have an SPF 60? Because, first the sunscreen lets only 1/15 of UVB get to the surface of the skin, and then the tan admits only 1/4 of what remains. That works out to 1/60. 
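
If protections really do stack multiplicatively -- again, an assumption extrapolated from how CR describes SPF, not something they confirm -- the arithmetic looks like this:

```python
# Each layer passes 1/SPF of the UVB that reaches it.
spf_sunscreen = 15
spf_tan       = 4

fraction_through = (1 / spf_sunscreen) * (1 / spf_tan)   # 1/60 of the UVB
combined_spf = 1 / fraction_through

print(combined_spf)   # 60.0
```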

It does seem like that should be how it works, based on what CR tells us about how sunscreens provide protection. Doesn't that mean that people with tans can get by with a lower SPF?

Not only does CR not admit the possibility -- they actually go out of their way to implicitly reject the idea, by warning black people to follow the same advice as white people:


"Nor are people of color immune to skin damage from the sun's rays. People of all ethnicities can get sunburn and skin cancer. "There is no circumstance under which dark-skinned people shouldn't be wearing sunscreen when exposed to the sun," Gohara adds."

Well, taken literally, Dr. Gohara's assertion is pure nonsense. Of course there are circumstances in which dark-skinned people -- or people of any other color -- don't need sunscreen. Walking 100 feet to the mailbox, say. I'm pretty sure that even Dr. Gohara herself doesn't slather on the minimum one ounce of SPF 30 to walk from her house to her car before driving to work.

And what about winter?

This sounds like I'm nitpicking ... after all, I know what she means! But it's not nitpicking, because the statement is an evasion. CR (and Dr. Gohara) are ducking out of the obligation to admit there's a tradeoff between convenience and safety, and to quantify that tradeoff. To them, wearing sunscreen is a moral obligation, like wearing a bicycle helmet or a seatbelt -- you should do it just because you should do it, because that's what right-thinking people do!

Dr. Gohara isn't just saving words by saying "there is no circumstance."  She, and CR, are trying to bully their way out of acknowledging that the risk is much lower in some situations than others, and that the sun is indeed less dangerous for darker-skinned people than lighter-skinned people. 

I mean, *of course* darker-skinned people can still get skin cancer, just like, *of course* you can still get a concussion if you fall on your head walking without a helmet instead of biking without a helmet. That's not the point. The point is: what's the risk, and is it high enough to justify the trouble and inconvenience?

A quick Google search tells me: white people have *twenty-five times* the skin cancer risk of African Americans. CR could have, and should have, told us that.

And, given that that's the case, CR should definitely be advocating less stringent requirements for dark-skinned people than light-skinned people. By any cost-benefit calculation under which CR determined that SPF 30 is good enough for white people, the same calculation would come up with a much lower threshold for black people. 

-------

Oh, and by the way: there is no mention of time of day. Ultraviolet radiation, as measured by the "UV index" you see on weather reports, is at its highest around noon -- in the early morning, or late afternoon, it looks like it drops to maybe half its peak level. Wouldn't it be OK to wear just SPF 15 if we go out in the morning? Wouldn't a walk in the evening, when the UV index is "low," be one of those non-existent circumstances where dark-skinned (and light-skinned) people can skip the sunscreen?

I'm pretty sure that if I went to the beach at 7pm, when the shadows are long, and started slathering on the sunscreen, people would look at me like an idiot. But at least Dr. Gohara would be happy with me!

How could CR do an entire article on sunscreens and sunburn and cancer without mentioning that the danger varies significantly by time of day?

-------

Are the chemicals in sunscreens toxic? CR says: no. But they hedge their bets. First, they say the risk of getting skin cancer from sunburn trumps any concerns about toxicity. And then, having spent most of the article so far telling us over and over and over that we should use sunscreen, now they tell us, well, maybe it's not a good idea to rely on it too much anyway!

It's a weird non-sequitur. It reads to me like CR admits only reluctantly that sunscreen is safe, because they're vaguely bothered by the idea that people now feel freer to lie on the beach. Because ... well, sunbathing is kind of unwholesome, and even with sunscreen it's not *perfectly* safe.

I bet if you ask CR if smoking causes ingrown toenails, they won't say "no."  They'll say, "no, but it causes lung cancer and emphysema and all kinds of other dangerous conditions, so don't smoke!"  

So, instead of just saying "sunscreen doesn't cause ingrown toenails," CR says "sunscreen doesn't cause ingrown toenails, but it could still be dangerous because you might stay out in the sun longer and not use it properly and wind up getting cancer and even if you do use it you might still get cancer even if you're black and look at all the people who use sunscreen and get cancer anyway!"

Of course, they express it in a higher-class way than that:


"Experts speculate that sunscreen use -- and misuse -- may give some people a false sense of security; people think that no matter how casually they slather the stuff on, they will be protected and therefore they stay out in the sun longer -- often without reapplying. Remember, no sunscreen blocks 100 percent of UV rays, and sun damage is cumulative."

This has absolutely nothing to do with the question of toxicity. They just want to scare us. And, having just finished telling us that sunscreen itself doesn't actually cause cancer in the normal, cause-and-effect way, they proceed to try to scare us into believing it causes cancer statistically. Their statistical arguments are appallingly bad:


"In two European studies, people who used SPF 30 sunscreen sunbathed up to 30 percent longer than those using SPF 10."

Well, duh! Phrase it this way, which means the same thing:


"In two European studies, people who chose to sunbathe longer chose SPF 30 instead of SPF 10."

People are being rational, and making reasonable tradeoffs -- but because they like sunbathing, and CR doesn't, they can't possibly be rational. 

Here's another one:


"... a number of studies have shown a correlation between the use of sunscreen and increased incidence of sunburn."

Well, duh! People who don't go out in the sun (a) don't get sunburn, and (b) don't use sunscreen. CR might as well have written,


"... a number of studies have shown a correlation between the amount of time wearing a seatbelt and increased incidence of car accidents."

This one is just as bad:


"For instance, a Danish study reported that 66 percent of sunburned people had used sunscreen to prolong time spent in the sun."

Well, yes. And probably 66 percent of people who die in car accidents had used seatbelts to prolong time spent in the car.

And, finally:


"On average, your risk for melanoma doubles if you've had more than five sunburns."

Doubles? Meh! That's nothing! You know what? Your chance of induction into the Baseball Hall of Fame probably goes up by twenty-five thousand times if you've had at least one MLB base hit!

The point, of course, being: "more than five" includes six, but also includes 50 and 100 and 1000, and people who are so sunburned that their skin looks dried up and wrinkly and creepy. CR is trying to shoehorn "six sunburn" people into the same classification as "100 sunburn" people, the same way I shoehorned "one base hit" into the same group as "4192 base hits."

Moreover ... your risk doubles compared to what? Compared to zero sunburns? Compared to average? I think it's probably compared to everyone with five or fewer sunburns, but I'm not sure. So not only is the statement misleading, it's fatally ambiguous.

The American Academy of Dermatology reports that the incidence rate of melanoma, for non-Hispanic Caucasians, is 25 cases per 100,000 population per year. That includes everyone who has had sunburns, as well as those who haven't.

For argument's sake, let's assume that the "five or fewer" group is the same size as the "more than five" group. In that case, the rates must be roughly 16 cases per 100,000 and 32 cases per 100,000, respectively.

So if you're in the latter group, and you're typical of that group -- which probably means substantially more than six sunburns, since six is the minimum -- your excess risk is 16 cases per 100,000 per year. Over 40 years, that's 640 cases per 100,000, or 6.4 cases per 1,000.

In other words, your sunburns have given you an excess risk of six-tenths of a percentage point.

The AAD reports that skin cancer is "highly curable" if detected early, with a 98 percent survival rate. So, the death rate is 50 times smaller than the incidence rate -- 12.8 deaths per 100,000 people over a lifetime of "more than five sunburns."

How risky is that? Well, it's the equivalent fatality risk you get driving an extra 12,800 miles in your lifetime. 

Suppose that the average "more than five" sunburns is 10. That means one sunburn is the equivalent death risk of driving 1,280 miles. 
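
For anyone who wants to check the chain of arithmetic, here it is in one place. The figure of roughly one traffic death per 100 million miles driven is the ballpark rate implied by the mileage equivalence above -- my assumption, not CR's number -- and everything else follows the rounded values in the text:

```python
# Back-of-envelope melanoma risk, per the rough assumptions above.
low_rate  = 16 / 100_000    # cases/person/year, "five or fewer" group (rounded; ~16.7 exactly)
high_rate = 32 / 100_000    # double the rate for the "more than five" group

excess_per_year = high_rate - low_rate     # 16 per 100,000 per year
excess_lifetime = excess_per_year * 40     # 640 per 100,000 over 40 years

deaths = excess_lifetime * 0.02            # 98% survival -> 2% of cases

miles_equivalent = deaths / 1e-8           # ~1 death per 100M miles (assumed)
per_sunburn      = miles_equivalent / 10   # if the average is 10 sunburns

print(excess_lifetime * 100_000)   # 640.0 cases per 100,000
print(deaths * 100_000)            # 12.8 deaths per 100,000
print(miles_equivalent)            # 12800.0 miles
print(per_sunburn)                 # 1280.0 miles per sunburn
```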

Coincidentally, I actually make a drive of about that distance, Ottawa to Toronto to Pittsburgh to Ottawa, twice a year. If that's all the risk I get from a sunburn, it seems pretty safe to me. It's a risk I'm certainly willing to take.

-------

Remember when I wrote about Consumer Reports' article on extended automobile warranties? They spent the entire article telling me why I shouldn't buy one, and then, at the end, out of the blue, they said I should!

They did it again here, in the context of "false sense of security":


"Don't think that reapplying sunscreen meticulously allows you to stay in the sun longer than a sunscreen's approximate maximum protection time. After you've exceeded that time, your best option is to cover up, wear a hat, and seek the shade."

Huh???

Both Dr. Gohara and CR spent the first part of the article warning me to reapply every two hours -- but now CR is telling me that reapplying doesn't work!

But reapplying *must* work, right? I mean, the physics of UVB reflection and the laws of chemistry still apply. If the SPF 30 was blocking 97 percent of ultraviolet for the first two hours, it should do the same for the second two hours, no? 

This feels like the kind of tainted, emotion-driven "logic" that a panicky, anxious parent would say. "You've spent two hours in the sun already! That's enough for one day! Come inside! Playing in the sun is dangerous, you have to do it in moderation!"

Sunbathing has a health stigma attached to it, so it feels like a vice, even when the risk is mitigated. There's no way CR would take this kind of position for activities it approves of:

"When your bicycle helmet breaks, don't think that meticulously reapplying a new helmet allows you to ride safely longer than a helmet's maximum protection time. After you've exceeded that time, your best option is to walk, drive, or seek a safer activity."

That's not any more absurd than what CR wrote about sunscreen. 





Sunday, June 05, 2016

Charlie Pavitt on "Redskins"

Here's occasional guest blogger Charlie Pavitt commenting on the use of the nickname "Redskins."  He invites intelligent responses, as always.

------

Along with others who are troubled by the commercial use of the term "Redskins" in reference to the Washington area football team, I was surprised by the results of the survey indicating that about 75% of Native American respondents did not find the term insulting, and only about 10% did (I don’t remember the exact numbers). Now one thing I have learned over my years as a social scientist is to be suspicious of surprising survey results. I automatically think in terms of biased question writing, but I did have the opportunity to read the question, which was worded something like (again memory fails) the following:

Do you or do you not find the term “Redskin” insulting?

I don’t find that biased, so I can’t quibble with the reported results on that issue.

But I wish we could find out how that sample of Native Americans would have responded to two other questions. One was suggested by a statement in response made by an activist on this issue (third memory flaw – forgotten name):

Are you or are you not comfortable with the commercial use of the term “Redskin” by the Washington area football team?

Is that not the issue that is really at stake here?  The second comes from a remark by a sportswriter made on ESPN (fourth memory failure):

Would you or would you not be insulted if a Caucasian referred to you personally as a “redskin”?

My guess here is that a large proportion would find that insulting.  And if I am correct about that guess, then the term is insulting no matter the results of the question that was actually asked.

One more issue – I am perturbed by those who defend the use of the term because of its “tradition.” In a roughly parallel circumstance, more and more people have realized that the “tradition” of flying the Confederate States flag on state government grounds is insulting, because the reason why there was a Confederate States independent of the United States was to protect the institution of slavery. (And I don’t want to hear about “states’ rights”; the reason why the Confederate states wanted their “rights” was to protect the institution of slavery.) And there is a tradition in part of Africa to mutilate female genitalia.  I could go on and on with examples, but I’ll stop here; “tradition” is no reason for continuing a bad practice.

Anyway, I do think that some of the discussion of this issue is overblown; two hundred years of ill-treatment has left the Native American nations with far worse problems than a football team’s commercial trademark. And maybe I am wrong about the answers one would get from the two survey questions suggested above. But it doesn’t matter to me. I shall continue my practice of referring to the “Washington area football team” by that title rather than their commercial trademark.  And I invite all like-minded people to do the same.

-- Charlie Pavitt
