This Willie Mays PSA really scared me as a kid.
> You protect your arms and hands legs and save your eyes. If you see a blasting cap remember now, don’t touch them. Tell a police or a fireman or whatever it is. Have fun like I do with these (baseball mitt, bat and ball) and not with these (blasting caps!!)
Anyone I’ve ever met who is exactly my age in years and grew up near Detroit remembers this with fear. I’ve often wondered who produced it. It was sponsored by the “Institute of Makers of Explosives,” which has a lovely website as well as the killer tagline: “Explosives make it possible.”
Mark “Language Log” Liberman is taking Steven D. “Freakonomics” Levitt to task for either misunderstanding the language of statistics, or the underlying statistics theory itself.
In a blog post, “Medicine and Statistics Don’t Mix,” Levitt tells the story of friends of his who spent $5,000 on Preimplantation Genetic Diagnosis (PGD), a test for the genetic viability of embryos, during fertility treatment. The test came back positive for the possibility of birth defects. The friends decided to go ahead anyway (after spending another $5,000 and getting a second positive result), and their babies were born without defects. Levitt’s takeaway: “I never trust statistics I get from people in the field of medicine, ever.”
Liberman rightly criticizes Levitt for confusing the ‘false positive rate’ with the probability that a condition holds given a test indicating the condition, known as the “positive predictive value.” Some commenters on Levitt’s blog post have pointed out this error as well.
But no one seems to ask the basic question in the current case: what is the positive predictive value of PGD? I’m not exactly sure, but this PubMed article seems to indicate it’s in the 90-95% range, which (given the hearsay chain: doctors told friends, who told Levitt, who told us) seems roughly consistent with the 10% ‘false positive rate’ Levitt cites. So perhaps the problem is a terminological one: when Levitt said ‘false positive rate,’ he really meant something like one minus the ‘positive predictive value.’
Anyway, I think the morals of the story are:
- If you have a medical test and it comes back positive, you should ask your doctor what the “positive predictive value” of the test is. This seems to be the term of art in the medical field. Although there will be lots of assumptions built into it, this number tells you the probability that you have the condition given the positive test result.
- You should also ask what the base rate of the condition is — the probability that you have the condition before the test is done.
- Before spending money on an expensive test, it might be worth asking: what difference will it make if I have the additional information? In the case of Levitt’s friends, they spent $10,000 (out of their own pockets, I think) on tests whose results they ignored.
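To see why the distinction in the first point matters, here is a small sketch of the Bayes’ theorem calculation that connects the base rate, the test’s error rates, and the positive predictive value. The numbers are hypothetical, chosen only to illustrate the arithmetic; they are not PGD’s actual characteristics.

```python
def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """P(condition | positive test), computed via Bayes' theorem.

    base_rate: P(condition) before testing
    sensitivity: P(positive test | condition)
    false_positive_rate: P(positive test | no condition)
    """
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Hypothetical example: a condition with a 1% base rate, tested with
# 99% sensitivity and a 10% false positive rate.
ppv = positive_predictive_value(base_rate=0.01,
                                sensitivity=0.99,
                                false_positive_rate=0.10)
print(f"PPV: {ppv:.0%}")  # prints "PPV: 9%"
```

The point: even with a seemingly modest 10% false positive rate, a rare condition means most positive results are false alarms, so the PPV (9% here) is nowhere near 90%. That is exactly why the false positive rate and the positive predictive value must not be conflated.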
Dear statistical and medical friends: tell me if and how I’m wrong.
I talked to my brother Dave on the phone today, and we agreed that we haven’t played basketball in 300 years. But we have a hoop in the backyard, and I’ve decided to try to get ‘good’ at shooting foul shots. The advantage of foul shots, of course, is that you don’t have to move very much.
My first try was abysmal: 11/50. Yesterday was worse: 10/50. Today, almost, but not quite, not terrible: 19/50.
This is the sort of thing Twitter was invented for.
My co-minions at Powerset have been growing (or faking growing) facial hair in the run-up to our beta launch. And, of course, it has its own blog: Powerstache.com. It’s even made the LA Times, and Michael Arrington’s initial take on our beta product mentions it too.
Chris pretty much wins, though.