In a random act of sanity, the Bernards (New Jersey) Board of Education narrowly rejected random drug testing of its students last Monday.
Before I step into the whirling void of passion that passes for rational discourse, let me preface this post with a few things that should be (but are not) needless to say:
1) I do not condone the use of recreational drugs by minors.
2) Ethanol (alcohol, hooch, whatever) is a drug.
3) Nicotine (butts, bogeys, whatever) is a drug.
4) Caffeine (java, joe, whatever) is a drug.
5) I accept the use of recreational drugs (ethanol, nicotine, caffeine) by adults. I'm not saying it's smart.
I am a retired board-certified pediatrician--while that does not make my views sacrosanct, cut me a little slack. I've seen the damage drugs can do. I've also seen the damage thoughtless drug screening can do.
The American Academy of Pediatrics (AAP) opposes involuntary drug screening of adolescents. The AAP certainly does not support young adults hanging out behind the local Wawa sharing a spliff.
You can go all over the web reading the pros and cons of drug screening, and I hope you do. I want to focus on just one piece of the argument, but it's a big one, and one not well understood by many physicians, never mind the general public.
It involves mathematics, and it will turn common sense on its head. So break out your abacus, bear with me, and learn why even a good test can be a lousy one in certain situations.
To understand testing, you need to know a couple of terms.
Sensitivity in a drug screen is the percentage of students actually using drugs who come out positive on the test. If 99% of the kids smoking weed behind the Wawa are found positive by the test, the test is 99% sensitive.
Specificity in a drug screen is a little more complicated--it tells you what percent of the students not inhaling are correctly identified as not inhaling. If a test is 99.9% specific and you're not using the drugs tested, there is only a 1 in a thousand chance you will be wrongly identified as a drug user.
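If you'd rather see those two definitions as arithmetic, here is a minimal sketch in Python; the counts are invented purely for illustration, not taken from any real screening program.

```python
# Sensitivity and specificity from hypothetical counts.

true_positives = 99    # users the test correctly flags
false_negatives = 1    # users the test misses
true_negatives = 999   # non-users correctly cleared
false_positives = 1    # non-users wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.1%}")  # 99.0% -- catches 99 of 100 users
print(f"Specificity: {specificity:.1%}")  # 99.9% -- wrongly flags 1 in 1,000 non-users
```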
So, you think, if a test is 99% sensitive and 99% specific, it's a pretty good test. And it is.
Then you say, well, hey, if you test positive, then I can be 99% sure you are using the drugs.
And you'd be 100% wrong.
Huh?
What a positive test actually tells you depends on what percent of the population is actually using drugs.
Let's suppose you just developed the Spiffy Spliff Test, a cheap, amazing screen that is 100% sensitive and 99% specific.
You need to get FDA approval, so you are looking for a benevolent group to donate their time and urine to your fine nonprofit company incorporated solely to save the yewt of America.
Let's say an order of monks lives on an atoll in the middle of the Pacific. They use drug-sniffing dogs to keep any marijuana from coming onto their island. Still, one of the monks has end-stage cancer, so he has a prescription for medical marijuana, which he uses, um, religiously.
This is a pretty popular place for monks. 10,000 monks live here, and one is a known marijuana user. Let's say for the sake of argument that not one of the other monks has used marijuana in the past decade.
Now let's test them. The test is 100% sensitive, so the one monk using reefer gets identified as such. So far, so good.
There are 9,999 more monks to be tested. If the test is 99% specific, then 1% of the remaining monks will be falsely identified as using mary jane.
1% of a big number is still a lot of monks--about 100 of the remaining 9,999 will test positive.
So now we have 101 positive tests, and only 1 monk has truly used grass. Despite a test that's 100% sensitive and 99% specific, the vast majority (over 99%) of those that tested positive have never used ganja.
What if the test is 99.9% specific? Well, then about 10 monks will be falsely positive. For every true positive (the cancer-stricken monk), we have 10 monks on the verge of getting kicked out of the monastery for "wrong" results.
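If you'd like to check my arithmetic, here's a small Python sketch that runs the monastery numbers at both specificities. Everything in it comes from the hypothetical scenario above, not from real data.

```python
# The monastery example: 10,000 monks, exactly 1 true user,
# a test with 100% sensitivity, at two different specificities.

population = 10_000
true_users = 1
sensitivity = 1.00

for specificity in (0.99, 0.999):
    non_users = population - true_users
    true_positives = true_users * sensitivity         # the 1 monk with the prescription
    false_positives = non_users * (1 - specificity)   # roughly 100 or 10 innocent monks
    ppv = true_positives / (true_positives + false_positives)
    print(f"specificity {specificity:.1%}: "
          f"{false_positives:.0f} false positives, "
          f"chance a positive result is real: {ppv:.1%}")
```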
I know this is counterintuitive. Still, in order for a positive test to mean much, you need a fairly high proportion of the monks to be hanging out behind the Wawa.
What if 20% of the monks are potheads? Let's crunch the numbers again.
20% of 10,000 is 2,000, so right off the bat we have a couple thousand true positive tests. That leaves 8,000 monks. If the test is 99% specific, then 1% of those 8,000, or 80, will test falsely positive. In this case, only 80 of the 2,080 positive tests (about 4%) are false positives.
Same test, drastically different odds that a positive result is wrong.
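If you want to twist the knobs yourself, here's the same calculation with the prevalence as a parameter; again, just a sketch built on the made-up numbers in this post.

```python
# Same test, different prevalence: how the share of actual users in the
# population changes what a positive result means.

def positive_predictive_value(prevalence, sensitivity=1.00, specificity=0.99):
    """Probability that a positive screen reflects a true user."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prevalence in (1 / 10_000, 0.20):
    ppv = positive_predictive_value(prevalence)
    print(f"prevalence {prevalence:.2%}: a positive test is right {ppv:.0%} of the time")
```

Run it and the trustworthiness of a positive result jumps from about 1% to about 96% as the prevalence climbs from 1 in 10,000 to 1 in 5.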
Take home message? The predictive value of a drug screening test, even a really good one, depends on how many kids in the population are actually using drugs.
Until people can wrap their heads around the testing, urine belongs in a toilet, not a test tube.