
Spike in Disease Does Not Always Mean an Epidemic

By Roy Richard Grinker

Washington Post, Health Section, A4, October 30, 2007

When my wife was in labor in 1991, the doctor attached an electronic fetal monitor to her belly to record her contractions and the baby’s heart rate. I remember being so transfixed by the monitor that I forgot about my wife. At one point, she said, “Wow, it hurts.” But because the graph on the monitor hadn’t risen above the level that signals a contraction, I said, “No, it doesn’t.”

Technology and numbers, like husbands, are never perfect.

Every day, Americans are confronted with graphs, trends, averages, and percentages, and few of us have the expertise to figure out what they all mean. “There is terror in numbers,” wrote Darrell Huff and Irving Geis in their popular book How to Lie with Statistics. They note, for example, that if a small increase in the incidence of a disease over, say, a five-year period is projected out twenty or thirty years at the same rate of increase, the disease suddenly becomes an “epidemic,” a powerful word that evokes fear, alarm, and a sense of danger.
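To see how that projection trick works, here is a rough back-of-the-envelope sketch in Python. The baseline of 1,000 cases and the 5 percent annual rise are invented for illustration; they are not figures from Huff and Geis.

```python
# A minimal sketch (hypothetical numbers) of how a modest increase, projected
# forward at the same rate, starts to sound like an epidemic.
def project(cases_now: float, annual_growth: float, years: int) -> float:
    """Compound a fixed annual growth rate over a number of years."""
    return cases_now * (1 + annual_growth) ** years

baseline = 1_000                             # hypothetical cases today
growth = 0.05                                # a 5% annual rise, observed over five years
print(round(project(baseline, growth, 5)))   # ~1,276 cases after 5 years: unremarkable
print(round(project(baseline, growth, 30)))  # ~4,322 cases after 30 years: "an epidemic"
```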

Indeed, our child, who was born on the day I incurred my wife’s wrath, became part of the so-called autism epidemic. The autism rate (the number of cases divided by the total population) has gone from fewer than 4 in 10,000 in 1990 to more than 66 in 10,000 in 2007. The statistics have generated widespread concern and considerable social traction: a proliferation of advocacy organizations, scores of new, unproven therapies, and thousands of lawsuits filed by parents who believe the government’s vaccine program is responsible for the higher rates of autism. Between 1997 and 2007, when government funding for most diseases was unchanged, autism funding at the National Institutes of Health increased from $22 million to $108 million.

But numbers and rates rise for many reasons. There may actually be more of the disease, but several other factors may also be at work, often in concert: greater awareness, better detection, a broadening of the criteria for what qualifies as a particular disease, even better record-keeping and statistical methods.

Panic at an Atomic Weapons Laboratory

In 1979, Dr. Stuart Gunn, a chemist at the Livermore Nuclear Weapons Lab in California, died of melanoma, the deadliest form of skin cancer. Just a year later, the nation’s major newspapers reported that the rate of melanoma for Livermore employees was five times higher than for residents of the surrounding area. Fears of an epidemic spread throughout the lab, despite the fact that melanoma had never been linked to any occupational exposure. As it turned out, the increase was an illusion. In response to Gunn’s death, scores of employees had gone to doctors to have their skin examined, and a few had thin, curable lesions, probably in proportions no higher than would have been found among non-employees in the community, had they bothered to see a dermatologist.

More doctor visits, more biopsies, more cases detected, and pretty soon you had an epidemic. Or at least, it looked like an epidemic.

Dermatologists report a 273% increase in the rate of melanoma over the last decade. Some are convinced there is a true rise in its incidence, a steady increase that began in the 1940s, when it became fashionable to have a tan. But others, like Dr. H. Gilbert Welch of the Veterans Administration Outcomes Group and Dartmouth Medical School, believe that melanoma is simply being diagnosed earlier: patients are more aware of the dangers of the sun, they are more likely to see doctors on a regular basis, and malpractice insurance companies encourage doctors to order more and more lab tests and even to make diagnoses that justify additional tests. The earlier the diagnosis, the more cases there will be at any particular point in time. Welch notes, “Doctors are punished for undertesting and underdiagnosing but are rarely punished for overtesting.”

So who is punished? “The patients,” Welch says, “because they may get unnecessary and invasive tests with all the side effects that go along with them.”
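A rough way to see why earlier diagnosis alone inflates the count: at any given moment, the number of people carrying a diagnosis is roughly the number of new cases per year times the number of years each person carries the label. The Python sketch below uses invented numbers, not melanoma data.

```python
# Hypothetical illustration: earlier diagnosis lengthens the window during which
# a person counts as a "case," so prevalence rises even if true incidence is flat.
new_cases_per_year = 500     # assumed constant underlying incidence
years_diagnosed_late = 4     # diagnosed late in the disease: carries the label 4 years
years_diagnosed_early = 7    # diagnosed 3 years earlier: carries the label 7 years

prevalence_late = new_cases_per_year * years_diagnosed_late     # 2,000 cases at any moment
prevalence_early = new_cases_per_year * years_diagnosed_early   # 3,500 cases at any moment
print(prevalence_early / prevalence_late)                       # 1.75x more cases, same disease
```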

Other illnesses are on the rise as well: hypertension, Alzheimer’s disease, cervical cancer, thyroid cancer, prostate cancer, autism, and bipolar disorder. The list goes on. But do these changes in numbers mean that there is really more of the disease? Many scientists believe that what happened in Livermore with skin cancer is happening with other illnesses.

Better and Earlier Detection

Between 1987 and 1992, the incidence of prostate cancer increased 85%. Why? Because something dramatic happened in 1987: doctors started using a simple blood test to screen men for prostate cancer by measuring their levels of prostate-specific antigen (PSA), a marker for the disease. Almost overnight, more cases – more early-stage cases – were detected. Before then, cases were detected only if a man received a TURP (transurethral resection of the prostate), a procedure in which a small telescope is inserted into the prostate through the penis and pieces of the prostate are chipped away for analysis. Not surprisingly, doctors performed the procedure only when truly necessary. Then, just as suddenly, the reported incidence of prostate cancer began to drop. Between 1992 and 1996, it fell 29%.

According to Thomas M. Pisansky, Professor of Oncology at the Mayo College of Medicine in Rochester, Minnesota, “This didn’t mean there was necessarily more disease during the rise or less disease during the decline.” Pisansky says, “Most researchers agree that the rise was due to the PSA and the decline was the result of having diagnosed all the previously undiagnosed men with prostate cancer until we could achieve a more stable rate.” What looked like an epidemic of prostate cancer was, in fact, major progress in early detection.
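What the PSA story describes can be sketched as a backlog being worked through. The toy simulation below, with all numbers invented, assumes a constant rate of truly new cases plus a pool of previously undiagnosed men; once screening begins, reported incidence spikes and then falls back toward the steady rate.

```python
# Toy simulation of screening: reported incidence = new cases + backlog cases found.
new_cases_per_year = 100   # assumed constant underlying incidence
backlog = 300              # undiagnosed cases accumulated before screening began
detection_rate = 0.5       # fraction of the remaining backlog found each year

for year in range(1, 7):
    found_in_backlog = backlog * detection_rate
    backlog -= found_in_backlog
    print(f"year {year}: {new_cases_per_year + found_in_backlog:.0f} reported cases")
# year 1: 250, year 2: 175, year 3: 138 ... drifting back toward the steady 100
```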

Lowered Thresholds

Lowering the threshold for diagnosis can quickly change the prevalence of a disease. Take hypertension (high blood pressure), a condition that affects more than 50 million Americans. Hypertension awareness campaigns since the 1960s have lowered mortality from coronary heart disease and stroke, but the prevalence of hypertension has risen over the last ten years – possibly because of diet and behavior, but also because the threshold for what counts as hypertension has been lowered. In 2003, the Joint National Committee on Hypertension reviewed reports showing that individuals with a diastolic pressure at the high end of what was then called “normal” (85-89 mm Hg) were at risk of developing hypertension-related disease and disability, and concluded that they should be called “prehypertensive.” The result was that physicians began to treat many such previously normal patients for hypertension, and in insurance and medical records they were coded the same way as someone with much higher blood pressure. The number of diagnosed cases of hypertension thus rose considerably.

The average patient with what was newly considered “high blood pressure” also fared better because the pool of patients with hypertension now included people who were previously considered normal. This might also help explain part of the drop in mortality: the average patient with the diagnosis was now healthier.
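A stylized example of both effects, using made-up diastolic readings rather than any real cohort: lowering the cutoff from 90 to 85 mm Hg enlarges the diagnosed group and, at the same time, makes its average member milder.

```python
# Invented diastolic pressures (mm Hg) for eight hypothetical patients.
readings = [78, 82, 86, 88, 91, 95, 102, 110]

old_pool = [r for r in readings if r >= 90]   # diagnosed under the old threshold
new_pool = [r for r in readings if r >= 85]   # diagnosed once "prehypertension" is treated alike

print(len(old_pool), sum(old_pool) / len(old_pool))   # 4 patients, average 99.5 mm Hg
print(len(new_pool), sum(new_pool) / len(new_pool))   # 6 patients, average ~95.3 mm Hg
```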

Thresholds have dropped for many other common diseases, like obesity and diabetes, and the criteria for disease classifications have broadened. Autism, for example, once narrowly defined, is now used to describe a wide spectrum of severity, from the profoundly mentally retarded person to the socially awkward mathematics professor.

Better Methods of Counting

The methods one uses to count also affect the numbers. If, for example, you count the number of cases of a disease by examining insurance records, you miss all the people who do not have insurance (more than 40 million Americans). If you count cases through health care providers, you miss all the patients who never sought treatment. If those patients are minorities, immigrants, and others for whom there are significant disparities in access to care, the prevalence of the disorder will appear lower in those populations. One reason for the higher rates of many diseases is that researchers are being more thorough in their methods, and many of the records they analyze are computerized and better organized. They try to leave no stone unturned.

For example, the Centers for Disease Control (CDC) recently studied multiple sources looking for autism cases and found the highest rates of autism to date.

Still, the numbers were misinterpreted by the media. By searching through medical and educational records, the CDC found that the proportion of children with autism in New Jersey was nearly four times higher than in Alabama. The most likely explanation for this disparity is that Alabama lags far behind New Jersey in providing medical and educational services for autism. Without services, many autistic people in Alabama could not be counted because there was simply no sign of them in the kinds of records the CDC analyzed. When the numbers were released in early 2007, the New Jersey papers were filled with alarming articles about the epidemic, but the statistics could easily have been interpreted as confirmation of how much New Jersey is doing for children with autism.
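The arithmetic behind the gap between New Jersey and Alabama, and behind record-based counting generally, can be sketched with made-up numbers: the observed rate is the true rate multiplied by the share of cases that actually show up in the records being searched.

```python
# Hypothetical sketch: the same underlying prevalence looks very different
# depending on how completely the searched records capture the cases.
population = 100_000
true_cases = 660                      # assume a true prevalence of 66 per 10,000

def observed_rate(coverage: float) -> float:
    """Rate per 10,000 when only a fraction of true cases appear in the records."""
    return 10_000 * true_cases * coverage / population

print(observed_rate(0.90))   # ~59 per 10,000 where services and records are extensive
print(observed_rate(0.25))   # ~17 per 10,000 where many cases never enter the records
```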

Despite all the tragedies we hear about in the news, our world is actually safer than it has ever been. Yet we live in dread of epidemics and anxiously await the release of the latest figures from the country’s health care leaders. Some doctors, like Welch, are worried about an epidemic of diagnoses. He says, “Epidemics of diagnoses can lead to epidemics of treatments, not all of them safe or beneficial.” Ironically, many of our fears are the result of the knowledge generated by the many real advances in medicine. So the next time you see statistics documenting the increase of a disease, take at least a moment to consider whether they may be evidence not of harm, but of good.


©2007 Roy Richard Grinker