“Epidemic” is one of the most overused words in popular science. There is a new one everywhere you turn. A single Google search reveals “epidemics” of rumors, loneliness, absence, infallibility, and even, curiously, fire. We have an epidemic of epidemics. Yet using the term to describe the explosion of attention-deficit hyperactivity disorder (ADHD) diagnoses over the last decade feels completely reasonable. By some estimates, 11% of American children have an ADHD diagnosis,1 which is more than double the prevalence from the 1970s.2 “Epidemic” almost doesn’t feel grand enough.
The literature is saturated with debate over the provenance of the surge in ADHD diagnoses. Some think that we are doing a better job of recognizing legitimate cases, whereas others believe that physicians have been seduced by a diagnosis of convenience that is tacitly encouraged by our education system and explicitly sold by the pharmaceutical-industrial complex. Still other physicians are certain that the unique pressures of contemporary society made the marked increase in these diagnoses all but inevitable.
I’m not certain what the right answer is, and don’t think that anyone should be, but all of those possibilities sound more or less like reasonable explanations to me. What is less reasonable, I think, is the implicit assumption that we need to determine the actual rate of ADHD diagnoses precisely and then guard against overdiagnosis vigilantly. Could it be the case that we would be better off overdiagnosing (and even overtreating) in an attempt to make sure that we don’t miss even one legitimate diagnosis?3 The question is not just academic. The frequency with which we fail to uncover patients with ADHD (or any other condition) is, statistically speaking, quantified by the sensitivity of our methods. Roughly, a high level of sensitivity means that we have very few “false negatives” — people whose legitimate diagnoses are missed. However, it does not necessarily make sense to strive for perfect sensitivity because high sensitivities are invariably linked with higher rates of false positives — people who are diagnosed with the condition even though they do not have it.
The danger here is made apparent by the extreme case — we could easily achieve perfect sensitivity for ADHD by diagnosing every child with the condition, but in that case, many children will be falsely diagnosed and fed an unnecessary cocktail of expensive stimulants as a consequence. We can say confidently that outcome is not ideal, but how can we say confidently what is?
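The trade-off between missed cases and false positives can be made concrete with a toy calculation. This is a sketch built on invented numbers: a hypothetical population of 10,000 children, the 11% prevalence figure from the text, and made-up test operating points; none of it reflects any real ADHD screening instrument.

```python
# Toy screening arithmetic with hypothetical numbers: 11% prevalence in a
# population of 10,000 children, and invented test operating points.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_pos, false_neg, false_pos, true_neg) counts."""
    affected = population * prevalence
    unaffected = population - affected
    true_pos = affected * sensitivity        # legitimate cases we catch
    false_neg = affected - true_pos          # legitimate cases we miss
    false_pos = unaffected * (1 - specificity)  # healthy children misdiagnosed
    true_neg = unaffected - false_pos
    return true_pos, false_neg, false_pos, true_neg

# Hypothetical operating points: higher sensitivity bought with lower specificity.
for sens, spec in [(0.80, 0.95), (0.95, 0.85), (1.00, 0.00)]:
    tp, fn, fp, tn = screening_outcomes(10_000, 0.11, sens, spec)
    print(f"sensitivity={sens:.0%}  missed cases={fn:5.0f}  false positives={fp:5.0f}")
```

The last operating point is the extreme case from the text, diagnosing everyone: missed cases fall to zero, but the false positives swell to the entire unaffected population of 8,900 children.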
The assumptions that we insert into debates about medical policy rise, we hope, from a deeper normative understanding of medical practice, which is to say that we are motivated by the overarching goals of medicine. However, it is not immediately obvious where these principles come from. Optimistic philosophers would argue that, ideally, our medical goals are grounded in ethics, but even then clarity is hard to come by, because philosophy provides us with a variety of ethical rubrics to guide our decisions. The obvious ethical underpinning for medical decisions — first, do no harm — is right there in the oath that we are made to recite when we finish medical school. Following the edicts of our oath is our duty. The idea that this duty amounts to a guiding obligation is what philosophers refer to as deontologic ethics.
It sounds great — nobody goes into medicine intending to hurt others — and seems fairly straightforward. The application can get tricky, though. If, for instance, we think that the treatments for ADHD (including both the drugs we give patients and the stigma attached to the diagnosis) are completely benign, then “do no harm” suggests that we should push for the absurd scenario where every single person is tagged with a diagnosis — after all, if the treatment is benign, then the only harm is in leaving someone with a genuine diagnosis untreated. On the other hand, if we agree that the treatments have side effects (as all treatments do), then are we compelled to consider treating nobody for fear of harming any person with needless side effects? I sure hope not.
A strict reading of that deontologic commitment obviously leads to unjustifiable results, in part because its absolutist language does not make room for comparing and balancing competing priorities. The recognition that a robust system of medical ethics requires something that approaches holistic thinking suggests that utilitarian ethics might be more useful. Utilitarianism strives to maximize the total amount of happiness or benefit or, in our case, health, over the entire affected population. A utilitarian analysis of the ADHD problem would demand that we assign a numeric value to the outcomes associated with true-positive (that is, a person properly diagnosed with ADHD), false-positive, and false-negative outcomes.
After multiplying each value by the size of the group, we would then determine the sensitivity level that maximized the value. The trouble is that, in addition to the difficulty associated with figuring out the number of people in each group accurately, it would be almost impossible to assign a value describing the benefit or harm done to each of those groups. A big part of the issue is that each individual patient would assign a different value to the benefit or harm that they had experienced under any given circumstances. Any attempt we make at precision using this framework would be an exercise in self-delusion.
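The multiply-and-sum step itself is trivial; a minimal sketch, using deliberately invented utility values (the very inputs the text argues cannot honestly be assigned), might look like this:

```python
# A utilitarian tally over hypothetical operating points. The per-person
# utility values below are invented purely for illustration; the article's
# point is that there is no honest way to pin them down.

POPULATION = 10_000
PREVALENCE = 0.11  # hypothetical 11% figure from the text

# Invented per-person utilities (benefit positive, harm negative).
UTILITY = {"tp": +10, "fn": -15, "fp": -5, "tn": 0}

def total_utility(sensitivity, specificity):
    """Multiply each group's size by its assigned value, then sum."""
    affected = POPULATION * PREVALENCE
    unaffected = POPULATION - affected
    counts = {
        "tp": affected * sensitivity,
        "fn": affected * (1 - sensitivity),
        "fp": unaffected * (1 - specificity),
        "tn": unaffected * specificity,
    }
    return sum(UTILITY[k] * n for k, n in counts.items())

# Compare hypothetical operating points and keep the one that maximizes value.
operating_points = [(0.80, 0.95), (0.95, 0.85), (1.00, 0.00)]
best = max(operating_points, key=lambda point: total_utility(*point))
```

Under these made-up utilities the "diagnose everyone" point scores worst, because 8,900 false positives swamp the gains; with different invented numbers a different point would win, which is exactly the self-delusion the passage warns against.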
Instead, we need to think about ethical problems in a way that allows us to balance competing priorities and also leaves room for personalized decision making. There is no single, absolute correct answer — that is undoubtedly the most important takeaway from this discussion — but I think a good place to start might be with what philosophers call “contractualism.”
Contractualism implies that an action is right if the principle permitting it could not be reasonably rejected by a group of people concerned with establishing general ethical principles. Stated differently, this suggests that a physician’s decision is ethically defensible if a principle underlying that decision could not be reasonably rejected by a group of physicians concerned about medical ethics. Fortunately for us, physicians have already agreed on a basic set of ethical principles — autonomy, nonmaleficence, beneficence, justice, and perhaps a few others. Therefore, if a physician’s decision to prescribe pharmaceuticals to a patient for ADHD is supported by one of those handful of ideas, then it is ethically justifiable. Contractualism implies that this analysis has to occur on an individual, which is to say patient-by-patient, basis. Population-level data have little effect on the analysis. As it turns out, if you are trying to make an ethical decision, it doesn’t matter if there is an epidemic or not.
1. Attention-Deficit/Hyperactivity Disorder (ADHD): Data & Statistics. US Centers for Disease Control and Prevention. https://www.cdc.gov/ncbddd/adhd/data.html. Updated November 17, 2017. Accessed December 18, 2017.
2. Timeline of ADHD prevalence, medications, and diagnostic criteria from 1990s to current. US Centers for Disease Control and Prevention. https://www.cdc.gov/ncbddd/adhd/documents/timeline.pdf. Accessed December 18, 2017.
3. Davis JE. ADD for all. The New Atlantis. 2017;52:100-110.
This article originally appeared on Medical Bag