Direct-to-Consumer Advertisement of Antidepressants; or, Why
Doctors Should Remember Bayes' Theorem
In the latest (4/27/2005) issue of the Journal of the American Medical Association, there is a research paper entitled "Influence of Patients’ Requests for Direct-to-Consumer Advertised Antidepressants."
The authors' intent was to investigate the effect of patients' requests for antidepressants on the prescribing practices of primary care physicians. The study was conducted by having actors go to doctors' offices and simulate patient encounters. The authors arrive at the unsurprising conclusion that direct-to-consumer (DTC) advertising of antidepressants can have an effect on the prescribing practices of primary care physicians.
This is one of those studies that, after reading it, leaves one with the impression that the researchers have proven something, although it is not clear exactly what was proved.
The idea is this: simulated patients went to doctors' offices and mentioned a list of symptoms. In some cases, the list was
intended to mimic Adjustment Disorder; in others, Major
Depression. In both types of simulated encounter, actors
presented one of three scenarios. In some, they mentioned
having seen an advertisement for a particular antidepressant; in
others, they made a general request for an antidepressant, without
mentioning a specific drug; in the third scenario, they did not mention
a specific drug and did not request an antidepressant. Thus,
there were six types of simulated encounters. The numbers in
each cell indicate the percentage of times that an antidepressant
prescription was written.
|                                      | Actor mentions specific drug | Actor makes general request for drug | Actor makes no request |
| Actors mimicking depression          | 53%                          | 76%                                  | 31%                    |
| Actors mimicking adjustment disorder | 55%                          | 39%                                  | 10%                    |
There are limitations to the study. One major methodological
problem is that the study was not a double-blind study. The
actors knew that they were not real patients, although the doctors did
not know that the simulated patients were actors. (The
doctors did know that some of the patients that they would be seeing
during the study period would be actors, but not which ones.)
Another limitation is the type of control. The control in
this study is provided by the encounters in which patients mimicked a
condition similar, but not identical, to depression. The
intent was to show what would happen if the "patients" could not really
benefit from an antidepressant, but asked for one anyway. The
fact is, some patients with adjustment disorder might benefit from a
prescription.
The patients who mimicked having adjustment disorder had been
instructed to report symptoms of low back pain, feeling stressed, and
having occasional insomnia. Those are all nonspecific
symptoms, but a patient with depression very well might present in the
office with exactly those symptoms. Those asked to mimic
depression had been told to report wrist pain and feeling
“down” for a month or so, loss of interest in
activities, fatigue, low energy, poor appetite and poor
sleep. That is a more specific list of symptoms, but is not
necessarily conclusive. Thus, the doctors were in the
position of having to distinguish between two shades of gray, which is
more difficult than distinguishing black from white.
The most obvious conclusion from the numbers is that people who ask for
a drug are much more likely to get one than those who do not.
What is mentioned in the report, but not shown in the table, is that most of the time, when actors mimicked depression, they got an appropriate intervention: any combination of an antidepressant, mental health referral, or follow-up within 2 weeks. Interestingly, the rate of appropriate intervention was a bit lower (90%) in the instances in which a specific request was made, compared to 98% in the instances in which a general request was made. Somewhat disappointing is the fact that, when depression was simulated but no drug request was made, an appropriate intervention occurred in only 56% of encounters. In my opinion, that is one of the major
findings of the study, even though it was not one of the objectives.
The other really interesting finding is that the pseudo-depressed
actors were more likely to get a prescription if they made a general
request, rather than asking for a particular drug; the opposite was
true for the pseudo-adjustment-disorder actors.
The authors' conclusions were as follows:
Patients’ requests have a profound effect on physician
prescribing in major depression and adjustment disorder.
Direct-to-consumer advertising may have competing effects on quality,
potentially both averting underuse and promoting overuse.
I actually agree with these conclusions, although I am not sure that
the study really proves the point.
The study has been picked up by several mainstream news organizations and some specialty services (1, 2, 3, 4, 5), and at least one blogger.
The Los Angeles Times (LAT) article mentions one point, in particular, that I would like to
amplify.
"There's a whole lot
of medicine that is practiced in the gray zone,"
where social influences matter as much as clinical findings, said Dr.
Richard Kravitz, a professor of medicine at UC Davis and lead author of
the study.
Depression can be difficult to diagnose, and many people resist
the possibility that an illness may be mental. An openness to trying an
antidepressant appeared to be an important cue to the physicians,
Kravitz said.
I'm not sure what the author meant; the second paragraph contains a
confusing mix of messages. The last sentence, though, is the
important one. Here's why: When the doctor is trying to
decide what to do, he or she first will formulate an hypothesis, then
test the hypothesis. Some of the testing is conscious; some
not. Once the doctor starts thinking 'maybe this patient is
depressed,' she or he will then sift through the patient's presentation
and look for clues. On a conscious level, such clues would
include the symptoms that the patient reported. Perhaps, on
an unconscious level, the fact that the patient asked for an
antidepressant makes the diagnosis seem more likely. I am not
aware that this has been studied, specifically, but I suspect rather
strongly that if it were tested, it would be found to be
true. That is, if you videotaped a zillion real patient
encounters, and looked to see if the patients who asked for an
antidepressant were more likely to be depressed, I think you would find
that it is the case. Unfortunately, the influence of DTC
advertising of pharmaceuticals may degrade the usefulness of that as a
diagnostic criterion. Readers who are really interested in this concept may want to read an explanation of the mathematical basis for this phenomenon, An Intuitive Explanation of Bayesian Reasoning:
100 out of 10,000
women at age forty who participate in routine screening have breast
cancer. 80 of every 100 women with breast cancer will get a
positive mammography. 950 out of 9,900 women
without breast cancer will also get a positive mammography.
If 10,000 women in this age group undergo a routine screening, about
what fraction of women with positive mammographies will actually have
breast cancer?
The correct answer is 7.8%, obtained as follows: Out of 10,000 women, 100 have breast cancer; 80 of those 100 have positive mammographies. From the same 10,000 women, 9,900 will not have breast cancer, and of those 9,900 women, 950 will also get positive mammographies. This makes the total number of women with positive mammographies 950+80, or 1,030. Of those 1,030 women with positive mammographies, 80 will have cancer. Expressed as a proportion, this is 80/1,030, or 0.07767, or 7.8%.
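As a sanity check on that arithmetic, here is a minimal Python sketch of the same calculation. It simply applies Bayes' theorem to the numbers quoted above; the variable names are mine, not the essay's.

    # Bayes' theorem applied to the mammography example quoted above:
    # P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
    p_cancer = 100 / 10_000                # prior: prevalence of breast cancer
    p_pos_given_cancer = 80 / 100          # true-positive rate (sensitivity)
    p_pos_given_no_cancer = 950 / 9_900    # false-positive rate

    # Overall probability of a positive mammogram (law of total probability)
    p_pos = (p_pos_given_cancer * p_cancer
             + p_pos_given_no_cancer * (1 - p_cancer))

    # Posterior probability of cancer, given a positive mammogram
    p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
    print(round(p_cancer_given_pos, 4))    # 0.0777, i.e. about 7.8%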
In my opinion, the authors may want to study this next:
Are the patients who come to the office and report symptoms of depression and ask for an antidepressant more likely to have depression than those who come to the office and report the symptoms, but don't ask for an antidepressant? If so, does the presence of DTC advertising affect the predictive value of the presence or absence of the request?
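To make that second question concrete, here is a small Python sketch with made-up numbers (the prevalence and request rates below are purely illustrative assumptions, not figures from the study). It computes the probability that a patient who asks for an antidepressant actually has depression, first assuming that requests are uncommon among non-depressed patients, and then assuming that DTC advertising has made such requests common regardless of diagnosis.

    # Predictive value of "patient asks for an antidepressant" with respect
    # to depression, under two hypothetical request rates. All numbers are
    # illustrative assumptions, not data from the study.
    def p_depressed_given_request(prevalence, p_req_if_depressed, p_req_if_not):
        """P(depressed | requests drug), by Bayes' theorem."""
        p_req = (p_req_if_depressed * prevalence
                 + p_req_if_not * (1 - prevalence))
        return p_req_if_depressed * prevalence / p_req

    prevalence = 0.10   # assumed prevalence of depression among office patients

    # Little DTC advertising: few non-depressed patients ask for the drug.
    print(p_depressed_given_request(prevalence, 0.40, 0.05))   # ~0.47

    # Heavy DTC advertising: requests become common regardless of diagnosis.
    print(p_depressed_given_request(prevalence, 0.40, 0.25))   # ~0.15

In the first scenario the request raises the probability of depression from the 10% base rate to about 47%; in the second, only to about 15%. That is the sense in which advertising could degrade the usefulness of the request as a diagnostic clue.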
Why is this more interesting to me than the study they actually
did? It is more interesting because it would be more
useful. It is very useful, clinically, to know which
observations have predictive value, and to know what factors affect
that predictive value. It is less useful to know whether DTC
advertising affects the prescribing habits of doctors. (Of
course it does; that is why the drug companies do it!)
How could the study that was actually done help a clinician?
If a patient comes in and asks for a specific drug, perhaps the doctor
could take a few extra minutes to ask some questions that would
facilitate a correct diagnosis. Sure, but shouldn't that be
done anyway?
I suspect that the authors of the study intended it to contribute to
the public debate about the value and problems associated with DTC
advertising. It does that, but it also serves
another purpose. It reminds clinicians that Bayes' Theorem is
a critical part of the diagnostic process, and it works best when it is
considered on a conscious level.