Bibliographic review assignment ("ministage bibliographique")
Master 2 in Cognitive Science Research
EHESS/ENS/Université Paris Descartes
Is there room for improvement in medical decision making?
Written by Timothée Behra
Supervised by Elena Pasquinelli
January 2013
Introduction
According to a report of the Institute of Medicine [1], 44,000 to 98,000 preventable deaths occur each year in the United States as a result of medical errors. Pharmaceutical companies are exposed for commercializing drugs with deadly side effects. Unfounded medical scares regularly make the headlines. The era of organ transplants and other surgical triumphs also sees "alternative medicines" flourish, and many people distrust medical interventions such as vaccines. Health care is a central political and public issue, and reasonable people generally assume that the practice of medicine is guided by the scientific method. How do we know what works and what does not in medicine? One effort to answer this question is the movement called Evidence-Based Medicine (EBM), a call for grounding medical decisions in scientific evidence rather than intuition. EBM advocates standardized good practice in collecting medical data [2]:
- clinical trials should be rigorously designed: the "gold standard" is the randomized controlled trial (RCT), in which subjects are randomly assigned to one group or another and all groups are followed in exactly the same way
- all the evidence about a particular intervention should be combined through systematic evidence reviews and meta-analyses; the Cochrane Collaboration has specialized in publishing such systematic reviews
- evidence can be graded along an evidence hierarchy: systematic reviews of RCTs constitute the strongest evidence, expert opinion and clinical experience the weakest
However, EBM has not delivered the successes it promised: the actual reliability of "gold standard" evidence falls well short of what was expected [2]. Why is that? Something must go wrong in the way medical information is produced. Can we improve medical decision making? In addressing this question, we start by presenting a source of concern that lies upstream of medical decision making: the production of unreliable medical information. In the second part, we describe how a collective misunderstanding of health statistics leads to bad decisions, and discuss the concept of rationality under uncertainty. Finally, we explore some solutions that have been proposed to optimize the practice of medicine.
I. Unreliable medical information
How reliable is the medical information physicians rely upon to make their decisions? In his book Bad Pharma [3], Ben Goldacre explores the scientific and legal processes a drug must pass through to reach the market, and how evidence on a drug's effectiveness is gathered. He argues that much remains to be done to improve the quality of data on the outcomes of pharmaceutical treatments, and professional and public access to these data. This situation has direct consequences for medical decision making: in the absence of reliable data on the efficacy of treatments, physicians cannot make informed decisions.
1. The drug development process
The drug development process is long and expensive. One must first look for molecules that act on strategically chosen biological targets. Those molecules are then tested on animals, generally rodents or dogs. If no serious adverse effects are found, the first human tests can be performed. In a phase I trial, a handful of healthy subjects are recruited and their physiological reactions are monitored. Phase II and phase III trials follow, with increasing sample sizes of ill subjects. A phase III trial aims to show a significant benefit imputable to the drug on the outcome of a disease, while reporting side effects. Health regulators then review the collected evidence and may allow the drug onto the market; the laboratory then enjoys patent protection for a limited period, during which it has a monopoly on sales of the drug.
As the amounts of money involved are large, pharmaceutical companies need a return on their investment. They therefore have a strong incentive to get through the regulatory process so that the drug can be profitable, but their job is not to maximize the effectiveness of the treatments produced. The quality of the evidence gathered thus depends on the regulators. Currently, and consistently across developed countries, the regulatory process suffers from several serious flaws, some of which we will now describe.
2. Flaws of the regulation process
A) Publication bias
First and most strikingly, the results of drug trials on humans are not required to be published. Consequently, negative results tend to be left unpublished. This publication bias distorts the available evidence in favor of the drug's efficacy. The problem is not restricted to pharmaceutical trials, nor to medicine: surprising results are very likely to be published, whereas failures to replicate often are not. In 2012, a team of researchers [4] failed to replicate 47 of 53 published early laboratory studies of promising targets for cancer treatment, a troubling picture. For drugs, the non-publication of deadly side effects can have large-scale, real-life consequences. The case of lorcainide, where an estimated 100,000 people died prematurely while evidence of harm was left unpublished [5], dramatically illustrates this point. Beyond the absence of any obligation to publish, even more concerning is the pharmaceutical industry's practice of putting contractual constraints on publication [6]. In industry-funded trials, researchers can be pressured to sign contracts under which the sponsor simply owns the data, has the right to stop the trial at any time, or has the last word on whether to publish. Those gagging clauses are never mentioned in the published articles. This is simply not the correct way to gather scientific evidence, and no acceptable reason can justify these practices.
B) Surrogate outcomes
Some drugs follow an accelerated approval procedure, for example when the drug is a candidate for treating dangerous conditions for which no medication yet exists. In these cases, health regulators may accept significant effects of the drug on surrogate outcomes as sufficient evidence to let the pharmaceutical company commercialize the drug. Examples of surrogate outcomes are lowering blood cholesterol levels or suppressing arrhythmic heartbeats. These may be good signs that the drug has an interesting effect, but they in no way guarantee that the drug does more good than harm. The aim of a drug is to reduce life-threatening outcomes, like heart attacks and deaths, or to relieve symptoms that matter to patients, and this can only be assessed with a properly designed long-term trial.
C) Me too drugs and tests against placebo
Finding a new molecule with effects similar to a known one is much easier than finding a new molecular mechanism. Consequently, most trials test "me-too drugs": treatments for medical conditions that are already treated by existing drugs. It is a good thing to have different treatment options, but it provides no incentive for innovation. It becomes more problematic still when we add that drugs can be approved on the basis of a trial showing merely that they outperform a placebo. Physicians do not need to know whether a drug is better than nothing; they want to give their patients the best treatment available. This also raises an ethical issue: patients who enroll in trials put themselves at risk, by receiving a placebo or a treatment with unknown consequences rather than the best drug available.
In addition to these flaws, tactics that make statistics look better than they really are are in common use in medical trials: uncorrected post hoc or multiple comparisons, stopping the trial as soon as a significant effect is found, ignoring dropouts, subgroup analyses… and sometimes outright fraud.
3. What can be done?
To assess the true benefits of drugs, it is advocated that [3]:
- the sample of subjects involved in a trial should be representative of the population to which the drug will be given, not idealized patients
- evidence on a specific drug should be assessed by systematic reviews: reviews that gather all the evidence, from all the trials, not only the published ones
- if one exists, the drug should be tested against the best available treatment, not against placebo
- a true benefit can be assessed only by showing that people in the drug group die less from any cause, not merely less from the condition the drug is supposed to treat
- the protocol and statistical analysis should be published before data collection starts
- access to information should be guaranteed
These recommendations are a continuation of EBM; the EBM approach itself, however, is controversial, and we will mention two types of criticism here. First, EBM has been criticized from an epistemological point of view: besides measuring outcomes, there is another way to gather knowledge, namely understanding the underlying physiological mechanisms, and research should rather focus on what causes the outcome [7]. According to a second criticism, research is valuable at the epidemiological level but has little to say about how to treat individual patients. It is argued that when there is a conflict between the recommendations of research and what the physician thinks, research cannot and should not always adjudicate: the intuition of the clinician, based on years of practice, should not be thrown out [8].
Both criticisms fail to grasp the statistical nature of medical decisions, and rest on two 19th-century ideals described by the German psychologist Gerd Gigerenzer [9]. The first criticism conjures the figure of the determinist: the physician must know exactly what he is doing, find the true causes of the disease, and treat them with the adequate medication. In this conception, there is no place for doubt or probabilities. Yet physiological reasoning is only a good starting point; it must still be tested in the real world. The second criticism conjures the figure of the physician-as-artist, who relies on his intuition and acquired experience to treat the unique, specific case of the patient. This figure comes with paternalism: the patient engages in a relationship of absolute trust and obedience. These two ideal figures make no mistakes and are omnipotent. This leads us to our second section: do health practitioners make good decisions?
II. Bad decision making
The ideal doctor-patient relationship is one of shared decision making, based on the interest of the patient [10]. The doctor provides the medical information and describes the available treatments transparently, so that the patient understands the benefits and side effects that can occur. In the end, the patient should be able to make an informed decision, that is, to choose the alternative that best suits him, according to his preferences, depending on how much risk he is ready to take, and so on. How far are we from this ideal?
1. Collective statistical illiteracy
In his review of medical decision making [9], Gigerenzer depicts what he calls collective statistical illiteracy. Patients do not realize the intrinsically risky nature of any medical intervention; journalists propagate irrational scares rather than the relevant health information; policy-makers make costly decisions that do no good to public health. Finally, doctors themselves are not required to be numerate, and an understanding of health statistics is barely included in medical training. Far from being without consequences, this misunderstanding of statistics causes harm and costs money.
a. Base rate neglect
Every medical test is subject to uncertainty. The widespread failure to grasp this has been described as an "illusion of certainty" [9]. A given test is characterized by:
- its sensitivity, the probability that the test yields a positive result given that the subject is indeed affected by the condition (also called the "hit rate")
- its specificity, the probability that the test yields a negative result given that the subject is not affected by the condition (also called the "correct rejection rate"); it is equal to one minus the false positive rate
Moreover, no medical treatment is completely safe (some medical tests unambiguously do more harm than good and should be abandoned), and treatments cost money. Making a correct decision about whether the patient should receive a treatment therefore requires knowing the positive predictive value of the test, that is, the probability that the patient is indeed ill given that the test is positive. Intuitively, most people, including most doctors, equate the positive predictive value with the sensitivity, and this is utterly false: it ignores the possibility of false positives. To correctly compute the positive predictive value, a third piece of information is required: the prevalence of the disease in the population tested, also called the "base rate". The following formula expresses the correct computation:
Positive predictive value = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence))
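As a quick sanity check, the formula can be computed directly. The numbers below (90% sensitivity, 91% specificity, 1% prevalence) are purely illustrative, not taken from any specific test:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(ill | positive test), computed via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative values: 90% sensitivity, 91% specificity, 1% prevalence
print(round(positive_predictive_value(0.90, 0.91, 0.01), 3))  # → 0.092
```

Even with a 90% hit rate, the positive predictive value stays below 10%, because healthy people vastly outnumber ill ones in the tested population.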
Even though the dangers of base rate neglect have long been studied, health practitioners as well as patients routinely misinterpret positive test results [9]. Why is base rate neglect still so common?
b. Transparency of the information
It has been documented that humans are consistently bad at dealing with subjective probabilities and at applying Bayesian reasoning [11]. Specificity and sensitivity are conditional probabilities. In contrast, when the information is presented as natural frequencies (for example: "of the 990 women without cancer, about 89 nevertheless test positive") rather than conditional probabilities, computing a positive predictive value becomes much easier [12]. Doctors can be trained, with lasting effect, to convert conditional probabilities into natural frequencies.
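The conversion itself is mechanical. Below is a minimal sketch using a reference population of 1,000 women and illustrative figures (1% prevalence, 90% sensitivity, 91% specificity) consistent with the example quoted in the text; rounding to whole people is what makes the format intuitive:

```python
def to_natural_frequencies(population, prevalence, sensitivity, specificity):
    """Re-express conditional probabilities as counts in a reference population."""
    ill = round(population * prevalence)
    healthy = population - ill
    ill_positive = round(ill * sensitivity)                # true positives
    healthy_positive = round(healthy * (1 - specificity))  # false positives
    return ill_positive, healthy_positive

tp, fp = to_natural_frequencies(1000, 0.01, 0.90, 0.91)
print(f"{tp} of {tp + fp} positives actually have the disease")
# → "9 of 98 positives actually have the disease"
```

Reading "9 out of 98" off the counts gives the positive predictive value (about 9%) without any explicit Bayesian computation.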
There are other forms of nontransparent information, such as the distinction between relative risk increase and absolute risk increase. Relative risk is given as a percentage, for example "the risk of heart attack increases by 50%", whereas absolute risk is usually given in percentage points: "the risk of heart attack is 1.3% in the placebo condition and 1.9% in the drug X condition, an increase of 0.6 percentage points". The relative risk sounds more dramatic: it has a bigger cognitive effect and actually changes the way we act, a phenomenon called the "framing effect" [13]. To evaluate a situation objectively, knowing the absolute risk is necessary. Yet most of the time the media report the relative risk without the base value, and thus fail to report the absolute risk. And we cannot blame the media alone: even published research articles put relative risks forward. To maximize the perceived benefits and minimize the perceived side effects, a misleading strategy called mismatched framing is in common use: reporting side effects in absolute risk, and benefits in relative risk.
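The distinction can be made concrete with the text's own example (1.3% risk on placebo, 1.9% on drug X); the function name is just for illustration:

```python
def risk_summary(baseline, treated):
    """Return (absolute change, relative change) for a risk moving
    from `baseline` to `treated`, both given as probabilities."""
    absolute = treated - baseline                 # in raw probability
    relative = (treated - baseline) / baseline    # as a fraction of baseline
    return absolute, relative

abs_incr, rel_incr = risk_summary(0.013, 0.019)
print(f"absolute: +{abs_incr * 100:.1f} percentage points")
print(f"relative: +{rel_incr * 100:.0f}%")
```

With these exact figures the relative increase is about 46% (the text's "50%" is a round approximation), which sounds far more dramatic than the absolute change of 0.6 percentage points, even though both describe the same data.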
c. Meaningless statistics
The X-year survival rate is the percentage of people still alive X years after having been diagnosed with a cancer. Yet the survival rate is not correlated at all with the annual mortality rate, the number of people who die from the cancer over one year divided by the number of people in the observed group [14]. Why is there no correlation between survival and mortality rates? The key word here is "diagnosed": survival rates depend on the time of diagnosis. If a cancer screening campaign is in place, the following two phenomena can influence the survival rate without influencing the mortality rate:
- Lead-time bias. Survival rates increase if the time of diagnosis is moved earlier, even if no life is prolonged or saved.
- Overdiagnosis bias. The tests may reveal abnormalities that match the pathological definition but never develop into the disease. We now know that many tumors never develop into a malignant cancer (non-progressive cancers). Despite causing no harm, non-progressive tumors are diagnosed as "cancer", and so they misleadingly inflate the survival statistics.
An increase in the survival rate is therefore not an indicator of the efficacy of a screening campaign; comparing mortality rates is the relevant thing to do. For instance, in the year 2000, the USA had an 82% 5-year survival rate for prostate cancer, whereas the 5-year survival rate in the UK was only 44%. The difference is explained by the diagnostic method: by symptoms in the UK, by screening in the USA. But the mortality rates in the two countries were identical: 26 prostate cancer deaths per 100,000 American men versus 27 per 100,000 in Britain. It is important to note that overdiagnosis causes harm. Many American men have been unnecessarily diagnosed and have undergone surgery or radiation treatment, which often leads to incontinence or impotence. The screening campaign therefore seems to be a failure, costing a great deal of money and doing more harm than good.
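Lead-time bias can be illustrated with a deliberately crude toy model: a hypothetical patient who dies at 70 regardless of when the tumor is found. Moving the diagnosis earlier flips the 5-year survival statistic while changing nothing about mortality. The ages are invented:

```python
# Toy model: the patient dies at a fixed age; only the diagnosis date moves.
DEATH_AGE = 70
diagnosis_by_symptoms = 67   # tumor found 3 years before death
diagnosis_by_screening = 62  # screening finds the same tumor 5 years earlier

def five_year_survival(diagnosis_age):
    """1 if the patient is still alive 5 years after diagnosis, else 0."""
    return 1 if DEATH_AGE - diagnosis_age > 5 else 0

print(five_year_survival(diagnosis_by_symptoms))   # counted as a death → 0
print(five_year_survival(diagnosis_by_screening))  # counted as a survivor → 1
```

The same patient, dying at the same age, is a "5-year survivor" under screening and a "death" under symptomatic diagnosis; aggregated over a population, this inflates survival rates without saving anyone.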
d. Breast cancer screening
Let us review a still-controversial public debate with wide media coverage: breast cancer screening [9]. The issue exemplifies collective statistical illiteracy. In 1997, the expert panel of the NCI (National Cancer Institute, USA) argued against recommending mammography for women in their 40s, because of a lack of evidence of its benefits. The director of the NCI responded that he was shocked by this decision, and the advisory board of the NCI voted that mammography should be recommended, against the conclusion of its own expert panel. In 2002, new evidence that the benefits of mammograms may not outweigh the risks was reported, again with no effect on public recommendations.
Mammography has a specificity of 91%, which produces many false positives. For a normal-risk population, the positive predictive value is about 10%: only one woman in ten with a positive mammogram actually has breast cancer. Even obstetricians do not know this. The general public is massively misinformed: people routinely believe that mammography is 100% reliable, that it prevents cancer, or that it can do no harm to healthy people. Media campaigns advocating screening report the benefits as a relative risk reduction (30%), but do not mention the harms or the relevant numbers.
The latest evidence comes from the 2011 Cochrane systematic review [15], which includes data on 600,000 women. For every 2,000 women who undergo routine breast cancer screening for 10 years, 1 woman with breast cancer will have her life prolonged (a 15% relative risk reduction), 10 healthy women will be treated unnecessarily, and 200 will experience serious psychological distress for months because of false positives. Crucially, no effect has been found on all-cancer mortality (including breast cancer) after 10 years, or on all-cause mortality after 13 years. This is not contradictory: the unnecessary treatments do harm, and mammography itself exposes women to X-rays, which, ironically, can cause cancer.
Because medical decisions involve intrinsically uncertain outcomes, it cannot be emphasized enough that statistics are relevant to the practice of medicine. Misunderstanding them leads to bad medical decisions; understanding them saves lives. Why do doctors make such mistakes? Are we sure these are mistakes in the first place? If the problem is known and documented, why can't we get over it? Is there such a thing as an objective, indisputable way to measure the quality of a decision?
2. What constitutes a good decision?
a. Rationality and medicine
Statistical thinking is counterintuitive, and the concept of rationality itself is tricky, especially rationality in an uncertain world, where the outcomes of our decisions are not perfectly predictable. A decision is rational if it maximizes the satisfaction of a goal, given the available knowledge; success must be quantified against a normative theory. In an uncertain world like the medical one, judging how rational an action is can become very difficult. The reliability of scientific information is also crucial here: is it rational to ignore information from sources that have been shown to be unreliable? This question cannot really be answered in the abstract; a decision can only be evaluated in the light of good knowledge.
Consider what happened to the American physician Daniel Merenstein [9]. In accordance with the recommendations of medical organizations, he explained the pros and cons of a prostate cancer screening procedure to his patient, who then declined the screening. Unfortunately, the patient was developing an incurable prostate cancer. He sued Dr. Merenstein for not automatically ordering a screening test, since it was standard procedure for physicians in Virginia to perform the test without informing the patient. The jury exonerated Merenstein, but his residency was found liable for $1 million.
By not imposing the test like his colleagues, Merenstein made the "wrong" decision. But it was the rational one: given the information he had (like breast cancer screening, prostate cancer screening has not been shown to confer benefits), not ordering the test was the decision that maximized the patient's expected health. Only with hindsight can it be called the wrong decision; he paid for not being clairvoyant. In the USA, in response to this situation, physicians practice defensive medicine, which essentially shifts the physician's goal from "maximizing the length and quality of patients' lives" to "maximizing health, provided I can't be sued". One cannot be sued for overdiagnosis, so many screening tests are performed automatically.
b. The rationality debate in psychology
We will now present two scientific approaches that have shaped the study of rationality. The heuristics and biases tradition (H&B), championed by Kahneman and Tversky [16], studies deviations
from the norms of rationality. They emphasize that for a large number of problems involving probability or uncertainty, people's intuitive judgments are poor. The leading figures of Evolutionary Psychology (EP), John Tooby and Leda Cosmides, have frequently attacked Kahneman's positions [17]. They claim that the "defects of rationality" are not defects at all, but part of adaptive mechanisms forged by evolution. Gigerenzer has defended heuristics as lying at the core of our intelligence: he found that heuristics can, in many situations, yield better results than the Bayesian optimal use of information [18].
These two traditions are often opposed, yet their core claims are not contradictory [19]. The proponents of EP often use the word "rational" in scare quotes to refer to logic and probability theory as a normative theory, and insist that human beings reason according to rules that are "ecologically rational" [18]; they use fitness in the environment as their normative theory. But Gigerenzer cannot maintain both that humans don't violate appropriate norms of rationality when reasoning about the probabilities of single events and that reasoning improves when single-event problems are converted into natural frequencies. He needs, and uses, the normative view of probability theory; his ameliorative project would make no sense without it.
Far from being opposed, the two approaches are actually complementary. H&B studies when judgments differ from the norms of logic and probability theory; EP studies the underlying mechanisms of reasoning and how those mechanisms were adaptive in the environment in which the human species evolved. Although measuring rationality is tricky, we can find a consensual definition of it, and such a definition is necessary to evaluate the quality of decisions. It is time to survey the practical solutions that have been advanced to improve medical decision making.
III. Improving medical decision making
1. Bigger and simpler trials
As we saw in the first section, better data make for better decisions. We lack data on the effectiveness of drugs, on the effectiveness of medical procedures, and on what causes diseases. Collecting data requires conducting controlled trials, which are very expensive, take time, and put people at risk. And the aim of the pharmaceutical industry is not to share the collected information transparently, but to give a good image of its drugs so that they sell.
But there is another way to gather data. Drugs are prescribed in everyday medicine, yet nobody studies the outcomes. Ben Goldacre proposes a system that would let physicians participate in collecting data to compare the effectiveness of different drugs: when a doctor diagnoses a condition for which several treatments exist and there is no evidence favoring one of them, he could simply prescribe one at random (letting software choose, since unbiased randomization is crucial) and the
data would be collected automatically. The patient's subsequent history could be followed automatically as well, enabling the collection of massive amounts of data at virtually zero cost. Today, health regulations in the UK require that as soon as randomization occurs, the patient must give consent. The consent form must be read thoroughly in the presence of the physician, which takes 20 minutes. Moreover, the patient then knows that he is taking part in a study, which might bias the outcome (the Hawthorne effect). If everybody agreed to share their medical data, or if we had a good way to collect medical data anonymously (not an insurmountable obstacle in the internet era), our understanding of causes and our medical data would be much more reliable.
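Goldacre's proposal could be sketched as follows. The function and record format here are entirely hypothetical, meant only to show where randomization would enter routine prescribing:

```python
import random

def prescribe(patient_id, equivalent_treatments, rng=random):
    """Hypothetical sketch: when no evidence favors one option, let
    software pick at random and log the allocation so that outcomes
    can later be compared across treatments."""
    choice = rng.choice(equivalent_treatments)
    record = {"patient": patient_id, "treatment": choice}
    # In the proposal, the record would flow into routine health data;
    # here we simply return it.
    return record

rng = random.Random(0)  # seeded for reproducibility
print(prescribe("patient-001", ["drug A", "drug B"], rng))
```

The point of the sketch is that the doctor's workflow barely changes: the randomization and the record-keeping are handled by the software, which is what would make the data collection nearly free.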
2. Statistical literacy
Not surprisingly, Gerd Gigerenzer argues that we should teach statistical literacy on a massive scale, at school, as early as possible. Statistics need not be associated with mathematics and perfect precision; it can also be taught as a set of problem-solving methods for estimating the best decision in concrete, real-life situations. The training of doctors is focused on physiology; the inclusion of statistics and decision making in medical curricula is recent, and some textbooks contain errors.
With widespread statistical literacy, unflattering numbers could no longer hide behind nontransparent framing. Informed decision making would become possible, and the quality of healthcare would increase. Nor is the problem a lack of money; optimizing the health system would actually reduce costs considerably. Many will fear that this view diminishes the role of physicians, that it threatens their judgment and their liberty. That is perhaps a matter of taste, but such criticisms show a lack of concern for public health: the liberty of physicians to be deliberately ignorant or wrong matters less than public health. This is not to say that physicians must abandon their critical thinking: when only unreliable data are available, there is no way to tell what the best treatment is.
3. Can we minimize medical errors?
The last topic we cover in this review is medical errors: plain mistakes that doctors themselves recognize and understand as mistakes. Sometimes an obvious symptom is missed, a step is forgotten, or the wrong dose of a drug is prescribed.
The traditional medical way of dealing with errors is to minimize and hide them, to call them "complications", often without even telling the patient. A 2002 survey of doctors' attitudes found that honest disclosure is "so far from the norm as to be 'uncommon'" [20]. Fortunately, things are changing: in 2008 the Beth Israel Deaconess Medical Center in Boston adopted a policy of openness. When an error is acknowledged, a press release is sent to the media and the entire hospital staff is
informed. Everybody is invited to look for potential sources of error. Openly acknowledging the possibility of error is the first step toward avoiding it.
Atul Gawande, a surgeon in Boston, calls for applying to the medical world what has worked in the aeronautical and construction industries [21]. Considering the sheer complexity of some medical procedures, errors seem inevitable: hundreds of steps must be followed perfectly, in the right order, under time pressure, and often with unfamiliar team members. Surgeons are among the most knowledgeable experts in existence, and yet, like every fallible human, they can make trivial mistakes at any moment. Building a skyscraper and flying a plane are comparably demanding tasks, but they have been almost completely purged of human error. The solution was not hyper-specialization but checklists: concise lists of points, each of which must be reviewed during a procedure. There are checklists for almost every accident that can occur in a plane, and building a skyscraper largely amounts to following a giant checklist. Constructing a good checklist requires expertise, and it has to be tested in the real world. Checklists do not diminish human intelligence: the trivial yet tricky points are not forgotten, and attention can be focused on the complicated points that require imagination and expertise. Some items in a checklist can even be designed to elicit teamwork.
Following Peter Pronovost, who established a basic checklist for central line insertion that dramatically reduced infections, Atul Gawande participated in the creation and testing of a general surgical checklist. It was implemented in 8 hospitals around the world, some of them in low-income countries [22]. Surgical errors were surveyed before and during the implementation of the checklist, with about 4,000 patients followed in each phase. The rate of death was 1.5% before the checklist was introduced and 0.8% afterward; the complication rate decreased from 11% to 7%. Even if those results must be replicated, this is a highly encouraging accomplishment. At the very least, it shows the need to accept the possibility of medical error and to discuss what can be done about it.
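Echoing the earlier discussion of risk framing, the trial's figures can be re-expressed as relative reductions; a rough check, assuming the quoted rates:

```python
def relative_reduction(before, after):
    """Fraction by which a rate dropped, relative to its starting value."""
    return (before - after) / before

# Quoted rates from the checklist trial: deaths 1.5% → 0.8%,
# complications 11% → 7%
print(f"deaths:        {relative_reduction(0.015, 0.008):.0%}")  # → 47%
print(f"complications: {relative_reduction(0.11, 0.07):.0%}")    # → 36%
```

Stated as relative reductions, the effect sounds enormous; stated absolutely, it is 0.7 and 4 percentage points. Both framings describe the same encouraging result, which is precisely why transparent reporting of both matters.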
Conclusion
We hope to have shown that if the aim of medicine is to maximize the quality and length of life for as many people as possible, there is much room for improvement. Better data, better access to information, better decision making, and an honest attempt to reduce medical errors could considerably increase the effectiveness of medicine. Why is it so difficult to implement these findings? What stops us from using optimization methods that have shown undeniable efficiency in other areas to evaluate the outcomes of medicine? The solution proposed by Gigerenzer is of great interest, but statistical illiteracy may not be the whole explanation. A utilitarian view of medicine raises deep questions and shakes our moral intuitions: we reject the idea that it can be rational or acceptable for a physician to risk harming a healthy person in order to save others. Cognitive science and moral philosophy, as they reveal our human nature, will have much to contribute to this debate. Maybe it is time to see
statistical thinking for what it is: an attempt to increase the common good, not an attempt at dehumanization.
References
[1] SN Weingart, R Wilson, RW Gibberd, B Harrison. (2000) Epidemiology of medical error. BMJ, 320(7237): 774-7
[2] M Solomon. (2011) Just a paradigm: Evidence-based medicine in epistemological context. European Journal for Philosophy of Science, 1(3): 451-466
[3] B Goldacre. (2012) Bad Pharma. HarperCollins
[4] CG Begley, LM Ellis. (2012) Drug development: Raise standards for preclinical cancer research. Nature, 483(7391): 531-3
[5] AJ Cowley, A Skene, K Stainer, JR Hampton. (1993) The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. International Journal of Cardiology, 40(2): 161-6
[6] PC Gøtzsche, A Hróbjartsson, HK Johansen, MT Haahr, DG Altman, AW Chan. (2006) Constraints on publication rights in industry-initiated clinical trials. JAMA, 295(14): 1641-1646
[7] AM Cohen, WR Hersh. (2004) Criticisms of evidence-based medicine. Evidence-based Cardiovascular Medicine, 8: 197-198
[8] M Hammersley. (2005) Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers' case for research-based policy making and practice. Evidence & Policy, 1(1): 85-100
[9] G Gigerenzer, W Gaissmaier, E Kurz-Milcke, LM Schwartz, S Woloshin. (2007) Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8: 53-96
[10] G Gigerenzer, JAM Gray. (2012) Launching the century of the patient. In Better Doctors, Better Patients, Better Decisions: Envisioning Health Care 2020, ed. G Gigerenzer and JAM Gray. Strüngmann Forum Report, vol. 6. Cambridge, MA: MIT Press
[11] D Kahneman et al. (1982) Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press
[12] G Gigerenzer, U Hoffrage. (1995) How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684-704
[13] DJ Malenka, JA Baron, S Johansen, JW Wahrenberger, JM Ross. (1993) The framing effect of relative versus absolute risk. Journal of General Internal Medicine, 8: 543-8
[14] HG Welch, LM Schwartz, S Woloshin. (2000) Are increasing 5-year survival rates evidence of success against cancer? JAMA, 283: 2975-8
[15] PC Gøtzsche, M Nielsen. (2011) Screening for breast cancer with mammography. Published online April 13, 2011: http://summaries.cochrane.org/CD001877/screening-for-breast-cancer-with-mammography
[16] A Tversky, D Kahneman. (1974) Judgment under uncertainty: Heuristics and biases. Science, 185(4157): 1124-1131
[17] L Cosmides, J Tooby. (1996) Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1): 1-73
[18] G Gigerenzer, W Gaissmaier. (2011) Heuristic decision making. Annual Review of Psychology, 62: 451-82
[19] R Samuels, S Stich, M Bishop. (2002) Ending the rationality wars: How to make disputes about human rationality disappear. In Common Sense, Reasoning and Rationality, ed. R Elio. New York: Oxford University Press, pp. 236-8
[20] K Schulz. (2011) Being Wrong: Adventures in the Margin of Error. Ecco
[21] A Gawande. (2009) The Checklist Manifesto: How to Get Things Right. Picador USA
[22] A Haynes et al. (2009) A surgical safety checklist to reduce morbidity and mortality in a global population. New England Journal of Medicine, 360(5)