Bioengineered transplants: stem cell reversal of chronic kidney disease
RCSI Royal College of Surgeons in Ireland
Student Medical Journal
Volume 12: Number 1. 2019
ISSN 2009-0838
Inside: Bipolar disorder: the highs and lows
RCSI DEVELOPING HEALTHCARE LEADERS WHO MAKE A DIFFERENCE WORLDWIDE
Acknowledgements
Thank you to the RCSI Alumni for their continued support of us as students – providing career advice, acting as mentors, enabling
electives and research, and supporting the publication of the RCSIsmj since its inception in 2008.
As today’s generation of students and tomorrow’s generation of alumni, we are very grateful for this ongoing support.
A warm and special thanks to Prof. David Smith for the time and encouragement he has given to the RCSIsmj Ethics Challenge, and for his
support of the annual debate.
We would also like to thank the Dean, Prof. Hannah McGee, for her sponsorship, and Margaret McCarthy in the Dean’s office for her
constant endorsement and assistance.
The RCSIsmj was extremely privileged to have a number of professors and clinicians involved in this year’s journal clubs. We would very
much like to thank the following individuals for their support of, and participation in, the journal club, and to express our appreciation of
their time, knowledge, and expertise:
Prof. Arnold Hill
Dr Mark Murphy
Prof. Alf Nicholson
Mr David O’Brien
Dr Joseph Galvin
Prof. David Henshall and the team at FutureNeuro
4 Editorial
4 Director's welcome

RCSI Ethics Challenge
5 RCSIsmj Ethics Challenge 2019/2020
6 RCSIsmj Ethics Challenge winner 2018/2019

Research spotlight
10 FutureNeuro: translating research into action – Prof. David Henshall

Interview
13 Prof. Orla Hardiman

Case reports
16 Maxillary sinus cancer
21 A case of a knee slip and slide

Book review
26 The Inflamed Mind

Original articles
27 Can proprioception influence gait abnormalities in Parkinson's disease?
32 Post-discharge decisions: prolonged VTE prophylaxis for colorectal cancer surgery
39 The role of lymphoscintigrams in breast cancer – a retrospective audit from Beaumont Hospital

Review articles
44 The burden of paediatric asthma in urban United States populations
50 Chronic obstructive pulmonary disorder in adult Chinese women
56 Fasting fiction: is intermittent fasting superior to conventional dieting?
62 'Joint' decision-making: how to determine the best approach for total hip arthroplasty
68 The role of rapid autopsy in oncology
72 Pharmacologic management of diabetes during Ramadan
78 Influence of the gut microbiome on brain health and function

Staff reviews
84 Bipolar disorder: the highs and lows
91 The intersection of depression and chronic illness
96 Bioengineered transplants: stem cell reversal of chronic kidney disease
101 The big and the small of it: paediatric implantable cardioverter-defibrillators
107 An evolving strategy for cardiovascular intervention: a hybrid approach
113 Six degrees of genetic identification
118 Against all odds: ethical dilemmas in modern peri- and neonatal medicine

Perspectives
124 How fitness can fuel the brain
128 Donate or do not? Strategies to increase organ donation rates
132 E-cigarettes: a new age of smoking

Abstracts
136 Novel developments in cancer immunotherapy: an analysis of compounds that inhibit indoleamine 2,3-dioxygenase
137 Eradicating bacterial biofilms with cold air plasma under simulated wound conditions
138 An aberrant phagosomal pH impairs neutrophil microbial killing in cystic fibrosis
139 Effect of long-term storage on disintegration and dissolution of over-encapsulated penicillin V tablets
Please email comments to [email protected], join our Facebook page, or follow us on Twitter @RCSIsmj to discuss journal articles. Submissions to [email protected]. See www.rcsismj.com to find out more, see past editions, and follow our blog.
Royal College of Surgeons in Ireland Student Medical Journal
Executive Committee
Director: Suzanne Murphy
Editor-in-Chief: Rachel Adilman
Peer Review Director: Cameron Chalker
Senior Editor: Alyssa Conti
Assistant Peer Review Director: Magar Ghazarian
Executive Secretary: Harishitaa Prithiviraj
Webmaster: Gemma Wright Ballester
Assistant Webmaster: Tiffany Yeretsian
Peer Reviewers
David Seligman, Alison Hunt, Oluwabunmi Adesanya, Deena Shah, Claire Gallibois, Tessa Weinberg, Sakshi Kirpalaney, Shaghayegh (Sherry) Esfandnia, Alexandra Mitcham, Aidan McKee, Alexandr (Sacha) Magder, Hannah Suchy, Donal Roche, Hannah O'Neill, Carol Rizkalla, Jeremy Lau, Alexa Higginbotham, Ananya Pentaparthy, Samantha Tso
Senior Staff Writer: Katie Nolan
Staff Writers: Jessica Millar, Laura Staunton, Monika Cieslak, Ola Michalec, Vladimir Djedovic, Jeffrey Lam
Director of Education: Sri Sannihita Vatturi
Education Officers: Matthew Patel, Kassandra Gressman
Public Relations: Joseph Saleh, Fraser Jang-Milligan
The Malthouse, 537 NCR, Dublin 1. T: 01-856 1166 F: 01-856 1169 www.thinkmedia.ie
Design: Tony Byrne, Tom Cullen and Niamh Short
Editorial: Ann-Marie Hardiman, Paul O'Grady and Colm Quinn
RCSIsmj editorial and director's welcome
"The very first step towards success in any occupation is to become interested in it."
William Osler
It is with much pride that I welcome you to the latest edition of the
RCSIsmj. Now in our twelfth year, we have undergone many changes
and have been lucky enough to chronicle the changing face of medicine
and research.
As ever, the RCSIsmj is an entirely student-run endeavour and I am always
blown away by the creativity and commitment of my peers. In my past
three years with RCSIsmj, I have been privileged to get to know so many
amazing students who are actively involved in research across the globe.
From Beaumont to Beijing, the diversity of this research is clear to see
throughout this issue. Our students' involvement in research during their
undergraduate careers is a testament to their commitment to becoming the
best possible healthcare professionals. This is something we are encouraged
to pursue from day one at RCSI, where inspiration is all around us. Prof.
David Henshall and the team at FutureNeuro are just one example of this.
I know that I leave the RCSIsmj in the hands of a strong and capable team
who will continue to encourage RCSI students to engage in research.
As ever, the RCSIsmj is grateful for the support and encouragement we
receive from the Dean’s office and the faculty of RCSI, without whom
none of this would be possible. I hope you enjoy reading this issue as
much as we did working on it, and perhaps it might even inspire you.
Medicine in 2019 abounds with ever-advancing tools and technology,
in everything from diagnostics, genealogy, and research, to
conservative and invasive therapeutics, and we are increasingly faced
with the question: “just because we can do something, does that really
mean we should?” The scope of practice of healthcare personnel today
– what we are capable of carrying out – is impressive, somewhat
daunting, and worth pausing to reflect upon. In Volume 12 of the
RCSIsmj, our authors explore this issue from birth to beyond the grave.
Monika Cieslak examines ethical dilemmas in neonatology, giving
readers a unique lens through which to contemplate the implications of
recent technological progress. Senior staff writer Katie Nolan teaches us
about modern genetic sequencing, and warns of the generational risks
posed by gaps in the interpretation and protection of our genomic data.
Today, with minimal effort, we can decode our DNA, but whether this
practice is secure or wise remains controversial.
With such comprehensive diagnostic and therapeutic advancements,
we must remain cognisant of and purposeful in our efforts to advocate
and foster the art of medicine. In this vein, Ola Michalec provides
essential insight into the consequences of chronic illness for patients’
mental health, reminding us that compassionate, holistic care
necessitates managing patients beyond mere physical symptoms and
lab values. What’s more, scientific resources now enable philanthropic
action even in death; Matthew Patel and Brian Li, respectively, discuss
the need for organ donation and rapid-autopsy tissue banking amidst
the ever-present burden of refractory disease.
We are thrilled, as always, to share Volume 12’s varied articles with you,
and thank you for your continued engagement with the RCSIsmj.
Balancing possibility with responsibility
Rachel Adilman
Editor-in-Chief, RCSIsmj 2018-2019

Director's welcome
Suzanne Murphy
Director, RCSIsmj 2018-2019
RCSIsmj prize
A child is born at term with hypoplastic left heart syndrome. He is
initially stable and placed on a prostaglandin infusion to maintain
systemic blood flow. His parents are informed that there is roughly a
75% chance of survival at five years, but this will require at least three
separate surgical procedures and extensive hospital time. Some
neurologic disability, although probably not severe, is likely should he
survive, and there is a small chance of significant neurologic disability.
In addition, should he survive, there is a possibility that he might
require a heart transplant later in life. The cardiology service has
recommended that the surgery be performed, but the parents have
requested that the prostaglandin infusion be stopped and the child be
allowed to die.
Questions
1. What are the ethical and legal issues involved in this case?
2. How should the clinical team proceed?
3. Is the clinical team obligated to withdraw support, as requested by
the parents, even if they disagree with the parents, and believe
surgery is the appropriate course of action?
Ethics challenge 2019/2020
CONFLICTING CLINICIAN AND PARENTAL WISHES
This is the eleventh instalment of the RCSIsmj Ethics Challenge.
The editorial staff would like to congratulate Alyssa Conti on her
winning essay in the 2018/2019 Ethics Challenge. Please see
page 6 for her submission.
We invite students to submit an essay discussing the ethical questions
raised in the scenario presented. Medical ethics is an essential aspect
of the medical curriculum and we hope to encourage RCSI
students to think critically about ethical situations that arise during
their education and subsequent careers. All essays will be reviewed
by a faculty panel of experts and the winning essay will be published
in the 2020 print edition of the RCSIsmj. The deadline for submission
of entries will be separate from the general submission deadline for
the 2020 edition of the RCSIsmj. Please visit our website at
www.rcsismj.com for specific dates. Please contact us at
[email protected] with any questions or concerns.
Submission guidelines
Please construct a lucid, structured, and well-presented discussion of
the issues raised by this scenario. Ensure that you address all of the
questions highlighted, and discuss the ethical issues academically,
referencing where necessary.
Your paper should not exceed 2,000 words.
Your essay will be evaluated on three major criteria:
1. Ability to identify the ethical issues raised.
2. Fluency of your arguments.
3. Academic quality with regard to depth of research, appropriateness
of references and quality of sources.
Good luck!
The winning entry will be presented with a prize at the launch
of the next issue.
RCSIsmj ethics challenge
Introduction
Conscientious objection in the field of medicine has sparked moral
and ethical debates regarding the role of healthcare professionals
(HCPs) in medical care delivery. Conscientious objection refers to the
right of HCPs to refuse participation in certain legal, safe, and
medically indicated services on the basis of the physician’s moral or
religious beliefs. Commonly, these services include abortion,
contraception, euthanasia, and organ donation; however, controversy
around conscientious objection has also arisen in the discussion of
vaccinations, non-therapeutic male circumcision, and the provision of
blood transfusions. In 1948, at the United Nations General Assembly,
conscientious objection was introduced to international law in Article
18 of the Universal Declaration of Human Rights.1 This declaration
gives the “right to freedom of thought, conscience, and religion” and
includes the right to “manifest this religion or belief in teaching,
practice, worship, and observance”.1 Historically, these rights were
practised in the context of military service, whereby an individual’s
religious or moral objection to active duty and combat was resolved
by assigning the objector to an alternative military duty.2 As it applies
ETHICS CHALLENGE WINNER 2018/2019
Alyssa Conti, RCSI medical student
Conscientious objection in medicine
to medicine, conscientious objection allows HCPs to refuse, on the
grounds of personal or religious beliefs, the provision of certain
medical services requested by their patients.3 In the United Kingdom,
the Conscientious Objection (Medical Activities) Bill protects medical
practitioners with a conscientious objection from participating in
services such as the withdrawal of life-sustaining treatment, any
activity under the Human Fertilisation and Embryology Act 1990, or
any activity under the Abortion Act 1967.4 This applies to all medical
practitioners participating in such activities, including those in the
General Medical Council, the Nursing and Midwifery Council, the
Health and Care Professions Council, and the General Pharmaceutical
Council.4 Therefore, conscientious objection does not solely apply to
physicians; the rights of all HCPs including nurses, midwives,
pharmacists, and medical students, are equally included.
In the United States (US), federal statutes and professional guidelines
allowing physicians to refuse participation in legalised abortions were
instituted after Roe v. Wade in 1973, when the US Supreme Court
ruled that the banning of abortion by individual states was
unconstitutional.5 Since then, conscientious clauses have been
adopted by numerous states; some states even allow medical
professionals to refuse to refer patients to alternate, willing providers,
as that could be considered participation in the medical procedure
itself.5 In Canada, however, a recent court ruling placed restrictions on
conscientious objection by implementing the policy of “effective
referral”, wherein objecting doctors must refer patients to
non-objecting physicians to ensure access to care and protect
vulnerable patients from harm.6
Ethical implications
Medical ethics defines the set of non-hierarchical moral principles that
guide HCPs through all stages of patient care. The ethical debate
regarding the legality, practicality, and morality of conscientious
objection in medicine can be examined and interpreted using the
core principles of autonomy, beneficence, non-maleficence, and
justice.
Autonomy and beneficence
Autonomy is the right of an informed, competent adult to
independently make decisions about their own medical care.7 While
the respect for patient autonomy remains a fundamental principle in
medical ethics, conflict ensues when attempting to determine the
limitations within which this right can be espoused and applied. In
conscientious objection, difficulty arises when a physician’s ethical or
moral values directly conflict with the care requested or required by a
patient, which would result in the physician upholding patient
autonomy at the expense of their own. The essence of the conscientious
objection debate, therefore, is determining whose autonomy takes
precedence. The ethical principle of beneficence addresses the
concept of acting for the benefit of others. In professional settings,
this principle has established the fiduciary relationship wherein work
always, without exception, favours the client’s rights and wishes over
those of the professional carrying out the work.8
Resurgence of paternalism
As patient expectations and the role of doctors in the doctor-patient
relationship have changed, medicine has evolved from its historic
paternalistic roots. Paternalism in medicine stems from the notion
that doctors have greater knowledge of diseases and medical
treatments compared to the general public, and therefore should
decide which course of action is appropriate for their patients.
With the introduction of ethically sound principles such as autonomy
and informed consent, patient participation in healthcare decisions
has increased tremendously, shifting the doctor-patient relationship
away from patient passivity. However, by allowing a doctor to
conscientiously object to a treatment or service that is within the
patient’s legal right, are we reverting to paternalism? If a doctor
imposes their personal morals and opinions onto a patient’s
therapeutic decision by refusing care, is the physician’s individual
autonomy being valued more highly than their patient’s?
Is there a neutral ground?
In order to mitigate the debate over conscientious objection, a middle
ground must be established. Medicine, both at its core and in
concordance with the principle of beneficence, is an act of service. A
physician’s primary obligation is to their patient. Therefore, denying a
patient a requested service that is both legally available and medically
indicated could be viewed as a direct contradiction to the principle of
beneficence. Fundamentally, it is unethical for a doctor to make moral
judgements on behalf of their patients; nor should a physician's
beliefs be imposed upon a patient or restrict their access to medical
care. In a world of absolutes, where no middle ground can be found,
a healthcare provider’s professional obligation should be prioritised
over their personal beliefs. Comparatively, in a healthcare market
consisting of numerous providers holding diverse moral and religious
views, allowing conscientious objection with the stipulation of
mandatory referral could mitigate the debate between professional
obligations and individual physician rights. However, in an
emergency or any situation in which a referral is not possible, for
example due either to time or geographic constraints, it is the
physician’s professional obligation to provide the requested or
required medical service regardless of their personal beliefs.
Non-maleficence
Non-maleficence originates from the Hippocratic tenet of primum non
nocere, meaning first do no harm.9 Commonly, non-maleficence is
thought to represent the hazardous aspect of a risk–benefit analysis
when making healthcare decisions. The complexity of this principle
lies within the definition of ‘harm’. Proponents of conscientious
objection argue that certain services requested by patients would
cause the patient harm, thus resulting in the physician acting as the
source of that harm. However, the reverse can also be argued; in the
example of euthanasia, would refusal to withdraw life-sustaining
treatment cause more harm to the patient by prolonging their life,
rather than performing that service at the patient’s wish, to end their
suffering? Evidently, within the principle of non-maleficence it is
plausible to argue both for and against conscientious objection
depending on the interpretation of ‘harm’.
In circumstances where conscientious objection is not permitted, such
as emergencies, if a service or treatment is within a physician’s scope
of practice, they are obligated to attend to the patient’s needs or
wishes over personal moral or religious beliefs. In order to avoid
causing harm in such scenarios, medical students and doctors in
training should not be permitted to conscientiously object to
life-saving or emergent procedures performed within their field. Even
if a medical student is a conscientious objector, there may come a
time when they are required, in an emergent setting, to perform the
service to which they object. Therefore, lack of prior experience due
to conscientious objection during their training years could place
patients in potentially harmful situations.
Justice
The current stipulation wherein prompt referral is necessary in the
event of conscientious objection raises concern with respect to the
ethical principle of justice. The need for justice and equality in both
humanity and medicine has been a core philosophical concept for
centuries. Aristotle’s formal theory of justice requires, in equal parts,
consideration, fairness, and impartiality.10 Before introducing
overarching laws and regulations for conscientious objection in
medicine, it is important to consider all patient populations and
healthcare settings. Healthcare systems are diverse, ranging from
public to private, and population density and infrastructure vary,
meaning that there are often large discrepancies in access to
healthcare. The requirement of prompt patient referral to
non-objecting physicians is grounded in, and reliant upon, two large
assumptions. First, the healthcare setting must be densely populated
with sufficient physicians to enable such referrals. Second, the
stipulation assumes that all patients have the financial and physical
ability to be selective and flexible in the healthcare market they utilise
and the physician they see. This concept of ‘doctor shopping’ may be
impossible in certain public healthcare systems, or impractical due to
resource and finance limitations. In light of this, it has been
suggested that hiring regulations be implemented to allow regional
authorities to allocate physicians based upon their declared
conscientious objections.
Arguments can be made that mandating formal documentation of one's
conscientious objections in the physician hiring process is potentially
discriminatory. However, in order to ensure that patients
retain access to legal, safe, and medically appropriate health services,
the first step is to ensure that they have access to physicians capable
of and willing to perform those services. Therefore, the identity and
‘moral scope’ of those physicians must be documented. It must be
clarified that this identification is not intended to promote
discrimination against a physician’s moral or religious views (and
indeed it mustn’t), but rather to emphasise that the primary duty of
both physicians and regulatory bodies is to ensure that all patients
have appropriate access to care. The allowance for such regional
authorities to address conscientious objection in this way might be
the next step to ensure that all medical communities are properly
serving the needs of all patients.
Conclusion
In the field of medicine, conscientious objection allows healthcare
providers to excuse themselves from providing patients with legal, safe,
and medically indicated services on the basis of personal moral or
religious beliefs. Medical practices commonly impacted by
conscientious objection include abortion, euthanasia, organ donation,
and male circumcision. There have been debates regarding which HCPs
have the right to conscientiously object, including medical students,
consultant physicians, nurses, midwives, and pharmacists. Additionally,
it has been suggested that physicians should perhaps be required to
disclose their objections, so that regulatory and hiring bodies can ensure
a sufficient mix of physicians within an area in order to cater to the full
scope of patient needs within a population. From an ethical standpoint,
in order for HCPs to have a legal right to conscientiously refuse certain
medical practices, they must be obligated to refer patients, in a timely
manner, to another practitioner who is willing to perform the
treatment. This referral must not cause the patient any emotional harm,
nor can it delay their access to timely care. In situations where referrals
are not possible, such as emergencies or lack of referral opportunities, it
is the HCP’s duty to carry out the service themselves. As stated in the
Hippocratic Oath,11 and many modernised versions used today, a
doctor’s primary obligation is towards their patients, and it is this
privilege of service and ethical responsibility that ultimately transcends
personal beliefs.
References
1. United Nations General Assembly. Universal Declaration of Human Rights, 217 A (III). December 10, 1948. [Internet]. Available from: http://www.un.org/en/universal-declaration-human-rights/.
2. Lippman M. The recognition of conscientious objection to military
service as an international human right. Cal W Int’l L J. 1990;21:31.
3. Shaw D, Gardiner D, Lewis P, Jansen N, Wind T, Samuel U et al.
Conscientious objection to deceased organ donation by healthcare
professionals. J Intensive Care Soc. 2018;19(1):43-7.
4. UK Parliament. Conscientious Objection (Medical Activities) Bill [HL] 2017-2019. [Internet]. Available from: https://services.parliament.uk/bills/2017-19/conscientiousobjectionmedicalactivities.html.
5. Misna L. Stem cell based treatments and novel considerations for
conscience clause legislation. Ind Health L Rev. 2011;8:471.
6. Symons X. Canadian court tells doctors they must refer for euthanasia. February 3, 2018. [Internet]. Available from: https://www.bioedge.org/bioethics/canadian-court-tells-doctors-they-must-refer-for-euthanasia/12585.
7. Gillon R. Medical ethics: four principles plus attention to scope. BMJ.
1994;309(6948):184-8.
8. Kinsinger FS. Beneficence and the professional's moral imperative. J
Chiropr Humanit. 2009;16(1):44-6.
9. Summers J, Morrison E. Principles of healthcare ethics. Health Care
Ethics (2nd ed.). Sudbury: Jones and Bartlett Publishers,
2009:41-58.
10. Gillon R. Philosophical medical ethics. Rights. Br Med J (Clin Res Ed).
1985;290(6485):1890-1.
11. British Medical Association. BMA updates Hippocratic Oath. 1997. [Internet]. [cited 2016 May 17]. Available from: http://web.bma.org.uk/pressrel.nsf/wall/776B5BE6D9D1D2D0802568F50054301D?Open.
RCSIsmj research spotlight
One in four people in Ireland will be directly affected by a neurological
disorder during their lifetime. An enormous gap remains in translating
neuroscience, genetics, and diagnostic discoveries into patient care; the
current system for diagnosing, treating, and managing chronic and rare
neurological diseases is inadequate and largely fails patients.
FutureNeuro is the Science Foundation Ireland (SFI) Research Centre for
Chronic and Rare Neurological Diseases, based at the Royal College of
Surgeons in Ireland (RCSI), and brings together an internationally
recognised multidisciplinary team. FutureNeuro includes neuroscientists,
clinical neurologists, geneticists, cell biologists, and analytical and
materials chemists from research teams at five third-level institutions:
RCSI, Trinity College Dublin (TCD), University College Dublin (UCD),
Dublin City University (DCU), and National University of Ireland Galway
(NUIG). FutureNeuro currently has eight dedicated principal investigators.
The vision of FutureNeuro is to provide a uniquely integrated centre that
will accelerate translatable discoveries on genetic diagnosis and advanced
molecular treatments supported by enabling eHealth technology for
chronic and rare neurological diseases. Researchers in FutureNeuro
connect to a national clinical network spanning major tertiary referral
centres across Ireland, which in turn are linked via national disease-specific
electronic health systems. This is globally unique and provides an edge
over similar initiatives and a platform to deliver world-leading research.
Further, given the size and nature of the healthcare system in Ireland, this
is scalable to other neurological disorders.
FutureNeuro: translating research into action

FutureNeuro Director Prof. DAVID HENSHALL of RCSI describes this groundbreaking project to improve the treatment of neurological disorders.
Defining the problem
Brain diseases are hugely costly for health and society. It is estimated
that as many as 800,000 people in Ireland live with some form of
neurological condition, with a healthcare and societal cost of €3
billion.1
The problem – and solution – is threefold:
1. Diagnosis: Genomic medicine will provide a molecular diagnosis
and guide therapy so that patients receive the right drug sooner with
fewer side effects.
2. Therapy: Many patients fail to respond to available drugs, and for
most rare neurological diseases there is simply no effective treatment.
New therapeutics will control or correct the gene networks in the
brain that underlie the disease or drug resistance.
3. eHealth infrastructure: This is largely absent and has not been
leveraged for research in Ireland to date. National electronic health
record infrastructure connects individuals to their records, improves care
outcomes, and facilitates world-class research, including clinical trials.
FutureNeuro’s research aims to change patient journeys, to result in
faster, more precise diagnosis, and fewer hospital and emergency
department admissions. For people who do not respond to current
treatments, the centre will help drive the development of new
therapies that work to restore normal brain function. Clinical decision
support systems will be developed using electronic health records to
provide insights into how a person’s disease progresses, their
response to treatment, and to assess risk factors to inform a more
individualised treatment plan.
The team
The centre’s director, Prof. David Henshall, is a neuroscientist whose
research focuses on discovering new treatments and biomarkers of
epilepsy. Also based in RCSI are Prof. Gianpiero Cavalleri, a human
geneticist interested in the genetic underpinnings of brain diseases
and predictors of drug responses, and Prof. Jochen Prehn, a
world-leading researcher on neurodegeneration and mathematical
modelling to predict drug responses.
The research team in TCD includes Prof. Orla Hardiman, Prof. Colin
Doherty, and Prof. Matthew Campbell. Prof. Hardiman is a world
leader in clinical amyotrophic lateral sclerosis (ALS) research, and
developed the longest running and possibly most successful
population-based ALS register in the world. Prof. Doherty is the
national clinical lead in Ireland for epilepsy, and is a pioneer in
biobanking of blood and brain tissues for research into neurological
diseases. Prof. Campbell researches molecular technologies to deliver
large molecules and drugs directly to the brain by crossing the
blood–brain barrier, which poses a significant challenge for the
treatment of neurological disease.
Another nucleic acid species has been discovered that rises in the blood ahead of a seizure, suggesting that seizure forecasting may one day be possible.
In NUIG, Prof. Sanbing Shen is a cell biologist and introduced
neuronal stem cell technology to Ireland. His research will focus on developing brain cells from the skin tissue of patients with rare diseases to screen the effects of different drug treatments, thus reducing the amount of
trial and error in current therapeutic practice. Prof. Robert Forster,
based in DCU, is a materials chemist currently developing ways to detect diagnostic brain molecules present at ultra-low levels in
blood. The ultimate goal is to develop a rapid point-of-care test for
epilepsy and other neurological diseases.
The centre is supported by research-active clinicians across Ireland.
Leading on clinician research is Prof. Norman Delanty, a neurologist based in Beaumont Hospital who specialises in difficult-to-treat epilepsy.
A key player in the FutureNeuro programme is eHealth and
specifically the Epilepsy Electronic Patient Record (EPR), developed as
a ‘lighthouse project’ by the HSE and led out by Mary Fitzsimons. The
EPR contains detailed clinical information and patient-reported
disease insights from 10,000 people living with epilepsy in Ireland.
This, combined with genomic data, can improve clinical
decision-making and support better research into the disease.
Flagship Irish research
Funding for FutureNeuro is secure for at least six years, with an
investment of over €13.6 million by SFI. It is one of 17 research
centres now operating across science and technology fields. These
are the flagship funding instruments for SFI – they provide
large-scale, long-term support for basic and applied research that is
strategically important for Ireland’s economic and societal progress.
A major part of the research activities of such centres is partnerships
with industry, ranging from small and medium enterprises to large
multinational companies.
FutureNeuro has now been operating for nearly 18 months. Nine
PhD-driven research projects are up and running, and the Centre’s
headcount has reached nearly 50. Progress has been made across all
three thematic areas.
Among the therapeutics projects, researchers have tested
oligonucleotide inhibitors of non-coding RNAs (part of the genome’s
‘dark matter’) and found promising anti-seizure effects in preclinical
models, and research is looking at how the blood–brain barrier can be
bypassed to deliver these to the brain.
Another nucleic acid species has been discovered that rises in the
blood ahead of a seizure, suggesting that seizure forecasting may one
day be possible. The centre is also investigating how the brain’s
endocannabinoid system responds to seizures, and testing the effects of cannabinoids, which have recently been approved as a medicine for intractable epilepsy.
Although FutureNeuro is still in its early days, research findings
have already been published. This includes the discovery that a
set of nucleic acids are released from the brain into the blood of
patients with epilepsy, and the development of a prototype test that
could be a useful addition to EEG, MRI, and other clinical
assessments.2
References
1. Clarke S, Craven A, Doig H, Gill M, Hardiman O, Lynch T et al. Building a
Supportive Framework for Brain Research in Ireland. Irish Brain Council.
2017. [Internet]. [cited 2019 February 16]. Available from:
https://www.nai.ie/assets/25/0425F95F-FE1F-4D25-
BE47D8C6D75756C2_document/IBC_position_paper_final_for_print_
March2017.pdf.
2. Raoof R, Bauer S, Naggar HE, Connolly NM, Brennan GP, Brindley E et al.
Dual-center, dual-platform microRNA profiling identifies potential
plasma biomarkers of adult temporal lobe epilepsy. EBioMedicine
2018;38:127-41.
RCSIsmj interview
Prof. Orla Hardiman is Professor of Neurology in Trinity College Dublin
(TCD). She recently stepped down as Academic Director of the Trinity
Biomedical Sciences Research Institute. She is a Consultant Neurologist
at the National Neuroscience Centre, where she is Director of the
National Amyotrophic Lateral Sclerosis (ALS) Clinic and the Irish ALS
Research Group, and is about to take up her new role as Clinical Lead
for Neurology in the HSE. Prof. Hardiman’s primary research interests are
the genomics, epidemiology, and disease phenotyping of ALS. Her
research aims to elucidate the mechanisms of disease heterogeneity,
with a particular focus on identifying new target genes and drug targets
for patient treatment.
Can you tell us about your path from medical school?
I always knew that I wanted a job that included both research and
clinical practice. My own experience of being female, and not really
having any mentors, made me want to be a good mentor for younger
people coming up behind me.
When I graduated in 1983, the landscape was fairly barren. There were
no proper specialist training schemes, and most contracts were either six
months or one year in duration. People really had to find their own way.
This was difficult as money was tight in those days, and moving from
city to city was very challenging. I was lucky as I had a one-year
internship in St Vincent’s, and then spent two years in the old Richmond
Hospital in both neurology and neuropathology.
But further training was going to be difficult at home, and emigration
was essential. I am one of five siblings, and by that time four of us had
emigrated.
After a trip to the US to look at job opportunities, my husband and I
settled on Boston, as I had matched to a good residency in the Harvard
programme, and he was able to get a job as a lawyer. I trained in
neurology and then in neuromuscular disease and cell biology.
By 1991, the Irish economy was beginning to pick up again. I already
had a BSc in Physiology, and I returned to Ireland as a Newman Scholar
in UCD in human anatomy and physiology, and used my training in
muscle biology to set up a lab. I also taught physiology to
physiotherapists and medical students.
All about the brain

Senior staff writer KATIE NOLAN spoke with Prof. Orla Hardiman about her career and her research into ALS.

But I really missed the clinical work, and in 1996 I applied for and was successful in my application for Consultant Neurologist in Beaumont. I worked hard for 11 years as a full-time clinician (by that time I also had
four children). There were only 11 neurologists in the country, and I was
seeing over a hundred patients a week, with a 1:3 on-call schedule. I
re-oriented my research towards epidemiology, clinical phenotyping
and genomics. I had received a good training in amyotrophic lateral
sclerosis (ALS) in Boston, as I had worked as a Fellow with one of the
leaders in the field, Bob Brown.
In 2007, I was successful in my application for a Clinician Scientist Award
(CSA) from the Health Research Board (HRB). At that time, I moved my
academic affiliation from RCSI to TCD, as I was already collaborating
with the Smurfit Institute of Genetics in TCD, and I had a strong
affiliation with Trinity College Institute of Neuroscience (TCIN). My CSA
allowed me to buy out 50% of my time, and to concentrate on building
a research programme in ALS. I had a very supportive Head of School in
TCD. He put me in touch with TCD Development Officer Zhanna
O’Clery, with whom I worked to build a new Academic Unit of
Neurology using philanthropic support. My HRB grant, and my research
output, facilitated my contract adjustment to that of Academic
Consultant, and my appointment in 2013 as the first full Professor of
Neurology in TCD (actually, in all of the Republic of Ireland). From
2015-2018 I was also the Academic Director of the Biomedical Science
Institute in TCD – a great job that opened up a whole range of
interesting collaborative opportunities.
I now have my dream job. I am still active as a clinician. This is very
important to me, as my core identity is that of a doctor. But I am also
very active in research design and development, clinical trials, and
mentoring, and most recently in service planning for neurology. There
are now over 35 people working in my research group, and I have taken
huge pleasure in attracting many extremely bright young people to
work with us, and to watch in awe as their careers blossom.
Did you know you wanted to study neurology while in
medical school? When did the desire to study the brain
become apparent?
I was always interested in the brain. Also, I have a background in
physiology. I don’t have a particularly good factual memory, and I liked
subjects I could work through by deductive logical reasoning. I also
hated being in a position where I would have to make critical decisions
in a short timeframe, so emergency medicine and intensive care were
not attractive options. My original plan was to go into psychiatry, but
after an internship in which I rotated through a service that was heavily
psychoanalytic in nature, I decided to move over to the ‘nuts and bolts’
of neurology. But in a way, my career has come full circle, as we have
recently shown a biological association between ALS and schizophrenia.
Have you had any great mentors throughout your career?
Being Irish, of a certain age and female, I am afraid that there was not a
lot of mentoring available at the time. But I have worked with some very
able people, and learned important skills along the way (some of these points of learning were about how NOT to behave!).
The most positive influences on my life were the late Hugh Staunton,
who set up the very successful Irish epilepsy programme with a
combination of forceful intelligence and sheer determination, Bob
Brown in Boston, who showed me what a really successful clinician
scientist can look like, and some very strong female colleagues with
whom I have had the privilege of working along the way: Aileen Barrett,
who was Head of Physiotherapy in Beaumont when I was a young
consultant, Rozanne Barrow, who was Head of Speech and Language
Therapy in Beaumont, and Audrey Craven, who set up the Migraine
Association of Ireland. I still count all three among my close friends.
You completed part of your training in America – would you
recommend this experience to other Irish graduates?
Yes, I would. In some ways, it is tougher today than it was when I was
going through. As there were very few ‘systems’ or regulations back
then, there was a lot of flexibility, and one could receive an entire
training overseas and have it recognised here.
Now, I would recommend completing the basic training and SpR
training here, and then moving overseas for Fellowship training. Young
doctors should also look at the Wellcome-HRB ICAT programme, which
focuses on combined academic and clinical specialist training. Residency
training in neurology is still possible in the US, but engagement with the
Royal College of Physicians in advance is advisable to ensure that all the
Irish requirements for Specialist Registration are met.
When did you know you wanted to combine your clinical
skills with your passion for research?
I always liked the idea of combining clinical engagement with scientific
curiosity. I did an intercalated BSc halfway through medical school, in
UCD. This really confirmed my interest in applied clinical research.
How do you balance the demands of a medical career with
staying on track in the research lab?
In my opinion, it isn’t really possible to run a first-class basic science lab
and be a first-class clinician in Ireland. Even in the US people generally
choose between one or the other. But it is possible to run applied
research programmes that focus on “bedside-to-bench” research. In this
scenario, we set the clinical parameters and clinical questions, and work
with our basic laboratory colleagues to translate back and forth.
What is the most rewarding part of your work?
Making a difference. This can be in improving patient care, finding a
new disease mechanism through genomic analysis, or helping a
younger colleague move up the ladder.
What advice do you have for future clinician researchers?
Work hard and don’t be afraid of failing. Believe in yourself. Most new
ideas take 10-15 years to embed into the general scientific zeitgeist. Be
honest, and keep your integrity and your humility. Remember that
sometimes you are just plain wrong, and there is no honour in trying to
publish a piece of flawed research, no matter how compelling the
justification. Counter-intuitive results are there to keep us humble. And
above all, be nice to people, especially those coming up behind you.
Your lab has a diverse range of research interests. Of
particular importance is the ongoing research into the rates
and causes of familial ALS within the Irish population. The
motor neuron disease (MND) register you created has been
collecting patient data since 1994, and is a powerful research
tool. What does the genetic profile of Irish patients tell us? Is
that profile similar to patient cohorts from other countries?
The Register has helped us to define the entire range of disease, and has
enabled quite a few new observations, including the relationship
between ALS and other neurological and neuropsychiatric conditions.
The genomic profile of Irish ALS patients differs slightly from other
countries. For example, we do not have any patients in Ireland with the
SOD1 variants. Instead, 50% of our families carry the C9orf72 expanded
repeat. We don’t yet have a genetic explanation for the remaining 50%.
Our studies in epidemiology have also made another important
observation, which is that populations that are genetically admixed
seem to be protected. We have shown this in Cuba, and the work has
been replicated in other parts of Latin America. The ‘mulatto’ (most
genetically admixed) population has a 50% lower rate of ALS compared
to ‘whites’. This proves the complex genetic basis of ALS.
You set up the ALS DNA bank in 1999, which contains
genome-wide association study (GWAS) data from the Irish
patient cohort. Do you think genealogical data and the
advancing techniques in gene profiling will help identify
future drug targets for ALS? Has familial gene clustering
data been important for identifying new target genes or
mechanisms of disease progression?
Up to a point. I think that future therapeutics will certainly have a
pharmacogenomic component. But I think that our genealogical work
will also help us to identify different patterns of degeneration. Our data
suggests that the relationship with neuropsychiatric conditions is not
uniformly distributed; these conditions tend to cluster within families.
Our more recent work in neuroimaging (now led by Dr Peter Bede), and
more recently in neuro-electric signal analysis (now led by Dr Bahman
Nassereloslami), has convinced me that we should be thinking about
ALS (and probably the other neurodegenerations) as disorders of brain
networking. The genomic basis to this may provide clues, but I could
envisage a class of drugs that work to stabilise/modulate brain
networking as being of therapeutic value.
Your research has identified links between ALS and
psychiatric illnesses such as schizophrenia. This research is
expanding the field of neurology and identifying the role of
genetic pleiotropy in neurological disorders. Do you think
advances in neuropsychiatry research will help scientists and
clinicians understand the brain and neurodegeneration
better?
Absolutely – it is all about the brain! Dichotomising into ‘neurological’
and ‘psychiatric’ conditions is biologically irrational. How the brain fails
can be a function of neurodevelopmental errors, acquired disruptions in
cellular function, synaptic function, cell–cell interaction, or neuronal
connectivity, or disruptions in essential glial function (there is an
evolving literature on oligodendrocytes in this regard), etc.
There is still a huge amount to discover, and there is no reason why
neurologists and psychiatrists cannot learn from one another.
Maxillary sinus cancer

Bharti M. Kewlani1
Dr Carlos S. Gracias2
Prof. James P. O’Neill3
1RCSI medical student
2Medical intern, University Hospital Galway
3Professor of Otolaryngology, Head and Neck Surgery, Beaumont Hospital, Dublin

Abstract
Maxillary sinus cancers are the most common type of sinonasal malignancy. These cancers are locally aggressive, and patients generally remain asymptomatic until the tumour reaches an advanced stage. Multimodality therapy is the gold standard for treating these cancers. This report presents the case of a 60-year-old male, Mr GA, who received a diagnosis of advanced-stage maxillary sinus carcinoma following biopsy of a lesion incidentally discovered on a computed tomography (CT) scan ordered in the primary care setting. Mr GA underwent elective removal of the primary tumour followed by extensive reconstructive surgery and adjuvant radiotherapy. This case highlights the significance of a multidisciplinary team and an evidence-based approach in achieving successful outcomes for patients with maxillary sinus cancer.

Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 16-20.

RCSIsmj case report

Introduction
Cancers of the nasal cavity and paranasal sinuses are rare and account for fewer than 1% of all human malignancies.1 Maxillary sinus cancers make up 60-70% of sinonasal malignancies, with squamous cell carcinoma (SCC) being the most common subtype.1 Patients with maxillary sinus cancer remain largely asymptomatic until the tumour reaches a large size with widespread local tissue destruction or metastasis.2 These tumours thus present very late in the disease course, leaving surgery as the primary treatment option despite limited scientific rationale and weak clinical evidence regarding treatment guidelines. Regional lymphatic spread in the context of this malignancy is relatively uncommon, although involvement of the submandibular, upper jugular, and occasionally retropharyngeal nodes may occur. Fewer than 5% of these cancers present with distant metastasis.2,3
In terms of risk factors, chronic irritation of the sinus lining can lead to squamous transformation of the epithelial cells within the sinus cavity, such as in the case of long-term cigarette smoking or alcohol consumption. Occupational exposure to wood dust, nickel dust, mustard gas, isopropyl oil, chromium, and other substances may also
contribute to squamous transformation of the sinus lining and
subsequent development of malignancy. Therefore, individuals
working in the furniture-making, leather, and textile industries are at
higher risk of malignant transformation due to chronic inhalation of
such microparticles.4 Additionally, human papillomavirus (HPV) or
Epstein-Barr virus (EBV) infection may also be an early event in the
multistep progression towards malignancy.4 Multimodality therapy,
including surgery, radiation therapy, and chemotherapy, is used to
treat such cancers.
Case summary
A 60-year-old Irish businessman, Mr GA, presented to his local general
practitioner (GP) in June 2017, complaining of mild sinusitis-like
symptoms ongoing for four months. The patient had a 50 pack-year
smoking history and an average weekly intake of 20 units of alcohol.
His symptoms included occasional red-orange nasal discharge and
nasal congestion. However, apart from these mild episodes, the
patient had no other complaints. Mr GA’s past medical history was
significant for hypertension, controlled on propranolol, and he denied
any known drug allergies. His past surgical history was unremarkable
apart from bilateral varicose vein removal in 2017.
To thoroughly investigate this patient’s symptoms, the GP ordered an
electrocardiogram (ECG), blood tests, and a chest x-ray, all of which
revealed no abnormalities. However, a non-contrast computed
tomography (CT) scan of the paranasal sinus revealed a mixed
intensity, expansile mass in the left maxillary sinus (Figure 1a).
Medially, the mass invaded the middle and inferior turbinates of the
nasal cavity, causing obliteration of the left middle meatus. Superiorly,
invasion of the infraorbital region was apparent, with sparing of the
orbit. Involvement of the oral cavity and gingivobuccal sulcus was
also noted inferiorly. Furthermore, transverse section of the CT scan
(Figure 1b) showed the mass breaching the confines of the maxillary
sinus, extending into the nasal cavity, and invading the posterior wall
of the maxillary sinus.
Mr GA was subsequently referred to the ear, nose and throat (ENT)
department, where a biopsy of the left maxillary sinus and nasal
polyps was conducted.
FIGURE 1a: Coronal section of a CT scan showing a mixed-intensity mass in the left maxillary sinus.
FIGURE 1b: Transverse section of a CT scan showing the tumour invading the posterior wall of the left maxillary sinus and causing obliteration of the left middle meatus.
Table 1: American Joint Committee on Cancer (AJCC) criteria for tumour (T) staging of maxillary sinus carcinoma.5
T stage Criteria
T1 Tumour limited to the maxillary sinus mucosa with no erosion or destruction of bone
T2 Tumour causing bone erosion or destruction, including extension into the hard palate and/or middle nasal meatus,
except extension to the posterior wall of the maxillary sinus and pterygoid plates
T3 Tumour invades any of the following: bone of the posterior wall of the maxillary sinus, subcutaneous tissues, floor or
medial wall of the orbit, pterygoid fossa, or ethmoid sinuses
T4a Tumour invades anterior orbital contents, skin of cheek, pterygoid plates, infratemporal fossa, cribriform plate,
sphenoid, or frontal sinuses
T4b Tumour invades any of the following: orbital apex, brain, middle cranial fossa, cranial nerves other than maxillary
division of trigeminal nerve, nasopharynx, or clivus
FIGURE 2: Elective neck dissection incorporating levels 1-4.
FIGURE 3a: Intra-operative tumour resection using a globe-sparing total ethmoid maxillectomy.
The histopathology report that followed described a T3/4a N0 M0
poorly differentiated, non-keratinising SCC in the left maxillary sinus
(Table 1). In addition, there was confirmed bone invasion by the
tumour both laterally (through the maxilla into the pterygoid fossa)
and medially (into the nasal cavity). No vascular or perineural invasion
was reported.
The patient was then referred to the Department of Otolaryngology,
Head and Neck Surgery at Beaumont Hospital, where a complete
head and neck examination was performed, and a repeat head and
neck CT scan with contrast and positron emission tomography (PET)
scans were conducted to further delineate the extent of malignancy.
Mr GA’s case was then discussed at a multidisciplinary team (MDT)
meeting involving the pathology, ENT, plastic surgery, and oncology
teams. The patient continued to remain stable and well with no
further deterioration of his initial symptoms.
Management
At the MDT meeting, a therapeutic decision was made to proceed
with surgical excision of the primary tumour and associated lymph
nodes followed by radiotherapy, in keeping with the National
Comprehensive Cancer Network (NCCN) guidelines.6
The extirpative component of this case included a prophylactic
left-sided elective neck dissection (END) incorporating levels 1-4
(Figure 2), and a globe-sparing total ethmoid maxillectomy (Figure
3a). The reconstructive element of the surgery included orbital floor
repair using a titanium mesh and subsequent coverage with a
temporalis pedicled muscle flap.
This was followed by insertion of an oral obturator, fashioned by a
prosthodontist prior to the operation, to seal the oral cavity from the
adjacent paranasal sinus (Figure 3b).
Histopathology on the excised tumour confirmed an SCC that was
removed with clear circumferential margins (Figure 4). Neck
dissection did not reveal any tumour dissemination. The MDT then
decided that adjuvant radiation without chemotherapy was the best
therapeutic strategy for this patient. Six weeks postoperatively, Mr GA
commenced six weeks of adjuvant radiotherapy amounting to 60 Gy.
FIGURE 3b: Intra-operative reconstruction of the infra-orbital floor and oral cavity.
FIGURE 4: Macroscopic appearance of the resected lesion.
Discussion
Complete surgical excision followed by postoperative radiation therapy
remains the primary treatment option for locally aggressive, resectable
maxillary sinus tumours. Aggressive surgical resection would
theoretically increase functional and cosmetic morbidity, but this has
improved significantly with modern reconstructive techniques such as
microvascular free flaps, pedicled flaps, and prosthetic obturators such
as the one used in the above case.7
Tumour dissemination to cervical lymph nodes is one of the most
important prognostic factors in maxillary sinus cancer.8 Patients with
evident metastasis to regional lymph nodes undergo neck dissection,
where lymph nodes from different anatomical levels are removed.
However, whether or not patients with clinically negative neck nodes
should undergo an END remains debatable.
This case reports a patient who had a T3/4a N0 M0 locally invasive
maxillary sinus cancer, with no evidence of regional lymph node
dispersion of the tumour (Table 1). Nonetheless, an END was
performed. One study deemed local control of the primary tumour more important than neck management via END, and concluded that the incidence of occult cervical metastasis of maxillary SCC was not high enough to recommend END.9
Another study, however, concluded that advanced T stage tumours
pose a much higher risk of cervical metastases than early T
stage tumours.10 As such, the authors recommended that an END of
levels I to III should be performed for all patients with T3/4 N0
disease.10 This is expected to significantly reduce tumour recurrence
rates and improve survival outcomes for these maxillary SCC patients.10
In the case of this patient, Mr GA’s tumour involved the gingivobuccal
sulcus, which increased the chances of tumour invasion to regional
lymph nodes, according to the latter study.10 For this reason, Mr GA
was managed with an END.
Conclusion
The complex anatomy of the paranasal sinus region, together with
the tendency for late clinical presentation due to an asymptomatic
early phase, makes maxillary cancer treatment particularly difficult.
Unlike many other cancers, standard treatment guidelines and
randomised data to guide management in maxillary sinus cancer are
lacking. This case provides an example of an MDT approach to a
patient with advanced sinonasal malignancy, requiring the
collaboration of head and neck cancer surgeons, plastic surgeons,
prosthodontists, radiation oncologists, neuro-radiologists, and
pathologists. This case also required specialised care from nursing
staff, speech and language therapists, and physiotherapists,
highlighting the importance of a holistic team approach to
evidence-based medicine and complex patient care.
References
1. Grant RN, Silverberg E. Cancer Statistics 1970. New York; American
Cancer Society, 1970.
2. Turner JH, Reh DD. Incidence and survival in patients with sino-nasal
cancer: a historical analysis of population-based data. Head Neck.
2012;34:877-85.
3. Robin T, Jones B, Gordon O, Phan A, Abbott D, McDermott J et al. A
comprehensive comparative analysis of treatment modalities for sinonasal
malignancies. Cancer. 2017;123(16):3040-9.
4. American Cancer Society. What are the risk factors for nasal cavity and
paranasal sinus cancers? 2019. [Internet]. [cited 2018 September 3].
Available from:
https://www.cancer.org/cancer/nasal-cavity-and-paranasal-sinus-cancer/ca
uses-risks-prevention/risk-factors.html.
5. Oralcancerfoundation.org. NCCN Clinical Practice Guidelines in Oncology:
Head and Neck Cancers. 2019. [Internet]. [cited 2018 September 3].
Available from: https://oralcancerfoundation.org/wp-content/
uploads/2016/09/head-and-neck.pdf.
6. Popovic D, Milisavljevic D. Malignant tumors of the maxillary sinus. A
ten-year experience. 2004. [Internet]. [cited 2018 September 3]. Available
from: http://facta.junis.ni.ac.rs/mab/mab200401/mab200401-07.pdf.
7. Capote A, Escorial V, Muñoz-Guerra MF, Rodríguez-Campo FJ, Gamallo C,
Naval L. Elective neck dissection in early-stage oral squamous cell
carcinoma – does it influence recurrence and survival? Head Neck.
2007;29(1):3-11.
8. Park J, Nam W, Kim H, Cha I. Is elective neck dissection needed in
squamous cell carcinoma of maxilla? J Korean Assoc Oral Maxillofac Surg.
2017;43(3):166.
9. Zhang W, Wang Y, Mao C, Guo C, Yu G, Peng X. Cervical metastasis of
maxillary squamous cell carcinoma. Int J Oral Maxillofac Surg.
2015;44(3):285-29.
10. Deschler DG, Moore MG, Smith RV (eds.). Quick Reference Guide to TNM
Staging of Head and Neck Cancer and Neck Dissection Classification (4th
ed.). Alexandria, VA; American Academy of Otolaryngology–Head and
Neck Surgery Foundation, 2014.
A case of a knee slip and slide

Campbell Shaw1
1RCSI medical student

Abstract
A posterior knee dislocation can present with various complications ranging from simple fractures to vascular damage requiring immediate medical attention. Compartment syndrome is a potentially serious complication and medical emergency, involving increased pressure within a muscular compartment that can result in diminished blood flow, compromised nerve conduction, and impaired muscle function. Discussed here is the case of a 22-year-old female who presented with popliteal thrombosis and compartment syndrome following a tibiofemoral knee dislocation. The use of diagnostic imaging, including angiography and magnetic resonance imaging (MRI), is essential to determine the complications associated with traumatic injuries. Clinical scenarios such as this highlight the importance of thorough neurovascular assessment in complex orthopaedic injuries.

Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 21-25.

Introduction
Tibiofemoral knee dislocation is a rare and serious injury that occurs in 0.02-0.1% of all musculoskeletal injuries.1 Blunt trauma to the lower extremity is associated with a 28-46% injury rate to the popliteal artery.2 Ligament instability and absent pedal pulses (i.e., posterior tibial artery and dorsalis pedis) are signs of a critical injury requiring urgent vascular consultation.2 In the case of a posterior dislocation of the tibia on the femur, the popliteal artery is particularly susceptible to trauma as it is anatomically anchored by landmarks above and below the knee, rendering it vulnerable to any movement of those points.2 The femoral artery passes through the tight opening of the adductor hiatus, between the femur and fascia of the adductor magnus, to enter the popliteal fascia and become the popliteal artery. It is at this transition that the popliteal artery is fixed in place
and therefore vulnerable to damage if the femur becomes displaced.3
Additionally, as the artery moves towards the foot, it travels in close
proximity to the soleus tendon, anchoring it distally.4
Compartment syndrome is a differential cause of injury to the
popliteal artery, as well as a known complication of knee trauma. As
muscles are organised into fascia-bound compartments around the
thigh, pathologies that raise intracompartmental pressure, such as
oedema due to injury, can lower vascular perfusion. Capillary collapse
occurs when the compartment pressure is raised above the capillary
perfusion pressure, resulting in ischaemia and cellular necrosis.5
Ultimately, compartment syndrome lasting one hour can result in
neuropraxia, the temporary loss of nerve conduction due to
pressure-mediated nerve damage.5
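The pressure relationship just described can be sketched in a few lines. This is a minimal illustration only; the numeric pressures below are assumed values, not measurements from this case.

```python
def capillary_collapse(compartment_mmHg: float, perfusion_mmHg: float) -> bool:
    """True when intracompartmental pressure exceeds capillary perfusion
    pressure - the condition for capillary collapse and ischaemia noted above."""
    return compartment_mmHg > perfusion_mmHg

# Illustrative (assumed) pressures, in mmHg:
print(capillary_collapse(40.0, 25.0))   # swollen compartment
print(capillary_collapse(10.0, 25.0))   # normal resting compartment
```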
Popliteal aneurysms and pseudoaneurysms are additional causes of
popliteal artery damage. Popliteal aneurysms often occur in the lower
limb following trauma and therefore must be considered in the event
of disrupted perfusion following knee injury.6 Similarly,
pseudoaneurysms result in vessel wall damage with blood contained
within the tunica adventitia.6 Common peroneal neuropathy is a
frequent consequence of popliteal artery pseudoaneurysm as it
compresses the peroneal nerve.7 Patients experiencing peroneal
neuropathy often present with ‘drop foot’ (inability to dorsiflex the
foot) and sensory loss along the lateral side of the lower limb.7
The severity and range of complications that can arise following an
injury to the lower limb highlight the importance of considering
neurovascular function in addition to motor ability in orthopaedic
injuries. This case report presents a 22-year-old female with a
posterior knee dislocation resulting in popliteal thrombosis and
compartment syndrome, emphasising the traumatic neurovascular
effects that can arise due to injury involving the contents of the
popliteal fossa.
Case report
A 22-year-old Caucasian female presented by ambulance to the
emergency department of St. Catharine’s General Hospital in Ontario,
Canada, with sharp pain in her right lower extremity following a water
park accident. The patient reported sliding down a rope swing at a
local water park when her right leg became entangled in the rope. She
twisted due to the catching action of the rope and fell to the platform
directly below. The pain was described as sharp with rapid onset and a
severity of 10/10. There were no associated symptoms and no
exacerbating or relieving factors. Her past medical history was
non-contributory and she had no previous surgical history. The patient was
taking the oral contraceptive pill and reported an allergy to amoxicillin.
On examination, the patient was distressed and in severe pain,
although she was conscious and oriented. On inspection of her right
leg, there was an obvious deformity and effusion of the knee.
Regarding her vital signs, the patient was tachycardic; however, her
blood pressure was stable and no signs of shock were appreciated. A
neurological examination of the right lower extremity revealed intact
motor function in the peroneal and tibial nerve distributions. Her right
dorsalis pedis and posterior tibial artery pulses were not palpable, and
her foot was cold in temperature. The patient had extremely limited
range of motion due to pain and swelling.
X-rays on initial admission to the emergency department revealed a
right knee dislocation and tibial plateau fracture (Figure 1). Emergency
FIGURE 1: X-ray of right knee indicating a right posterior knee dislocation of the tibia (distally) on the femur (proximally).
Page 22 | Volume 12: Number 1. 2019
room physicians were able to reduce the dislocation under conscious
sedation (Figure 2). While pedal pulses remained absent to palpation,
vascular consultation confirmed their presence by Doppler
ultrasound. Computed tomography (CT) angiography of the right
lower extremity revealed an abrupt disruption of blood flow affecting
the right popliteal artery extending for 5cm distally, and of the right
posterior tibial artery extending for 2.5cm distally (Figure 3). There
was normal blood flow in the anterior tibial and peroneal arteries.
The patient was admitted to hospital, prescribed heparin, and observed
overnight. Throughout the evening, she experienced worsening knee
swelling, pain, and paraesthesia. The next morning she was taken to the
operating room for vascular bypass surgery to improve circulation in the
right lower extremity. During surgery, muscular oedema was immediately
apparent upon division of the fascia, indicating compartment syndrome as
a differential cause of the arterial stenosis. Since the muscular
compartments in the leg are separated by non-elastic fascia, injury
resulting in swelling of the muscles cannot be accompanied by a
compensatory increase in volume. Therefore, increased pressure causes
compression of the arteries in that compartment, posing a serious threat
to blood flow. In such conditions, an urgent fasciotomy is performed to
relieve pressure and restore circulation.
In addition to compartment syndrome, an arteriotomy of the popliteal
artery revealed significant intimal injury and thrombus, indicating the
need for a saphenous vein graft. While the anastomosis was successful
in reperfusing the foot, the surgeon was unable to close the incision
medially due to significant muscle oedema. A lateral fasciotomy was
then performed to alleviate the compartment syndrome; however,
closure of that incision was also not possible due to the extent of
oedema. The wound was cleaned and dressed, as closure of the
fasciotomy was to be postponed until she returned to her home town.
FIGURE 3: CT angiogram of the right knee indicating thrombosis of the right popliteal artery extending 5cm (right), and of the right posterior tibial artery extending 2.5cm (left).
FIGURE 2: Anterior x-ray of the right knee showing realignment of the knee joints.
The following day, postoperative magnetic resonance imaging
(MRI) of the right knee confirmed the diagnosis of a complete
posterior cruciate ligament (PCL) tear and a radial tear of the
medial meniscus. The anterior cruciate ligament (ACL) was
intact. Additionally, an osteochondral impaction injury to the
anterior aspect of the lateral tibial plateau was discovered. It
was determined that the cause of compartment syndrome in this
case was a large knee joint effusion with extensive inter- and
intramuscular oedema (Figure 4).
For the following five days, the patient was placed in a knee
immobiliser and was observed in hospital for infection and
ischaemia. On orthopaedic follow-up examination, the dorsalis
pedis pulse was palpable; however, the posterior tibial pulse was
absent, likely due to extensive swelling.
The patient’s foot appeared warm and well perfused. On motor
examination, the patient was able to flex and extend her toes and
ankle. Light touch sensation was intact in the superficial peroneal,
deep peroneal, and tibial nerve distributions. The following
morning, after nine days of hospital admission, the patient was
transferred and repatriated to her home town.
At this time, she was fitted for a hinged knee brace (Figure 5) and
would still require PCL and medial meniscal surgery, closure of her
fasciotomy, and subsequent orthopaedic rehabilitation. To close the
open fasciotomy, a skin graft was crafted in theatre and placed over
the open wound.
The fasciotomy would then heal naturally as her leg was stabilised
within the brace.
FIGURE 4: Postoperative MRI of the right knee showing extensive effusion and muscular oedema.
FIGURE 5: Hinged knee brace to stabilise the right knee joint.
Discussion
This case describes a traumatic orthopaedic injury that resulted in
compartment syndrome, emphasising the level of severity that can
accompany a simple knee dislocation. It is evident that detailed
anatomical knowledge is necessary in order to consider and decipher
the full range of complications that could arise from a joint
dislocation. Diagnosis of such complications requires proper history
taking and physical examination, as well as numerous imaging
modalities, including angiograms to identify thrombosis, and x-rays
and MRIs to diagnose fractures and ligament tears.
Compartment syndrome was first detected in this patient by the
presence of muscular oedema during surgery. Subsequently, a
postoperative MRI was ordered to monitor the swelling. As swelling
within lower limb compartments can cause serious neurovascular
injury, early diagnosis is essential to prevent limb ischaemia and
long-term damage.8 Acute compartment syndrome is a medical
emergency and requires immediate fasciotomies to relieve pressure
within the muscular compartment.8 Indeed, this patient required both
a lateral and medial fasciotomy due to the extent of muscle oedema.
Conclusion
This case highlights the numerous possible complications of a
posterior knee dislocation. Extensive damage to surrounding
structures, including the popliteal artery, posterior cruciate ligament,
medial meniscus, and the tibial plateau, was discovered following further
diagnostic tests. Urgent vascular surgery was required to restore
normal blood flow to the lower limb. Due to extensive muscular
swelling, fasciotomies were performed to reduce intracompartmental
pressure. The patient was stabilised in a knee brace and sent home for
a future PCL and medial meniscal corrective operation as well as fasciotomy
closure. This case emphasises the importance of neurovascular
assessment following traumatic orthopaedic injuries, as damage
within the popliteal fossa can have significant and emergent distal
implications.
References
1. Bakia J, Tordoir J, Heurn E. Traumatic dissection and thrombosis
of the popliteal artery in a child. J Pediatr Surg. 2012;47(6):1299-301.
2. Godfrey AD, Hindi F, Ettles C, Pemberton M, Grewal P. Acute thrombotic
occlusion of the popliteal artery following knee dislocation: a case report
of management, local unit practice, and a review of the literature. Case
Rep Surg. 2017;2017:5346457.
3. Philippose S, Jacinth JS, Muniappan V. The anatomical study of popliteal
artery and its variations. Int J Anat Res. 2017;5(4.3):4679-85.
4. Parker S, Handa A, Deakin M, Sides E. Knee dislocation and vascular
injury: 4 year experience at a UK major trauma centre and vascular hub.
Injury. 2016;47(3):752-6.
5. Cone J, Inaba K. Lower extremity compartment syndrome. Trauma
Surgery & Acute Care Open. 2017;2(1):e000094.
6. Corriere MA, Guzman RJ. True and false aneurysms of the femoral artery.
Semin Vasc Surg. 2005;18(4):216-23.
7. Ersozlu S, Ozulku M, Yildirim E, Tandogan R. Common peroneal nerve
palsy from an untreated popliteal pseudoaneurysm after penetrating
injury. J Vasc Surg. 2007;45(2):408-10.
8. Reverte MM, Dimitriou R, Kanakaris NK, Giannoudis PV. What is the
effect of compartment syndrome and fasciotomies on fracture healing in
tibial fractures? Injury. 2011;42(12):1402-7.
book review
The Inflamed Mind:
A radical new approach to depression
Edward Bullmore
Hardcover: 224 pages
Publisher: Short Books Ltd
Published: 2018
ISBN-13: 978-1780723501
Depression is a prevalent mood disorder that spans all age groups and
is estimated to affect nearly 300 million people worldwide. The
mainstay of treatment involves a combination of cognitive therapies
and antidepressant medications. These antidepressant drugs act to
modulate neurotransmitters in the brain and, in turn, regulate mood
and emotion. While the triggers of depression are often varied and
unique, much of its management has focused on a few antidepressant
medications, which can have mixed results in different patient
populations.
An evolving field
Research in the field of depression and other mental health disorders is
evolving, and studies are focusing on new theories of disease progression
and novel treatment paradigms. One such researcher is Prof. Edward
Bullmore, a psychiatrist who published The Inflamed Mind in 2018 to
introduce a new theory about the links between inflammation and
depression. Prof. Bullmore’s research incorporates immunology,
neuroimmunology, and psychiatry to suggest that infection and
inflammation in the body and brain can result in depression.
The case that sparked his interest occurred early in his career, when
he met Mrs P, a middle-aged lady with severe rheumatoid arthritis.
During her consultation and through careful questioning he was able
to determine that she was also depressed. While his superiors posited
that Mrs P was depressed because she knew she was suffering with a
chronic inflammatory disorder, Prof. Bullmore, at the time only a
trainee physician, wondered if her inflammatory state was in fact the
cause of Mrs P’s depression.
Focus on inflammation
Research is moving away from the older model of depression, which
focused primarily on the role of serotonin regulation. New evidence
shows that the blood–brain barrier (BBB) is more penetrable than
initially thought. In times of stress and inflammation, cytokines can
cross the BBB and induce microglia activation in the brain, which
damages neurons in the amygdala and cingulate gyrus. This model of
depression opens up many new avenues and routes of exploration
into causes, effects, and treatment of depression.
The Inflamed Mind describes a chronological timeline in the history of
depression, from its early description as melancholia in 400BC, to the
discovery of Prozac (fluoxetine) in the 1980s. Prof. Bullmore’s
approach is methodical and well explained even for those who may
not understand all the minutiae and facets of immunology. He
examines the pharmaceutical industry’s hesitation about pursuing
development of new treatments, and discusses the emerging field of
independent research. The Inflamed Mind is an engaging, informative,
and accessible read for students, healthcare providers, and non-clinical
individuals alike who share a curiosity about the history and treatment
of one of this century’s most prevalent disorders.
An inflammatory theory of depression?
In The Inflamed Mind, Edward Bullmore presents a new theory about
possible links between inflammation and depression,
says Senior Staff Writer KATIE NOLAN.
Mary Tien-Yi Wang1
Rebecca Chow2
Alanna Crawford1
Quincy Almeida2
1RCSI medical students
2Wilfrid Laurier University,
Waterloo, Ontario
Can proprioception influence gait abnormalities in Parkinson’s disease?
Abstract
Freezing of gait (FOG), clinically defined as the transient inability to move one’s feet off the ground,
is a symptom of Parkinson’s disease (PD) associated with changes to gait parameters. Since
proprioception is impaired in PD, relying on proprioception alone to guide locomotion, when
walking in the dark for example, may lead to greater gait impairments such as FOG and gait
variability. To date, no study has demonstrated whether differences in the accuracy of proprioception
in participants who experience FOG (‘freezers’) can influence gait impairment when walking in the
dark. This study aimed to identify a correlation between the degree of proprioception impairment
and frequency of freezing episodes to determine whether improving proprioception could improve
FOG. In this study, freezers performed walking trials in both dark and lit conditions. FOG severity,
step variability, and proprioception (joint position matching of the upper limbs) were measured.
Participants were divided into two groups, classified as either ‘high’ or ‘low’ proprioception error,
using a median split of error scores on the joint position matching task. High error and low error
groups were not significantly different in step time variability, FOG episode duration, or FOG
frequency in dark walking trials. However, there was a significant group by condition interaction
effect (p=0.02). This suggests that having better proprioception can mitigate the effects of a
proprioceptive processing challenge while walking, which may have clinical applications in PD.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 27-31.
RCSIsmj original article
Introduction
Freezing of gait (FOG) is a common and debilitating symptom of
Parkinson’s disease (PD).1 FOG is clinically defined as a transient period
during which an individual feels that their feet are “stuck to the
ground”.2 Those who experience FOG also tend to have increased gait
variability, measured by step time (ST) and step length (SL) variability
during periods of walking when FOG is absent.3 Gait variability
parameters have also been independently associated with FOG and an
increased fall risk in older adults.4,5,6 Additionally, FOG can lead to
impaired mobility, which may result in adverse consequences such as
fractures and other injuries.7 Investigating step variability parameters is
thus important when studying freezing in patients with PD.
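As a minimal sketch of how such a variability parameter can be computed: the coefficient of variation is one common convention, but the exact metric used in the cited studies is not specified here, and the step times below are hypothetical.

```python
import statistics

def step_variability(steps: list[float]) -> float:
    """Gait variability as the coefficient of variation (sample SD / mean,
    in %) of a series of step times (s) or step lengths (m)."""
    return 100.0 * statistics.stdev(steps) / statistics.mean(steps)

# Hypothetical step times (seconds) from one walking pass:
print(round(step_variability([0.52, 0.55, 0.49, 0.60, 0.51]), 1))
```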
The pathophysiology of FOG is not fully understood. Unlike
bradykinesia, tremor, rigidity, and postural instability, FOG is relatively
resistant to improvement with dopaminergic therapy.8 This suggests
that the pathophysiology underlying this feature of PD is distinct from
the dopaminergic pathway deficits that are known to cause
bradykinesia, tremor, rigidity, and postural instability. One proposed
mechanism is that impairments in sensorimotor integration cause an
increased processing demand on the sensorimotor loop of the basal
ganglia, which produces routine and automatic movements, thus
depleting processing resources necessary for motor output.8 It has been
argued that a deficit in proprioception underlies the impairments in
sensorimotor processing in individuals with PD.9
A previous study investigated this by asking patients with PD who
experience FOG (commonly referred to as ‘freezers’ in current research)
to walk in the dark and thus rely primarily on proprioception to guide
walking.10 Freezers demonstrated a greater number of FOG episodes
when walking in complete darkness, compared to well-lit settings
when visual information about their limbs or an external reference
point was available.
This suggests that when such patients are forced to rely solely on this
distorted proprioceptive feedback without compensatory visual
feedback, sensorimotor processing challenges arise. It is this
impairment in processing that leads to FOG.10 However, it remains
unclear whether differences in FOG severity and step variability in dark
walking conditions are related to differences in proprioceptive
accuracy, a parameter that was not evaluated in the previous study.
The present study investigated the relationship between proprioceptive
accuracy and performance in walking trials in both lit and dark
conditions, by measuring FOG severity (determined by FOG frequency
and duration) and step variability (measured by SL and ST variability).
It was hypothesised that participants with poor proprioception would
experience more negative impacts on gait parameters in dark walking
conditions compared to those with accurate proprioception.
Materials and methods
Ethics approval was obtained from the Wilfrid Laurier University
Research Ethics Board and all patients provided written consent prior to
study participation. Participants with PD (n=18) who experienced FOG
were recruited from the Wilfrid Laurier University Movement Disorders
Research and Rehabilitation Centre database. Participants underwent
proprioception testing, walking trials, and assessments on the Unified
Parkinson’s Disease Rating Scale (UPDRS III) as conducted by a
movement disorder specialist.11
Montreal Cognitive Assessment
Participants’ cognition was assessed using the Montreal Cognitive
Assessment (MoCA).12 The MoCA involves a series of tasks that are
graded out of 30, with higher scores indicating better cognitive
function. This test has been validated for use in PD.12 All participants
were included regardless of MoCA score. There was no significant
difference in MoCA score between groups, and all participants were able to
complete the tasks and understand the instructions.
Proprioception and group allocation
Proprioception was measured using a passive contralateral joint position
matching (JPM) task.13 Participants were seated with their forearms fixed
on two rotating platforms such that both elbow joints were resting at a
90º angle. Visual and auditory feedback were eliminated using a blindfold
and noise-cancelling headphones. Participants then had one arm passively
rotated laterally at the elbow joint to one of three positions: 10º, 30º, or
60º from the starting position. The contralateral arm was moved next in
the same fashion, and participants were instructed to notify the researcher
when they believed the angle of both elbow joints was equal. Absolute
and relative error were recorded for three trials of each of the three angle
conditions for both upper limbs. Participants’ overall proprioception error
scores were calculated by summing the absolute error from all of their 18
trials. The participants were then divided into two groups, either ‘accurate
proprioception’ (low error) or ‘poor proprioception’ (high error) through
a median split. The participant on the median score was randomly
assigned to the high error group.
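The scoring and group-allocation procedure above can be sketched as follows. The participant labels and score values are illustrative assumptions; in the study the single median-scoring participant was assigned to the high-error group at random.

```python
import random
import statistics

def error_score(trial_errors: list[float]) -> float:
    """Overall proprioception error: sum of absolute errors across the
    18 JPM trials (3 repetitions x 3 target angles x 2 arms)."""
    assert len(trial_errors) == 18
    return sum(abs(e) for e in trial_errors)

def median_split(scores: dict[str, float], rng: random.Random):
    """Divide participants at the median score into 'low error' and
    'high error' groups; anyone exactly on the median is assigned at random."""
    med = statistics.median(scores.values())
    low = sorted(p for p, s in scores.items() if s < med)
    high = sorted(p for p, s in scores.items() if s > med)
    for p, s in scores.items():
        if s == med:
            (high if rng.random() < 0.5 else low).append(p)
    return low, high

# Hypothetical summed error scores for four participants:
low, high = median_split({"P1": 80.0, "P2": 95.0, "P3": 250.0, "P4": 310.0},
                         random.Random(0))
print(low, high)
```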
Walking trials
Participants performed three walking trials of approximately 9.75m
in two walking conditions: well lit and dark. Participants were asked
to walk in a straight path that was 0.3m wide. The dark condition
was intended to induce a challenge to the sensorimotor system (SMS),
as participants would be forced to use primarily SMS feedback without
visual information to compensate.
Eight Optotrak13 cameras were used to record the frequency and
duration of FOG episodes during walking trials, as well as SL and ST
variability. Infrared light-emitting diodes (IREDs) were placed over
each participant’s xiphoid process, tibial tuberosities, lateral malleoli,
and fifth metatarsals.
FOG episodes were defined as “any period where the centre of mass
velocity measured by the IRED on the xiphoid process dropped below
10% of the participant’s baseline” velocity.2 This definition allowed for
the inclusion of freezing episodes that present as a pronounced
festinating gait rather than a complete cessation of lower limb
movement.
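That operational definition can be sketched as a simple threshold detector over the marker-derived velocity trace. The sampling interval `dt` and the toy trace below are assumptions; the study's capture rate is not stated here.

```python
def detect_fog(velocity: list[float], baseline: float,
               dt: float = 0.01, threshold: float = 0.10) -> list[tuple[float, float]]:
    """Return (start_time_s, duration_s) for each contiguous run of samples
    where centre-of-mass velocity falls below 10% of the participant's
    baseline velocity, per the FOG definition quoted above."""
    episodes, start = [], None
    for i, v in enumerate(velocity):
        if v < threshold * baseline:
            if start is None:
                start = i                      # episode begins
        elif start is not None:
            episodes.append((start * dt, (i - start) * dt))
            start = None
    if start is not None:                      # episode still running at trial end
        episodes.append((start * dt, (len(velocity) - start) * dt))
    return episodes

# Toy trace: normal walking, a brief freeze, then recovery (baseline 1.0 m/s):
print(detect_fog([1.0, 0.9, 0.05, 0.04, 0.06, 1.1], baseline=1.0))
```

Because the threshold is relative to baseline velocity, the same detector also flags pronounced festination, not only a complete halt.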
Results
Proprioception and group allocation
Participants were divided into high proprioception error and low
proprioception error groups (see Table 1). Mean proprioception score
was significantly different between the two groups (low error group
92.9 ± 17.7; high error group 284.3 ± 209.6; p<0.05).
Independent t-tests were performed to determine whether any
significant differences in demographic characteristics existed between
the two groups.
There were no significant differences in terms of mean age
(p=0.192), mean UPDRS III score (p=0.093), or mean MoCA score
(p=0.147).
FOG results from dark condition walking trials
The frequency and duration of FOG episodes were analysed in only
the dark condition, since no FOG episodes occurred in the lit
condition. Independent sample t -tests were performed between
the high and low proprioceptive error groups and found no
significant difference in mean FOG frequency (p=0.28) or duration
(p=0.24). However, both mean FOG frequency and duration were
higher in the high error group compared to the low error group
(see Table 2).
Step time variability
A mixed repeated-measures ANOVA was performed to determine
differences between groups, conditions (dark vs lit), and trials for
mean ST variability. There was only a significant effect of condition
(dark vs light) on gait parameter outcomes (p=0.018). There was no
significant difference between the high and low error groups with
respect to ST variability (see Figure 1).
Table 1: Group demographics.

Group              | Gender (M:F) | Mean age    | Mean UPDRS III score | Mean MoCA score | Proprioception error score
High error (n = 9) | 7:2          | 74.6 ± 8.98 | 29.2 ± 8.98          | 22.1 ± 5.16     | 284.3 ± 209.55
Low error (n = 9)  | 6:3          | 69.7 ± 9.37 | 22.3 ± 7.26          | 25.2 ± 3.31     | 92.9 ± 17.66
Table 2: Frequency and duration of FOG episodes during dark walking trials.

Results                                | High error group | Low error group
Mean number of FOG episodes (p = 0.28) | 1.2 episodes     | 0.44 episodes
Mean FOG duration (p = 0.24)           | 389.2 seconds    | 146.7 seconds
FIGURE 1: Step time (ST) variability in light and dark conditions in high and low proprioception error groups, by trial (condition × trial × proprioception error: F(2,20) = 1.16, p = 0.33; vertical bars denote 95% confidence intervals).
Step length variability
A mixed repeated-measures ANOVA was performed to determine
differences between groups, conditions (dark vs lit), and trials for the mean
SL variability (see Figure 2). The high error group had significantly higher
SL variability compared to the low error group in all trials and conditions
(p=0.039). There was a significantly greater SL variability in the dark
condition in all trials and groups compared to the lit condition (p<0.001).
There was also a significant interaction effect, i.e., group by condition
(p=0.02) (see Figure 3). The high error group had significantly greater SL
variability in dark compared to well-lit conditions, while the low error
group did not demonstrate changes between light and dark conditions.
Discussion
This study aimed to investigate whether differences in proprioceptive
accuracy among those with PD could influence walking performance in
conditions that increase one’s dependency on proprioceptive feedback,
thus challenging sensorimotor processing. Overall, the findings are in
line with this study’s hypothesis, and suggest that the degree of
proprioceptive accuracy in individuals with PD influences FOG and gait
behaviour in a proprioceptive processing challenge. Although there
was a lack of a significant difference in FOG outcomes between high
and low error groups, this could be explained by the small number of
FOG episodes captured in total, or by the small sample size. Higher
FOG frequency and duration were observed in the high error group,
which supports this assumption despite the findings not achieving
statistical significance. However, it could also be argued that
proprioceptive accuracy does not impact FOG outcome measures in
these conditions due to the lack of significance. Ultimately, repeat
studies with a larger sample size are required.
Nevertheless, proprioceptive ability did have a significant impact on SL
variability when the proprioceptive processing challenge was introduced.
Those with accurate proprioception (low error group) were less impacted
by the challenge posed by the dark environment and exhibited a
negligible decline in gait control. This group also had significantly less SL
variability than those with poor proprioception (high error group) in dark
conditions. This supports the hypothesis that an individual’s
proprioceptive ability can influence the degree to which freezing and gait
behaviour change under proprioceptive processing challenges.
SL variability has been shown to be a better indicator of gait
impairment in freezers than ST variability, although both parameters
are measures of automaticity in gait.14,15 It is known that freezers
attempt to control SL to enable gait adjustments, often using visual
cues and landmarks in their environment to guide their movement.9 SL
can be consciously controlled in order to reduce freezing behaviour
severity, and hence SL variability is theorised to be a superior indicator
of gait quality in patients who experience freezing compared to ST
variability.16 Previous work has shown ST variability in freezers to be less
correlated with gait impairments than SL variability.14 The
non-significant differences between groups for ST variability could be
due to the lower relevance that this measure has to freezing of gait.
Limitations
The study’s sample size was small due to the low prevalence of PD
freezers. This could be the reason for a lack of significant differences
FIGURE 2: Step length (SL) variability in light and dark conditions in high and low proprioception error groups, by trial (condition × trial × proprioception error: F(2,20) = 0.99, p = 0.39; vertical bars denote 95% confidence intervals).

FIGURE 3: Interaction effect of proprioceptive ability by walking condition on SL variability (condition × proprioception error: F(1,10) = 7.63, p = 0.02; vertical bars denote 95% confidence intervals).
References
1. Stuart S, Lord S, Hill E, Rochester L. Gait in Parkinson’s disease: a
visuo-cognitive challenge. Neurosci Biobehav Rev. 2016;62:76-88.
2. Cowie D, Limousin P, Peters A, Hariz M, Day BL. Doorway-provoked
freezing of gait in Parkinson’s disease. Mov Disord. 2012;27(4):492-9.
3. Nieuwboer A, Giladi N. Characterizing freezing of gait in Parkinson’s disease:
models of an episodic phenomenon. Mov Disord. 2013;28(11):1509-19.
4. Bloem BR, Hausdorff JM, Visser JE, Giladi N. Falls and freezing of gait in
Parkinson’s disease: a review of two interconnected, episodic phenomena.
Mov Disord. 2004;19(8):871-84.
5. Mortaza N, Abu Osman NA, Mehdikhani N. Are the spatio-temporal
parameters of gait capable of distinguishing a faller from a non-faller
elderly? Eur J Phys Rehabil Med. 2014;50(6):677-91.
6. Hausdorff JM. Gait variability: methods, modeling and meaning. J
Neuroengin Rehabil. 2005;2:19.
7. Okuma Y. Freezing of gait and falls in Parkinson’s disease. J Parkinsons Dis.
2014;4(2):255-60.
8. Lewis SJ, Barker RA. A pathophysiological model of freezing of gait in
Parkinson’s disease. Parkinsonism Relat Disord. 2009;15(5):333-8.
9. Almeida QJ, Frank JS, Roy EA, Jenkins ME, Spaulding S, Patla AE et al. An
evaluation of sensorimotor integration during locomotion toward a target
in Parkinson’s disease. Neuroscience. 2005;134(1):283-93.
10. Ehgoetz Martens KA, Pieruccini-Faria F, Almeida QJ. Could sensory
mechanisms be a core factor that underlies freezing of gait in Parkinson’s
disease? PloS One. 2013;8(5):e62602.
11. Fahn S, Marsden CD, Jenner P, Teychenne P (eds.). Recent Developments
in Parkinson’s Disease. New York: Raven Press, 1986.
12. Gill DJ, Freshman A, Blender JA, Ravina B. The Montreal Cognitive
Assessment as a screening tool for cognitive impairment in Parkinson’s
disease. Mov Disord. 2008;23(7):1043-6.
13. Measurement Sciences. Optotrak Certus [Internet]. 2019 [Accessed 2019 January 22]. Available at: https://www.ndigital.com/msci/products/optotrak-certus/.
14. Lebold CA, Almeida QJ. Evaluating the contributions of dynamic flow to
freezing of gait in Parkinson’s disease. Parkinsons Dis. 2010;2010:732508.
15. Gabell A, Nayak US. The effect of age on variability in gait. J Gerontol.
1984;39(6):662-6.
16. Han J, Waddington G, Adams R, Anson J, Liu Y. Assessing proprioception: a
critical review of methods. J Sport Health Sci. 2016;5(1):80-90.
17. Tan T, Almeida QJ, Rahimi F. Proprioceptive deficits in Parkinson’s disease
patients with freezing of gait. Neuroscience. 2011;192:746-52.
Volume 12: Number 1. 2019 | Page 31
between the high and low error groups in terms of FOG duration and
frequency. Another limitation is that only one method was used to
assess proprioception, and proprioception was only measured in one
joint. Participants’ scores on JPM for forearm positioning were
assumed to be indicative of the person’s overall proprioceptive
accuracy. Other methods for measuring proprioception exist, which
have higher validity and conceptual purity.16 However, JPM has
the benefit of being easy to perform and interpret (little equipment
or tester expertise is required to run JPM testing). Ideally,
other proprioception testing methods could have been used, and
multiple joints, especially joints of the lower limbs, could have been
tested. The degree of degradation in proprioceptive ability in PD
could vary across different joints, so measuring proprioceptive ability
in a single joint may not have been indicative of total body
proprioception.17
The study could also have benefited from the inclusion of healthy
age-matched controls in order to provide a gauge against which to
measure the high and low error groups’ proprioception. This would
have indicated each group’s proprioceptive accuracy relative to ‘normal’
levels.
Conclusion
These findings have great clinical significance for devising
novel treatment strategies to ameliorate FOG behaviours.
FOG is relatively resistant to common PD treatments, such as
dopaminergic therapy, and warrants further research.8 This lack of
dopaminergic response makes FOG a significant contributor
to morbidity and mortality, even in patients on appropriate
pharmacological therapy who show clinical improvement. Thus,
there exists the potential to improve quality of life in those who
experience FOG if other treatment modalities can be found to decrease
FOG episodes. The results of this study suggest that proprioception
may contribute to FOG behaviour and to other gait impairments.
Further research is needed to expand on these clinical findings. Certain
training programmes have demonstrated an improvement in
proprioception and could therefore potentially improve FOG and gait
abnormalities. This would greatly improve PD patient safety through
reducing the risk and occurrence of falls.7 The findings of this study
could be complemented by further research examining whether
improvements in proprioception among PD patients can decrease FOG
behaviour.
Caitlin Gibson1,2
Vera Dounaevskaia3
Sandra de Montbrun4
Ori Rotstein2,4
1RCSI medical student
2Keenan Research Centre for
Biomedical Science, Li Ka Shing
Knowledge Institute, St. Michael’s
Hospital, Toronto, Ontario,
Canada
3Department of Medicine,
Thromboembolism, University of
Toronto, Toronto, Ontario,
Canada
4Department of Surgery,
University of Toronto, Toronto,
Ontario, Canada
Post-discharge decisions: prolonged VTE prophylaxis for colorectal cancer surgery
Abstract
The connection between venous thromboembolism (VTE) and cancer was first described in the
1860s by Armand Trousseau. Postoperative VTE remains one of the leading complications and causes
of mortality in patients undergoing surgery for colorectal cancer. While there is evidence that
continuation of VTE prophylaxis beyond the hospital stay can reduce overall VTE incidence in select
groups of surgical patients, local hospital guidelines for prescribing post-discharge prophylactic
medication to prevent VTE vary. Current guidelines from the American College of Chest Physicians
(ACCP) recommend 30-day postoperative thromboprophylaxis for cancer patients undergoing major
abdominal or pelvic surgery.
This project aimed to develop an evidence-based protocol for post-discharge VTE prophylaxis in
colorectal cancer patients undergoing surgical resection at St. Michael’s Hospital, Toronto. The project
consisted of a literature review to define the indications for post-discharge VTE prophylaxis, a chart
review of 100 patients with colorectal cancer undergoing colorectal surgery to determine compliance
with best practice, and possible implementation of a new thromboprophylaxis protocol. After review
of the 100 cases, 21 cases were excluded as the patients did not have a laparotomy or laparoscopic
procedure, or were on anticoagulation therapy at time of/during admission. Only two of the 79
remaining patients (2.53%) were prescribed post-discharge thromboprophylaxis. This demonstrates a
gap in the use of recommended guidelines for VTE prevention in cases of colorectal cancer resection.
A protocol for implementing post-discharge VTE prophylaxis in cancer patients undergoing colorectal
surgery at St. Michael’s Hospital, Toronto, was developed and will be discussed.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 32-38.
Introduction
Venous thromboembolism (VTE) presents as a major complication for
patients with malignancy, especially those undergoing invasive
surgery.1 The @RISTOS project demonstrated that 40% of VTE events
occurred after 21 days postoperatively, highlighting that the risk of an
event remains elevated long after a typical postoperative hospital
stay.2 Despite continued evidence that cancer patients undergoing
major operations are at high risk for VTE, prevention with
pharmacological thromboprophylaxis is still largely underutilised.3
Possible explanations for this trend include shorter hospital stays,
earlier mobilisation of patients, and improved surgical techniques.2
Additionally, patients undergoing laparoscopic procedures, or those
without complications following surgery, are usually discharged
quickly and may not stay in hospital to receive the minimum seven to
ten days of postoperative prophylaxis.
The risk of a VTE event increases depending on the presence of other
comorbidities, patient age, type of operation, duration of surgery,
chemotherapy treatment, and other patient-, cancer-, and
treatment-related factors.4 The interrelatedness of these factors allows
patients to be divided into low-, moderate-, higher-, and high-risk
groups.5 Individuals with colorectal cancer undergoing resection
surgery are categorised as high risk for VTE events, for which the
current guidelines have recommended extended thromboprophylaxis
use.5 This patient population is deemed high risk due to the presence
of malignancy, a prothrombotic condition, as well as the longer
duration of major abdominal surgery. It has also been suggested that
extended VTE prophylaxis for 30 days postoperatively may replace
the traditional seven to ten days’ postoperative thromboprophylaxis
protocol as the new standard of care in patients undergoing major
abdominal surgery.1,6,7
The underutilisation of these guidelines demonstrates a gap between
current research and clinical practice. There is a clear need for quality
communication and adjustments to be made in order to provide the
safest and most effective treatment to these vulnerable patients. The
purpose of this work was to identify best practice in this field, to
quantify the frequency of post-discharge prophylaxis in patients with
colorectal cancer who underwent surgery in a single institution, to
identify gaps relative to accepted guidelines, and to suggest
implementation of a new approach to lessen these gaps.
FIGURE 1: CONSORT flow diagram.
• 100 colorectal cancer resection patients between August 2017 and April 2018.
• 15 patients excluded: 8 colonoscopy only; 3 EUA only; 3 trans-anal only; 1 exploratory only.
• 85 eligible patients: 35 laparotomy; 50 laparoscopy.
• 6 patients excluded as not eligible for post-discharge prophylaxis: 5 continued anticoagulation therapy prescribed prior to admission (for prior VTE or a pre-existing prothrombotic mutation); 1 had a VTE postoperatively while still in hospital and was discharged on therapeutic anticoagulation.
• 79 eligible patients included in the final analysis.
Table 1: Literature review.

RCT – Duration of prophylaxis against venous thromboembolism with enoxaparin after surgery for cancer.1
28/332 had postoperative VTE: 12% incidence in the placebo group (20/167) versus 4.8% in the enoxaparin group (8/165); absolute risk reduction of 7.2%. Conclusion: use enoxaparin for four weeks postoperatively.

RCT – A randomised study on one-week versus four-week prophylaxis for venous thromboembolism after laparoscopic surgery for colorectal cancer.6
11/225 had postoperative VTE during the randomisation period: 11/113 in the short treatment group versus 0/112 in the extended treatment group. Extended follow-up: one confirmed VTE in the extended treatment group. Conclusion: prolonged treatment is preferable to short-duration treatment.

RCT – Prolonged prophylaxis with dalteparin to prevent late thromboembolic complications in patients undergoing major abdominal surgery: a multicentre randomised open-label study.7
Cumulative VTE incidence between 7 and 28 days: 16.3% with short-term treatment versus 7.3% with long-term treatment. Prolonged treatment significantly reduced VTE without increasing bleeding risk.

RCT – Extended prophylaxis with bemiparin for the prevention of venous thromboembolism after abdominal or pelvic surgery for cancer: the CANBESURE randomised study.8
Primary efficacy outcome (DVT, non-fatal PE, and all-cause mortality) at the end of the double-blind period: 25 (10.1%) in the bemiparin group versus 32 (13.3%) in the placebo group (not statistically significant). Major VTE: 2 in the bemiparin group versus 11 in the placebo group (statistically significant; p = 0.010). Conclusion: “Prolonging LMWH up to 4 weeks in patients undergoing major abdominal and/or pelvic surgery is associated with a favorable benefit-to-risk ratio”8 without increasing the risk of bleeding.

Guideline – American Society of Clinical Oncology (ASCO), 2015 update.4
Patients undergoing cancer surgery should receive prophylaxis before surgery, continuing for at least 7-10 days. Extending prophylaxis up to four weeks should be considered in those undergoing major abdominal or pelvic surgery with high-risk features.

Guideline – International Clinical Practice Guidelines, 2012.3
LMWH once daily or low-dose UFH TID; commence 2-12 hours preoperatively and continue for at least 7-10 days postoperatively. Extended prophylaxis of four weeks to prevent postoperative VTE after major laparotomy/laparoscopy in cancer patients may be indicated with high VTE risk and low bleeding risk.

Guideline – European Society for Medical Oncology (ESMO), 2011.9
Enoxaparin 4,000U or dalteparin 5,000U or UFH 5,000U for at least 10 days postoperatively for surgery lasting >30 minutes.

Guideline – American College of Chest Physicians (ACCP), 9th edition, 2012.5
“For high-risk patients undergoing abdominal or pelvic surgery for cancer who are not otherwise at high risk for major bleeding complications, we recommend extended-duration pharmacologic prophylaxis (4 weeks) with LMWH over limited-duration prophylaxis”.
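As a worked check of the arithmetic in the first RCT row of Table 1, the absolute risk reduction follows directly from the raw event counts. The number needed to treat shown here is our own derived illustration, not a figure reported by the trial:

```python
# Arithmetic from the enoxaparin RCT row of Table 1 (20/167 VTE events
# on placebo vs 8/165 on enoxaparin). The NNT is a derived illustration,
# not a figure reported by the trial.
placebo_events, placebo_n = 20, 167
enoxaparin_events, enoxaparin_n = 8, 165

placebo_risk = placebo_events / placebo_n            # ~12.0%
enoxaparin_risk = enoxaparin_events / enoxaparin_n   # ~4.8%
arr = placebo_risk - enoxaparin_risk                 # ~7.1-7.2% absolute risk reduction
nnt = 1 / arr                                        # ~14 patients treated per VTE avoided
```

The table's 7.2% figure comes from subtracting the rounded percentages (12% − 4.8%); the unrounded counts give an ARR of about 7.1%.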
Methods
A literature review of relevant background material and current
guidelines was initially performed. Subsequently, a chart review of
100 patients from St. Michael’s Hospital, Toronto, was conducted and
analysed. Analysis of both the articles and charts led to the proposal
of a novel protocol for post-discharge thromboprophylaxis in patients
with colorectal cancer undergoing surgical resection.
Literature review
PubMed and Google Scholar were used to conduct the literature review.
The following keywords were included in the search:
thromboprophylaxis, colorectal, cancer, venous thromboembolism,
guidelines, and duration. The following types of studies were utilised:
four randomised controlled trials (RCTs); four existing guidelines; and
three cohort studies.
Patient chart review
Medical charts of 100 patients with colorectal cancer undergoing
surgery from August 2017 to April 2018 at St. Michael’s Hospital,
Toronto, were obtained and subjected to review. Patient records were
stored in Soarian Clinicals and Sovera databases. Information was
collected by analysing pathology reports, consultation notes, operative
summaries, discharge letters, operating room records, anaesthesia
records, and electronic medication administration records. Patients
were excluded if the operation was exploratory, trans-anal, or
colonoscopy only. All laparotomy and laparoscopy surgeries for
confirmed colon or rectal cancer were included (Figure 1).
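The eligibility rules above can be expressed as a simple filter. This is a hypothetical sketch: the record field names ('procedure', 'on_therapeutic_anticoagulation') are invented for illustration and are not the Soarian/Sovera schema.

```python
# Hypothetical sketch of the chart-review eligibility rules described
# above. Field names are invented for illustration only.
ELIGIBLE_PROCEDURES = {"laparotomy", "laparoscopy"}

def is_eligible(record):
    """Include only laparotomy/laparoscopy cases for confirmed colorectal
    cancer; exclude patients already on therapeutic anticoagulation."""
    return (record["procedure"] in ELIGIBLE_PROCEDURES
            and not record["on_therapeutic_anticoagulation"])

charts = [
    {"procedure": "laparoscopy", "on_therapeutic_anticoagulation": False},
    {"procedure": "colonoscopy", "on_therapeutic_anticoagulation": False},
    {"procedure": "laparotomy",  "on_therapeutic_anticoagulation": True},
]
eligible = [c for c in charts if is_eligible(c)]  # only the first record remains
```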
Results
Literature review
The literature review revealed a favourable benefit-to-risk ratio when
patients were prescribed extended thromboprophylaxis after
undergoing surgery for abdominal and/or pelvic cancer in the four
RCTs reviewed. The current guidelines recommend a minimum of
seven to ten days of postoperative thromboprophylaxis, or a total
duration of four weeks unless otherwise contraindicated. The cohort
studies summarised in Table 1 revealed an overall rate of 2.0-2.83%
VTE events, which is closely related to the rate of 3.8% found in this
study.

Table 1: Literature review (continued).

Cohort – Risk of post-discharge venous thromboembolism and associated mortality in general surgery: a population-based cohort study using linked hospital and primary care data in England.10
Comparison of 90-day in-hospital VTE, 90-day post-discharge VTE, and 90-day mortality. “Major organ resections had the greatest odds of VTE”. Post-discharge VTE accounted for 64.8% of all recorded VTE (hospital and primary care databases); 35.2% occurred in hospital. The overall VTE rate for colorectal cancer resection was 2.11% (55.5% post discharge).

Cohort – A clinical outcome-based prospective study on venous thromboembolism after cancer surgery: the @RISTOS Project.2
Incidence of clinically overt VTE within 30 ± 5 days postoperatively (or longer for longer hospital stays) in patients undergoing cancer surgery. Some 2.83% of general surgery patients developed postoperative VTE; 40% of all studied VTE events occurred after 21 days postoperatively. The colon was the second most common site of primary tumour, with the third highest incidence of VTE. Thromboembolism was the most common cause of death in this cohort.

Cohort – Timing and perioperative risk factors for in-hospital and post-discharge venous thromboembolism after colorectal cancer resection.11
2% of patients developed a VTE within 30 days (0.6% post discharge). Median time to diagnosis of VTE was nine days postoperatively, or nine days from the day of discharge.

Abbreviations: UFH = unfractionated heparin; LMWH = low-molecular-weight heparin; DVT = deep vein thrombosis; PE = pulmonary embolism; TID = three times daily; qd = once daily.

Overall, the literature supported the aim of this study, and
showed resemblance to the findings within the St. Michael’s cohort.
Of note, the RCTs used venography to detect VTE events and thus
asymptomatic cases were also detected.
St. Michael’s Hospital chart review
A total of 100 charts were reviewed and, of these, 21 were excluded.
Fifteen patients were excluded because they did not undergo a
laparotomy or laparoscopic procedure and thus the risk of VTE was
unknown. There were six patients on anticoagulation therapy during
their admission, five of whom were admitted on anticoagulation
therapy for a pre-existing condition, and one who had a VTE event
during the index admission. This rendered these individuals ineligible
for thromboprophylaxis as they were receiving therapeutic doses.
Table 2: Post-discharge VTE prophylaxis.

                                        Yes     No
Received post-discharge prophylaxis     2       77
Length of stay (days)
    Patient 1                           18
    Patient 2                           74
    Average                             46      8.31
Of the 79 patients analysed, two were prescribed dalteparin 5,000U
subcutaneously (SC) once nightly for post-discharge prophylaxis
(2.53%) (Table 2). The orders for dalteparin 5,000U were written on
the discharge summary, but an additional note stating “prescription
not given” was included with the orders and the duration of
prophylaxis was not specified. It is unclear whether these patients had
already filled the prescription or whether the patient never received
the prescription upon discharge. The reason for extended-duration
thromboprophylaxis for these two patients was not specified in the
medication order or in the documents reviewed.
Table 3: Post-discharge VTE events.

                                    Received post-discharge prophylaxis
                                    Yes       No
Developed post-discharge VTE: Yes   0/2       3/77
Developed post-discharge VTE: No    2/2       74/77
Additionally, three of the 79 patients (3.8%) suffered a
post-discharge VTE event (Table 3). These cases were discharged
without extended thromboprophylaxis and suffered a postoperative
event within 7, 13, and 50 days after the index operation. These
events occurred at 1, 6, and 25 days post discharge, respectively
(Table 4). The higher rate of VTE in patients who did not receive
post-discharge prophylaxis was not significantly different from the rate
in those who did, but it is consistent with current research showing
that the risk of VTE remains elevated after discharge from hospital in
those undergoing colorectal cancer resection.
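The "not significantly different" statement can be illustrated with a two-sided Fisher exact test on the 2×2 table of prophylaxis versus post-discharge VTE. This re-analysis is ours, implemented from scratch; it is not necessarily the authors' statistical method.

```python
# A minimal two-sided Fisher exact test on Table 3's 2x2 counts
# (prophylaxis yes/no vs post-discharge VTE yes/no), implemented from
# scratch with math.comb. Illustrative re-analysis only.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_of(x):  # hypergeometric probability of x events in cell (1,1)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_of(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as observed
    return sum(p_of(x) for x in range(lo, hi + 1) if p_of(x) <= p_obs + 1e-12)

# 0/2 VTE with prophylaxis vs 3/77 without (Table 3)
p = fisher_exact_2x2(0, 2, 3, 74)
```

With only two treated patients and three events, the test has essentially no power, which is why no significant difference can be detected despite the apparent gap in rates.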
Table 4: Length of stay and VTE presentation.

                                      Post-discharge        No post-discharge
                                      VTE event (n=3)       VTE event (n=76)
Average length of stay (days)         14.67                 9.05
Median length of stay (days)          7.00                  6.00
Postoperative (post-discharge) days
before VTE presentation               7 (1), 13 (6), 50 (25)
A newly proposed VTE protocol
A protocol for post-discharge VTE prophylaxis in patients undergoing
colorectal cancer surgery was established following thorough
examination of both the current literature and the chart review, which
was then presented at the morbidity and mortality (M and M)
meeting at St. Michael’s Hospital, Toronto. In order to determine the
best approach to improve the current system, experts at the M and
M meeting had the opportunity to share their insight, and the
potential obstacles and limitations to the proposed protocol. The new
protocol will be implemented with guidance from Thrombosis
Canada, which includes staff education and patient education. Based
on post-implementation results within the first three to six months,
the protocol may be adjusted.
As per the new VTE protocol, low-molecular weight heparin (LMWH)
will be administered for 30 days following discharge from hospital for
all patients with colorectal cancer who have undergone laparoscopic
or laparotomy colorectal cancer resections, unless contraindicated.
Each patient will be taught how to self-administer the injections by a
member of the nursing staff prior to discharge, and will receive the
LMWH free of charge. Medication access is outlined in Figure 2.
Review of patient outcomes will be carried out at 3, 6, and 12 months
following protocol implementation, with the aim of reducing or
eliminating post-discharge VTE events after colorectal surgery for
colorectal cancer patients. Once a consensus has been reached, the
protocol will be disseminated to hospital administrators, nurses,
physicians, and pharmacists. Educational seminars will be held in
order to prepare hospital staff for the incoming changes and to
address concerns. As the proposed protocol is still in development,
results are not yet available.
FIGURE 2: Access to medication. Doctors prescribe 30 days of Fragmin® (dalteparin) 5,000U on the discharge order.
• Patient is >65 y/o: prescription is covered.
• Patient is <65 y/o with private insurance: prescription is covered.
• Patient is <65 y/o without private insurance: pharmacist provides Pfizer discount cards, which make the prescription covered.
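The decision logic in Figure 2 can be summarised in a few lines. The function name, return strings, and the handling of age exactly 65 (the figure only distinguishes >65 from <65) are our illustrative assumptions:

```python
# Sketch of the Figure 2 medication-access pathways. Return strings and
# the treatment of age exactly 65 are illustrative assumptions; the
# figure itself only distinguishes >65 y/o from <65 y/o.
def dalteparin_coverage(age, has_private_insurance):
    """How a 30-day post-discharge dalteparin (Fragmin) 5,000U
    prescription is covered under the proposed protocol."""
    if age > 65:
        return "covered: public coverage (>65 y/o)"
    if has_private_insurance:
        return "covered: private insurance"
    return "covered: Pfizer discount card provided by pharmacist"
```

Every pathway ends with the prescription covered, so cost should not be a barrier to completing the 30-day course.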
References
1 Bergqvist D, Agnelli G, Cohen AT, Eldor A, Nilsson PE, Le Moigne-Amrani
A et al. Duration of prophylaxis against venous thromboembolism with
enoxaparin after surgery for cancer. N Engl J Med. 2002;346(13):975-80.
2 Guyatt GH, Akl EA, Crowther M, Gutterman DD, Schünemann HJ.
Executive summary: Antithrombotic Therapy and Prevention of Thrombosis (9th ed). Chest. 2012;141(2 Suppl):7S-47S.
3 Agnelli G, Bolis G, Capussotti L, Scarpa RM, Tonelli F, Bonizzoni E et al. A
clinical outcome-based prospective study on venous thromboembolism
after cancer surgery: The @RISTOS Project. Ann Surg.
2006;243(1):89-95.
4 Frere C, Farge D. Clinical practice guidelines for prophylaxis of venous
thromboembolism in cancer patients. Thromb Haemost.
2016;116(10):618-25.
5 Lyman GH, Bohlke K, Falanga A. Venous thromboembolism prophylaxis
and treatment in patients with cancer: American Society of Clinical
Oncology Clinical Practice Guideline Update. J Oncol Pract.
2015;11(3):e442-4.
6 Vedovati MC, Becattini C, Rondelli F, Boncompagni M, Camporese G,
Balzarotti R et al. A randomized study on 1-week versus 4-week
prophylaxis for venous thromboembolism after laparoscopic surgery for
colorectal cancer. Ann Surg. 2014;259(4):665-9.
7 Rasmussen MS, Jorgensen LN, Wille-Jørgensen P, Nielsen JD, Horn A,
Mohn AC et al. Prolonged prophylaxis with dalteparin to prevent late
thromboembolic complications in patients undergoing major abdominal
surgery: a multicenter randomized open-label study. J Thromb Haemost.
2006;4(11):2384-90.
8 Kakkar VV, Balibrea JL, Martínez-González J, Prandoni P. Extended
prophylaxis with bemiparin for the prevention of venous
thromboembolism after abdominal or pelvic surgery for cancer: the
CANBESURE randomized study. J Thromb Haemost. 2010;8(6):1223-9.
9 Mandalà M, Falanga A, Roila F. Management of venous
thromboembolism (VTE) in cancer patients: ESMO clinical practice
guidelines. Ann Oncol. 2011;22(Suppl.6):85-92.
10 Bouras G, Burns EM, Howell A-M, Bottle A, Athanasiou T, Darzi A. Risk of
post-discharge venous thromboembolism and associated mortality in
general surgery: a population-based cohort study using linked hospital
and primary care data in England. PLoS One. 2015;10(12):e0145759.
11 Davenport DL, Vargas HD, Kasten MW, Xenos ES. Timing and
perioperative risk factors for in-hospital and post-discharge venous
thromboembolism after colorectal cancer resection. Clin Appl Thromb.
2012;18(6):569-75.
Discussion
The literature review demonstrated that the incidence of
post-discharge VTE in patients undergoing colorectal surgery for
resection of colorectal cancer remains elevated, and is a cause of
significant morbidity and mortality for these individuals. In addition,
prolonged VTE prophylaxis in colorectal cancer patients following
surgery appears to be well supported by the literature. This study was
limited as it consisted of a chart review of only 100 patients over a
one-year period. The chart review allowed for analysis of VTE events
that presented post discharge to a single hospital institution. The
professionals present at the M and M meeting showed concern
regarding the following aspects of the proposed protocol: training
nursing staff on patient education for self-administered injections;
patient compliance with self-injected medication; and adverse
outcomes. These concerns were addressed and have been integrated
into the revised proposed protocol.
Conclusion
The results of the chart review were in line with the current incidence
rates of post-discharge VTE events in colorectal cancer surgery
patients, and demonstrated a gap in prescribing habits based on
current guidelines for this patient population. The main finding from
this chart review was that only 2.53% of patients received
post-discharge prophylaxis. By contrast, the @RISTOS project
showed a post-discharge prescribing rate of 30.7%.2 The chart
review also showed that 3.8% of patients developed VTE
post discharge. The @RISTOS project demonstrated clinically overt
VTE in 2.83% of the general surgery patients included within the
study.2
The next steps of this project are to implement the newly proposed
VTE prophylaxis prescribing protocol and subsequently analyse the
incidence of post-discharge VTE events and adherence to the
thromboprophylaxis prescribing protocol. Future research aimed at
the cost–benefit ratio of prolonged post-discharge
thromboprophylaxis and the use of new oral anticoagulant agents is
recommended.
Acknowledgements
The author wishes to acknowledge the RCSI and the Keenan Research
Summer Student programme for their contribution.
Bharti M. Kewlani
RCSI medical student
The role of lymphoscintigrams in breast cancer – a retrospective audit from Beaumont Hospital
Abstract
Introduction: Lymphoscintigram scans have been used, along with the technetium-99m and blue
dye injection, to locate sentinel lymph node(s) in breast cancer patients undergoing surgery at
Beaumont Hospital. However, the value of lymphoscintigrams for accurately identifying potentially
malignant lymph nodes in the clinical setting remains controversial.
Aim of study: To evaluate the role of pre-operative lymphoscintigrams in female breast cancer
patients at Beaumont Hospital.
Methods: Hospital database records of 100 breast cancer patients’ lymphoscintigrams, from August
2015 to November 2017, were evaluated retrospectively. These findings were correlated with
histology results and operative reports of ‘hot and/or blue’ lymph nodes.
Results: The mean number of nodes on lymphoscintigrams and histology reports was 2.42 and 2.63,
respectively. Some 6% of patients (n=6) had negative lymphoscintigram scans but were found to
have malignant lymph nodes on histological assessment.
Conclusion: Lymphoscintigraphy did not enhance the accuracy of detecting sentinel lymph nodes in
this study. Given these findings, further randomised controlled trials are recommended to guide
eventual elimination of lymphoscintigrams at Beaumont Hospital.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 39-43.
Introduction
Breast cancer is the most common malignancy in females, with an
annual incidence of 2,890 cases diagnosed in Ireland.1 Lymph from
the breast tissue drains via a system of ducts into lymph nodes in the
upper limb and thorax, and this lymphatic system serves as a possible
route for cancer metastasis. The sentinel lymph node(s) (SLN) is the
very first draining lymph node, and would thus be the first node to
which cancer would spread.2 An SLN biopsy is therefore performed
before or alongside surgical removal of the tumour, to confirm or
exclude the spread of breast cancer to regional lymph nodes and thus
pathologically stage the disease.2 One method for identifying the SLN
is to inject an isosulfan blue dye intra-operatively to directly visualise
the blue node(s).3 A second method involves injecting a radioactive
colloid tagged with technetium-99m near the areola for
intra-operative detection with a gamma probe.4,5 If the latter is done,
a lymphoscintigram (a nuclear medicine technology for imaging the
lymphatic system) is often obtained pre-operatively to aid
identification of the SLNs.5 There is no single unanimous protocol in
Irish hospitals for pre-operative SLN assessment in breast cancer.
Currently, at Beaumont Hospital in Dublin, both lymphoscintigraphy
and the isosulfan blue dye visualisation technique are used to identify
SLNs. Clinical guidelines at Beaumont Hospital have not been
updated in recent years to reflect available evidence, and the
usefulness of lymphoscintigrams has not been evaluated. A literature
review around this topic has shown that this area of research is
relatively underexplored; over the past ten years, only a handful of
studies have probed this question of lymphoscintigram efficacy. In
2010, studies conducted in New Zealand and the UK concluded that
the blue dye method alone is sufficient and pre-operative
lymphoscintigraphy is not required.6,7 However, these studies
indicated that further research looking at survival patterns and
recurrence rates should be done to confidently eliminate
lymphoscintigrams from clinical practice.7
In 2010, Sun et al. also concluded that lymphoscintigrams neither
improved the identification rate of SLNs nor reduced the chances of
missing SLNs during excisional surgery.8 It was further noted that
patients with metastatic breast cancer to the axilla had lower SLN
detection rates via lymphoscintigraphy compared to those without
metastasis.8
A 2015 study conducted in St Vincent’s Hospital in Ireland did not find
any benefit to injecting the blue dye post lymphoscintigraphy to
detect SLNs.3 The authors suggested avoiding the use of blue dye due
to possible anaphylactic reactions, skin tattooing, and interference
with pulse oximetry, and recommended that the blue dye method
should be reserved for patients with negative lymphoscintigrams or
who have undergone neoadjuvant chemotherapy.3
The aim of this present study was to retrospectively audit 100 female
patients who received breast cancer treatment at Beaumont Hospital
in Ireland between August 2015 and November 2017, and compare
their lymphoscintigram findings to histology findings. Subsequently,
the number of ‘hot and blue’ nodes identified intra-operatively was
also reviewed to assess for any correlations or discrepancies.
Methods
Search strategy
A search of the relevant databases (PubMed, CINAHL, and Cochrane)
was conducted, with results initially limited to articles published in the
last five years, but revised and expanded to articles published in the
last ten years due to a lack of publications on this topic in recent years.
The Medical Subject Headings (MeSH) used were as follows:
“lymphoscintigram – sentinel lymph node – breast cancer – English –
10 years – humans”. A total of 17 papers were found from the 3
databases used (PubMed – 10; CINAHL – 3; Cochrane – 4). Returned
search results were screened for duplicates, leaving a total of 13
papers that were then sorted based on title and abstract. Eventually,
7 papers were retained; the remaining 6 were discarded, as they were
deemed irrelevant for this study.
Patient selection
Upon request, the Picture Archiving and Communication System
(PACS) office at Beaumont Hospital ran a search to extract a list of all
patients who underwent lymphoscintigraphy between January 2015
and December 2017. This list included 295 patients and was exported
into a working Excel spreadsheet. Data extraction began with the
most recently seen patient (2017) and proceeded retrospectively.
Exclusion criteria included male patients, lymphoscintigrams
done for patients with melanoma, and cases where data was
insufficiently recorded. The final cohort of patient data included cases
from August 2015 to November 2017. Information was retrieved
from a total of 100 patients and 101 lymphoscintigram scans. Four
patients had bilateral malignancy and thus bilateral breast surgery,
but only two of these four patients had bilateral lymphoscintigraphy
performed (n=102 scans). However, one lymphoscintigram scan was
unavailable or unreported (n=101 scans). Finally, operative notes from
patients’ charts were reviewed manually, where possible, to record
the number of ‘hot and blue’ lymph nodes detected during surgery.
Data analysis
Data was recorded and analysed using an Excel spreadsheet. Information
was obtained from lymphoscintigram imaging reports (from the hospital’s
PACS software) and histology reports (from the hospital’s WinPath
system). The data collected included: patient demographics; tumour
type and size; histological grade; lymphovascular invasion; oestrogen,
progesterone, and HER2 receptor status; lymphoscintigraphy results;
type of surgery; number of nodes positive for malignancy on
histology; and the number of ‘hot and/or blue’ lymph nodes detected
intra-operatively.
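The core comparison in this audit, averaging node counts per detection method and classifying each patient’s lymphoscintigram count as equal to, below, or above the histology count, can be sketched in a few lines. This is an illustrative sketch only: the actual analysis was performed in an Excel spreadsheet, and the patient tuples below are invented.

```python
from statistics import mean

# Hypothetical records: (nodes on lymphoscintigram, nodes on histology).
# Invented for illustration; the real data lived in the audit spreadsheet.
scans = [(2, 2), (1, 3), (4, 2), (0, 1), (3, 3)]

avg_scint = mean(s for s, _ in scans)   # average SLNs per lymphoscintigram
avg_histo = mean(h for _, h in scans)   # average SLNs per histology report

# Per-patient consistency: equal, fewer, or more nodes on
# lymphoscintigram than on histology.
comparison = {"equal": 0, "scint_fewer": 0, "scint_more": 0}
for s, h in scans:
    if s == h:
        comparison["equal"] += 1
    elif s < h:
        comparison["scint_fewer"] += 1
    else:
        comparison["scint_more"] += 1
```

The same three-way classification underlies the consistency analysis reported in the Results.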
Results
This audit included data from 100 female patients (96 outpatients; 4
inpatients) who received treatment for breast cancer at Beaumont Hospital
between August 2015 and November 2017. The mean patient age was
58.3 years (range: 31-91 years). The average size of the invasive tumour
was 27.68mm (range: 0-210mm). The most common histological type
was invasive ductal carcinoma (IDC) grade 2, and 6% of patients had
multifocal disease. Thirteen patients (13%) had carcinoma in situ. For the
remaining 87 patients with invasive carcinoma, 16 had definitive
lymphovascular invasion, and 9 had probable lymphovascular invasion.
The remaining 62 patients did not have lymphovascular invasion.
Comparing the number of SLN(s) detected via different methods
All patients had a lymphoscintigram scan performed pre-operatively, and
were given the radioactive technetium-99m injection as well as the
isosulfan blue dye injection to aid the SLN biopsy. A total of 101
lymphoscintigrams were reported and analysed; nodes were counted
manually on scans retrieved from the PACS system. The average number
of SLNs per lymphoscintigram was 2.42 nodes (range: 0-13 nodes). The
average number of SLNs from 100 corresponding histology reports was
2.63 (range: 1-9 nodes).
It was noted that only 74 of the 87 surgical chart notes reviewed had
information regarding SLNs sufficiently recorded. This data was either
missing from the remaining operative notes, or was very poorly written
and could not be interpreted accurately, and was thus omitted. The
average number of ‘hot and/or blue’ nodes detected during surgery, as
calculated from the 74 available and legible operative notes, was 1.71
(range: 1-4). It was also noted that 6 of these 74 patients had nodes that
were hot (i.e., technetium-99m was detected via intra-operative gamma
probe), but not blue. In two cases, there was no evidence of hot and/or
blue nodes. These results are illustrated in Figure 1. A standard error of 5%
was included.
FIGURE 1: Comparing the average number of positive sentinel lymph nodes
detected pre-operatively and intra-operatively (y-axis: average number of
lymph nodes). Lymphoscintigraphy scans detected 2.42 nodes (range: 0-13;
n=101); histology reports detected 2.63 nodes (range: 1-9; n=100, as one
histology report was unavailable); surgery detected 1.71 hot and/or blue
nodes (range: 1-4; n=74).
Consistency of lymphoscintigraphy and histology findings
Analysis revealed that 30 patients had lymphoscintigram scans that
were consistent with histology report findings, meaning the number
of nodes detected on lymphoscintigram exactly matched the number
of nodes seen on histology. However, 34 patients had histology
reports that revealed more nodes than their respective
lymphoscintigram scans, and another 34 patients had histology
reports that revealed fewer nodes than their respective
lymphoscintigrams (see Table 1).
Table 1: Number of positive lymph nodes detected via
lymphoscintigram versus histology.*

Number of patients    Comparison of nodes detected
30                    Lymphoscintigram = Histology
34                    Lymphoscintigram < Histology
34                    Lymphoscintigram > Histology
Interpreting the histology findings for patients with negative
lymphoscintigrams
Of the 101 lymphoscintigrams performed, reported, and interpreted,
six patients had a negative scan, meaning no lymph nodes were
identified. Sentinel nodes were nonetheless identified on histology for
all six of these patients: four had one sentinel node on histology,
while the remaining two each had six sentinel nodes detected on
histology (see Table 2).
Table 2: Histology results for six patients with negative
lymphoscintigram scans.

Number of patients    Sentinel nodes identified on histology
4                     1
2                     6
Histological lymph node analysis
This audit revealed that 22 out of 100 patients had lymph nodes
positive for cancer on histological assessment. The average number of
positive nodes on histology was 0.233 (range: 0-2 nodes). Axillary
lymph node clearance was performed on 21 of these 22 patients.
Figure 2 summarises the number of nodes that were positive for
cancer among the 100 patients evaluated in this study; 20 patients
had one positive node, and two patients had two positive nodes.
Of note, 14% of the patients in this study were negative for the
oestrogen, progesterone, and HER2 receptors (i.e., triple negative),
and four of these patients had positive lymph nodes on histology.
Discussion
Lymphoscintigrams are nuclear medicine scans, and are performed
only in institutions where this imaging technology is available. The
100 patients included in this study had a pre-operative
lymphoscintigram scan performed for further evaluation of their
breast cancer prior to surgery. In the majority of cases, the imaging
scan was done one day prior to surgery. Results showed that the
average number of SLNs detected on lymphoscintigram (2.42) was
almost the same as that seen histologically (2.63). The difference
noted was found not to be statistically significant (see Figure 1).
All lymphoscintigrams assessed in this study were interpreted
manually by one individual, via counting the number of lymph nodes
seen on the scan. As such, any judgement calls made by the
researcher regarding number of visible nodes were consistent.
However, approximately 10% of the scans were of average or poor
image quality, which introduced some degree of error since the nodal
assessment and count was based on only one individual’s
interpretation skills.
Based on data extrapolation from 2015, the Power Performance
Manager (PPM) costing system estimates that the average cost of
providing nuclear imaging for the year 2017 was €307.25 per service,
including directly attributable running costs; on this basis, the 101
scans in this cohort represent roughly €31,000 in imaging costs. By
eliminating lymphoscintigraphy, a technology that, based upon these
results, appears to provide no significant additional benefit to breast
cancer patient management, funds could be redirected towards
research and other areas of hospital patient care.
Of note, lymphoscintigrams have played an integral role in the
identification of SLNs in melanoma, which, unlike breast cancer, has
an unpredictable route of lymphatic drainage and spread.6 McMasters
et al. therefore argue that the routine use of lymphoscintigrams in
breast cancer is not justified, given its predictable pattern of spread.6
One final limitation of this study is that the author would have liked to
assess patients’ body mass index (BMI), due to a possible correlation
between obesity and negative lymphoscintigram scans; however, this
data could not be evaluated due to limited resource availability.9

FIGURE 2: Number of positive (malignant) lymph nodes on histological
analysis (0 nodes: 79%; 1 node: 19%; 2 nodes: 2%).

*One lymphoscintigram was unavailable or not performed, and one histology
report was missing or unavailable, so numeric figures do not sum to the 100
patients included in the study.

Conclusion
After evaluating the data collected and processed in this retrospective
audit, it is evident that conducting a pre-operative lymphoscintigram
scan did not improve SLN detection rates in Beaumont Hospital breast
cancer patients. Lymphoscintigrams are costly, and their continued
use in everyday practice should be carefully evaluated further. We
propose that lymphoscintigrams be reserved for patients who have a
negative technetium-99m result (i.e., no SLNs detected using
radioactive technetium-99m colloid) or for patients with a history of
anaphylactic reaction to intra-operative use of the blue dye. Further
retrospective audits and randomised controlled trials with larger
patient cohorts are warranted before lymphoscintigraphy is eliminated
entirely for breast cancer patients at Beaumont Hospital.

Acknowledgements
Professor Arnold D.K. Hill, Head of School of Medicine/Professor of
Surgery, Royal College of Surgeons in Ireland
Mr Chwanrow M.K. Baban, general surgeon, Beaumont Hospital, Dublin
Mrs Aisling Hegarty, Research Nurse Manager, Beaumont Hospital, Dublin

References
1. Breast Cancer Ireland. Facts and figures. 2019. [Internet]. [cited 2019 January 5]. Available from: https://www.breastcancerireland.com/education-awareness/facts-and-figures/.
2. Borgstein PJ, Pijpers R, Comans EF, van Diest PJ, Boom RP, Meijer S. Sentinel lymph node biopsy in breast cancer: guidelines and pitfalls of lymphoscintigraphy and gamma probe detection. J Am Coll Surg. 1998;186(3):275-83.
3. O’Reilly EA et al. The value of isosulfan blue dye in addition to isotope scanning in the identification of the sentinel lymph node in breast cancer patients with a positive lymphoscintigraphy: a randomized controlled trial (ISRCTN98849733). Ann Surg. 2015;262(2):243-8.
4. Caruso G, Cipolla C, Costa R, Morabito A, Latteri S, Fricano S et al. Lymphoscintigraphy with peritumoral injection versus lymphoscintigraphy with subdermal periareolar injection of technetium-labeled human albumin to identify sentinel lymph nodes in breast cancer patients. Acta Radiologica. 2014;55(1):39-44.
5. Schaafsma BE, Verbeek FP, Rietbergen DD, van der Hiel B, van der Vorst JR, Liefers GJ et al. Clinical trial of combined radio- and fluorescence-guided sentinel lymph node biopsy in breast cancer. Br J Surg. 2013;100(8):1037-44.
6. McMasters K, Wong S, Tuttle T, Carlson D, Brown C, Dirk Noyes R et al. Preoperative lymphoscintigraphy for breast cancer does not improve the ability to identify axillary sentinel lymph nodes. Ann Surg. 2000;231(5):724-31.
7. Mathew MA et al. Pre-operative lymphoscintigraphy before sentinel lymph node biopsy for breast cancer. Breast. 2010;19(1):28-32.
8. Sun X et al. Roles of preoperative lymphoscintigraphy for sentinel lymph node biopsy in breast cancer patients. Jpn J Clin Oncol. 2010;40:722-5.
9. Vaz S, Silva Â, Sousa R, Ferreira T, Esteves S, Carvalho I et al. Breast cancer lymphoscintigraphy: factors associated with sentinel lymph node non visualization. Rev Esp Med Nucl Imagen Mol. 2015;34(6):345-9.
Jaclyn A. Croyle
Francesco Bartoloni Saint Omer
Sari F. Yordi
Tara E. Dowd
Jesse M. Connors
Taylor C. Cunningham
RCSI medical students
The burden of paediatric asthma in urban United States populations
Abstract
Within the United States (US) paediatric population, asthma is the most common chronic disease. Its
impact is most greatly appreciated with respect to quality of life. The importance of this condition
has been recognised in the US with a significant national response; however, prevalence, morbidity,
and mortality continue to rise. This review explores the extent of asthma in the US paediatric
population, as well as strategies undertaken by US health services to address this childhood illness.
In addition, this review assesses the efficacy of three evidence- and population-based asthma
prevention strategies, namely immunotherapy, reduction of environmental triggers, and prevention
of viral infections.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 44-49.
review
Page 44 | Volume 12: Number 1. 2019
Introduction
Asthma is the most common chronic disease impacting the paediatric
population in the United States (US).1 The major burden of this
disease is its significant impact on quality of life (QOL).
Physiologically, asthma is characterised by airway inflammation,
hyperresponsiveness, and reversible bronchoconstriction, resulting
from inappropriate activation of the immune system by otherwise
innocuous particles.1 These processes narrow the airway lumen,
causing the characteristic asthmatic symptoms of cough, wheeze,
and sputum production.1 Managing these symptoms
requires lifelong medication, and in acute exacerbations may require
presentation to the emergency room (ER).2 This impacts on the US
population via school and work absenteeism, and places a strain on
the healthcare system through the need for resources and
medication.3 Additionally, patients experience a reduced QOL as they
try to avoid triggering their symptoms.
The diagnosis of asthma is becoming more prevalent due to
improved understanding of the disease pathogenesis and better
diagnostic techniques. Asthma is more prevalent in urban areas,
which may be due to differences in environmental exposures and
lifestyle factors.4 Despite significant national responses in the US to
address this condition, the prevalence, morbidity, and mortality of
asthma have remained high in this population. Currently,
approximately 10% of children aged 5 to 19 years in the US have
asthma.5 Intervention strategies aimed at preventing asthma in
children in urban areas would be advantageous for the US
population, from both a financial and QOL perspective. These
strategies could reduce disease burden and mortality secondary to
asthma.
In this literature review, the epidemiology of childhood asthma in
urban US populations was assessed. Subsequently, three
evidence-based, population-based prevention strategies were
explored for their potential application in this patient population.
Methods
To complete this literature review, prospective and retrospective
studies, and online information, were evaluated for eligibility. No
language or date restrictions were applied. The paediatric population
was defined as ages 1 to 18 years, inclusive, as per the Medical
Subject Headings (MeSH) search criteria. PubMed was the main
source used to search for relevant data. This yielded 473 articles for
review. In addition, Science Direct, the clinical summary website
UpToDate, and the US Centers for Disease Control and Prevention
(CDC) website were also used to gather sources.
Titles and abstracts were reviewed for eligibility by each author.
Full-length articles for those deemed appropriate were accessed, and
bibliographies for these articles were investigated to identify
additional relevant studies.
Results
Epidemiology
The collection of epidemiological statistics on asthma poses
challenges, particularly with regard to collating and representing the
data in a standardised manner. Using a clinician’s diagnosis in health
questionnaires has been shown to be both sensitive and specific.6 This
strategy was utilised in the National Health Interview Survey (NHIS)
by asking patients if they had been diagnosed with asthma by a
physician, including the timeline and presentation of their diagnosis.7
In 2010, the CDC analysed data from the NHIS and found that 25.7
million people had asthma in the US, of which seven million were
children under 18 years of age.8 The data also showed that the
prevalence of asthma has increased; however, visits to primary care
settings for asthma symptoms decreased from 2001-2009 in the US.8
Moreover, children under 18 years of age with asthma are more likely
to attend primary care clinics or the ER than asthmatic adults, but
adults have a seven times higher mortality rate compared to
children.8 These statistics further highlight the importance of
recognising asthma risk factors, and using that information to
generate and promote prevention strategies that would likely reduce
the disease burden.8
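As a rough sanity check, the NHIS count of seven million affected children is consistent with the approximately 10% prevalence cited elsewhere in this review, assuming a US under-18 population on the order of 74 million in 2010; the denominator here is our assumption, not a figure from the cited sources.

```python
# Back-of-envelope prevalence check. The under-18 population figure is an
# assumption (roughly the 2010 US census order of magnitude), not taken
# from the NHIS data cited in the text.
children_with_asthma = 7_000_000
us_children_2010 = 74_000_000  # assumed denominator
prevalence_pct = 100 * children_with_asthma / us_children_2010  # ~9.5%
```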
Risk factors
Genetic and environmental factors have been attributed to the
development and severity of asthma. Although no single gene is
responsible for causing asthma, genetic mutations involving the
immune system play a role in predisposition to the disease.9,10
Interactions between genetic and environmental factors, both
prenatally and postnatally, are key in the risk of asthma development.
Polymorphisms in certain toll-like receptors, combined with exposure
to traffic-related air pollution as seen in urban environments, may
increase the risk of childhood asthma.11
A child’s geographic, demographic, and social environment is an
important factor when assessing the risk of developing asthma and
its severity. Children from urban households and of lower
socioeconomic status are at greater risk of an asthma diagnosis and
asthma-related exacerbations.12,13 Multiple factors may explain this,
including exposure to environmental pollutants, decreased physical
activity, poor diet, treatment non-adherence, and a high-stress
setting.12-15 Additionally, early introduction to microorganisms and
allergens, like those found on animals or a farm, can cause
desensitisation, preventing future hypersensitivity reactions.16 An
urban environment does not permit the same opportunity for gradual
exposure to these allergens, which is known to be a protective factor
in the development of asthma.
It is also crucial to consider the prenatal environment, as
maternal smoking status, paracetamol use, urban-type surroundings,
and a lower socioeconomic status have all been shown to increase
the risk of a child developing asthma.10 Furthermore, short-term
breastfeeding, postnatal tobacco smoke exposure, and neonatal
antibiotic use have also been shown to increase the risk of paediatric
asthma.9,16
National health system response
In 1999, the CDC recognised the increased prevalence and burden of
asthma, and launched the National Asthma Control Program (NACP)
to provide funding to states, cities, school programmes, and
non-governmental organisations.3 Funds are used to perform regular
asthma surveillance, improve healthcare training in asthma
management, and develop education strategies.3 The goals of this
initiative are to reduce the morbidity and mortality of asthma, limit
missed school and work days, and improve QOL for asthma patients
in the US.3 It is estimated that for every US dollar spent through the
NACP, $71 US are saved, highlighting the success of this programme.3
In order to achieve its goals, the NACP improved surveillance
techniques for asthma data collection through telephone surveys and
a partnership with the National Center for Health Statistics.3
The NACP also directs funding for intervention programmes in all
participating districts.3 This has included initiatives such as asthma
education, funding for medication, healthcare training, and risk
reduction strategies including smoking cessation training and
promotion.3 Additionally, the NACP collaborated with the National
Institutes of Health, among others, to generate National Asthma
Education and Prevention Program Guidelines.3 Other partnerships
with the American Lung Association and the Asthma and Allergy
Foundation of America have been important for supporting policy
changes in the US, including legislation allowing youth to self-carry
and self-administer asthma medications in the school setting,
promoting affordable and quality care, and advocating for outdoor
and indoor air quality improvement.17,18 Future aims of the NACP are
to tackle racial and ethnic disparities, and improve reimbursement
schemes for asthma education and environmental management
strategies.3,19
Prevention strategies
Specific immunotherapy
Immunotherapy is effective in children monosensitised to house dust
mites by preventing sensitisation to other allergens that could
otherwise lead to asthma.20 For individuals with rhinoconjunctivitis
and hay fever, immunotherapy reduces the risk of progression to
asthma.20,21 Of note, children who are untreated have at least a two
times greater risk of developing asthma or new asthma
sensitisations.20-22 According to Calderón and Brandt, the mechanism
of action of specific immunotherapy drugs, such as Grazax, is not fully
understood, but is believed to involve modulation of the immune
response by regulating immunoglobulin E (IgE) production and
generating anti-IgE antibodies that competitively inhibit the
allergen-IgE interaction.23 Antibody responses to Grazax follow a
dose-response relationship.23 Additionally, the immune profile shifts
from Th2 cells towards regulatory T cells, and antigen presentation
to specific T cells is inhibited.23
Nasser et al. determined the cost-effectiveness of Grazax in relation to
quality-adjusted life years (QALYs) in patients with existing
rhinoconjunctivitis.24 They found that productivity loss and healthcare
resource use as a result of asthma symptoms and complications were
much higher in the untreated group.24 Grazax was thus determined
to be a highly cost-effective treatment when compared to standard
management.24 The three-year treatment course would target
high-risk asthma patients, defined as those with pre-existing asthma
monosensitisation, or with a respiratory allergic response other than
asthma to one or more allergens.
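QALY comparisons of this kind are usually summarised as an incremental cost-effectiveness ratio (ICER): the extra cost of the new therapy divided by the extra QALYs it yields. A minimal sketch with invented numbers follows; these are not the actual figures from Nasser et al.

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost per additional quality-adjusted life year gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical: the new therapy costs 2,400 more and yields 0.08 extra
# QALYs, i.e., 30,000 per QALY gained. A therapy is commonly deemed
# cost-effective when this ratio falls below a willingness-to-pay threshold.
ratio = icer(cost_new=5400.0, cost_std=3000.0, qaly_new=0.98, qaly_std=0.90)
```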
This immunotherapy has already been shown to be effective in
Europe, making it a promising prevention strategy for the US
paediatric population. The greatest advantage of its introduction
would be its potential to reduce chronic asthma diagnoses and
disease burden. The main limitation, however, is that the long-term
side effects are unknown.
Reducing environmental triggers in the home
Many triggers for childhood asthma can be found in the home
environment, especially in urban settings. Home-based education
programmes serve to reduce the burden of disease in high-risk
paediatric asthma patients via proper home cleaning and repair.
These home-based programmes typically target the primary
caretakers of urban children who have physician-diagnosed asthma
and who have attended hospital twice within a one-year period.
Healthcare workers visit the child’s primary residence to
assess for potential asthma triggers and educate the family on how
best to eliminate them. These programmes have proved to be
successful in several US urban populations.25-27 Other studies in the US
measured the reduction in asthma triggers and allergens, rather than
overall asthma improvement, to evaluate the effectiveness of these
home-based interventions. These studies found that a
multicomponent intervention strategy reduced levels of asthma
triggers in the home, including dust mites and cockroach excrement,
in both the short and long term.28-30 Adherence rates for these
programmes have been as high as 93%, suggesting that the
intervention is manageable, which is vital for lasting success.25 In
addition, this proposed programme has been shown to result in
substantial healthcare cost savings.27
Preventing viral infection
A third way to reduce the burden of paediatric asthma is to dampen
the allergic response to viral infections. Studies have shown that lower
respiratory tract viral infections, combined with aeroallergen
sensitisation, increase the probability of inducing asthma
symptoms.31-34 Over time, this can lead to the development of
persistent asthma. Certain viral infections have been strongly
associated with asthma exacerbations, including respiratory syncytial
virus and rhinovirus.31-34
IgE plays a key role in the allergic response to these respiratory viruses,
making it a critical target for treatment. The US Inner City Asthma
Consortium published data strongly supporting the use of monthly
anti-IgE therapy (omalizumab) in high-risk subjects with atopy.33 A
randomised controlled trial recently published data supporting
anti-IgE therapy, showing that it not only reduced the number of
RCSIsmjreview
Page 48 | Volume 12: Number 1. 2019
References
1. Sawicki G, Haver K. Asthma in children younger than 12 years:
epidemiology and pathophysiology. In: Post T (ed.). UpToDate, Waltham,
MA. 2017. [Internet]. [Accessed 2017 November 25]. Available from:
https://www.uptodate.com/contents/asthma-in-children-younger-than-12-
years-initial-evaluation-and-diagnosis?search=asthma-in-children-younger-t
han-12-years-epidemiology-and-pathophysiology&source=search_result&s
electedTitle=1~150&usage_type=default&display_rank=1.
2. Fanta CH. An overview of asthma management. In: Post T (ed.).
UpToDate, Waltham, MA. 2017. [Internet]. [Accessed 2017 November 15].
Available from:
https://www.uptodate.com/contents/an-overview-of-asthma-management
?search=An%20overview%20of%20asthma%20management&source=sear
ch_result&selectedTitle=1~150&usage_type=default&display_rank=1.
3. Centers for Disease Control and Prevention. CDC’s National Asthma
Control Program: An investment in America’s health. Atlanta, GA. 2013.
[Internet]. Available from:
https://www.cdc.gov/asthma/pdfs/investment_americas_health.pdf.
4. Milligan KL, Matsui E, Sharma H. Asthma in urban children: epidemiology,
environmental risk factors, and the public health domain. Curr Allergy
Asthma Rep. 2016;16(4):33.
5. Centers for Disease Control and Prevention. Most Recent Asthma Data.
2017. [Internet]. Available from:
https://www.cdc.gov/asthma/most_recent_data.htm.
6. Litonjua A, Weiss S. Epidemiology of asthma. In: Post T (ed.). UpToDate,
Waltham, MA. 2018. [Internet]. [Accessed 2019 January 29]. Available
from:
https://www.uptodate.com/contents/epidemiology-of-asthma?search=epid
emiology%20of%20asthma%20litonjua&source=search_result&selectedTitl
e=1~150&usage_type=default&display_rank=1.
7. Centers for Disease Control and Prevention. NHIS Data, Questionnaires
and Related Documentation. Atlanta, GA. 2017. [Internet]. Available from:
https://www.cdc.gov/nchs/nhis/data-questionnaires-documentation.htm.
8. Akinbami LJ, Moorman JE, Bailey C, Zahran HS, King M, Johnson CA et al.
Trends in asthma prevalence, health care use, and mortality in the United
States, 2001-2010. NCHS Data Brief. 2012;(94):1-8.
9. Dietert RR. Maternal and childhood asthma: risk factors, interactions, and
ramifications. Reprod Toxicol. 2011;32(2):198-204.
10. Savenije OE, Kerkhof M, Reijmerink NE, Brunekreef B, de Jongste JC, Smit
HA et al. Interleukin-1 receptor-like 1 polymorphisms are associated with
serum IL1RL1-a, eosinophils, and asthma in childhood. J Allergy Clin
Immunol. 2011;127(3):750-6.e1-5.
11. Kerkhof M, Postma DS, Brunekreef B, Reijmerink NE, Wijga AH, de Jongste
JC et al. Toll-like receptor 2 and 4 genes influence susceptibility to adverse
effects of traffic-related air pollution on childhood asthma. Thorax.
exacerbations but also the number of symptomatic days spent in
hospital.35 Targeted therapy such as this has its disadvantages,
however, which include the time required to educate healthcare
providers and implement the new protocol. Moreover, the cost of
therapy needs to be addressed in order to ensure that asthma patients
of all socioeconomic groups in the US can benefit.36 Supplying this
therapy seasonally (i.e., in the autumn and winter months), could
make IgE therapy more cost-effective.33-35
Conclusion
It is clear that childhood asthma carries a significant burden in the US
population. The prevalence of asthma has increased in recent years; as
of 2010, 10% of children aged 5 to 19 years in the US were reported
to have asthma.5 Although it is a multifactorial disease, many risk
factors have been identified, including lower socioeconomic status
and urban residences.11 As with other chronic conditions, there seems
to be an interplay between genetics and the environment; however,
the extent of this interaction is not yet fully understood.
In 1999, the NACP was founded with clear objectives to address and
tackle this rising health problem.3 It has helped to reduce the
prevalence, morbidity, and mortality of paediatric asthma by enabling
funding for intervention programmes across the US.3
In this review, three evidence-based prevention strategies were
discussed. The first aims to reduce asthma prevalence by using targeted
immunotherapy in high-risk children, while the other two strategies aim
to lessen the disease burden by reducing symptoms and exacerbations.
Although many of these strategies have currently been tested or
implemented in populations outside the US, their application to the
US is valid. Exposure to the allergens in urban environments is likely to
be similar worldwide, and the asthmatic response to these triggers is
universal within the human body. The key factors that will determine
the success of these strategies are adherence rates and long-term
costs. Since these interventions will reduce the prevalence of asthma,
as well as lessen the need for medication or hospital and ER visits, the
long-term savings will outweigh the initial financial investment,
making them a viable option for the US population.
RCSIsmjreview
Volume 12: Number 1. 2019 | Page 49
Alexander Bonte
Lisa Gilmore
Kalu Long
Jessica Pian
Natalie Smith
Jason Royal
RCSI medical students
Abstract
Background: Chronic obstructive pulmonary disease (COPD) is a major and growing cause of
mortality and morbidity worldwide, particularly in low- and middle-income countries, where 90% of
deaths attributed to COPD occur. In China, the prevalence of COPD is 8.2% and there is a
disproportionately high mortality rate due to COPD. Chinese women in particular experience specific
risk factors, such as lifestyle and occupational exposures, and are a historically understudied and
underdiagnosed demographic. Opportunities for interventions specifically targeted at women are of
special interest in China.
Objective: The aim of this review is to examine the existing literature to quantify the disease burden
of COPD on Chinese females, to determine their specific risk factors, and to evaluate evidence-based
public health interventions at both local and national levels to mitigate the disease burden on this
demographic.
Methods: A comprehensive literature review was performed in MEDLINE (via PubMed) and the
Cochrane Library, using the search terms COPD, China, and female/women. Articles
not available in English were excluded from the analysis.
Results: The prevalence of COPD in Chinese women is 5.1%. Public health interventions such as the
provision of cleaner stoves to improve household ventilation and early screening programmes to
diagnose COPD have proven effective in both Chinese populations and in other countries. The
Chinese national health system response has been largely inadequate in meeting international
standards recommended to decrease the incidence of COPD.
Conclusions: The incidence and mortality of COPD, particularly in Chinese women, is a growing
concern. Public health interventions have demonstrated the potential for success in Chinese
populations. Additional effective public health interventions are needed to mitigate the incidence,
morbidity, and mortality of COPD, particularly for women. Nationwide large-scale studies of
epidemiological data are urgently needed to better inform public health decision-making.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 50-55.
Chronic obstructive pulmonary disease in adult Chinese women
Introduction
Chronic obstructive pulmonary disease (COPD) is an increasing
cause of morbidity and mortality worldwide. Studies estimate a global
prevalence of 11.7%, with a 68.9% increase between 1990 and 2010.
While global prevalence is on the rise, incidence and mortality in
high-income countries are declining. The same is not true for middle-
and low-income countries where risk factors remain pervasive;
according to the World Health Organisation (WHO), 90% of COPD
deaths occur in low- and middle-income countries.1
The aetiology of COPD is complex, comprising modifiable and
non-modifiable risk factors, both environmental and genetic.
In high-income countries, tobacco smoke is the predominant COPD
risk factor, while in lower-income countries the aetiology is more
complex. In China, environmental pollutants are on the rise and other
risk factors like malnutrition, poor ventilation, and infectious diseases
remain ubiquitous.2
The estimated COPD prevalence in China is 8.2%.3 Globally, COPD in
women is of particular concern and has gained significant attention
as a threat to public health.4 In China, this risk to women is further
compounded by continued indoor air pollution and unique
occupational exposure.2,4
This review sets out to assess the current landscape of the burden of
COPD in China with a specific focus on adult Chinese women, and to
examine evidence-based interventions that may mitigate that burden.
While China’s health system has implemented various strategies to
combat COPD, additional interventions are needed to mitigate the
incidence, morbidity, and mortality of COPD, particularly for women.
Search methodology
To assess the breadth of epidemiological information on adult
Chinese women with COPD, searches were performed in MEDLINE
PubMed using Medical Subject Headings (MeSH) and subheadings.
Exact search terms and refinements resulted in an initial body of
literature comprising 46 results.
To explore existing interventions targeting COPD, the Cochrane
Library was searched for “COPD” as a MeSH descriptor with
“Prevention & Control” as a qualifier. Sixty-nine results were found
that addressed COPD interventions, and 37 results were found that
specifically centred on interventions relevant to China. The literature
searches described above are all limited by the exclusion of
Chinese-language search terms.
Epidemiology
China has produced the highest number of COPD epidemiological
studies among emerging countries, yet data are inconsistent due to
discrepant diagnostic criteria, diverse study populations, and differing
geographic estimates (see Table 1).5-7 One of the largest
population-based studies estimates an overall 8.2% prevalence
(12.4% for men and 5.1% for women) using Global Initiative for
Chronic Obstructive Lung Disease (GOLD) criteria.3,5 Findings from
this study fit well within global averages; however, mortality due to
COPD in China is significantly greater than in other parts of the
world.8 Research estimates age-standardised mortality in China as
179.9 per 100,000 person-years for men and 141.3 per 100,000
person-years for women.2
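Age-standardised rates such as these are computed by weighting age-specific death rates by the age structure of a fixed standard population. The following is a minimal sketch of direct standardisation; the age bands, rates, and standard-population weights below are entirely hypothetical and are not taken from the cited study.

```python
# Direct age standardisation: weight each age-specific mortality rate by
# the proportion of a standard population in that age band.
# All numbers below are hypothetical and purely illustrative.

def age_standardised_rate(rates_per_100k, std_weights):
    """Weighted sum of age-specific rates (per 100,000 person-years)."""
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(r * w for r, w in zip(rates_per_100k, std_weights))

# Hypothetical COPD mortality in four age bands (40-49, 50-59, 60-69, 70+)
rates = [20.0, 90.0, 310.0, 900.0]   # deaths per 100,000 person-years
weights = [0.40, 0.30, 0.20, 0.10]   # standard-population age structure

print(age_standardised_rate(rates, weights))  # ~187 per 100,000 person-years
```

Standardisation removes the effect of differing age structures, which is what makes the male and female figures above directly comparable.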
COPD is the third highest cause of disability-adjusted life years
(DALYs) in China and is projected to rise with increased smoking
prevalence.1 While the smoking rate among women (2.4%) is low compared
with that among men (52.9%), disproportionate exposure to biofuels,
increasing social acceptability of female smoking, and potential
genetic susceptibility place women at particular risk for COPD.4,9,10
Table 1: Estimated COPD prevalence by geographic
location in China based on GOLD criteria.
Region Total prevalence (95% CI)
Beijing 8.0 (7.1-9.0)
Tianjin 9.6 (8.6-10.7)
Liaoning 7.4 (6.6-8.3)
Shanghai 6.5 (5.6-7.4)
Guangdong 9.4 (8.4-10.4)
Shanxi 5.5 (4.6-6.4)
Chongqing 13.7 (11.9-15.5)
Average 8.2 (7.9-8.6)
Source: Zhong et al., 2007.5
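As a quick check on Table 1, the unweighted mean of the regional estimates can be computed directly. It comes out slightly above the published pooled figure of 8.2%, which is presumably weighted by regional sample size; the snippet below only illustrates that distinction.

```python
# Regional COPD prevalence estimates (%) from Table 1 (Zhong et al., 2007).
prevalence = {
    "Beijing": 8.0, "Tianjin": 9.6, "Liaoning": 7.4, "Shanghai": 6.5,
    "Guangdong": 9.4, "Shanxi": 5.5, "Chongqing": 13.7,
}

# Simple unweighted mean across the seven regions.
unweighted_mean = sum(prevalence.values()) / len(prevalence)
print(round(unweighted_mean, 2))  # 8.59 - above the pooled 8.2, which
                                  # weights regions by sample size
```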
COPD risk factors
Tobacco smoke
Smoking is the leading risk factor for COPD in China. China is the
world’s largest producer and consumer of cigarettes, accounting for 40%
of global cigarette consumption.11 Estimated cigarette consumption
in China increased by 260% between 1970 and 1990, with recent
data citing continually high prevalence in males.11,12 While the same
study found a decrease in smoking uptake by women, these
researchers also express concern that more acceptable attitudes
towards female smoking may reverse this trend.11 Other publications
have found a rise in young female smokers.13
In addition to active smoking, second-hand smoke exposure in
China is common. Some 51.3% of female non-smokers reported
exposure to second-hand smoke at home.14 Passive smoking, as well
as biofuel exposure, may explain COPD rates in Chinese women that
are higher in proportion to their active smoking rates.15 Additionally,
research conducted globally and in China suggests that
genetic factors may place women at greater risk of developing
COPD, with an earlier onset and higher mortality than their male
counterparts.4,16
Indoor pollutants
In contrast to more economically developed countries, inefficient
biomass combustion is the second leading risk factor for COPD in
China, due to pollutants produced by heating and cooking in poorly
ventilated areas.17 One study that measured indoor and outdoor
pollutants and assessed COPD by spirometry in rural and urban households
concluded that the use of biofuels is widespread in rural China and
negligible in urban areas (88.1% vs 0.7% of households).18
Additionally, the prevalence of COPD among non-smokers was
significantly higher in rural households, where biofuels are more
commonly used, than in urban ones (7.4% vs 2.5%).18 Zhong et al.
also observed a significantly higher COPD prevalence in rural areas
(8.8%) compared with urban areas (7.8%).5 Rural Chinese women are
disproportionately exposed to indoor air pollution in comparison to
men, as they spend more time at home and do the majority of
household cooking.17
National health system response
In 2012, the WHO endorsed a new health goal aiming for a 25%
reduction in premature mortality from COPD by 2025.8 In China,
health officials acknowledge the severity
of this problem, yet there remains no centralised plan to combat
the burden of the disease and no direct efforts targeting women.
The WHO established the Framework Convention on Tobacco
Control (FCTC), which came into effect in 2006.19 The MPOWER
policy tracks compliance with FCTC regulations on tobacco
monitoring, smoke-free policies, cessation programmes, health
warnings and mass media, advertising bans, and cigarette
taxation.19 China’s implementation of and adherence to FCTC
requirements are considered very poor based on quantitative
MPOWER markers.19
For example, in 2007 the Chinese Government approved WHO
guidelines prohibiting smoking in the workplace and indoor public
places.20 However, the Ministry of Health did not introduce a
nationwide ban on smoking in indoor public places until 2011.21 In
a WHO survey, 89% of respondents observed smoking in banned
public areas, emphasising that without legal penalties there is poor
compliance with the law.22 Beijing has been more successful after
implementing fines for individuals and establishments, public
shaming tactics, and hiring inspectors to enforce the ban.21
Notably, in its most recent WHO profile (2017), China received the
lowest possible rating for its smoke-free policies.19 Article 11 of the
FCTC requires that large, graphic, and pictorial warnings of the
health hazards of smoking cover at least 50% of a cigarette
package.23 China has failed to comply with guidelines, providing
only small text-only warnings that cover 35% of the package.
To reduce risk from indoor air pollution, the Ministry of Agriculture
introduced the National Improved Stove Program (NISP) in the
1980s and distributed 200 million stoves to households in rural
China. The stoves required less fuel and were installed with
chimneys and grates to increase ventilation in the home in efforts
to reduce household air pollution.24
Evidence-based public health interventions
In China, locally established COPD interventions are delivered at
different tiers of prevention, including primordial, primary, and
secondary levels. Outcome data supporting regulatory cigarette
legislation and early screening suggest that these measures can be
successfully adapted to the Chinese population.25,26 A second iteration
of the NISP could also help
to minimise COPD risk for women.
Reduction of household air pollution
Given the large percentage of the population who live in rural China,
interventions that improve cook stove quality and household
ventilation can reduce COPD risk for women and the entire
household.15 Chinese studies have highlighted these benefits,
including better air quality measurements in homes, reduced decline
in forced expiratory volume (FEV1), reduced COPD risk, and
decreased COPD incidence.24,27-29 Studies conducted in other
countries show similar results and note additional benefits, including
a rise in infant birth weight, decreased rates of childhood pneumonia,
and an improvement in self-reported respiratory symptoms.28
While the NISP is regarded as a successful public health intervention,
technological advancements have led to more efficient, cleaner
stove models.24 Environmental monitoring studies show persistent
household air pollution above national indoor air safety standards,
including in homes that previously participated in the programme.27
While the main aim of this intervention is to reduce COPD risk in
women, the intervention would additionally benefit other household
members, especially young children, who are often cared for
indoors.28 Cost-effectiveness studies also support net benefits,
estimating a comparative cost of $50-100 per DALY averted with the
use of improved stoves.30,31
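The $50-100 figure is a cost-effectiveness ratio: total programme cost divided by DALYs averted. The toy calculation below uses hypothetical inputs, chosen only so that the result falls inside the quoted range; it is not drawn from the cited studies.

```python
# Cost-effectiveness ratio: programme cost divided by DALYs averted.
# The inputs are hypothetical and purely illustrative.

def cost_per_daly_averted(programme_cost_usd, dalys_averted):
    return programme_cost_usd / dalys_averted

ratio = cost_per_daly_averted(75_000, 1_000)  # e.g., one district's stove rollout
print(ratio)  # 75.0 USD per DALY averted, inside the quoted $50-100 range
```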
Early screening programmes
Recent recommendations in the literature highlight the importance of
early COPD detection and diagnosis, particularly for the large number
of patients who have diagnosable COPD but are asymptomatic or
unaware of their condition.32,33 Prevalence data from China support
the need for COPD screening; one study showed that of participants
meeting COPD diagnostic criteria, 35.5% were asymptomatic and
only 35.1% had a previous diagnosis before study enrolment.5
Additionally, women with COPD are particularly underdiagnosed by
physicians.4
Numerous COPD screening strategies exist worldwide, with the
majority conducted in the primary care setting.33 A randomised
controlled trial conducted in primary care clinics in the Netherlands
concluded that implementation of the Respiratory Health Screening
Questionnaire (RHSQ), with follow-up diagnostic spirometry in
high-risk patients, was an effective early COPD detection strategy.
Preliminary cost estimations found this intervention economically
feasible.26 Implementation of a similar screening programme in China
could help to identify high-risk women.
While the RHSQ has been validated in a Western setting, it would
need to be replaced by a locally developed screening tool, such as the
COPD Screening Questionnaire, or adjusted and translated for a
Chinese population.34 For example, the RHSQ does not assess for
biomass smoke exposure, one of the leading COPD risk factors for
women in China. In primary care facilities that lack resources for full
spirometry, case-identification spirometry can be used to refer
patients for follow-up rather than diagnosis.35
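Spirometric case identification as described above rests on the GOLD rule: airflow obstruction is a post-bronchodilator FEV1/FVC ratio below 0.70, with severity graded by FEV1 as a percentage of the predicted value. A simplified sketch, using the commonly cited GOLD grade 1-4 cut-offs:

```python
# GOLD spirometric criteria, simplified. FEV1 and FVC in litres
# (post-bronchodilator); severity graded by FEV1 % of predicted.

def has_airflow_obstruction(fev1_l, fvc_l):
    return fev1_l / fvc_l < 0.70

def gold_grade(fev1_pct_predicted):
    if fev1_pct_predicted >= 80:
        return 1  # mild
    if fev1_pct_predicted >= 50:
        return 2  # moderate
    if fev1_pct_predicted >= 30:
        return 3  # severe
    return 4      # very severe

print(has_airflow_obstruction(1.8, 3.0), gold_grade(65))  # True 2
```

In a resource-limited clinic, a positive result on such a rule would trigger referral for full diagnostic spirometry rather than serve as the diagnosis itself.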
Slowed disease progression, better symptom control, and integration
of useful maintenance treatments are just some advantages of early
COPD detection and diagnosis. This will ultimately lead to long-term
positive effects: improved health outcomes, enhanced quality of life,
and decreased health expenditure.33
Conclusion
Previously understudied, COPD in women is gaining attention
worldwide as a public health threat, including in China. Several factors
make Chinese women of particular interest, including their
disproportionate rates of biofuel exposure, the potential rise in female
smokers, and high exposure to second-hand smoke. COPD prevalence in
women is disproportionately high relative to the low percentage of
female smokers; this inconsistency highlights the impact of indoor
pollution and raises the question of whether genetic factors make
women more susceptible to COPD. Further studies should examine this
hypothesis,
which is a topic of significant controversy within the literature.
Furthermore, although a large undertaking given China’s population
size, a nationwide epidemiological study would clarify differing
reports on COPD in China, with focused analysis of distinct populations
like women. Meanwhile, a large-scale meta-analysis of the literature
to date could synthesise current data to better inform public
health decision-making. The Chinese Government should also
increase health sector capacity to detect, manage, and treat
COPD.
Finally, no risk factor operates independently; like other disease states
and health outcomes, COPD risk is multifaceted and occurs at
multiple tiers of influence. Individual behaviours, society and
community, and broader socioeconomic, cultural, and environmental
conditions all contribute to health.36 This includes
overarching national health efforts like public health policies and
legislation. Improved compliance with WHO regulations would reduce the
burden of disease on individuals and communities, and decrease the
economic burden on the public healthcare system.
References
1. Cruz A, Mantzouranis E, Matricardi P et al. Global surveillance, prevention and
control of chronic respiratory diseases: a comprehensive approach. Geneva,
Switzerland. 2007. [Internet]. Available from:
https://www.who.int/gard/publications/GARD%20Book%202007.pdf.
2. Reilly KH, Gu D, Duan X et al. Risk factors for chronic obstructive pulmonary
disease mortality in Chinese adults. Am J Epidemiol. 2008;167:998-1004.
3. Yin P, Zhang M, Li Y et al. Prevalence of COPD and its association with
socioeconomic status in China: findings from China Chronic Disease Risk
Factor Surveillance 2007. BMC Public Health. 2011;11:586.
4. Varkey AB. Chronic obstructive pulmonary disease in women: exploring
gender differences. Curr Opin Pulm Med. 2004;10:98-103.
5. Zhong N, Wang C, Yao W et al. Prevalence of chronic obstructive pulmonary
disease in China: a large, population-based survey. Am J Respir Crit Care Med.
2007;176:753-60.
6. Adeloye D, Chua S, Lee C et al. Global and regional estimates of COPD prevalence:
systematic review and meta-analysis. J Glob Health. 2015;5(2):020415.
7. Halbert RJ, Isonaka S, George D et al. Interpreting COPD prevalence
estimates: what is the true burden of disease? Chest.
2003;123(5):1684-92.
8. Mathers CD, Loncar D. Projections of global mortality and burden of
disease from 2002 to 2030. PLoS Med. 2006;3(11):e442.
9. Li Q, Hsia J, Yang G. Prevalence of smoking in China in 2010. N Engl J
Med. 2011;364(25):2469-70.
10. Sansone N, Yong HH, Li L et al. Perceived acceptability of female smoking
in China. Tob Control. 2015;24(Suppl.4):iv48-iv54.
11. Liu S, Zhang M, Yang L et al. Prevalence and patterns of tobacco
smoking among Chinese adult men and women: findings of the 2010
national smoking survey. J Epidemiol Community Health.
2017;71(2):154-61.
12. Chan-Yeung M, Aït-Khaled N, White N et al. The burden and impact of
COPD in Asia and Africa. Int J Tuberc Lung Dis. 2004;8(1):2-14.
13. Ho MG, Ma S, Chai W et al. Smoking among rural and urban young
women in China. Tob Control. 2010;19(1):13-8.
14. Gu D, Wu X, Reynolds K et al. Cigarette smoking and exposure to
environmental tobacco smoke in China: The international collaborative
study of cardiovascular disease in Asia. Am J Public Health.
2004;94(11):1972-6.
15. Zhou Y, Chen R. Risk factors and intervention for chronic obstructive
pulmonary disease in China. Respirology. 2013;18(Suppl.3):4-9.
16. Xu F, Yin X, Shen H et al. Better understanding the influence of cigarette
smoking and indoor air pollution on chronic obstructive pulmonary
disease: a case-control study in mainland China. Respirology.
2007;12(6):891-7.
17. Chapman RS, He X, Blair AE et al. Improvement in household stoves and
risk of chronic obstructive pulmonary disease in Xuanwei, China:
retrospective cohort study. BMJ. 2005;331(7524):1050.
18. Liu S, Zhou Y, Wang X et al. Biomass fuels are the probable risk factor for
chronic obstructive pulmonary disease in rural South China. Thorax.
2007;62(10):889-97.
19. World Health Organisation. WHO Report on the Global Tobacco
Epidemic, 2008: the MPOWER package. Geneva: WHO; 2008.
20. Yamato H, Jiang Y, Ohta M. WHO Framework Convention on Tobacco
Control (FCTC) Article 8: protection from exposure to tobacco smoke. Jap
J Hyg. 2015;70(1):3-14.
21. Fong GT, Sansone G, Yan M et al. Evaluation of smoke-free policies in
seven cities in China, 2007-2012. Tob Control. 2015;24(Suppl. 4):iv14-20.
22. World Health Organization Western Pacific Region and University of
Waterloo, ITC Project. Smoke-free policies in China: Evidence of
effectiveness and implications for action. Manila: 2015.
23. WHO FCTC Conference of the Parties. Guidelines for implementation of
Article 11 of the WHO Framework Convention on Tobacco Control
(Packaging and labelling of tobacco products). Tob Control. 2008.
24. Smith KR, Keyun D. A Chinese national improved stove program for the
21st century to promote rural social and economic development. Energy
Policy Res. 2010;1:24-5.
25. Hammond D, Fong GT, McNeill A et al. Effectiveness of cigarette warning
labels in informing smokers about the risks of smoking: findings from the
International Tobacco Control (ITC) Four Country Survey. Tob Control.
2006;15(Suppl.3):19.
26. Dirven JAM, Tange HJ, Muris JWM et al. Early detection of COPD in
general practice: patient or practice managed? A randomised controlled
trial of two strategies in different socioeconomic environments. Prim Care
Respir J. 2013;22(3):331-7.
27. Sinton JE, Smith KR, Peabody JW et al. An assessment of programs to
promote improved household stoves in China. Energy Sustain Dev.
2004;8(3):33-52.
28. Thomas E, Wickramasinghe K, Mendis S et al. Improved stove
interventions to reduce household air pollution in low and middle income
countries: a descriptive systematic review. BMC Public Health.
2015;15:650.
29. Zhou Y, Zou Y, Li X et al. Lung function and incidence of chronic
obstructive pulmonary disease after improved cooking fuels and kitchen
ventilation: a 9-year prospective cohort study. PLoS Med.
2014;11(3):e1001621.
30. Bruce N, Smith K, Ezzati M et al. Addressing the impact of household
energy and indoor air pollution on the health of the poor: implications for
policy action. 2002. [Internet]. Available from:
http://www.who.int/mediacentre/events/H&SD_Plaq_no9.pdf.
31. Mehta S, Shahpar C, Appia A et al. The health benefits of interventions to
reduce indoor air pollution from solid fuel use: a cost-effectiveness analysis.
Energy Sustain Dev. 2004;8(3):53-9.
32. Price D, Freeman D, Cleland J et al. Earlier diagnosis and earlier treatment
of COPD in primary care. Prim Care Respir J. 2011;20(1):15-22.
33. Csikesz NG, Gartman EJ. New developments in the assessment of COPD:
early diagnosis is key. Int J Chron Obstruct Pulmon Dis. 2014;9:277-86.
34. Zhou Y, Chen S, Tian J et al. Development and validation of a chronic
obstructive pulmonary disease screening questionnaire in China. Int J
Tuberc Lung Dis. 2013;17(12):1645-51.
35. Soriano JB, Zielinski J, Price D. Screening for and early detection of chronic
obstructive pulmonary disease. Lancet. 2009;374(9691):721-32.
36. Whitehead M, Dahlgren G. What can be done about inequalities in
health? Lancet. 1991;338:1059-63.
Christopher S. Lozano
RCSI medical student
Abstract
Objective: The obesity epidemic presents a major challenge to public health, driving chronic
disease, most notably cardiovascular disease. With the global burden of obesity expected to
continue to increase, many strategies for weight loss have been developed. Intermittent fasting
(IF) is a weight loss technique characterised by periods of fasting interspersed with periods of
less restricted food intake. This is in contrast to continuous restriction (CR) diet regimens, which reduce the
total number of calories consumed in a conventional eating schedule. There is some evidence,
particularly from animal studies, that IF is more effective for weight loss than CR. This systematic
review aims to compile and evaluate all the human trials of IF and CR to determine whether there is
a clear advantage of one technique over the other.
Methods: A search using relevant keywords yielded 659 articles, of which seven were found to meet
the inclusion criteria for this analysis. These seven articles investigated a variety of different fasting
regimens.
Results: None of the studies demonstrated a significant difference in weight loss between the IF and
CR groups. There is some evidence to suggest that IF might be more effective than CR because it is
more conducive to diet adherence.
Conclusions: Within the current literature, there is no definitive answer as to whether IF is superior
to CR for weight loss. Additional human studies should be conducted to better clarify this clinically
relevant question, and should include adherence as part of their analysis.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 56-61.
Fasting fiction: is intermittent fasting superior to conventional dieting?
Introduction
Intermittent fasting (IF) is defined as a diet regimen that cycles
between periods of fasting (low or no food intake) interspersed
with periods of less restricted or unrestricted food intake.1 Fasting
in various forms has been practised for thousands of years, rooted
in asceticism and spiritual observance in various religions.2 While
fasting for religious reasons persists, modern-day fasting is often
undertaken for
the purpose of weight loss. The theory behind weight loss in IF is
similar to conventional dieting in that it creates a caloric deficit.
The key difference between IF and conventional diets lies in the
timing of caloric ingestion. Most conventional diets rely on a
continuous restriction (CR) of calories (e.g., reductions in daily caloric
intake by 25% each day) or simply the complete restriction of certain
types of food.
Various methodologies for IF have been studied; however, no definitive
regimen has been established, as IF has only recently attracted
mainstream scientific interest. The interval of fasting and degree of
restriction vary greatly across different studies. One IF regimen
restricts all eating to an eight-hour period during the day, which
typically results in the participant skipping one or more meals.3 Some
diets involve alternate-day fasting, characterised by either zero food
intake or a moderate restriction (50-70% of energy needs) every other day.4-6
Another popular approach is the 5:2 method, in which fasting takes
place for two days of the week and the other five days have less
restriction or ad libitum food intake.7 On the more extreme end, some
studies have tested longer fasting periods of four to six days.8 Many
of these studies of IF for weight loss have been conducted in humans;
however, it is also important to note that studies in animals have
investigated IF in the context of far-reaching metabolic benefits for
health and disease beyond that of body composition and weight.1
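The regimens above can be compared on weekly energy intake. The toy calculation below assumes a 2,000 kcal/day maintenance requirement and ~500 kcal fast days; the figures are illustrative assumptions, not taken from any cited trial, but they show how 25% CR and a 5:2 schedule can produce comparable weekly totals.

```python
# Weekly energy intake under 25% continuous restriction (CR) vs a 5:2
# intermittent fasting schedule. All numbers are hypothetical.
MAINTENANCE_KCAL = 2000                       # assumed daily maintenance requirement

cr_week = 7 * int(MAINTENANCE_KCAL * 0.75)    # every day at 1,500 kcal
if_week = 5 * MAINTENANCE_KCAL + 2 * 500      # five ad libitum days, two ~500 kcal fast days

print(cr_week, if_week)  # 10500 11000 - comparable weekly deficits
```

Under these assumptions, the two schedules differ mainly in the timing, not the magnitude, of the weekly caloric deficit.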
It is widely accepted that being overweight or obese carries a number
of negative health effects. Increased bodyweight is associated with an
increased risk for chronic diseases including cardiovascular disease,
diabetes, kidney and liver disease, osteoarthritis, and cancer.9 A large
meta-analysis consisting of nearly three million subjects across 97
studies found a significant increase in all-cause mortality in
obese individuals compared with normal-weight subjects.10 With
the global burden of obesity-related illness on the rise, there is an
increasing urgency to devise safe and effective ways for people to lose
excess weight.11 The question remains: what role should IF play in the
arsenal of weight management techniques?
Methods
Search strategy
The Scopus database (www.scopus.com) was used to search for original
research articles on the use of IF and CR for weight loss. The search
was conducted in August 2018. A search string was designed using
the following terms: intermittent fasting; alternate day fasting;
whole-day fasting; time-restricted eating; continuous restriction;
consistent restriction; (long-term) caloric restriction; and (long-term)
energy restriction. These terms were combined with the terms weight
loss, weight reduction, fat mass loss, overweight, and obese/obesity.
Complete search results, including citation information and abstracts,
were exported as an RIS file and imported into Colandr
(www.colandr.com), a web-based platform for investigators to collect
research articles and keep track of which ones have been included or
excluded during the stages of a systematic review.
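The exact query string entered into Scopus is not reported; the following is an illustrative sketch of how the listed intervention and outcome terms might be combined into a single boolean search. The TITLE-ABS-KEY field code is an assumption about how the search was entered, not a detail given by the authors.

```python
# Illustrative reconstruction of a Scopus-style boolean search string
# from the terms listed above. The actual query used is not reported.
intervention_terms = [
    "intermittent fasting", "alternate day fasting", "whole-day fasting",
    "time-restricted eating", "continuous restriction",
    "consistent restriction", "caloric restriction", "energy restriction",
]
outcome_terms = [
    "weight loss", "weight reduction", "fat mass loss",
    "overweight", "obese", "obesity",
]

def or_block(terms):
    """Join terms into a quoted, OR-separated block."""
    return " OR ".join(f'"{t}"' for t in terms)

# Combine the two concept blocks with AND, as a systematic search would.
query = f"TITLE-ABS-KEY(({or_block(intervention_terms)}) AND ({or_block(outcome_terms)}))"
print(query)
```

A query built this way returns records matching at least one intervention term and at least one outcome term in the title, abstract, or keywords.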
Inclusion criteria
Articles in this analysis were limited to randomised controlled trials
(RCTs) in human subjects. Review articles and basic science articles
(including animal studies) were excluded. Articles required
overweight or obese subjects (BMI ≥25) over 18 years of age. Studies
were included if they compared a variation of IF with a conventional
CR diet. Studies were deemed acceptable if the intervention was at
least four weeks long and included at least ten subjects in each
treatment group. Initially, abstracts were scrutinised to generate a
smaller list of potentially relevant articles. Of these relevant articles,
full texts were retrieved and examined. Articles that did not meet the
inclusion criteria were discarded. Figure 1 shows a PRISMA diagram
of the stages of systematic review in this study.
Results
Study identification
The search of the Scopus database yielded 659 articles in total. Abstracts
of these articles were reviewed for relevance and suitability based on the
inclusion criteria. This initial screen of abstracts left 15 articles that
warranted additional review. After scrutiny of these full texts, eight
articles were excluded and seven were included for final analysis.
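The screening arithmetic can be checked mechanically: each stage of the flow should equal the previous stage minus that stage's exclusions. A minimal sketch using the counts reported in this review:

```python
# Sanity check of the study-selection flow reported above.
records_identified = 659        # initial Scopus search
excluded_at_abstract = 644      # reviews, basic science/animal, off-topic
full_text_reviewed = records_identified - excluded_at_abstract
excluded_at_full_text = 8       # did not meet inclusion criteria
included = full_text_reviewed - excluded_at_full_text
print(full_text_reviewed, included)  # 15 7
```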
RCSIsmjreview
Page 58 | Volume 12: Number 1. 2019
Systematic review
The following information was extracted from each article: study
design and duration; study population characteristics (sex, age, and
BMI); prescribed treatments; weight change from baseline; and
participant adherence. These data are shown in Table 1.
Demographics
Based on the inclusion criteria, all seven studies examined overweight
and/or obese individuals. Three studies recruited obese individuals
(BMI >30) exclusively,12-14 three studies allowed overweight
participants (BMI ≥2515,16 or BMI >2717), and one study did not specify
a BMI cut-off, although the average body fat percentage of
participants was noted to be 47%.18 The age of participants also
varied across studies. Four studies included a general adult population
(females ≥18, and males and females ages 18-55, 18-64, or 21-70).
The other three studies included more specific populations, such as
postmenopausal females and males aged 55-75 (war veterans).
Group size varied across the different studies, ranging from 10 to 58
participants.
Variations in IF diet
In four out of the seven studies, a 5:2 IF diet was employed. In the 5:2
diet structure, two non-consecutive days per week are assigned to
fasting and the remaining five days allow ad libitum eating. The
degree of caloric restriction on the fasting days varies. Of these 5:2
studies, three instructed participants to consume a fixed amount of
energy on the fasting days (1,670-2,500kJ,17 600kcal,13 or
400-600kcal14), while another study instructed subjects to eat 25% of
their energy requirement on fasting days.16
Conversely, in the alternate-day fasting technique, participants
alternated between a large caloric restriction on one day (‘fast day’)
followed by increased eating the next day (‘feast day’). Again, there
was some variability in the allocation of calories. For example, one
study allowed for caloric intake equivalent to 25% of the energy
requirement on fast days, followed by 125% caloric intake on feast
days.15 Another study instructed zero caloric intake on fast days and
ad libitum intake on feast days.12 One study did not specify their IF
protocol, although the authors indicated that the intervention was
designed to reduce bodyweight by 1% per week.18 In each study, the
IF group was compared to a CR group in which participants were
required to have a consistent daily restriction that was of less
magnitude. The degree of caloric restriction in these continuous
groups varied slightly, but most involved a 25-30% calorie deficit
per day.
Weight loss assessment
All studies examined demonstrated weight loss in both their IF groups
and their CR groups, though the degree of weight loss varied
between studies. The length of intervention and follow-up across the
different studies ranged from eight weeks to six months. All but one
study measured the change in bodyweight as the average number of
kilograms lost over time, while the other study measured the loss as
an average percentage of the participants' baseline bodyweights.

FIGURE 1: PRISMA diagram showing the process of inclusion and exclusion of articles for this systematic review. Flow: articles from initial Scopus database search (n=659); abstracts obtained and reviewed (n=659); excluded as review articles, basic science or animal studies, and off-topic articles (n=644); articles selected for full-text review (n=15); excluded for not meeting inclusion criteria (n=8); studies meeting inclusion criteria used in review (n=7).
The largest weight loss was reported by Arguin et al., where the IF
group lost 10.7±3.0kg on average after five weeks of IF followed by
five weeks of energy maintenance.18 Interestingly, the authors did not
give a detailed breakdown of the dietary intervention employed other
than to say it was designed to reduce bodyweight by 1% each week.
They did, however, include the macronutrient composition of the
diet: 55% carbohydrates, 30% fat, and 15% protein. The CR group in
this study experienced a similar weight loss of 9.5±2.1kg.18
The least amount of weight loss from IF was seen in the study by
Carter et al., where participants had an average 2.0±1.9kg reduction
in bodyweight at the end of an eight-week programme (and
negligible change at 12-month follow-up).17 The IF methodology
used in this study was a 5:2 day ‘feed-fast’ protocol in which subjects
consumed 1,670-2,500kJ (400-600kcal) on fast days. In this study, the
CR group experienced an average loss of 3.2±2.1kg after eight weeks
on a 5,000-6,500kJ (1,200-1,550kcal) daily diet. This difference in
weight loss between the CR and IF treatment groups was ultimately
found to be non-significant.
An important finding is that none of the studies demonstrated a
significant difference in weight loss between the IF and CR groups. A
detailed breakdown of study follow-up and weight loss results can be
found in Table 1.
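Because one trial reported weight change as a percentage of baseline bodyweight while the others reported kilograms, comparing them side by side requires a baseline weight. A minimal sketch using a hypothetical 100kg baseline (the baseline is purely illustrative and is not taken from any of the included studies):

```python
# Convert a weight loss reported as a percentage of baseline bodyweight
# into kilograms. The 100kg baseline below is a hypothetical example.
def percent_to_kg(percent_loss: float, baseline_kg: float) -> float:
    """Kilograms lost for a given percentage-of-baseline loss."""
    return baseline_kg * percent_loss / 100.0

# A 6.8% loss (the figure reported for the IF arm of the one
# percentage-reporting trial) for a hypothetical 100kg participant:
print(percent_to_kg(6.8, 100.0))  # 6.8
```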
Discussion
The purpose of this systematic review was to examine different RCTs
that explored the use of IF versus CR for weight loss. Of the existing
human literature, only seven studies have directly compared IF to CR
diets, and none of them have shown a significant difference. There
are a number of underlying considerations that need to be taken into
account with this result. First and foremost, the dominant factor at
play appears to be calories consumed versus calories expended. In
each study, there was a comparable caloric deficit between the CR
and IF groups. In other words, the small differences in weight change
that were observed most likely come from factors other than energy
input. For example, adherence to the diet likely played a large
role. This is evidenced by the fact that some studies reported their
results ‘per protocol’, meaning that they only measured results in
subjects who adhered to the prescribed treatment, while other
studies reported their results as ‘intention to treat’, meaning that
they included all subjects regardless of diet adherence. Adherence is
a major issue in the efficacy of weight loss diets and there is
some evidence to suggest that IF might be conducive to better
adherence, although more research is needed on this topic to better
understand the importance of adherence.19 Overall, the exclusion of
non-adherent participants in some of the trials examined makes it
difficult to compare their results to those from trials that used an
intention to treat analysis.
The current literature indicates that IF may impart two benefits over
conventional CR. First, IF appears to be more tolerable than CR and
therefore results in improved long-term adherence. The second
proposed benefit is that IF incites some metabolic processes that
impart additional benefits beyond weight loss, such as reducing
oxidative stress.20 Animal studies of IF have demonstrated a potential
benefit for cardiovascular disease, type 2 diabetes, and even cancer.1
While there have been extensive studies of these effects in animals,
less work has been done in humans. Until more research is conducted
in these areas, a definitive answer regarding the efficacy of IF over CR
cannot be given.
Conclusion
IF is a safe and effective method of weight loss as demonstrated by
several human studies. This systematic review did not find evidence
that IF was superior to CR for weight loss in overweight subjects;
however, this conclusion comes with a caveat: further studies must
account for diet adherence, as well as for any benefits of IF beyond the
pure caloric deficit, in order to determine the diet regimen of choice accurately.
Table 1: Summary of design and findings of IF versus CR trials included in this review.

Arguin et al.18 (2012). Design: RCT; 10-20-week intervention, 1-year FU. Population: F; postmenopausal (average age 60.5), obese (average BF% 47.2). Groups: IF n=12, CR n=10. Intervention: IF – 5-week energy restriction period followed by a 5-week energy maintenance period; CR – 15-week energy restriction period followed by a 5-week energy maintenance period; both diets were designed to reduce bodyweight by 1%/week during the weight loss phase. Weight change:* after weight loss phase, IF -4.0±1.4kg, CR -3.5±1.5kg; after weight maintenance phase, IF -10.7±3.0kg, CR -9.5±2.1kg. Adherence: two participants from CR and one from IF were excluded from analysis (due to factors other than adherence). Conclusion: differences in weight loss between groups were not significant.

Carter et al.17 (2016). Design: RCT; 8-week intervention and 12-month FU. Population: F; age ≥18, BMI ≥27. Groups: IF n=39, CR n=36. Intervention: IF – 1,670-2,500kJ/day for 2/7 days per week, unrestricted 5/7; CR – 5,000-6,500kJ/day for 7/7 days per week. Weight change (mean ± SD):* after 8 weeks of intervention, IF -2.0±1.9kg, CR -3.2±2.1kg; at 12-month FU, IF -2.1±3.8kg, CR -4.2±5.6kg. Adherence: attrition during follow-up period, IF 12, CR 12. Conclusion: differences in weight loss between groups were not significant.

Catenacci et al.12 (2016). Design: RCT; 8-week intervention, 24-week FU. Population: M and F; age 18-55, BMI ≥30. Groups: IF n=14, CR n=14. Intervention: IF – alternating days of 0kcal intake with unrestricted intake days; CR – 400kcal caloric deficit each day. Weight change:* at 8 weeks, IF -8.2±0.9kg, CR -7.1±1.0kg; at 32 weeks, IF -5.7±1.5kg, CR -5.0±1.6kg. Adherence: attrition, IF 3, CR 2; two participants withdrew before starting the intervention. Conclusion: differences in weight loss between groups were not significant.

Conley et al.13 (2018). Design: RCT; 6 months. Population: M; age 55-75, BMI ≥30. Groups: IF n=12, CR n=12. Intervention: IF – 600kcal/day for 2/7 non-consecutive days per week, unrestricted 5/7; CR – 500kcal/day restriction from required amount for 7/7 days per week. Weight change:** at 6 months, IF -5.3±3.0kg, CR -5.5±4.3kg. Adherence: attrition, IF 1, CR 0. Conclusion: differences in weight loss between groups were not significant.

Harvie et al.16 (2011). Design: RCT; 6 months. Population: F; age 30-45, BMI 25-40. Groups: IF n=53, CR n=54. Intervention: IF – 25% of energy requirement for 2/7 days per week, unrestricted 5/7; CR – 75% of energy requirement for 7/7 days per week. Weight change:* at 1 month, IF -1.8kg, CR -1kg; at 3 months, IF -4.1kg, CR -3kg; at 6 months, IF -5.7kg, CR -4.5kg. Adherence: attrition, IF 11, CR 7. Conclusion: differences in weight loss between groups were not significant.

Sundfør et al.14 (2018). Design: RCT; 6-month weight loss phase, 6-month weight maintenance phase. Population: M and F; age 21-70, BMI 30-45. Groups: IF n=54, CR n=58. Intervention: IF – 400-600kcal/day for 2/7 non-consecutive days per week, unrestricted 5/7; CR – reduction by ~30% of energy requirement per day. Weight change:** at 3 months, IF -7.1±3.7kg, CR -7.4±3.8kg; at 6 months, IF -9.1±5.0kg, CR -9.4±5.3kg; change between 6- and 12-month FU, IF +1.1±3.8kg, CR +0.4±4.0kg. Adherence: attrition, weight loss phase IF 1, CR 2; weight maintenance phase IF 3, CR 1. Conclusion: differences in weight loss between groups were not significant.

*Reported per protocol. **Reported as intention to treat.
References
1. Patterson RE, Sears DD. Metabolic effects of intermittent fasting. Annu Rev
Nutr. 2017;37:371-93.
2. Persynaki A, Karras S, Pichard C. Unraveling the metabolic health benefits of
fasting related to religious beliefs: a narrative review. Nutrition.
2017;35:14-20.
3. Gabel K, Hoddy KK, Haggerty N et al. Effects of 8-hour time restricted feeding
on body weight and metabolic disease risk factors in obese adults: a pilot
study. Nutr Healthy Aging. 2018;4(4):345-53.
4. Horne BD, Muhlestein JB, Lappé DL et al. Randomized cross-over trial of
short-term water-only fasting: metabolic and cardiovascular consequences.
Nutr Metab Cardiovasc Dis. 2013;23(11):1050-7.
5. Johnson JB, Summer W, Cutler RG et al. Alternate day calorie restriction
improves clinical findings and reduces markers of oxidative stress and
inflammation in overweight adults with moderate asthma. Free Radic Biol
Med. 2007;42(5):665-74.
6. Varady KA, Bhutani S, Klempel MC et al. Alternate day fasting for weight loss
in normal weight and overweight subjects: a randomized controlled trial.
Nutr J. 2013;12(1):146.
7. Patterson RE, Laughlin GA, LaCroix AZ et al. Intermittent fasting and human
metabolic health. J Acad Nutr Diet. 2015;115(8):1203-12.
8. Safdie FM, Dorff T, Quinn D et al. Fasting and cancer treatment in humans: a
case series report. Aging (Albany NY). 2009;1(12):988-1007.
9. Pi-Sunyer X. The medical risks of obesity. Postgrad Med. 2009;121(6):21-33.
10. Flegal KM, Kit BK, Orpana H et al. Association of all-cause mortality with
overweight and obesity using standard body mass index categories. JAMA.
2013;309(1):71-82.
11. Cummings J, Aisen PS, DuBois B et al. Drug development in Alzheimer’s
disease: the path to 2025. Alzheimers Res Ther. 2016;8:39.
12. Catenacci VA, Pan Z, Ostendorf D et al. A randomized pilot study comparing
zero-calorie alternate-day fasting to daily caloric
restriction in adults with obesity. Obesity (Silver Spring). 2016;24(9):1874-83.
13. Conley M, Le Fevre L, Haywood C et al. Is two days of intermittent energy
restriction per week a feasible weight loss approach in obese males? A
randomised pilot study. Nutr Diet. 2018;75(1):65-72.
14. Sundfør TM, Svendsen M, Tonstad S. Effect of intermittent versus continuous
energy restriction on weight loss, maintenance and cardiometabolic risk: a
randomized 1-year trial. Nutr Metab Cardiovasc Dis. 2018;28(7):698-706.
15. Trepanowski JF, Kroeger CM, Barnosky A et al. Effect of alternate-day fasting
on weight loss, weight maintenance, and cardioprotection among
metabolically healthy obese adults. JAMA Intern Med. 2017;177(7):930-8.
16. Harvie MN, Pegington M, Mattson MP et al. The effects of intermittent or
continuous energy restriction on weight loss and metabolic disease risk
markers: a randomized trial in young overweight women. Int J Obes (Lond).
2011;35(5):714-27.
17. Carter S, Clifton PM, Keogh JB. The effects of intermittent compared to
continuous energy restriction on glycaemic control in type 2 diabetes; a
pragmatic pilot trial. Diabetes Res Clin Pract. 2016;122:106-12.
18. Arguin H, Dionne IJ, Sénéchal M et al. Short- and long-term effects of
continuous versus intermittent restrictive diet approaches on body
composition and the metabolic profile in overweight and obese
postmenopausal women: a pilot study. Menopause. 2012;19(8):870-6.
19. Harvie M, Howell A. Potential benefits and harms of intermittent energy
restriction and intermittent fasting amongst obese, overweight and normal
weight subjects – a narrative review of human and animal evidence. Behav Sci
(Basel). 2017;7(1):pii:E4.
20. Wegman MP, Guo MH, Bennion DM et al. Practicality of intermittent fasting
in humans and its effect on oxidative stress and genes related to aging and
metabolism. Rejuvenation Res. 2015;18(2):162-72.
Table 1 (continued): Summary of design and findings of IF versus CR trials included in this review.

Trepanowski et al.15 (2017). Design: RCT; 6-month weight loss phase followed by 6-month weight maintenance phase. Population: M and F; age 18-64, BMI 25-39.9. Groups: IF n=34, CR n=35, Ctrl n=31. Intervention: IF – 25% of energy needs on fast days, 125% of energy needs on 'feast' days, alternating each day; CR – 75% of energy needs every day; Ctrl – no restriction. Weight change:** at 6 months, IF -6.8% BW, CR -6.8% BW; at 12 months, IF -6.0% BW, CR -5.3% BW. Adherence: attrition, weight loss phase IF 9, CR 6; weight maintenance phase IF 4, CR 4. Conclusion: differences in weight loss between the intervention groups were not significant.

BF = body fat; BW = bodyweight; Ctrl = control group; RCT = randomised controlled trial; F = female; M = male; FU = follow-up; SD = standard deviation; WL = weight loss; WM = weight maintenance. *Reported per protocol. **Reported as intention to treat.
David Seligman
RCSI medical student
Abstract
Osteoarthritis of the hip is a leading cause of chronic disability worldwide. Total hip arthroplasty
reduces pain, increases mobility, and has very high patient-reported satisfaction rates postoperatively.
Three of the common surgical techniques include the posterior approach, the direct lateral approach,
and the direct anterior approach (DAA). The DAA is unique in that it utilises both an intermuscular
and internervous plane; however, the procedure may come with drawbacks including a steeper
surgical learning curve and increased risk of intraoperative fracture. The posterior approach is the
most common technique with reports demonstrating shorter surgical times but greater dislocation
rates. The direct lateral approach allows for greater exposure to the femur; however, it is associated
with postoperative Trendelenburg gait. There remains no consensus among surgeons regarding the
best technique or which outcomes are most important. Meta-analyses comparing these three
techniques have mostly failed to determine the superiority of one over another, with the exception
of one study supporting the direct lateral approach. There remains a need for more robust
randomised controlled trials and a better defined standard for outcomes of interest in order to
establish the most advantageous approach for both short- and long-term recovery, joint functionality,
and pain reduction.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 62-67.
‘Joint’ decision-making:how to determine thebest approach for totalhip arthroplasty
Introduction
Osteoarthritis (OA) is the most common chronic condition of the
joints and is a leading cause of chronic disability in the elderly
population worldwide.1 OA of the knees and hips can cause
significant impairment in activities of daily living, creating barriers to
engagement in regular exercise and maintenance of a healthy
lifestyle. OA is a progressive disease and many treatment options,
such as anti-inflammatory medications and intra-articular steroid
injections, are aimed at symptomatic relief without altering the
underlying pathophysiology. In more severe and debilitating cases,
patients may decide to receive a total joint replacement in order to
reduce pain and restore functionality.2
Total hip arthroplasty (THA) is an increasingly popular procedure,
with over 90% of patients reporting satisfaction with their results
after surgery.3 Factors including an ageing population, increased
life expectancy, and the obesity epidemic have contributed to an
increased need for joint replacement surgery, and greater
expectations for the longevity of prostheses.4 In fact,
it is estimated that the demand for hip replacements in Organisation for
Economic Co-operation and Development (OECD) countries
will rise by 1.2% per year, reaching roughly 2.8 million implants
per year in 2050.5 To address this demand for longevity, much research
has focused on optimising durable, longer-lasting materials such as
stainless steel, titanium, and ceramic variations for the ball and socket
joint components.6 In addition
to modifying the materials used, continued advancements
in surgery, such as minimally invasive techniques and computer-assisted
sizing, have improved short- and long-term outcomes and reduced the
incidence of intraoperative complications and postoperative pain.7
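As a back-of-the-envelope check on the OECD projection quoted above, the stated 1.2% annual growth and the 2050 target of roughly 2.8 million implants per year imply a current level of demand under constant compound growth. The 2019 base year is an assumption (the source does not state one):

```python
# Back-solve the demand implied for 2019 from the projection above:
# 1.2% annual growth reaching ~2.8 million implants per year in 2050.
target_2050 = 2.8e6
annual_growth = 0.012
years = 2050 - 2019                     # assumed 2019 base year
implied_2019_demand = target_2050 / (1 + annual_growth) ** years
print(round(implied_2019_demand))       # roughly 1.93 million per year
```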
Shorter post-surgery recovery times prove to be advantageous to both
patients and hospitals. As fewer days are required for in-hospital recovery,
patients can return to their daily commitments sooner, allowing the
procedure to be less of an imposition on their lives and schedules. The
decreased hospital stays also alleviate strain on the hospital by utilising
fewer resources and freeing up beds designated for recovery.8
Despite the ubiquity of the THA procedure, there is a lack of
consensus regarding the gold standard technique among surgeons,
with many looking towards newer, minimally invasive approaches.
These modern approaches can further decrease recovery time,
allowing patients to get back on their feet and out of the hospital
more promptly.9 This review will compare three commonly used THA
techniques, assess the benefits and drawbacks of each, and
summarise the findings from several key meta-analyses in the field.
Surgical approaches
The most commonly used surgical approaches to THA include the
posterior approach, the direct lateral approach, and the newly
popularised direct anterior approach (DAA) (Figure 1).10 Several
variables may inform both patient and physician preference. For
example, surgeons may weigh short- and long-term functional outcomes
against the intraoperative risks and technical difficulty associated
with each technique.

FIGURE 1: Schematic representation of the common approaches to total hip arthroplasty (THA): A. Posterior approach; B. Direct lateral approach; and C. Direct anterior approach. Adapted from Orthobullets.com – https://www.orthobullets.com/recon/12116/tha-approaches.

Of particular interest, many studies
have emphasised short-term recovery, length of hospitalisation,
reported pain, and long-term functionality as important
decision-making variables.11
Posterior approach
The posterior approach to THA gained popularity in the 1950s and is
used ubiquitously today. A recent survey across 57 countries revealed
that 45% of surgeons prefer the posterior approach.12
Technique
The patient is placed in the lateral decubitus position. An incision
is made 5cm distal to the greater trochanter and centred on the
femoral diaphysis. Fascia overlying the gluteus maximus is divided and
the muscle is split. The gluteus maximus is retracted and the sciatic
nerve is protected. The short external rotators and piriformis are incised
at their insertion on the greater trochanter (Figure 1A). The posterior
joint capsule is incised to expose the femoral head and neck.13
Discussion
This technique provides good visualisation of both the acetabulum and
femur, and spares the abductor muscles, which are involved in lateral
and anterior approaches.13 A major drawback to the posterior approach
is a significantly higher postoperative dislocation rate compared to the
other techniques. A 2002 meta-analysis reported a dislocation rate of
3.23% in posterior THA compared to 0.55% in the direct lateral
approach and 2.18% in the anterolateral approach.14
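For scale, the dislocation rates quoted from the 2002 meta-analysis can be compared directly; the ratio below is computed here for illustration and is not itself reported in the source:

```python
# Dislocation rates from the 2002 meta-analysis cited above, expressed
# as proportions, and the posterior-to-direct-lateral ratio they imply.
posterior = 0.0323        # posterior approach
direct_lateral = 0.0055   # direct lateral approach
anterolateral = 0.0218    # anterolateral approach
ratio = posterior / direct_lateral
print(round(ratio, 1))    # 5.9 – roughly six times the direct lateral rate
```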
Direct lateral approach
The direct lateral approach originated in the 1980s and has gained
popularity since. In fact, it is the most commonly used technique in
Canada. Worldwide, it is the second most popular approach with
42% of surgeons preferring this method.12
Technique
The patient is placed in the lateral decubitus position. The lower leg
and foot are bagged to allow for manipulation during the procedure.
A longitudinal incision of roughly 10cm is made, extending over the
greater trochanter. The fascia is split in the interval between the tensor
fasciae latae (TFL) and gluteus maximus (Figure 1B). The gluteus
medius is exposed and divided, with one-third reflected anteriorly and
two-thirds posteriorly. The gluteus minimus and the joint capsule are
then split to visualise the femoral head.13
Discussion
This approach allows considerable exposure of the femur, improving
visualisation into the femoral shaft. This eases the process of reaming
(widening and smoothing) the canal in preparation for the insertion of the
stem (femoral joint component).13 It is associated with very low dislocation
rates.14 However, one well-documented complication associated with this
approach is damage to the superior gluteal nerve, which can lead to a
Trendelenburg gait.15 This deficit is often short term, with most cases
improving spontaneously by 12 months postoperatively.
Direct anterior approach
The DAA has gained tremendous popularity in recent years both in
the surgical community and in the media. Interestingly, it is not a new
procedure, but rather was first described in the 1940s and later
modified in the 1950s.13 In a 2013 survey, roughly 10% of surgeons
favoured this approach in their practice, a proportion that has likely
increased in the last five years.12 The allure of this approach relates
to its utilisation of an intermuscular and internervous plane to access
the joint. The DAA allows the surgeon to enter through the Hueter
interval between the sartorius and TFL without incising the muscle.10

Table 1: Summary of reported benefits and drawbacks for three common approaches to total hip arthroplasty. Adapted from Petis et al., 2015.

Posterior approach – Advantages: shorter surgical time; shorter learning curve; less blood loss; spares abductor muscles. Disadvantages: greater dislocation rate; greater risk of sciatic nerve injury; greater muscle damage.

Direct lateral approach – Advantages: low dislocation rate; excellent exposure of the femur for stem insertion. Disadvantages: greater risk for damage to the superior gluteal nerve, resulting in Trendelenburg gait; greater risk for heterotopic ossification.

Direct anterior approach – Advantages: intermuscular and internervous plane (muscle sparing); shorter recovery time; shorter length of stay in hospital. Disadvantages: steeper learning curve; longer surgical time; damage to the LFCN; intraoperative fractures (femur, ankle); limited exposure of the proximal femur.

LFCN = lateral femoral cutaneous nerve.
Technique
The patient is placed in the supine position with or without a
specialised surgical table. The leg is attached to a mobile element that
allows for controlled movement of the limb intraoperatively.16 An
incision is made 3cm lateral and 3cm distal to the anterior superior
iliac spine down to the level of the fascia. The TFL and sartorius
muscle are separated (Figure 1C), and major vessels in this region are
ligated. Fat is removed to visualise the capsule and the head of the
rectus femoris is reflected for better exposure.
Capsulectomy/capsulotomy is performed to expose the femoral neck.
The surgical table allows better elevation of the femur for stem
insertion. The positioning of the joint implant can also be confirmed
using intraoperative x-ray.10
Discussion
Reported benefits for this approach include reduced pain, improved
function, faster discharge, faster return to mobility, and better stride
time and cadence speed.17,18 However, some studies reported difficult
learning curves among surgical trainees, new intraoperative fracture
risks, and increased risk for infection due to proximity of the moist
groin area.10,19 Other reported complications include damage to the
lateral femoral cutaneous nerve (LFCN), dissection of the
neurovascular bundle of the femoral triangle, and ankle fractures
associated with the special surgical table.20 Additionally, there is
greater difficulty visualising the femoral canal using the DAA.
Therefore, many joints are being reconstructed using shorter stem
designs, which are newer and consequently do not have the same
proven performance history as the more conventional stems.21
Weighing the options
Several prospective and retrospective studies have compared
outcomes across patients having undergone THA with these three
surgical approaches. The commonly reported advantages and
drawbacks are summarised in Table 1. Some studies have suggested
that the DAA results in less muscle damage and better gait
mechanics.18,22 Other studies have reported reductions in
postoperative pain scores, length of hospitalisation, and earlier
discontinuation of ambulatory devices with the DAA.23 Most studies
found similar long-term outcomes with respect to functionality and
revision rates.24 Noted pitfalls of the DAA include a reportedly difficult
learning curve for surgeons, as well as complications such as ankle
fractures and damage to the LFCN, drawbacks that are not noted in
other techniques.16,25 Many of the studies reporting complication
rates are retrospective and non-comparative.
A research group based in Melbourne, Australia, has commenced a
parallel three-arm randomised controlled trial (RCT) comparing the
posterior, direct lateral, and direct anterior approaches in a sample
size of 243 patients.24 They will be assessing patient-reported pain
and function at two years postoperatively using the Oxford Hip Score
(OHS). Secondary outcomes will include length of stay and rates of
dislocation, infection, intraoperative fractures, and various other
complications.24
Meta-analyses
The following selected meta-analyses provide comparisons for these
procedures, outlining various outcomes of interest.
Anterior vs posterior approach
In their 2015 meta-analysis, Higgins et al. looked at 17 studies
assessing the anterior vs posterior approach.4 The authors concluded
that there was insufficient evidence to support the superiority of one
approach over another. The anterior approach may offer reductions in
early patient-reported pain and narcotic consumption, and improvements
in early function. The study also found significant reductions in length of
stay and dislocation rates in the anterior group.4 The posterior
approach trended towards lower estimated blood loss and operation
time without statistical significance. There was no difference in gait
function recovery or intraoperative fractures.4
Direct anterior vs direct lateral approach
Yue et al. included 12 studies in a meta-analysis comparing the direct
anterior and direct lateral approaches.26 The results of this
meta-analysis suggested that the DAA was associated with lower
reported pain in the early postoperative period, shorter length of
hospital stay, and greater functional rehabilitation in the early period.
It was also found that the DAA was associated with longer average
surgical time. There was no significant difference between the groups
with regard to perioperative complication rates, transfusions, and
radiological outcomes.26
Posterior vs direct lateral approach
Berstock et al. performed a meta-analysis of five studies comparing
the posterior and direct lateral approaches.27 The study failed
to determine superiority of one approach over another in terms
of functional outcomes. The results of this meta-analysis favoured
the posterior approach for reduction in the risk for Trendelenburg
gait as well as stem malposition. The posterior group also trended
towards lower rates of heterotopic ossification and dislocation, though
these differences did not reach statistical significance.27
Cross comparisons
Putananon et al., in their 2018 meta-analysis, included the following
comparative studies: four anterior vs direct lateral approach; four direct
anterior vs posterior approach; and five direct lateral vs posterior
approach.28 The overall recommendation of this review was in favour of
the direct lateral approach, which ranked best for postoperative pain,
complications, and functionality.28
Discussion
Relative to other techniques, the DAA is more technically demanding
with a steeper surgical learning curve. Thus far, the evidence for
long-term functionality following the DAA appears to be comparable to
other techniques.19 Although proponents in the media and surgical
community have emphasised its benefits, it is crucial to acknowledge
and assess new risks associated with the DAA, including intraoperative
fracture and neurovascular damage.29 Despite the promise of shorter
recovery times, surgeons and patients must also consider the cost
associated with this potential benefit. One study suggests that it may
take a surgeon, on average, 100 cases to become proficient in the
procedure.30 In reality, this means that during the first 100 operations,
patients may be at greater risk for complications, longer operative time,
and conversion to other techniques. If the DAA is indeed superior to
other techniques, then perhaps this early risk is outweighed by the
long-term potential benefit.
Despite the promise of shorter recovery times, surgeons and patients must also consider the cost associated with this potential benefit. One study suggests that it may take a surgeon, on average,
100 cases to become proficient.
Conclusion
It is evident that there are several viable techniques for THA, and there
are many advantages and drawbacks associated with each one. The
growing interest in the DAA among both patients and surgeons will
continue to provide short- and long-term data, which will better
inform the surgical community on the risk:benefit ratio among the
approaches. There remains a need for larger and more extensive
RCTs, which may shed light on the potential benefit of one technique
over another.26 Most review articles have advocated for physician
preference until more convincing evidence is available.4 Alternatively,
others have advocated for the direct lateral approach on the basis of
pain scores, functionality, and complications.28 Ultimately, the onus is
on the surgical team to adequately educate their patient and make a
joint decision on the technique that will improve outcomes, minimise
risk for complications, and give the best chance for longevity.
References
1. Zhang Y, Jordan JM. Epidemiology of osteoarthritis. Clin Geriatr Med.
2010;26(3):355-69.
2. Tsertsvadze A, Grove A, Freeman K, Johnson S, Connock M, Clarke A et al.
Total hip replacement for the treatment of end stage arthritis of the hip: a
systematic review and meta-analysis. PLoS One. 2014;9(7):e99804.
3. Lau RL, Gandhi R, Mahomed S, Mahomed N. Patient satisfaction after total
knee and hip arthroplasty. Clin Geriatr Med. 2012;28(3):349-65.
4. Higgins BT, Barlow DR, Heagerty NE, Lin TJ. Anterior vs. posterior approach
for total hip arthroplasty, a systematic review and meta-analysis. J
Arthroplasty. 2015;30(3):419-34.
5. Pabinger C, Lothaller H, Portner N, Geissler A. Projections of hip arthroplasty
in OECD countries up to 2050. Hip Int. 2018;28(5):498-506.
6. Kurtz SM, Lau E, Ong K, Zhao K, Kelly M, Bozic KJ. Future young patient
demand for primary and revision joint replacement: national projections from
2010 to 2030. Clin Orthop Rel Res. 2009;467(10):2606-12.
7. Learmonth ID, Young C, Rorabeck C. The operation of the century: total hip
replacement. Lancet. 2007;370(9597):1508-19.
8. Foote J, Panchoo K, Blair P, Bannister G. Length of stay following primary total
hip replacement. Annals R Coll Surg Engl. 2009;91(6):500-4.
9. Pospischill M, Kranzl A, Attwenger B, Knahr K. Minimally invasive compared
with traditional transgluteal approach for total hip arthroplasty: a comparative
gait analysis. J Bone Joint Surg Am. 2010;92(2):328-37.
10. Post ZD, Orozco F, Diaz-Ledezma C, Hozack WJ, Ong A. Direct anterior
approach for total hip arthroplasty: indications, technique, and results. J Am
Acad Orthop Surg. 2014;22(9):595-603.
11. Cheng T, Feng JG, Liu T, Zhang XL. Minimally invasive total hip arthroplasty: a
systematic review. Int Orthop. 2009;33(6):1473-81.
12. Chechik O, Khashan M, Lador R, Salai M, Amar E. Surgical approach and
prosthesis fixation in hip arthroplasty world wide. Arch Orthop Trauma Surg.
2013;133(11):1595-600.
13. Petis S, Howard JL, Lanting BL, Vasarhelyi EM. Surgical approach in primary
total hip arthroplasty: anatomy, technique and clinical outcomes. Can J Surg.
2015;58(2):128-39.
14. Dargel J, Oppermann J, Brüggemann GP, Eysel P. Dislocation following total
hip replacement. Dtsch Ärztebl Int. 2014;111(51-52):884-90.
15. Picado CH, Garcia FL, Marques Jr W. Damage to the superior gluteal nerve
after direct lateral approach to the hip. Clin Orthop Relat Res.
2007;455:209-11.
16. Matta JM, Shahrdar C, Ferguson T. Single-incision anterior approach for total
hip arthroplasty on an orthopaedic table. Clin Orthop Relat Res.
2005;441:115-24.
17. Schweppe ML, Seyler TM, Plate JF, Swenson RD, Lang JE. Does surgical
approach in total hip arthroplasty affect rehabilitation, discharge disposition,
and readmission rate? Surg Technol Int. 2013;23:219-27.
18. Mayr E, Nogler M, Benedetti MG, Kessler O, Reinthaler A, Krismer M et al. A
prospective randomized assessment of earlier functional recovery in THA
patients treated by minimally invasive direct anterior approach: a gait analysis
study. Clin Biomech (Bristol, Avon). 2009;24(10):812-8.
19. Kagan R, Peters CL, Pelt CE, Anderson MB, Gililland JM. Complications and pitfalls
of direct anterior approach total hip arthroplasty. Annals of Joint. 2018;3(5).
20. Barrett WP, Turner SE, Leopold JP. Prospective randomized study of direct
anterior vs postero-lateral approach for total hip arthroplasty. J Arthroplasty.
2013;28(9):1634-8.
21. Panichkul P, Parks NL, Ho H, Hopper Jr RH, Hamilton WG. New approach and
stem increased femoral revision rate in total hip arthroplasty. Orthopedics.
2016;39(1):e86-92.
22. Müller M, Tohtz S, Springer I, Dewey M, Perka C. Randomized controlled trial
of abductor muscle damage in relation to the surgical approach for primary
total hip replacement: minimally invasive anterolateral versus modified direct
lateral approach. Arch Orthop Trauma Surg. 2011;131(2):179-89.
23. Christensen CP, Jacobs CA. Comparison of patient function during the first six
weeks after direct anterior or posterior total hip arthroplasty (THA): a
randomized study. J Arthroplasty. 2015;30(9):94-7.
24. Talia AJ, Coetzee C, Tirosh O, Tran P. Comparison of outcome measures and
complication rates following three different approaches for primary total hip
arthroplasty: a pragmatic randomised controlled trial. Trials. 2018;19(1):13.
25. Goulding K, Beaulé PE, Kim PR, Fazekas A. Incidence of lateral femoral
cutaneous nerve neuropraxia after anterior approach hip arthroplasty. Clin
Orthop Rel Res. 2010;468(9):2397-404.
26. Yue C, Kang P, Pei F. Comparison of direct anterior and lateral approaches in
total hip arthroplasty: a systematic review and meta-analysis (PRISMA).
Medicine (Baltimore). 2015;94(50):e2126.
27. Berstock JR, Blom AW, Beswick AD. A systematic review and meta-analysis of
complications following the posterior and lateral surgical approaches to total
hip arthroplasty. Annals R Coll Surg Engl. 2015;97(1):11-6.
28. Putananon C, Tuchinda H, Arirachakaran A, Wongsak S, Narinsorasak T,
Kongtharvonskul J. Comparison of direct anterior, lateral, posterior and
posterior-2 approaches in total hip arthroplasty: network meta-analysis. Eur J
Orthop Surg Traumatol. 2018;28(2):255-67.
29. Lee GC, Marconi D. Complications following direct anterior hip procedures:
costs to both patients and surgeons. J Arthroplasty. 2015;30(9):98-101.
30. de Steiger RN, Lorimer M, Solomon M. What is the learning curve for the
anterior approach for total hip arthroplasty? Clin Orthop Rel Res.
2015;473(12):3860-6.
The role of rapid autopsy in oncology
Brian Li,1 Elysia Grose,2 Shawn Khan,3 Caberry Weiyang Yu4
1RCSI medical student; 2University of Ottawa; 3University of Toronto; 4Queen’s University
Abstract
Clinical autopsies are conducted to further investigate a patient’s cause of death and often include
sampling tissue for educational or research purposes. However, standard autopsies preclude the
collection of the high-quality tissue necessary for advanced laboratory assays due to the degree of
tissue necrosis that occurs after death. Rapid autopsies differ from standard procedures in that they
are conducted shortly after a patient’s death, thus preserving the structural integrity of important
biomolecules. The goal of this review is to outline the logistics of rapid autopsy programmes, and to
discuss the advantages of rapid autopsies and their relevance in the field of cancer research. To
ascertain the number and variety of studies that have utilised rapid autopsy-obtained tissue for cancer
research, a search was conducted on Ovid MEDLINE for all records from 1946 to January 4, 2017.
The majority of studies focused on molecular genetics, clinical research, and tumour heterogeneity,
most frequently using prostate, breast, and pancreas tissue. This article highlights the need for a
more robust and comprehensive systematic review on the impact of rapid autopsy programmes to
better inform and advise researchers on the value of rapid autopsies for oncology research.
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 68-71.
Introduction
Rapid autopsy is an advanced method of
biospecimen procurement whereby autopsies are
conducted within six hours post mortem.1-3 This
technique was first introduced in the late 1980s;
however, its popularity has grown in recent years
due to its invaluable role in oncology research.4-6
Tissue obtained from rapid autopsies experiences a
much shorter post-mortem interval, defined as the
time between death and tissue preservation,
allowing scientists to collect high-quality specimens
for advanced laboratory analyses that require tissue
to have low levels of cellular degradation.7-9 Rapid
autopsies also allow for the collection of a much larger volume of
tissue than is possible with a biopsy in living patients. Specimens
from standard autopsies are snap frozen in liquid nitrogen and/or preserved in
formalin-fixed paraffin-embedded blocks.3 Rapid autopsies provide the
additional option of immediate fresh tissue distribution to cancer
research facilities, permitting analyses that are incompatible with
archival frozen or fixed tissue.7
Institutions capable of executing the collection and maintenance of
such a repository of high-quality biospecimens are invaluable resources
for clinicians and researchers who are interested in investigating tumour
biology, metastatic disease, and treatment resistance. In addition to
significantly furthering our understanding of tumourigenesis, rapid
autopsy programmes offer a unique opportunity for advanced-stage
cancer patients to donate their bodies to cancer research. The purpose
of this review article is to inform current and future oncology
researchers of the enormous potential and utility of rapid
autopsy-obtained tissue. This review will also outline the rapid autopsy
process from patient recruitment to transport of the deceased.
Methods
A search was run on Ovid MEDLINE for all records from 1946 to
January 4, 2017, using the search terms rapid autopsy, rapid tissue
donation, and warm autopsy. The search strategy yielded 173
publications. After removing duplicates, 138 references underwent
title and abstract review, yielding 53 potentially relevant references on
the analysis of rapid autopsy tissue in the context of cancer research.
In total, 44 unique records relevant to oncology were identified upon
full-text review. Studies were then organised by tissue type as well as
topic of research, with some articles being included in multiple
categories. Studies regarding patient perspectives on rapid autopsies
or body donations were excluded. A PRISMA flow chart of the data
extraction process is shown in Figure 1.
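The screening counts above reduce to simple arithmetic. The short sketch below checks that the stages are internally consistent; the record counts are taken from the text and Figure 1, while the variable names are illustrative, not part of the published methods:

```python
# PRISMA-style screening flow for the Ovid MEDLINE search described
# above. Counts come from the text; variable names are illustrative.
identified = 173                 # records retrieved from Ovid MEDLINE
after_deduplication = 138        # i.e., 35 duplicates removed
excluded_on_title_abstract = 85  # excluded at title/abstract screening
excluded_on_full_text = 9        # excluded at full-text review

after_screening = after_deduplication - excluded_on_title_abstract
included = after_screening - excluded_on_full_text

print(after_screening, included)  # 53 44
```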
Results – literature search
The majority of rapid autopsy programmes worldwide are located in
the United States (>90%). Over half of the studies analysed tissue
obtained from rapid autopsy programmes at Johns Hopkins Hospital
(26%) and the University of Michigan (26%). The mean sample size
was 17 patients (range: 1-76). Patients ranged from 14-90 years of age.
Autopsies were conducted 1 to 48 hours post mortem, although the
majority were performed within 6 hours. The top three most studied
rapid autopsy tissues were from the prostate, breast, and pancreas, as
shown in Figure 2. The top three research fields that utilised rapid
autopsy tissue included molecular genetics (54%), clinical research
(18%), and tumour heterogeneity (16%), as shown in Figure 3.
Results – rapid autopsy process
Recruitment
The eligibility criteria for rapid autopsies typically include patients
with advanced cancer and a ‘do not resuscitate’ status.3 The consent
process should include the patient’s family members and next of kin
or person with power of attorney, as they will have a role in
supporting the patient and facilitating later steps of co-ordination.3
The topic of tissue donation can be difficult, particularly for patients
diagnosed with terminal illnesses such as cancer. As such, the timing
and language of the discussion must be carefully chosen. Patients
should be approached by their medical oncologist or palliative care
physician only after a strong relationship has been established and
initial treatments have been unsuccessful.10 The label “rapid autopsy
programme” should also be avoided in clinical environments as
patients and family members may find the term too abrasive; “tissue
donation programme” has been suggested as an acceptable
alternative.11

FIGURE 1: PRISMA flow chart of data extraction process. 173 records were identified through an Ovid MEDLINE database search; 138 remained after duplicates were removed; 85 were excluded on title and abstract screening, leaving 53; and 9 were excluded on full-text review, leaving 44 included records.

FIGURE 2: Distribution of primary disease sites studied using rapid autopsy samples: prostate; breast; pancreatic; renal cell carcinoma; diffuse intrinsic pontine glioma; other (embryonal rhabdomyosarcoma, intrahepatic cholangiocarcinoma, melanoma, multiple myeloma, ovarian).

Patients should also be reassured that their decision to
participate in, or decline, rapid autopsy will not have any effect on
their treatment options or quality of life.12
Patient tracking
Patients from either inpatient or outpatient settings can participate in
rapid autopsy programmes.3 It is important to track the patient’s
whereabouts to ensure that the correct individual will be contacting
the rapid autopsy team at the time of the patient’s death. For
instance, families of patients who are transferred to a hospice may ask
the nursing staff to contact the autopsy team at the time of death.
Transport of the deceased
Members of the rapid autopsy team are notified by the patient’s
healthcare provider, family member, next of kin, or person with
power of attorney so that immediate transport can be arranged.3 This
can be a difficult stage in the process since a balance must be found
between allowing family members ample time to grieve and ensuring
timely transport of the body for autopsy before significant tissue
degradation occurs. Extraneous factors such as traffic and weather
conditions can also lead to delays.
Autopsy
Biobanking is concurrently conducted during the autopsy and is
guided by the patient’s past radiology reports to locate primary
tumours and metastatic sites. Standard autopsy procedures are
applied, including a Y-shaped incision from the shoulders to the
sternum and pubic bone, evisceration of internal organs, and a
systematic examination of all organs. Biobanking may consist of snap
freezing tissue in liquid nitrogen or preserving the tissue in
formalin-fixed paraffin-embedded blocks. Labs may request that
specimens be placed in specific media for testing.
Transport of the deceased to funeral home
Once the autopsy is complete, the body is transported to the family’s
designated funeral home. The body can be restored to a state suitable
for an open casket viewing, the details of which are discussed at the
time of consent.3 Since rapid autopsy programmes are sponsored by a
combination of grants and hospital resources, there are no costs to the
family for transport of the deceased. A letter of appreciation can be sent
to the family several weeks after the procedure.3
Discussion
Rapid autopsy programmes appear to be few and far between, with
the vast majority located in the United States. Within oncology
research, there is a wide scope of applications for tissue obtained from
rapid autopsy, ranging from drug resistance to xenograft studies.1,13-15
Overall, studies to date that have utilised rapid autopsy tissue
primarily investigated the molecular genetics of cancer, as genomic
instability enables neoplastic cells to resist programmed cell death,
sustain proliferation signals, and activate invasion and metastasis.16
In the age of personalised cancer medicine, genomic characterisation
of tumours is of paramount importance to determine the appropriate
treatment plan for oncology patients. Unfortunately, this is
complicated by significant variations in genomic instability. Tumour
heterogeneity has been found between different regions within a
single tumour, as well as between the primary tumour and its
metastases.17 Evidence for tumour heterogeneity has also been found
between patients of the same histological cancer subtype.18
Therefore, removal of tumour tissue from a living patient, either
through biopsy or surgical excision, may prove to be insufficient in
obtaining an accurate genetic representation of the patient’s entire
disease. There is much to uncover about the clinical implications of
interpatient and intratumour heterogeneity for diagnosis, prognosis,
treatment selection, and combating treatment resistance.19 Rapid
autopsies may thus prove to be pivotal in furthering our understanding
of genetic cancer lineages, since they allow for high-volume,
low-degradation sampling of multiple neoplastic sites.20-22
Conclusion
The procurement of tissue through rapid autopsies has enabled
significant advancements in oncology research, particularly for
prostate, pancreatic, and breast cancer. The majority of published
research appears to be centred on studying molecular genetics in
FIGURE 3: Distribution of studies utilising rapid autopsy specimens in their analyses according to research interests: molecular genetics; clinical; tumour heterogeneity; drug resistance; xenograft.
References
1. Rubin MA, Putzi M, Mucci N, Smith DC, Wojno K, Korenchuk S et al.
Rapid (“warm”) autopsy study for procurement of metastatic prostate
cancer. Clin Cancer Res. 2000;6(3):1038-45.
2. Wu JM, Fackler MJ, Halushka MK, Molavi DW, Taylor ME, Teo WW et al.
Heterogeneity of breast cancer metastases: comparison of therapeutic
target expression and promoter methylation between primary tumors and
their multifocal metastases. Clin Cancer Res. 2008;14(7):1938-46.
3. Alsop K, Thorne H, Sandhu S, Hamilton A, Mintoff C, Christie E et al. A
community-based model of rapid autopsy in end-stage cancer patients.
Nat Biotechnol. 2016;34(10):1010-14.
4. Shah RB, Mehra R, Chinnaiyan AM, Shen R, Ghosh D, Zhou M et al.
Androgen-independent prostate cancer is a heterogeneous group of
diseases: lessons from a rapid autopsy program. Cancer Res.
2004;64(24):9209-16.
5. Lindell KO, Erlen JA, Kaminski N. Lessons from our patients: development
of a warm autopsy program. PLoS Med. 2006;3(7):e234.
6. Beltran H, Eng K, Mosquera JM, Sigaras A, Romanel A, Rennert H et al.
Whole-exome sequencing of metastatic cancer and biomarkers of
treatment response. JAMA Oncol. 2015;1(4):466-74.
7. Walker DG, Whetzel AM, Serrano G, Sue LI, Lue LF, Beach TG.
Characterization of RNA isolated from eighteen different human tissues:
results from a rapid human autopsy program. Cell Tissue Bank.
2016;17(3):361-75.
8. Sun H, Sun R, Hao M, Wang Y, Zhang X, Liu Y et al. Effect of duration of
ex vivo ischemia time and storage period on RNA quality in biobanked
human renal cell carcinoma tissue. Ann Surg Oncol. 2016;23(1):297-304.
9. Birdsill AC, Walker DG, Lue L, Sue LI, Beach TG. Postmortem interval effect
on RNA and gene expression in human brain tissue. Cell Tissue Bank.
2011;12(4):311-8.
10. McIntyre J, Pratt C, Pentz RD, Haura EB, Quinn GP. Stakeholder
perceptions of thoracic rapid tissue donation: an exploratory study. Soc Sci
Med. 2013;99:35-41.
11. Achkar T, Wilson J, Simon J, Rosenzweig M, Puhalla S. Metastatic breast
cancer patients: attitudes toward tissue donation for rapid autopsy. Breast
Cancer Res Treat. 2016;155(1):159-64.
12. Quinn GP, Murphy D, Pratt C, Muñoz-Antonia T, Guerra L, Schabath
MB et al. Altruism in terminal cancer patients and rapid tissue
donation program: does the theory apply? Med Health Care Philos.
2013;16(4):857-64.
13. Almendro V, Kim HJ, Cheng YK, Gönen M, Itzkovitz S, Argani P et al.
Genetic and phenotypic diversity in breast tumor metastases. Cancer
Res. 2014;74(5):1338-48.
14. Udager AM, Alva A, Chen YB, Siddiqui J, Lagstein A, Tickoo SK et al.
Hereditary leiomyomatosis and renal cell carcinoma (HLRCC): a rapid
autopsy report of metastatic renal cell carcinoma. Am J Surg Pathol.
2014;38(4):567-77.
15. Xie T, Musteanu M, Lopez-Casas PP, Shields DJ, Olson P, Rejto PA et
al. Whole exome sequencing of rapid autopsy tumors and xenograft
models reveals possible driver mutations underlying tumor
progression. PLoS One. 2015;10(11):e0142631.
16. Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation.
Cell. 2011;144(5):646-74.
17. Meacham CE, Morrison SJ. Tumour heterogeneity and cancer cell
plasticity. Nature. 2013;501(7467):328-37.
18. Cancer Genome Atlas Network. Comprehensive molecular portraits
of human breast tumours. Nature. 2012;490(7418):61-70.
19. Bedard PL, Hansen AR, Ratain MJ, Siu LL. Tumour heterogeneity in
the clinic. Nature. 2013;501(7467):355-64.
20. Avigdor BE, Cimino-Mathews A, DeMarzo AM, Hicks JL, Shin J,
Sukumar S et al. Mutational profiles of breast cancer metastases from
a rapid autopsy series reveal multiple evolutionary trajectories. JCI
insight. 2017;2(24):pii: 96896.
21. Pisapia DJ, Salvatore S, Pauli C, Hissong E, Eng K, Prandi D et al.
Next-generation rapid autopsies enable tumor evolution
tracking and generation of preclinical models. JCO Precis Oncol.
2017;1:1-3.
22. Savas P, Teo ZL, Lefevre C, Flensburg C, Caramia F, Alsop K et al.
The subclonal architecture of metastatic breast cancer: results from
a prospective community-based rapid autopsy program “CASCADE”.
PLoS Med. 2016;13(12):e1002204.
cancer. However, despite its clear potential to make significant
contributions to cancer research, rapid autopsy is still relatively
unknown among oncology clinicians and researchers. Currently, there
are no published systematic reviews on the usage of rapid autopsy
tissue; such publications could prove useful for the development of
novel programmes to guide oncology researchers on how rapid
autopsy tissue samples should be collected and utilised. Based on the
this preliminary search, a more comprehensive, systematic review of
the literature should be completed to improve future clinical trials and
basic science research using rapid autopsy-obtained tissue.
Abstract
Fasting during the lunar month of Ramadan is regarded as an integral pillar of Islam. During this holy
month, many Muslims around the world abstain from eating and drinking from sunrise to sundown.
Depending on geography, these fasts can range from 12 to 20 hours. Fasting in itself causes many
physiological changes within the body, but individuals with diabetes who observe the month of Ramadan
are at risk of many additional physiologic complications, ranging from hypoglycaemia and ketoacidosis to
thrombosis due to dehydration. Healthcare professionals not accustomed to or familiar with Ramadan lack
knowledge regarding the pharmacologic modifications required for patients with diabetes during the fast.
While some guidelines do exist, they are not widely known or followed, and such data is not published in
any medication reference text such as the British National Formulary (BNF) or Lexicomp. Presented here
are the current guidelines implemented by the International Diabetes Federation (IDF) and the Diabetes
and Ramadan (DAR) International Alliance, compiled from a range of studies regarding fasting and its
implications for individuals with diabetes.
Pharmacologic management of diabetes during Ramadan
Ranya Buhamad
RCSI pharmacy student
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 72-77.
Introduction
According to the World Health Organization (WHO), there are 422 million
people living with diabetes worldwide.1 It is estimated that 50 million
individuals with diabetes observe the holy month of Ramadan, an Islamic
lunar month during which an absolute fast is observed.2 Fasting has many
benefits for diabetics and non-diabetics, such as reducing insulin resistance
and providing better blood glucose control.3 While there are many types
of fasting, most studies have explored intermittent fasting in the form of
a wet fast, during which water can be consumed. During Ramadan,
however, the type of fast performed is a dry fast. This absolute fast involves
abstaining from both eating and drinking, from sunrise to sunset, for one
month. Managing diabetes during Ramadan is a challenge for patients as
well as healthcare providers (HCPs) due to a lack of data and
epidemiological studies.4
During a dry fast, a healthy individual’s body undergoes physiologic
changes, predominantly in metabolic pathways, which result in
ketogenesis.5 Usually, periods of eating stimulate insulin secretion.
However, during fasting, the absence of ingested food causes insulin
secretion to drop. In order for the body to maintain homeostasis, glycogen
stores are broken down to promote gluconeogenesis so that glucose can
be used to fuel body tissues and organs.5 As the fast continues, glycogen
stores are depleted, and the body turns to utilising fatty acids as an energy
source in a process known as ketosis or ketogenesis.5 Healthy individuals
are able to regulate these processes to maintain normal blood glucose
levels.5 People with diabetes, on the other hand, lack the ability to
effectively regulate blood glucose levels.5
An Epidemiology of Diabetes and Ramadan (EPIDIAR) study conducted in
2005 highlighted the many risks associated with fasting in patients with
diabetes, including hypoglycaemia, hyperglycaemia, diabetic ketoacidosis
(DKA), dehydration, and thrombosis.6,7 In order to ensure that patients
with diabetes have optimal blood glucose control and quality of life, HCPs
should familiarise themselves with the unique pharmacologic
modifications required for adequate diabetes management during
Ramadan, as well as after, especially as populations become increasingly
culturally diverse.
Eid-Al-Fitr is a holiday marking the end of the fast. During this celebratory
period, many people tend to overindulge in carbohydrate-rich foods and
snacks.8 Therefore, this post-Ramadan transitional period must also be
considered, and patients with diabetes should be educated and monitored
by HCPs to ensure maintenance of blood glucose control.
Determining patient risk categories
Due to a previous lack of data on the consequences of fasting among
patients with diabetes during Ramadan, an inaugural consensus meeting
was held in 1995 in Casablanca, which aimed to propose a set of
recommendations for fasting diabetics. The meeting included
endocrinologists and diabetes experts, as well as Muslim scholars and
clerics. During the conference, a standard guideline was outlined regarding
the risk assessment of patients with diabetes, and patients were risk
stratified based on expert opinion.3,9
The Muslim community received the recommendations well; in a
religious context, they were grounded in the interpretation of the Holy Quran
(Al-Baqarah 183-185), which exempts sick individuals from fasting and
reiterates the concept of ease.10 Muslim diabetics who still choose to fast
during Ramadan despite this religious exemption should be respected on
the basis of patient autonomy, but should also be made aware of the risks
associated with fasting.8
Figure 1 summarises the risk categories for patients with diabetes, as
outlined at the 1995 conference. It is recommended that patients
who are deemed very high or high risk should abstain from fasting
FIGURE 1: Risk categorisation of patients with diabetes. Adapted from Al-Arouj et al.6
Very high risk: severe hypoglycaemia or diabetic ketoacidosis in the three months prior to Ramadan; hypoglycaemia unawareness; type 1 diabetes; poor blood glucose control.
High risk: renal insufficiency; patients living alone (e.g., elderly) who are treated with insulin or sulfonylureas.
Moderate risk: well-controlled diabetes treated with short-acting insulin secretagogues.
Low risk: well-controlled diabetes managed with diet alone or metformin.
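Read as a decision rule, Figure 1 assigns each patient the highest category for which any criterion applies. The sketch below is one illustrative interpretation of that figure; the flag names and groupings are assumptions for illustration, not a validated clinical tool:

```python
# Illustrative reading of the Figure 1 risk stratification. Flag names
# and category groupings are an interpretation of the figure, for
# illustration only; this is not a clinical decision aid.
def ramadan_fasting_risk(
    recent_severe_hypo_or_dka: bool = False,  # within three months of Ramadan
    hypoglycaemia_unawareness: bool = False,
    type_1_diabetes: bool = False,
    poor_glucose_control: bool = False,
    renal_insufficiency: bool = False,
    lives_alone_on_insulin_or_sulfonylurea: bool = False,
    short_acting_secretagogue: bool = False,
) -> str:
    """Return the highest applicable risk category."""
    if any([recent_severe_hypo_or_dka, hypoglycaemia_unawareness,
            type_1_diabetes, poor_glucose_control]):
        return "very high"
    if renal_insufficiency or lives_alone_on_insulin_or_sulfonylurea:
        return "high"
    if short_acting_secretagogue:
        return "moderate"
    return "low"  # well-controlled on diet alone or metformin
```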
during Ramadan and otherwise.6 Furthermore, it is recommended that
a patient should break their fast if a blood glucose measurement is less
than 3.3mmol/L, or if their blood glucose rises above 16.7mmol/L.6
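The two thresholds above form a simple rule for when a fast should be broken. The sketch below encodes it purely for illustration; the function name and constants are assumptions, not part of the guideline:

```python
# Break-fast thresholds from the recommendation above: break the fast
# if blood glucose falls below 3.3 mmol/L or rises above 16.7 mmol/L.
# Function name and layout are illustrative assumptions.
HYPO_THRESHOLD = 3.3    # mmol/L
HYPER_THRESHOLD = 16.7  # mmol/L

def should_break_fast(glucose_mmol_per_l: float) -> bool:
    """Return True if the measured glucose is outside the safe range."""
    return (glucose_mmol_per_l < HYPO_THRESHOLD
            or glucose_mmol_per_l > HYPER_THRESHOLD)
```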
Ramadan-focused diabetes education
According to the EPIDIAR study, one-third of patients with diabetes did
not receive any counselling regarding fasting during Ramadan from
their HCPs.3 Of the HCPs who did provide counselling, only 63%
adhered to the recommended guidelines.11 The primary aim of the
IDF-DAR guidelines is to reduce the risks associated with diabetes and
fasting.12 Education and counselling are not limited to the patient, but
must also extend to the HCP so that they can familiarise themselves
with the various aspects of fasting during Ramadan, in order to better
care for their patients.12 According to the IDF-DAR guidelines, the
crucial areas of diabetes education include: risk quantification; blood
glucose monitoring; advice regarding fluids, diet, and exercise;
medication adjustments; and, when to break the fast.12
According to the IDF-DAR guidelines, the crucial areas of diabetes education include: risk quantification; blood glucose monitoring; advice regarding fluids, diet, and exercise; medication adjustments; and when to break the fast.
In a UK-based study entitled Ramadan Education and Awareness in
Diabetes (READ), a group of general practitioners attended an
educational workshop on this issue.11 Following the workshop,
physician participants counselled their patients with type 2 diabetes
(n=57) and relayed the newly acquired advice and
recommendations.11 Patients’ weights and incidence of
hypoglycaemic events before and after Ramadan were compared to
those of a control group that did not receive any counselling.12 The
results demonstrated weight loss (mean -0.7kg, p<0.001) and a
decrease in hypoglycaemic events in the experimental group that
received educational counselling.12 Conversely, the control group
reported weight gain (mean +0.6kg, p<0.001) and an increase in
hypoglycaemic events.12 This demonstrates the importance and
effectiveness of proper education and counselling in reducing
hypoglycaemic episodes among fasting individuals.
HCPs should also encourage a highly structured self-monitoring
programme for their fasting patients with diabetes. According to
recommendations published by the Diabetes Research and Clinical
Practice journal, self-monitoring blood glucose should be carried out
at seven distinct times throughout the day:8,12
1. At the sunrise meal (suhoor).
2. Morning.
3. Midday.
4. Mid-afternoon.
5. At the sunset meal (iftar).
6. Two hours following iftar.
7. Any time the patient feels unwell.
Fasting management of type 1 diabetes
Patients with type 1 diabetes are classified as high risk with respect to
fasting, and thus it is not recommended that these patients fast
during Ramadan due to possible complications, most notably
hypoglycaemia.13 Islamic clerics have approved this medical
recommendation, as it aligns with Islamic teaching. Despite this, many
Muslims experience peer pressure to fast during Ramadan, according
to the DAR International Alliance.12 In fact, an estimated 42.8%
of patients with type 1 diabetes fast during Ramadan.3 If such
patients do make the choice to fast, HCPs must be responsible for
educating them regarding the risks associated with fasting and any
advised medication adjustments. With regard to pharmacologic
management of type 1 diabetes, the IDF-DAR alliance has set out
guidelines relating to recommended adjustments for insulin regimens
during Ramadan. In addition, HCPs must provide patients with advice
regarding proper hydration and nutrition.
Insulin therapy
Insulin administration during a fast carries a risk of hypoglycaemia.8
Typical insulin regimens for insulin-dependent diabetics (either type 1
or type 2) include a long- or intermediate-acting basal insulin and a
short-acting bolus insulin dose administered with meals.8 Another
treatment regimen utilises twice-daily pre-mixed insulins. There is
limited data available concerning the optimal insulin regimen during
Ramadan. Recent guidelines state that with proper monitoring and
individualisation of therapy, either regimen would be considered safe
during Ramadan.8 In terms of dosage adjustments, long/
intermediate-acting basal insulins should be administered twice daily,
with the usual pre-Ramadan ‘morning dose’ to be taken instead at
iftar, and the pre-Ramadan ‘evening dose’ to be reduced by 50% and
taken instead at suhoor.8 For three-times-daily short-acting insulin
boluses, the therapy should be adjusted to only twice-daily dosing:
one bolus to be administered at iftar, while the suhoor dose should be
reduced by 25-50% according to the IDF-DAR guidelines.8
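The basal and bolus adjustments above are simple arithmetic on the pre-Ramadan doses. A sketch follows (function names are hypothetical; the 25-50% bolus reduction is left as a clinician-chosen parameter):

```python
def adjust_basal_doses(morning_units: float, evening_units: float) -> dict:
    """Twice-daily basal insulin during Ramadan: the pre-Ramadan morning
    dose moves unchanged to iftar; the evening dose is reduced by 50%
    and moves to suhoor."""
    return {"iftar": morning_units, "suhoor": evening_units * 0.5}

def adjust_bolus_doses(usual_units: float, suhoor_reduction: float = 0.25) -> dict:
    """Three-times-daily short-acting boluses become twice daily: a full
    dose at iftar, and a suhoor dose reduced by 25-50% (the exact
    reduction fraction within that range is a clinical decision)."""
    if not 0.25 <= suhoor_reduction <= 0.5:
        raise ValueError("IDF-DAR advises a 25-50% suhoor reduction")
    return {"iftar": usual_units, "suhoor": usual_units * (1 - suhoor_reduction)}
```

For instance, a patient on 20 units of basal insulin each morning and 10 each evening pre-Ramadan would take 20 units at iftar and 5 at suhoor.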
review
For patients who take pre-mixed insulins, dose adjustments are also
necessary. For the twice-daily regimen, the normal dose should be
administered at iftar, and the suhoor dose should be reduced by
25-50%.8,14 A three-times-daily regimen should be switched to the
twice-daily dosing schedule just described.
Dose titrations should be performed every three days for all insulin
regimens, and adjusted based on self-monitored blood glucose levels.8 A
summary of these titrations is listed in Table 1. Of note, all of the insulin
recommendations stated above apply to both type 1 and type 2 diabetes.
Insulin pumps
Patients with poor blood glucose control can benefit from insulin
pumps. According to the IDF-DAR guidelines, studies demonstrated
that insulin-dependent diabetics can safely perform a fast using
insulin pumps with the following rate adjustments:8
- Basal rate: reduce the dose by 20-40% during the last three to four
hours of fasting, and increase the dose by 30% after iftar.7,14
- Bolus rate: patients should continue to administer regular doses
based on the carbohydrate content of the meal.7,14
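These pump adjustments can be expressed as rate multipliers; the sketch below uses hypothetical phase labels and defaults to a 30% late-fast reduction purely as a midpoint of the 20-40% range:

```python
def adjust_pump_basal_rate(pre_ramadan_rate: float, phase: str,
                           late_fast_reduction: float = 0.3) -> float:
    """Basal-rate adjustment per the cited guidance: reduce by 20-40%
    during the last 3-4 hours of the fast, increase by 30% after iftar,
    and leave the rate unchanged at other times."""
    if not 0.2 <= late_fast_reduction <= 0.4:
        raise ValueError("guidance suggests a 20-40% reduction")
    if phase == "late_fast":
        return pre_ramadan_rate * (1 - late_fast_reduction)
    if phase == "post_iftar":
        return pre_ramadan_rate * 1.3
    return pre_ramadan_rate  # other phases: no change
```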
Fasting management of type 2 diabetes
Patients with type 2 diabetes often require multiple medications in
order to manage their disease.15 The added challenges posed by
fasting during the holy month of Ramadan mean that various
treatment modifications may be required.
Diet and exercise therapy
Patients managed with dietary therapy alone are well placed to
reap the glycaemic benefits of fasting. The main area
for adjustment with this mode of therapy, as recommended by the
IDF-DAR guidelines, is exercise. The intensity of physical activity
should be reduced during the hours of fasting, and adequate
hydration should follow during the hours of iftar.8
Metformin
Metformin is the first-line agent for treatment of type 2 diabetes.8 This
medication works by decreasing both hepatic glucose production and
glucose absorption.16 Although there are no randomised controlled
trials (RCTs) investigating the use of metformin during Ramadan, it is
considered a safe agent since it carries a low hypoglycaemic risk.8
Metformin is available in immediate- and prolonged-release
preparations. In terms of adjusting a patient’s metformin dosage
during fasting, once-daily immediate-release or prolonged-release
regimens require no modifications.8 For twice-daily dosing, the first
dose is to be taken at suhoor, and the second dose at iftar.6,8 For
patients on a three-times-daily dosing regimen, the morning dose
should be taken at suhoor, and the remaining dosages should be
combined and administered at iftar.6,8
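The metformin schedule changes above map cleanly from doses per day to meal times; a sketch counting doses rather than milligram amounts (names are illustrative):

```python
def metformin_ramadan_schedule(doses_per_day: int) -> dict:
    """Map a pre-Ramadan metformin schedule onto Ramadan meal times,
    following the adjustments described above (dose counts, not mg)."""
    if doses_per_day == 1:
        return {"no_modification": True}  # OD (immediate- or prolonged-release): unchanged
    if doses_per_day == 2:
        return {"suhoor": 1, "iftar": 1}  # BD: one dose at each meal
    if doses_per_day == 3:
        return {"suhoor": 1, "iftar": 2}  # TID: morning dose at suhoor, rest combined at iftar
    raise ValueError("expected 1-3 doses per day")
```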
Thiazolidinediones
Thiazolidinediones (TZDs) reduce blood glucose by activating
peroxisome proliferator-activated receptor gamma (PPAR-γ). Pioglitazone
was evaluated in a study of fasting Muslims during Ramadan.12
In comparison to placebo, pioglitazone showed no difference
in incidence of hypoglycaemic events.12 During Ramadan, no
dose adjustments are necessary due to TZDs’ low risk of
hypoglycaemia.12
Short-acting insulin secretagogues
This class of medication works by stimulating pancreatic beta cells to
secrete more insulin. The IDF-DAR guidelines recommend that
three-times-daily dosing should be reduced to twice-daily dosing
during Ramadan, with no other dose adjustments necessary.8
Sulfonylureas
Sulfonylureas (SUs) are a second-line treatment for type 2 diabetes
following metformin.15 SUs stimulate insulin secretion from pancreatic
beta cells in a glucose-independent process.8 This carries a higher risk
Table 1: Dose titrations of insulin during Ramadan. Adapted from Hassanein et al.6,10,13

Fasting blood glucose | Basal insulin (pre-iftar) | Bolus insulin (post-iftar/post-suhoor) | Pre-mixed insulin
<3.9mmol/L | Reduce by 4 units | Reduce by 4 units | Reduce by 4 units
3.9-5.0mmol/L | Reduce by 2 units | Reduce by 2 units | Reduce by 2 units
5.0-7.0mmol/L | No change required | No change required | No change required
7.0-11.1mmol/L | Increase by 2 units | Increase by 2 units | Increase by 2 units
>11.1mmol/L | Increase by 4 units | Increase by 4 units | Increase by 4 units
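Table 1 is effectively a lookup from a self-monitored glucose reading to a unit change, applied identically to basal, bolus, and pre-mixed insulin and reviewed every three days. A sketch (the handling of readings that fall exactly on a boundary, e.g. 5.0 mmol/L, is an assumption, since the table's ranges overlap):

```python
def titrate_insulin_units(fasting_glucose_mmol_per_l: float) -> int:
    """Return the dose change in units from Table 1; applies to basal,
    bolus, and pre-mixed insulin alike, adjusted every three days."""
    g = fasting_glucose_mmol_per_l
    if g < 3.9:
        return -4   # hypoglycaemic range: reduce by 4 units
    if g <= 5.0:
        return -2   # 3.9-5.0: reduce by 2 units
    if g <= 7.0:
        return 0    # 5.0-7.0: no change required
    if g <= 11.1:
        return 2    # 7.0-11.1: increase by 2 units
    return 4        # >11.1: increase by 4 units
```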
of hypoglycaemia depending on the specific medication used due to
a difference in receptor interactions.8 According to an observational
study of patients treated with SUs, the highest incidence of
hypoglycaemic events was associated with glibenclamide, a
first-generation agent.8,17 This study thus recommended that
second-generation agents such as gliclazide be used to treat type 2 diabetes
during Ramadan.8,17 In terms of dose adjustments for the twice-daily
regimen, the suhoor dose should be reduced while keeping the iftar
dose the same.8
SGLT2 inhibitors
This class of oral medications works by increasing glucose excretion
via the kidneys.8 The risks associated with SGLT2 inhibitors include
urinary tract infections and dehydration; the risk of dehydration is
augmented when a patient is fasting. To date, only one study has
assessed the use of SGLT2 inhibitors during Ramadan; it advises
caution with use of these medications, and emphasises that patients
must be informed to stay well hydrated after iftar.8,18
DPP4 inhibitors
As a result of DPP4 enzyme inhibition, the levels of circulating
glucagon-like-peptide-1 (GLP1) are increased. GLP1 plays a pivotal
role in stimulating insulin secretion.8 IDF-DAR guidelines recommend
no dosage adjustment during Ramadan.12
GLP1 receptor agonists
By mimicking the incretin hormone GLP1, GLP1 receptor agonists
enhance glucose-dependent insulin secretion. When used as monotherapy,
the risk of hypoglycaemia is low in comparison to their use alongside
SUs or insulin.8 Newer agents such as lixisenatide and dulaglutide
have since come to market, but there have been no trials regarding
their use during Ramadan. The IDF-DAR guidelines state that as
long as GLP1 receptor agonists are well titrated to the therapeutic
dose six weeks prior to Ramadan, they are considered safe to use
while fasting.8
Table 2 summarises the IDF-DAR guidelines on oral diabetes
medication adjustments during Ramadan.
Conclusion
As global populations become increasingly diverse, it is paramount
that healthcare professionals educate themselves on cultural and
religious practices that influence healthcare and pharmacotherapy.
Physicians and allied health professionals involved in managing
patients with diabetes should counsel patients prior to Ramadan,
liaise with them throughout the fast, and continue patient monitoring
during Eid-Al-Fitr in the weeks following Ramadan. Guidance during
Table 2: Pharmacologic modifications for fasting patients with type 2 diabetes during Ramadan.
Pharmacologic agent/class | Treatment modification during Ramadan
Metformin | OD: no modification. BD: doses taken at suhoor and iftar. TID: first dose at suhoor; remaining doses combined at iftar
Thiazolidinediones | No adjustments necessary
Short-acting insulin secretagogues | TID: switch to BD dosing
Sulfonylureas | Switch to a second-generation agent. BD: reduce suhoor dose
SGLT2 inhibitors | Use with caution; advise patient to stay hydrated
DPP4 inhibitors | No adjustments necessary
GLP1 receptor agonists | No adjustments necessary if well titrated six weeks prior to Ramadan

OD = once daily; BD = twice daily; TID = three times daily
Eid-Al-Fitr should include advice against overindulgence, along with
counselling regarding medication readjustment to pre-Ramadan
dosages. Regular self-blood glucose monitoring is vital during this
readjustment period in order to prevent adverse complications such
as hypo- or hyperglycaemia.
Although the Diabetes and Ramadan International Alliance and
International Diabetes Federation have put tremendous effort into
creating guidelines for diabetes management during the holy
month of Ramadan, additional long-term studies with larger sample
sizes are much needed in order to create an internationally recognised
and accredited guideline. There are many gaps concerning some
new diabetes medications and their effects during fasting; these
areas of uncertainty must be clarified to maximise patient safety. The
scope of such research could also extend to other illnesses, since a
majority of patients living with diabetes have other co-morbidities.
As newer medications and more advanced therapies continue to be
brought to the market, guidelines must expand and progress to
include them.
References
1. World Health Organisation. Diabetes. 2018. [Internet]. [cited 2018
November 1]. Available from:
http://www.who.int/news-room/fact-sheets/detail/diabetes.
2. Almalki M, Alshahrani F. Options for controlling type 2 diabetes during
Ramadan. Front Endocrinol (Lausanne). 2016;7:32.
3. Salti I, Benard E, Detournay B, Bianchi-Biscay M, Le Brigand C, Voinet C
et al. A population-based study of diabetes and its characteristics during
the fasting month of Ramadan in 13 countries: results of the
Epidemiology of Diabetes and Ramadan 1422/2001 (EPIDIAR) study.
Diabetes Care. 2004;27(10):2306-11.
4. Arnason T, Bowen M, Mansell K. Effects of intermittent fasting on health
markers in those with type 2 diabetes: a pilot study. World J Diabetes.
2017;8(4):154-64.
5. Longo V, Mattson M. Fasting: molecular mechanisms and clinical
applications. Cell Metab. 2014;19(2):181-92.
6. Al-Arouj M, Bouguerra R, Buse J, Hafez S, Hassanein M, Ibrahim M et al.
Recommendations for management of diabetes during Ramadan.
Diabetes Care. 2005;28(9):2305-11.
7. Hassanein M, Al-Arouj M, Hamdy O, Bebakar W, Jabbar A, Al-Madani A et
al. Diabetes and Ramadan: practical guidelines. Diabetes Res Clin Pract.
2017;126:303-16.
8. Hassanein M, Al-Arouj M, Ben-Nakhi A. Management of diabetes during
Ramadan. 2016. [Internet]. [cited 2019 January 8]. Available from:
https://www.daralliance.org/daralliance/wp-content/uploads/2018/01/ID
F-DAR-Practical-Guidelines_15-April-2016_low_8.pdf.
9. Akram J, De Verga V. Insulin lispro (Lys(B28), Pro(B29)) in the treatment
of diabetes during the fasting month of Ramadan. Diabet Med.
1999;16(10):861-6.
10. Sadikot S, Hassanein M. Diabetes and Ramadan: Practical Guidelines.
Brussels: DAR-IDF; 2016. [Internet]. [cited 2019 January 8].
11. Bravis V, Hui E, Salih S, Mehar S, Hassanein M, Devendra D. Ramadan
Education and Awareness in Diabetes (READ) programme for Muslims
with type 2 diabetes who fast during Ramadan. Diabet Med.
2010;27(3):327-31.
12. Hassanein M, Ahmedani, M. IDF-DAR Practical Guidelines. Brussels:
International Diabetes Federation, 2016: Chapter 6. [ebook]. [Accessed
2019 January 3]. Available at: https://www.daralliance.org/daralliance/
wp-content/uploads/2018/01/IDF-DAR-Practical-Guidelines_15-April-201
6_low_6.pdf.
13. Diabetes Control and Complications Trial (DCCT)/Epidemiology of
Diabetes Interventions and Complications (EDIC) Study Research Group.
Intensive diabetes treatment and cardiovascular outcomes in type 1
diabetes: The DCCT/EDIC Study 30-year follow-up. Diabetes Care.
2016;39(5):686-93.
14. Hassanein M, Al-Arouj M, Hamdy O. Insulin dose modifications for
patients with diabetes. 2017.
15. Canadian Agency for Drugs and Technologies in Health. Optimal second-
and third-line therapy in type 2 diabetes. Ottawa, 2013:1-2. [Internet].
Available from: https://www.cadth.ca/sites/default/files/pdf/C1111_
Project_in_Brief_e.pdf.
16. DrugBank. Metformin. 2019. [Internet]. [cited 2019 January 8]. Available
from: https://www.drugbank.ca/drugs/DB00331.
17. Aravind S, Tayeb K, Ismail S, Shehadeh N, Kaddaha G, Liu R et al.
Hypoglycaemia in sulphonylurea-treated subjects with type 2 diabetes
undergoing Ramadan fasting: a five-country observational study. Curr
Med Res Opin. 2011;27(6):1237-42.
18. Beshyah S, Chatterjee S, Davies M. Use of SGLT2 inhibitors during
Ramadan: a survey of physicians’ views and practical guidance. Br J
Diabetes. 2016;16(1):20.
Abstract
Recent research has identified the human microbiome as a key modulator of the gut–brain axis, influencing
both physiological brain function and the development of neurologic and psychiatric pathologies. A
number of studies have investigated whether differences exist in the microbial composition within the
gastrointestinal tracts of individuals with certain neuropsychiatric conditions compared to neurotypical
controls. Indeed, findings support the idea that abnormalities in gut microbiota composition can have
pathological implications both within the gastrointestinal system and extra-intestinally. This review aims to
examine the mounting evidence with respect to certain neuropsychiatric conditions, namely major
depressive disorder, generalised anxiety disorder, early- and late-onset autism spectrum disorder, and
Parkinson’s disease. There appears to be a higher incidence of gastrointestinal issues among patients with
these conditions compared to the general population. In addition, altered gut microbial composition in
psychiatric patients has been found to hinder the therapeutic benefit of certain drugs commonly used for
symptom management. The gut microbiota may thus be a potential therapeutic target with broad clinical
relevance, from enhanced understanding of neuropsychiatric pathophysiology, to therapeutic applications
in both gastrointestinal and neurological conditions.
Emily Panteli
RCSI medical student
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 78-83.
Influence of the gutmicrobiome on brain healthand function
Introduction
The human microbiota is defined as all microorganisms that live on or
within the body, with its corresponding genome being the microbiome.1
This vast population has been estimated to outnumber an individual’s own
cells tenfold and to account for 2-3% of body weight. Most of these microorganisms reside
within the gastrointestinal tract (GIT). The role the human microbiome
plays in health and disease is becoming increasingly recognised, with
numerous studies showing that these species impact human nutrition,
immunity, behaviour, and disease.2 The United States National Institutes of
Health initiated the Human Microbiome Project (HMP) in 2008, with the aim of
generating a resource that would enable improved comprehension of the
human microbiota and its role in human health and disease.3 By the end
of 2017, HMP researchers had published over 650 papers discussing the
structure, function, and diversity of the microbiome.3 The second phase of
the HMP is currently in progress, and aims to analyse host–microbiome
interactions through longitudinal studies of disease-specific cohorts.4
The gut–brain axis
Currently, the gut microbiota is viewed as a key regulator of the gut–brain
axis, whereby microorganisms in the GIT lumen modulate many brain
processes, including pain perception, stress reactivity, immune responses,
and the emotional motor system (EMS).5 This axis is bidirectional; the
central nervous system (CNS) is capable of altering gut flora populations
and other GIT-related factors such as motility, rate and composition of
secretions, mucosal permeability, and nutrient delivery (Figure 1). This is
achieved through the hypothalamic-pituitary-adrenal (HPA) axis and the
autonomic nervous system (ANS). Enterochromaffin cells are essential
bidirectional transducers that facilitate communication between the gut
lumen and nervous system; afferent innervation of these cells via the vagus
nerve provides a direct signalling pathway.6
Dysfunction of the gut–brain axis can lead to dysbiosis, defined as a
change or imbalance in microbiota composition with consequent
pathophysiological outcomes.8 Recent studies indicate a strong correlation
between dysbiosis and many health conditions, including obesity,
autoimmunity, allergies, and gastrointestinal problems such as irritable
bowel syndrome (IBS) and inflammatory bowel disease (IBD).9 Evidence is
also accumulating to support the hypothesis that dysbiosis may have a role
in the progression of neuropsychiatric disorders, such as major depressive
disorder (MDD), generalised anxiety disorder (GAD), autism spectrum
disorder (ASD), and Parkinson’s disease (PD).2
FIGURE 1: Schematic representation of bidirectional brain–gut–microbe interactions.7
[Figure 1 shows: the CNS modulates the enteric microbiota directly and indirectly via the HPA axis, ANS, and EMS, altering GI motility, secretion, and permeability, mucosal immune function, and viscerosensory mechanisms; the enteric microbiota in turn modulate nervous system function, shaping visceral stimulus perception and emotion.]
Dysbiosis can arise through the use of antibiotics, vaccinations,
disinfectant cleaning products, and a high-calorie diet.10 These ‘modern
advances’ decrease one’s contact with microorganisms, especially during
childhood, and hence can result in significant changes in the composition
of an individual’s intestinal microbiota. This is supported by the ‘hygiene
hypothesis,’ which suggests a causal relationship between decreased
contact with microorganisms and an increased incidence of allergic
diseases.10 The Western diet, which is high in calories, fat, and sugar, was
demonstrated to cause dysbiosis in a humanised mouse model.11
Microbial composition shifted to an overgrowth of Firmicutes such as
Clostridium innocuum and Eubacterium dolichum, with a reduction in several
Bacteroides species.11 As demonstrated by rodent studies, the high
comorbidity of stress-related psychiatric symptoms and gastrointestinal
impairment is further evidence of the significant role of the gut–brain axis
in pathological states.12
Major depressive disorder
MDD, also known as clinical depression, is a mood disorder affecting
approximately one in five individuals at some point in their life, with over
85% of sufferers relapsing within ten years.13 Symptoms include
persistent low mood, loss of interest in previously enjoyable activities,
disturbances in sleep or appetite, and suicidal ideations.14 With MDD
being one of the leading causes of morbidity and mortality worldwide,
much research has been done to understand its pathophysiology.
Modern research reveals that MDD is a ‘physiological’ pathology, with
biological origins such as neurotransmitter imbalances or abnormal
neuronal circuitry.13
A systematic review by Lee et al. estimated the co-occurrence of
functional gastrointestinal disorders, such as IBS, and MDD to be 30%.15
A literature review by Mudyanadzo et al. described an undeniable
association between IBS and mood disorders that has long been
documented.16 The dysbiosis-induced inflammation that occurs in IBS
may contribute to the physiological stress that, in some cases, damages
areas of the brain such as the hippocampus and amygdala, leading to
MDD.17 The inflammation observed in IBS is due to increased levels of
cytokines, immune proteins that disrupt glutamate levels.17 Glutamate
is a neurotransmitter that, when overproduced, can exert an excitotoxic
effect, leading to pathological changes in neuronal circuits. These
findings are highly clinically relevant, as recognition of the relationship
between IBS and MDD has led to the collaborative management of
these two conditions. Antidepressants, mainly selective serotonin
reuptake inhibitors (SSRIs), are now routinely used to treat IBS.16 The
precise mechanism of action of SSRIs is still unknown in the treatment
of MDD and other mood disorders; however, they are believed to limit
serotonin reabsorption in the synaptic cleft in order to increase
extracellular levels and chemical activity.18 SSRIs have also been found
to increase gut motility, influence sympathetic and parasympathetic
relays to the GIT via the HPA axis, reduce levels of pro-inflammatory
cytokines such as interleukin-1, and increase levels of the
anti-inflammatory interleukin-10.16 It is speculated that the
anti-inflammatory action of SSRIs reduces the damaging effects of
dysbiosis, thereby contributing to SSRIs’ effectiveness in treating
depression; this provides further evidence of an association between the
microbiome and brain function.16
Generalised anxiety disorder
GAD is defined as “excessive anxiety and worry occurring more days
than not for at least six months, about a number of events or
activities”.19 For a clinical diagnosis of GAD, the anxiety cannot be
attributable to another pathology or substance, and impairs daily
functioning.19 It is well known that physical manifestations of anxiety
occur often, especially in the GIT. Multiple researchers have found
statistically significant positive correlations between GAD and
gastrointestinal problems, including acid reflux and ulcerative colitis.20
Indeed, evidence supporting the notion that gut microbiota
abnormalities contribute to GAD pathophysiology is accumulating.21 A
systematic comparative analysis conducted by Jiang et al. identified
dysbiosis in GAD patients, specifically overabundances of the bacteria
Escherichia, Shigella, Fusobacterium, and Ruminococcus gnavus.22 Another
experiment performed by Guzman et al. – which aimed to demonstrate
that gut microbiota from patients with GAD can induce anxiety in
neurotypical mice – found that the GAD-colonised mice had increased
brain-derived neurotrophic factor (BDNF) expression in the amygdala
and decreased BDNF expression in the hippocampus accompanying
their dysbiosis-induced anxiety symptoms.21 Both the amygdala and
hippocampus have roles in emotional processing.21 BDNF is a protein
that regulates synaptic plasticity in neurons through control of their
growth, maturation, and survival. Certain polymorphisms in the BDNF
gene are associated with an increased risk of developing psychiatric
conditions such as bipolar disorder, GAD, and eating disorders.23 The
findings of Guzman et al. support the relationship between gut
microbiota dysbiosis and the occurrence of GAD, and may pave the way
for the development of genetic testing for BDNF gene polymorphisms as
a novel biomarker for psychiatric disorders.
Autism spectrum disorder
ASD is a range of neurodevelopmental and behavioural disorders of
varying severity, characterised by restrictive repetitive behaviours and
impaired social interaction and communication.24 Interestingly, many
children with ASD suffer from gastrointestinal issues such as diarrhoea,
constipation, and abdominal pain.25 Three surveys conducted in the US
described a high prevalence of gastrointestinal problems in children with
ASD when compared to neurotypical children.24 This has sparked
numerous investigations into possible dysbiosis in individuals with autism.
It has been demonstrated that faecal samples obtained from a group of
children with autism contained ten times more gram-positive bacteria
under the Clostridium genus compared to samples from the general
population.6 However, these findings are not always replicable; a study
conducted by Gondalia et al. did not show significant differences in the
gut microbiota of children diagnosed with ASD compared to their
neurotypical siblings.25 Rather, this study concluded that other
explanations for gastrointestinal dysfunction in the autistic population
ought to be considered, such as elevated anxiety levels and self-restricted
diets.25 Hence, the existence of dysbiosis as a factor in the pathogenesis of
ASD remains a complex and controversial topic, undoubtedly warranting
further investigation.
In most ASD cases, early symptoms emerge in infancy. However, a
subset of children receives a diagnosis of regressive or late-onset ASD.
Typically, these children initially reach developmental milestones
normally, and then, at an older age, exhibit a deterioration of previously
acquired skills, with autistic signs becoming apparent.26 Sandler et al.
speculated that gut microorganism disruption may contribute to the
symptomatology in regressive ASD by promoting colonisation of
neurotoxin-producing bacteria.27 This hypothesis is supported by the
fact that many parents recall antibiotic exposure preceding chronic
persistent diarrhoea prior to their child’s ASD diagnosis.27 Several bodies
of evidence confirm that antibiotic administration can result in
dysbiosis, with broad-spectrum antibiotics affecting the abundances of
30% of gut bacteria.28 Once antibiotic treatment is stopped, the
microbiome is often able to recover and return to its pre-treatment
composition; however, the initial state is not always fully restored.29
Sandler et al. tested their hypothesis by recruiting 11 children with
regressive ASD for an interventional trial using oral vancomycin. As
faecal samples from children with ASD have been found to contain
significant overgrowth of bacteria under the Clostridium genus,
vancomycin was the chosen intervention as it is widely used to treat C.
difficile infections, the most common cause of healthcare-associated
infectious diarrhoea in developed countries.6,30 Short-term
improvement of participants in the Sandler et al. trial was evaluated
using paired videotapes analysed by a clinical psychologist, who
concluded that behaviour improved in 80% of children in the
intervention group.27 However, the long-term benefits are still unclear
and are yet to be investigated. This small open-label trial indicates
that a possible dysbiosis exists in late-onset ASD, a finding that warrants
further research.27
References
1. The National Human Genome Research Institute. The Human Microbiome
Project: Extending the definition of what constitutes a human. 2012.
[Internet]. [cited 2019 January 1]. Available from: https://www.genome.gov/
27549400/the-human-microbiome-project-extending-the-definition-of-wh
at-constitutes-a-human/.
2. Medical News Today. What is the gut microbiota? What is the human
microbiome? Brighton: Medical News Today. 2016. [Internet]. [cited 2018
June 10]. Available from: https://www.medicalnewstoday.com/articles/307998.php.
3. The NIH Common Fund. Human Microbiome Project. Maryland: National
Institutes of Health. 2010. [Internet]. [cited 2018 June 15]. Available from:
https://commonfund.nih.gov.
4. Proctor L. The Integrative Human Microbiome Project: dynamic analysis of
microbiome-host omics profiles during periods of human health and
disease. Cell Host Microbe. 2014;16(3):276-89.
5. Mayer E. The neurobiology of stress and gastrointestinal disease. Gut.
2000;47(6):861-9.
6. Mangiola F, Ianiro G, Franceschi F, Fagiuoli S, Gasbarrini G, Gasbarrini A.
Gut microbiota in autism and mood disorders. World J Gastroenterol.
2016;22(1):361-8.
7. Rhee S, Pothoulakis C, Mayer E. Principles and clinical implications of the
brain-gut-enteric microbiota axis. Nat Rev Gastroenterol Hepatol.
2009;6(5):306-14.
Parkinson’s disease
PD is a neurodegenerative condition characterised by akinesia,
bradykinesia, muscle rigidity, tremor, and depression. This is due to loss of
dopaminergic neurons in the pars compacta of the substantia nigra, one
of the basal ganglia in the brain. However, there is a well-established
co-existence of extranigral pathologies.31 Much research examining the
pathophysiology of PD has reported dysbiosis in patients with PD.32,33 Like
the other neuropsychiatric conditions discussed above, sufferers of PD
often have comorbid gastrointestinal dysfunction; for instance,
approximately 80% of PD patients suffer from constipation.31 Other
gastrointestinal conditions that have a higher incidence in patients with
PD compared to the general population include Helicobacter pylori
infection, a leading cause of ulcers, and small intestinal bacterial
overgrowth (SIBO), affecting nearly 25% of all PD sufferers.31 H. pylori
infections have negative implications for PD treatment because this
bacterium hinders absorption of levodopa, a drug routinely used for PD
symptom management.31 Fasano et al. found that patients receiving
levodopa who also had SIBO suffered from more Parkinson’s motor
symptoms, and experienced delayed onset of action of levodopa.32 Upon
eradication of SIBO, an increase in levodopa’s therapeutic efficacy was
observed in these patients, highlighting the clinical significance of
dysbiosis and its implications for the pharmacologic management of
certain neurological illnesses.32
Therapeutic potential: faecal microbiota transplantation
Faecal microbiota transplantation (FMT) is the process of transplanting
faeces from a donor with healthy gut flora to a recipient in order to treat
dysbiosis and impaired gastrointestinal function.34 This procedure is
commonly used to treat gastrointestinal diseases such as C. difficile
infection, Crohn’s disease, and ulcerative colitis.35 Although recent studies
in animals concluded that gut microbiota transplantation can transfer a
behavioural phenotype, there have been no clinical studies to date
investigating its efficacy in neuropsychiatric disorders in humans.10 The
findings of one study, conducted by Hsiao et al., support gut–brain axis
involvement in mice displaying symptoms of ASD, and identify a potential
target for probiotic therapy in this neurodevelopmental condition.36
Treating these mice with Bacteroides fragilis was found to correct
behavioural abnormalities associated with ASD.36 However, these findings
have not yet been applied to humans with ASD. Additionally, case reports
of FMT demonstrated favourable outcomes and neurological symptom
alleviation in patients suffering from PD.37,38 Accumulating research
supports FMT as a promising approach for the treatment of dysbiosis in
both gastrointestinal and extra-intestinal disorders, broadening the
procedure’s clinical applications.39 However, further studies in humans are
essential to ensure its therapeutic efficacy.
Conclusion
The human microbiome is an essential modulator of the gut–brain axis,
and hence the gut microbiota is involved in a variety of both physiological
and psychological processes. Ever-increasing evidence suggests that
alterations in the composition of these microorganisms can have
pathological consequences within and beyond the GIT. This accounts, in
part, for the high incidence of gastrointestinal comorbidities in many
neuropsychiatric conditions, from anxiety and depression to PD. The
microbiome influences both the pathogenesis and treatment of disease,
making gut flora a potential therapeutic target in a vast number of
conditions. With such clinical and therapeutic possibility, further research
is warranted and eagerly anticipated.
review
RCSIsmj
Volume 12: Number 1. 2019 | Page 83
8. Cenit M, Sanz Y, Codoñer-Franch P. Influence of gut microbiota on
neuropsychiatric disorders. World J Gastroenterol. 2017;23(30):5486-98.
9. Bull M, Plummer N. Part 1: The Human Gut Microbiome in Health and
Disease. Integr Med (Encinitas). 2014;13(6):17-22.
10. Evrensel A, Ceylan M. Fecal microbiota transplantation and its usage in
neuropsychiatric disorders. Clin Psychopharmacol Neurosci. 2016;14(3):231-7.
11. Brown K, DeCoffe D, Molcan E, Gibson D. Diet-induced dysbiosis
of the intestinal microbiota and the effects on immunity and disease.
Nutrients. 2012;4(8):1095-119.
12. Reber S. Stress and animal models of inflammatory bowel disease – an
update on the role of the hypothalamo-pituitary-adrenal axis.
Psychoneuroendocrinology. 2012;37(1):1-19.
13. Liang S, Wu X, Hu X, Wang T, Jin F. Recognizing depression from the
microbiota–gut–brain axis. Int J Mol Sci. 2018;19(6):E1592.
14. National Institute of Mental Health. Depression – Overview. Bethesda,
Maryland: U.S. Government Printing Office. 2018. [Internet]. [cited 2018
September 7]. Available from: https://www.nimh.nih.gov.
15. Lee C, Doo E, Choi J, Jang S, Ryu H, Lee J et al. The increased level of
depression and anxiety in irritable bowel syndrome patients compared
with healthy controls: systematic review and meta-analysis. J
Neurogastroenterol Motil. 2017;23(3):349-62.
16. Mudyanadzo T, Hauzaree C, Yerokhina O, Architha N, Ashqar H. Irritable
bowel syndrome and depression: a shared pathogenesis. Cureus.
2018;10(8):e3178.
17. Kiecolt-Glaser J, Derry H, Fagundes C. Inflammation: depression fans the
flames and feasts on the heat. Am J Psychiatry. 2015;172(11):1075-91.
18. Preskorn S, Ross R, Stanga C. Selective serotonin reuptake inhibitors.
Antidepressants: past, present and future. Handbook of Experimental
Pharmacology. 2004;157:241-62.
19. Reynolds C, Kamphaus R. Diagnostic and Statistical Manual of Mental
Disorders, Fifth Edition. American Psychiatric Association.
2013;300.02(F41.1).
20. Werden M. Is it all in your mind? Gastrointestinal problems, anxiety and
depression. Undergraduate Review. 2009;5:113-8.
21. Guzman E, Anglin R, De Palma G, Lu J, Potts R, Amber M et al. A301 Gut
microbiota from a patient with generalised anxiety disorder induces
anxiety-like behaviour and altered brain chemistry in gnotobiotic mice. J
Can Assoc Gastroenterol. 2018;1(1):523-4.
22. Jiang H, Zhang X, Yu Z, Zhang Z, Deng M, Zhao J et al. Altered gut
microbiota profile in patients with generalized anxiety disorder. J Psychiatr
Res. 2018;104:130-6.
23. US National Library of Medicine. Genetics Home Reference – BDNF gene.
Bethesda, Maryland: US Department of Health and Human Services. 2018.
[Internet]. [cited 2018 Oct 28]. Available from: https://ghr.nlm.nih.gov.
24. Horvath K, Perman J. Autistic disorder and gastrointestinal disease. Curr
Opin Pediatr. 2002;14(5):583-7.
25. Gondalia S, Palombo E, Knowles S, Cox S, Meyer D, Austin D. Molecular
characterisation of gastrointestinal microbiota of children with autism
(with and without gastrointestinal dysfunction) and their neurotypical
siblings. Autism Res. 2012;5(6):419-27.
26. Stefanatos G. Regression in autistic spectrum disorders. Neuropsychol Rev.
2008;18(4):305-19.
27. Sandler R, Finegold S, Bolte E, Buchanan C, Maxwell A, Väisänen M et al.
Short-term benefit from oral vancomycin treatment of regressive-onset
autism. J Child Neurol. 2000;15(7):429-35.
28. Dethlefsen L, Relman D. Incomplete recovery and individualized responses
of the human distal gut microbiota to repeated antibiotic perturbation.
Proc Natl Acad Sci U S A. 2011;108(Suppl. 1):4554-61.
29. Francino M. Antibiotics and the human gut microbiome: dysbioses and
accumulation of resistances. Front Microbiol. 2015;6:1543.
30. Isaac S, Scher J, Djukovic A, Jiménez N, Littman D, Abramson S et al.
Short- and long-term effects of oral vancomycin on the human intestinal
microbiota. J Antimicrob Chemother. 2017;72(1):128-36.
31. Parashar A, Udayabanu M. Gut microbiota: implications in Parkinson’s
disease. Parkinsonism Relat Disord. 2017;38:1-7.
32. Fasano A, Bove F, Gabrielli M, Petracca M, Zocco M, Ragazzoni E et al. The
role of small intestinal bacterial overgrowth in Parkinson’s disease. Mov
Disord. 2013;28(9):1241-9.
33. Scheperjans F, Aho V, Pereira P, Koskinen K, Paulin L, Pekkonen E et al. Gut
microbiota are related to Parkinson’s disease and clinical phenotype. Mov
Disord. 2015;30(3):350-8.
34. Gupta S, Allen-Vercoe E, Petrof E. Fecal microbiota transplantation: in
perspective. Therap Adv Gastroenterol. 2016;9(2):229-39.
35. Fond G, Boukouaci W, Chevalier G, Regnault A, Eberl G, Hamdani N et al.
The “psychomicrobiotic”: targeting microbiota in major psychiatric
disorders: a systematic review. Pathol Biol (Paris). 2015;63(1):35-42.
36. Hsiao E, McBride S, Hsien S, Sharon G, Hyde E, McCue T et al. Microbiota
modulate behavioral and physiological abnormalities associated with
neurodevelopmental disorders. Cell. 2013;155(7):1451-63.
37. Dinan T, Cryan J. Gut feelings on Parkinson’s and depression. Cerebrum.
2017;pii:cer-0417.
38. Shi Y, Yang Y. Fecal microbiota transplantation: Current status and
challenges in China. JGH Open. 2018;2(4):114-6.
39. Xu M, Cao H, Wang W, Wang S, Cao X, Yan F et al. Fecal microbiota
transplantation broadening its application beyond intestinal disorders.
World J Gastroenterol. 2015;21(1):102-11.
staff review
Abstract
Global interest in research on mental health disorders has been renewed, as
approximately 450 million individuals now suffer from mental or behavioural disorders.
Additionally, the World Health Organisation (WHO) estimates that one million people suffering from mental
health issues die by suicide each year. Bipolar disorder (BD) is a chronic mental illness defined by recurrent
depressive and manic episodes that impact patients’ overall health, including daily functioning
and quality of life. BD commonly presents in adolescence or early adulthood, and is further divided into
Bipolar I, dominated by manic episodes, and Bipolar II, dominated by hypomanic episodes. Current
practice guidelines and detailed algorithms used for the diagnosis and management of BD are not user
friendly and require a skilled team to navigate. This results in underdiagnosis of BD, leaving many patients
undertreated. Symptom complexity, duration, and intensity make reaching a conclusive diagnosis for
manic, hypomanic, and depressive states challenging. While there is no single unifying theory that
encompasses the genetic, pharmacological, biochemical, and anatomical components of BD’s
pathophysiology, an array of categorical theories currently exists. After accurately confirming a BD
diagnosis, identifying appropriate treatment is crucial, as symptomatic improvement generally takes one to
four weeks. Currently, pharmacologic treatments include mood stabilisers, antidepressants, and atypical
antipsychotics. Unfortunately, lifetime management of BD is extremely challenging due to its dynamic and
fluctuating nature. The WHO has called on regional governments to identify the current barriers to mental
health and well-being, as this is a crucial step towards improving the morbidity and mortality associated
with mental illnesses on a global scale. This review aims to dissect the complexity of BD from diagnosis to
management, including the prevailing theories, diagnostic methods, and pharmacologic therapies.
Jessica Millar
RCSI pharmacy student
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 84-90.
Bipolar disorder: the highs and lows
Introduction
Renewed interest in research on mental health disorders has been
driven by a plethora of factors. The World Health Organisation (WHO)
estimates that approximately one million people die by suicide as a
result of mental illness each year.1 Additionally, those suffering from
mental illnesses are also victims of human rights violations, stigma,
and discrimination throughout daily life.1 It is estimated that one in
four families have a member suffering from a mental health condition
to whom the family provides the majority of care.1 It is therefore in
the best global interest to address misconceptions, reduce
prevalence, and improve the lives of those living with such conditions.
Bipolar disorder (BD) is a chronic mental illness defined by repeated
episodes of depressive and manic symptoms, which have a significant
impact on the overall health status of the patient, including daily
functioning and quality of life.2 BD involves six clinical parameters,
comprising two states (depression and mania) and four phase changes
(depression to mania, mania to depression, depression to mixed
states, and mixed states to depression).3 Most commonly, BD
presents in adolescence or early adulthood, between 18 and 22 years
of age.4 BD can be further divided into Bipolar I, dominated by manic
episodes, and Bipolar II, dominated by hypomanic episodes.5
Unfortunately, many of the current practice guidelines, decision trees,
and detailed algorithms used in the diagnosis and management of BD
are complex and not entirely user friendly, requiring a skilled team to
navigate appropriately.4 This further complicates an already difficult
diagnosis and management process, prolonging the period to
effective therapeutic management.
Aetiological theories
There is currently no single unifying theory that encompasses the
genetic, pharmacological, biochemical, and anatomical components
of BD’s pathophysiology.4 Earlier theories include the
electrolyte disturbance and serum fluid theory, which was revised in
the late 1960s. This theory suggested that lower amounts of sodium,
haemoglobin, and albumin were present during manic episodes,
while those levels were increased in depression.6 Hochman et al.
revisited this theory in 2014, and found evidence of an imbalance in
both electrolytes and fluid homeostasis throughout bipolar episodes.6
The monoamine hypothesis theorises that depression results from
underactivity or deficiency of monoamines such as dopamine,
serotonin, and noradrenaline, while mania occurs with excess of these
monoamines.4 This theory arose in the 1970s as the dopaminergic
theory, hypothesising that additional dopamine contributed to mania
while deficits led to depressive episodes.7 Current research, however,
has isolated serotonin as the crucial monoamine involved in the
neurophysiology of depression.8 It is owing to this prevailing
understanding that selective serotonin re-uptake inhibitors (SSRIs),
which increase synaptic serotonin availability by blocking its re-uptake, are commonly used as
antidepressants in the management of mood disorders.3,9,10
Furthermore, management of BD with lithium has created a further
chemical imbalance theory. Lithium is thought to be neuroprotective
against urea toxicity, which has been proposed as a pathophysiological
mechanism underlying the clinical presentation of BD.11,12
The neurodevelopmental theory suggests that differences in cerebral
pathophysiology are the cause of mood disorder development. It is
proposed that a combination of genetic predisposition and
adverse events early in life, such as insults during gestation, could
develop into psychosis in adolescence or early adulthood,
when normal maturational changes ‘unmask’ the insult.13 Such
neurodevelopmental conditions involve functional abnormalities,
such as accelerated volume loss in brain areas essential for mood
regulation, and dysregulation of glial-neuronal connections.14
Neuroimaging studies, most commonly magnetic resonance imaging
(MRI), show larger lateral ventricles in patients suffering multiple
bipolar episodes compared to those who experience a single episode,
or healthy controls.14
Genetic theories go a step further in that they map the incidence and
patterns of BD inheritance. Data collected from twin studies suggests
the current heritability of BD to be between 60 and 80%; however,
this statistic is much lower in family studies and larger patient
cohorts.15 Clinicians have since found that positive family histories of
BD are rare in everyday practice, and it is uncommon to have
multiple family members affected over several generations.15
However, genetic risk factors alone increase susceptibility with small
to moderate effects, indicating the role of non-genetic risk factors,
such as alcohol, drug dependence, and physical and sexual abuse in
the aetiology of BD.15
A single nucleotide polymorphism in the CACNA1C gene is the most replicated
and studied polymorphism for BD, appearing in all ethnic populations
(present in 31% of European, 6% of Asian, and 56% of African
populations).15 However, since many individuals who carry this allele
are considered healthy, further research is required to confirm
CACNA1C’s contribution to the pathology of BD.15 Any genetic account of BD
must explain both inherited vulnerability and episodic recurrence.
Linkage regions on chromosomes 18 and 21q are currently being studied for
their role in genetic vulnerability to the development of BD.3,16-19
Diagnosing bipolar disorder
While symptom complexity, duration, and intensity vary greatly, a
high index of clinical suspicion for BD is necessary before a diagnosis can
be substantiated (Figure 1).5 Generally, an initial
presentation is described as episodes of mania or depression
separated by periods that are deemed ‘well’.5 However, during these
symptom-free periods patients may still experience decreased
functioning, which can be detected via MRI to further support a
diagnosis of BD.5
The diagnosis is further supported when the patient presents with
mood-related symptoms such as depression, mood swings, irritability,
anxiety, or difficulty sleeping.5 The Diagnostic and Statistical Manual
of Mental Disorders 5th Edition (DSM-5) criteria specifically define the
manic, hypomanic, and depressive states that must be identified
during BD diagnosis. Manic episodes are defined as distinct periods of
abnormally and persistently elevated or irritable mood, with increased
goal-directed activity or energy, present for most of the day, nearly every
day, for at least one week.5 At least three other symptoms, such as inflated
self-esteem, decreased need for sleep, talkativeness, flight of ideas,
increased distractibility, or excessive involvement in activities with
painful consequences, must also be present, and the disturbance must be
severe enough to cause marked impairment of functioning or to necessitate
hospitalisation.5 The episode must not be attributable to substances such as
illicit drugs, medication, or alcohol.5
Hypomanic episodes are defined as a distinct period of abnormally and
persistently elevated or irritable mood, with increased goal-directed
activity or energy, lasting at least four consecutive days and present for
most of the day, nearly every day.5 Similarly, at least three of the
additional symptoms listed above should be noted.5 However, the episode
should not be severe enough to cause marked impairment or hospitalisation.5
Hypomanic episodes are often overlooked, and may incorrectly be deemed a
symptom-free period.5
Depressive episodes are defined as five or more symptoms present in
the same two-week period, showing a change from previous
functioning, with at least one of the symptoms being either depressed
mood or loss of interest.5 The mood should be present for most of the
day and nearly every day, with a markedly decreased interest in
activities, unintentional weight loss, insomnia or hypersomnia,
psychomotor agitation or retardation, fatigue, feelings of
worthlessness or inappropriate guilt nearly every day, diminished
ability to think or concentrate, and/or recurrent thoughts of death.5
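The duration, symptom-count, and impairment thresholds above are essentially decision rules, and can be sketched as a minimal, purely illustrative classifier. The data structure and function below are hypothetical teaching aids, not a clinical instrument, and deliberately omit many DSM-5 specifiers.

```python
from dataclasses import dataclass

@dataclass
class EpisodeFeatures:
    elevated_or_irritable_mood: bool  # abnormally elevated or irritable mood with increased energy
    duration_days: int                # consecutive days the mood disturbance has lasted
    additional_symptoms: int          # count of symptoms such as decreased need for sleep, flight of ideas
    marked_impairment: bool           # severe enough to impair functioning or require hospitalisation
    substance_attributable: bool      # explained by illicit drugs, medication, or alcohol

def classify_episode(f: EpisodeFeatures) -> str:
    """Label an episode using the simplified DSM-5 thresholds described in the text."""
    if f.substance_attributable or not f.elevated_or_irritable_mood:
        return "neither"
    # Mania: at least one week, at least three additional symptoms, marked impairment
    if f.duration_days >= 7 and f.additional_symptoms >= 3 and f.marked_impairment:
        return "manic"
    # Hypomania: at least four days, same symptom threshold, without marked impairment
    if f.duration_days >= 4 and f.additional_symptoms >= 3 and not f.marked_impairment:
        return "hypomanic"
    return "neither"
```

For example, a four-day elevated period with three additional symptoms and no marked impairment would be labelled hypomanic, matching the criteria above.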
Currently, BD is underdiagnosed due to caution throughout the
diagnostic process, to avoid incorrect labelling.3,20 This often results in
manic episodes being overlooked and undermanaged; the same is
true for mixed states, hypomania, and/or bipolar depression.3 Community
studies of diagnostic patterns estimate that only around 60% of hospitalised
patients diagnosed with BD by researchers also received a confirmed
clinical diagnosis.3
FIGURE 1: The diagnostic process for bipolar disorder. Case-based tools such as the mood disorder questionnaire (MDQ) and the composite international diagnostic interview (CIDI) are used to differentiate mood disorders.5 The pathway proceeds as follows: suspect bipolar disorder on presentation with depressive symptoms and risk factors such as a history of manic or hypomanic symptoms; verify the suspicion with a patient interview (detailed personal history of the onset, frequency, and severity of mood symptoms, plus family and social history) and case-based finding tools (e.g., M3, MDQ, CIDI); and confirm the diagnosis with a detailed clinical interview using DSM-IV-TR/DSM-5 criteria, determining bipolar I versus bipolar II and assessing for mixed states. Where there is evidence of suicide risk, harm to self or others, or comorbidities, refer for specialist/collaborative care; otherwise, continue management in primary care.
Management of bipolar disorder
After accurately confirming a BD diagnosis, identifying an
appropriate treatment strategy is crucial, as symptomatic
improvement generally takes up to two weeks for antipsychotics and
antidepressants, in comparison to a seven-day delay of therapeutic
effects with mood stabilisers.21-23 Diagnostic challenges and
concomitant delays may result in ineffective treatment for initial
episodes, and the potential for biological and psychological ‘scars’
from these early episodes to linger, further complicating BD.2-4 It has
been found that only 50-60% of patients treated with mood
stabiliser monotherapy respond adequately, meaning that the rest of
the patient cohort will require polypharmacy or multiple treatment
modalities, such as pharmacotherapy in conjunction with cognitive
behavioural therapy (CBT).4,24 Currently, the mainstays of
pharmacologic treatment include medications such as mood
stabilisers (lithium, valproate, lamotrigine, and carbamazepine),
antidepressants, and an array of atypical antipsychotics, as
outlined below.25
Non-pharmacologic therapy
Non-pharmacologic therapies, such as electroconvulsive therapy,
magnetic seizure therapy, magnetic stimulation, sleep deprivation,
bright light therapy, and psychosocial interventions, have been seen
as efficacious alternatives to current pharmacological methods.26
Psychotherapy is one of the most commonly utilised adjuvants, as it
has been proven to be more efficacious than medication alone in
mood relapse prevention.26 The Systematic Treatment Enhancement
Program for Bipolar Disorder (STEP-BD), involving 15 clinical
sites, found that recovery from depressive episodes was 110 days
faster when various psychosocial interventions were used as
adjuncts to pharmacotherapy, in comparison to minimal use of
psychosocial treatment.26 Such psychosocial therapies include
psychoanalytic psychotherapy, psychodynamic psychotherapy, and
psychoeducation.26
CBT is also a useful adjuvant in the prevention of mood episode
relapse.26 Randomised controlled trials (RCTs) have shown lower rates
of relapse in both adolescents and adults with a diagnosis of BD
receiving CBT.26 Furthermore, studies of CBT showed that its
therapeutic use resulted in a 50% reduction in the number of days
with depressed mood or antidepressant dose alterations, overall
resulting in positive outcomes for BD patients.26
Lithium
Mood stabilisers such as lithium have been the foundation of BD
treatment for over 60 years due to their serotonergic effects, with a
recorded six-fold reduction in suicide rates compared to no
treatment.3,11,27 RCTs between 1970 and 2006 have shown lithium to
be efficacious in the prevention of relapses.2,28 Lithium remains the
only agent to reduce the risk of suicide by more than 50%.2,29,30 However,
lithium must not be discontinued abruptly, as abrupt withdrawal promotes
rapid relapse and places patients at increased risk of suicide
attempts.29,31
Lithium induces multiple biochemical and molecular effects on
neurotransmitter and receptor-mediated signalling, signal
transduction cascades, circadian and hormonal regulation, gene
expression, and ion transport.11 Based on a 40-year review, lithium
dosage should be between 600 and 2,400mg/day to achieve an ideal
serum level between 0.8 and 1.2mEq/L.4 Side effects of lithium
treatment include polyuria, polydipsia, weight gain, gastrointestinal
(GI) upset, acne, hypothyroidism and/or hyperthyroidism.4,30 Due to
its narrow therapeutic index, patients require regular monitoring of
serum hormone and electrolyte levels, along with serum drug
concentration.2,30
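Because of this narrow therapeutic index, routine monitoring amounts to comparing each measured serum concentration with a fixed target range, conventionally quoted as 0.8-1.2mEq/L. A minimal sketch of that check, assuming this convention; the function name and return messages are illustrative only, not part of any clinical protocol.

```python
# Illustrative range check for serum lithium monitoring. The thresholds follow
# the conventional 0.8-1.2 mEq/L target; real practice follows local protocols.
THERAPEUTIC_RANGE_MEQ_L = (0.8, 1.2)

def interpret_serum_lithium(level_meq_l: float) -> str:
    """Classify a measured serum lithium level against the target range."""
    low, high = THERAPEUTIC_RANGE_MEQ_L
    if level_meq_l < low:
        return "subtherapeutic"
    if level_meq_l > high:
        return "above target"
    return "within range"
```

A level of 1.0mEq/L would fall within range, while 1.5mEq/L would flag a toxicity risk warranting dose review.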
Anticonvulsants
Initial research that led to the use of anticonvulsants as first-line BD
treatment was concentrated on lithium-resistant patients suffering
from rapid cycling mixed states, with or without additional
substance abuse.3 Sodium valproate is a popular antiepileptic mood
stabiliser used in combination with lithium for symptomatic
improvement and maintenance from the time of first mood
episode.2 However, lithium still remains more effective than
valproate alone.2,32 Interestingly, observational studies have found that
patients on valproate require additional medications and therapy switching
at a higher rate than patients receiving lithium.2,33
Valproate has been approved for mania as well as mixed or irritable
mania and rapid cycling; however, further studies are required to
clarify valproate’s long-term usage.4,26 The general dose for its
immediate-release preparations ranges from 500-3,500mg/day in
two divided doses, whereas extended-release products are once
daily at similar doses, with a target plasma level between 75 and
125µg/mL.4 Side effects include sedation, diarrhoea, weight gain,
alopecia, tremor, and elevated liver enzymes.4 However, more
serious side effects such as thrombocytopenia, leukopenia,
pancreatitis, and hepatotoxicity should also be closely monitored
for.2,4 In addition, both lithium and valproate are known teratogens,
making prescribing difficult in young women of child-bearing age;
this led to a re-issued FDA ‘black box’ warning in 2016, as well as
renewed warnings via the Health Products Regulatory Authority
(HPRA).2,4,34
The antiepileptic lamotrigine is the mood stabiliser with the most
evidence for relapse prevention in BD.2,35 Data on its use in acute mania
are inconsistent; however, a meta-analysis has since shown efficacy at
doses higher than 200mg/day.35,36 Side effects of lamotrigine include rash
and Stevens-Johnson-type syndromes, which may be fatal.2,37-39
Atypical antipsychotics
Atypical antipsychotics have an established role for the treatment of
acute mania, assisting with psychosis and insomnia.2,4,25 However,
monotherapy with these agents is not supported, apart from
quetiapine, which is often prescribed on its own.25 Quetiapine has
also been shown to reduce the symptoms associated with depression
in acute mixed episodes of BD.40 The side effect profile of
antipsychotics is well characterised, specifically weight gain with
olanzapine, risperidone, and quetiapine, and weight loss with
ziprasidone or topiramate (20-400mg).2,4,41 They also pose risks to
patients’ metabolic function, specifically glucose regulation.4,42
Antidepressants
The use of antidepressants in BD is controversial.2,43 A random-effects
meta-analysis showed no significant difference in their efficacy between
unipolar depression and BD.44 Notably,
antidepressant use was associated with a 2.5% weekly risk rate of
conversion to mania.2,44 Interestingly, the FDA has not approved
antidepressants for use in BD, apart from fluoxetine when in
combination with olanzapine.2 SSRIs, except paroxetine and
bupropion, may be used as first-line therapy in patients with no
previous history of rapid cycling and with absence of manic
symptoms.2 However, SSRIs should always be used in conjunction
with either a mood stabiliser or atypical antipsychotic.2
Since patients with BD are more conscious of depressive episodes and their
associated symptoms, compared to mania or hypomania, treatment often
requires antidepressants in all cycles.3 This creates clinical concern, as
studies have shown that patients treated with concurrent antidepressants
and lithium are twice as likely to experience manic episodes than those
treated for mania with lithium alone.45-47 Bupropion and paroxetine are
two antidepressants shown to have a lower risk of provoking manic
episodes when in combination with lithium, in comparison to tricyclic
antidepressants such as amitriptyline.3,48 However, even with such benefits
they still appear to have long-term risks of inducing rapid cycling, and thus
caution must be exercised when prescribing these agents.3
References
1. World Health Organisation. Investing in Mental Health. 2003. [Internet].
Available from: http://www.who.int/mental_health/media/investing_mnh.pdf.
2. Jann MW. Diagnosis and treatment of bipolar disorders in adults: a review
of the evidence on pharmacologic treatments. Am Health Drug Benefits.
2014;7(9):489-99.
3. Goodwin FK, Ghaemi SN. Bipolar disorder. Dialogues Clin Neurosci.
1999;1(1):41-51.
4. Hilty DM, Brady KT, Hales RE. A review of bipolar disorder among adults.
Psychiatr Serv. 1999;50(2):201-13.
5. Culpepper L. The diagnosis and treatment of bipolar disorder: decision-
making in primary care. Prim Care Companion CNS Disord. 2014;16(3).
6. Hochman E, Weizman A, Valevski A et al. Association between bipolar
episodes and fluid and electrolyte homeostasis: a retrospective longitudinal
study. Bipolar Disord 2014;16(8):781-9.
7. Ashok AH, Marques TR, Jauhar S et al. The dopamine hypothesis of bipolar
affective disorder: the state of the art and implications for treatment. Mol
Psychiatry. 2017;22(5):666-79.
8. Hall RH. Theories of depression. 1998. [Internet]. [cited 2018 August 15].
Available from: http://web.mst.edu/~rhall/neuroscience/07_disorders/
depression_theory.pdf.
9. Schildkraut JJ. The catecholamine hypothesis of affective disorders: a
review of supporting evidence. Am J Psychiatry. 1965;122(5):509-22.
10. Bunney WE Jr, Davis JM. Norepinephrine in depressive reactions. A review.
Arch Gen Psychiatry. 1965;13(6):483-94.
11. Machado-Vieira R, Manji HK, Zarate CA Jr. The role of lithium in the
treatment of bipolar disorder: convergent evidence for neurotrophic effects
as a unifying hypothesis. Bipolar Disord. 2009;11(Suppl. 2):92-109.
12. Gropman AL, Summar M, Leonard JV. Neurological implications of urea
cycle disorders. J Inherit Metab Dis. 2007;30(6):865-79.
13. Arango C, Fraguas D, Parellada M. Differential neurodevelopmental
trajectories in patients with early-onset bipolar and schizophrenia
disorders. Schizophr Bull. 2014;40(Suppl. 2):S138-46.
14. Maletic V, Raison C. Integrated neurobiology of bipolar disorder. Front
Psychiatry. 2014;5:98.
15. Kerner B. Genetics of bipolar disorder. Appl Clin Genet. 2014;7:33-42.
16. Berrettini WH, Ferraro TN, Goldin LR et al. A linkage study of bipolar
illness. Arch Gen Psychiatry. 1997;54(1):27-35.
17. Stine OC, Xu J, Koskela R et al. Evidence for linkage of bipolar disorder to
chromosome 18 with a parent-of-origin effect. Am J Hum Genet.
1995;57(6):1384-94.
18. Straub RE, Lehner T, Luo Y et al. A possible vulnerability locus for bipolar
affective disorder on chromosome 21q22.3. Nat Genet. 1994;8(3):291-6.
19. Detera-Wadleigh SD, Badner JA, Goldin LR et al. Affected-sib-pair analyses
reveal support of prior evidence for a susceptibility locus for bipolar
disorder, on 21q. Am J Hum Genet. 1996;58(6):1279-85.
20. Ghaemi SN, Sachs GS, Chiou AM, Pandurangi AK, Goodwin FK. Is bipolar
disorder underdiagnosed? Are antidepressants overutilized? J Affect Disord.
1999;52(1-3):135-44.
A global call for renewed action
With approximately 450 million people worldwide suffering from mental or
behavioural disorders, it is clear that further work is required to improve
mental health across the globe.1
The WHO has published ‘Investing in Mental Health: Evidence
for Action’, which identifies new areas of interest and strategies
for improving psychological well-being on an international
scale.49
The WHO has also emphasised that this global disease burden is
further accentuated by poor service availability and access,
highlighting the issues of economic development and social welfare
as additional areas in dire need of intervention.
This report has called on governments worldwide to address these
and other barriers, and to ensure that multidisciplinary mental
health initiatives are made an investment and health sector
priority.49
Conclusion
Unfortunately, lifetime management of BD is extremely challenging due to
its dynamic and fluctuating course. It is crucial for healthcare providers to
help limit the burdens of BD for both patients and their caregivers. While
pharmacological methods are currently the mainstay of treatment, it is
essential to consider drug efficacy and associated risk profiles when
initiating treatment, so as to prevent further complications or declines in
mental state. Most recently, renewed global interest in research into the
diagnosis and management of mental health conditions is poised to help
alleviate the stigma of mental illness, expand access to care, and improve
patient outcomes. Through multidimensional efforts, a hopeful
future is in sight for patients afflicted with mental health disorders. Bipolar
disorder remains a topic of interest for many healthcare providers and
communities alike, and will continue to demand collaborative,
compassionate, evidence-based strategies until suitable patient outcomes
have been achieved.
Page 90 | Volume 12: Number 1. 2019
21. Tylee A, Walters P. Onset of action of antidepressants. BMJ.
2007;334(7600):911-2.
22. Mousavi SG, Rostami H, Sharbafchi MR et al. Onset of action of atypical
and typical antipsychotics in the treatment of acute psychosis. J Res Pharm
Pract. 2013;2(4):138-44.
23. Gould TD, Chen G, Manji HK. Mood stabilizer psychopharmacology. Clin
Neurosci Res. 2002;2(3-4):193-212.
24. Chiang KJ, Tsai JC, Liu D et al. Efficacy of cognitive-behavioral therapy in
patients with bipolar disorder: A meta-analysis of randomized controlled
trials. PLoS One. 2017;12(5):e0176849.
25. Derry S, Moore RA. Atypical antipsychotics in bipolar disorder: systematic
review of randomised trials. BMC Psychiatry. 2007;7:40.
26. Naik SK. Management of bipolar disorders in women by nonpharmacological
methods. Indian J Psychiatry. 2015;57(Suppl. 2):S264-74.
27. Tondo L, Jamison KR, Baldessarini RJ. Effect of lithium maintenance on
suicidal behavior in major mood disorders. Ann N Y Acad Sci.
1997;836:339-51.
28. Geddes JR, Burgess S, Hawton K et al. Long-term lithium therapy for
bipolar disorder: systematic review and meta-analysis of randomized
controlled trials. Am J Psychiatry. 2004;161(2):217-22.
29. Baldessarini RJ, Tondo L, Floris G et al. Reduced morbidity after gradual
discontinuation of lithium treatment for bipolar I and II disorders: a
replication study. Am J Psychiatry. 1997;154(4):551-3.
30. Geddes JR, Miklowitz DJ. Treatment of bipolar disorder. Lancet.
2013;381(9878):1672-82.
31. Suppes T, Baldessarini RJ, Faedda GL et al. Risk of recurrence following
discontinuation of lithium treatment in bipolar disorder. Arch Gen
Psychiatry. 1991;48(12):1082-8.
32. BALANCE investigators and collaborators, Geddes JR et al. Lithium plus
valproate combination therapy versus monotherapy for relapse prevention
in bipolar I disorder (BALANCE): a randomised open-label trial. Lancet.
2010;375(9712):385-95.
33. Kessing LV, Hellmund G, Geddes JR et al. Valproate v. lithium in the
treatment of bipolar disorder in clinical practice: observational nationwide
register-based cohort study. Br J Psychiatry. 2011;199(1):57-63.
34. Iqbal MM, Sohhan T, Mahmud SZ. The effects of lithium, valproic acid,
and carbamazepine during pregnancy and lactation. J Toxicol Clin Toxicol.
2001;39(4):381-92.
35. Vieta E, Locklear J, Gunther O et al. Treatment options for bipolar
depression: a systematic review of randomized, controlled trials. J Clin
Psychopharmacol. 2010;30(5):579-90.
36. Geddes JR, Calabrese JR, Goodwin GM. Lamotrigine for treatment of
bipolar depression: independent meta-analysis and meta-regression of
individual patient data from five randomised trials. Br J Psychiatry.
2009;194(1):4-9.
37. American Psychiatric Association. Practice guideline for the treatment of
patients with bipolar disorder (revision). Am J Psychiatry. 2002;159(4
Suppl.):1-50.
38. Powell-Jackson PR, Tredger JM, Williams R. Hepatotoxicity to sodium
valproate: a review. Gut. 1984;25(6):673-81.
39. Seo HJ, Chiesa A, Lee SJ et al. Safety and tolerability of lamotrigine: results
from 12 placebo-controlled clinical trials and clinical implications. Clin
Neuropharmacol. 2011;34(1):39-47.
40. Suppes T, Ketter TA, Gwizdowski IS et al. First controlled treatment trial of
bipolar II hypomania with mixed symptoms: quetiapine versus placebo. J
Affect Disord. 2013;150(1):37-43.
41. Edwards SJ, Smith CJ. Tolerability of atypical antipsychotics in the
treatment of adults with schizophrenia or bipolar disorder: a mixed
treatment comparison of randomized controlled trials. Clin Ther.
2009;31(Pt 1):1345-59.
42. Newcomer JW. Second-generation (atypical) antipsychotics and metabolic
effects: a comprehensive literature review. CNS Drugs. 2005;19(Suppl.
1):1-93.
43. Ghaemi SN, Ostacher MM, El-Mallakh RS et al. Antidepressant
discontinuation in bipolar depression: a Systematic Treatment Enhancement
Program for Bipolar Disorder (STEP-BD) randomized clinical trial of
long-term effectiveness and safety. J Clin Psychiatry. 2010;71(4):372-80.
44. Vazquez G, Tondo L, Baldessarini RJ. Comparison of antidepressant
responses in patients with bipolar vs. unipolar depression: a meta-analytic
review. Pharmacopsychiatry. 2011;44(1):21-6.
45. Prien RF, Klett CJ, Caffey EM Jr. Lithium prophylaxis in recurrent affective
illness. Am J Psychiatry. 1974;131(2):198-203.
46. Prien RF, Kupfer DJ, Mansky PA et al. Drug therapy in the prevention of
recurrences in unipolar and bipolar affective disorders. Report of the NIMH
Collaborative Study Group comparing lithium carbonate, imipramine, and
a lithium carbonate-imipramine combination. Arch Gen Psychiatry.
1984;41(11):1096-104.
47. Quitkin FM, Kane J, Rifkin A et al. Prophylactic lithium carbonate with and
without imipramine for bipolar 1 patients. A double-blind study. Arch Gen
Psychiatry. 1981;38(8):902-7.
48. Sachs GS, Lafer B, Stoll AL et al. A double-blind trial of bupropion versus
desipramine for bipolar depression. J Clin Psychiatry. 1994;55(9):391-3.
49. World Health Organisation. Investing in Mental Health: Evidence for Action. 2013. [Internet]. Available from: http://apps.who.int/iris/bitstream/handle/10665/87232/9789241564618_eng.pdf?sequence=1.
staff review
Abstract
Depression is a complex illness that significantly contributes to the global burden of disease. The two main
symptoms are depressed mood and anhedonia; however, depression can manifest with a wide range of
psychological, behavioural, and physical symptoms. This diversity of symptoms, as well as the reluctance
of many patients to reveal their depressed feelings, can make it difficult for healthcare practitioners to
diagnose. Depression is especially common in patients with chronic physical illnesses, such as diabetes
mellitus (DM), as the management regimen for DM is complex and usually involves significant lifestyle
changes. Having one or more chronic illnesses is the greatest risk factor for depression, and having
depression is also a major risk factor for developing those physical illnesses. Importantly, depression is
largely treatable as there are many management options available to patients, including psychotherapy,
pharmacological treatments, electroconvulsive therapy, and community support groups. Beyond this
wide range of treatments, increasing awareness of the prevalence of depression, and of how it can
negatively impact patient health when comorbid with other illnesses, can improve the efficacy of
primary care interactions and allow healthcare organisations and policymakers to better address the
issue. This can increase diagnostic rates and ultimately strengthen the support provided to such patients.
Ola M. Michalec
RCSI medical student
The intersection of depression and chronic illness
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 91-95.
Introduction
Depression is a complex and medically relevant illness. According to
the Global Health Estimates made by the World Health Organisation
(WHO) in 2015, depression affects 4.4% of the global population.1
That amounts to a total of 322 million people living with depression,
and the numbers are increasing.1 In a span of ten years, from 2005 to
2015, the prevalence of depression increased by approximately
19%.1-4 Globally, depression contributed 7.5% to all years lived with
disability (YLDs) in 2015, making it the greatest contributor to
non-fatal disease burden.1 Additionally, depression contributes to
approximately 800,000 suicides annually.1 The increasing prevalence
and morbidity of depression highlight the need for both current and
future healthcare practitioners (HCPs) to thoroughly understand the
effect that depression has on a patient’s quality of life. As Harvey
Cushing (1869-1939) said: “A physician is obligated to consider more
than a diseased organ, more even than the whole man – he must view
the man in his world”. It is the responsibility of HCPs to take the time
to gain a thorough understanding of the realities faced by patients,
mentally and physically, in order to optimise and strengthen the care
and support provided. It is essential to realise that patients coping
with depression often face new limits regarding their capabilities and
well-being, which can prove difficult and may impact on other
illnesses.
Depression is highly prevalent among patients with chronic illnesses,
such as diabetes mellitus (DM), with 9-23% of patients presenting
with comorbid depression, depending on the illness.5 However,
depression is left undiagnosed in approximately 42-48% of patients
who have a chronic physical illness.6-8 This review aims to emphasise
that increasing awareness of the signs and symptoms of depression,
as well as its classification as a mental illness, optimises the support
that patients with chronic physical illness receive at both an
individualised and community-wide level. It is through this
understanding and effort that HCPs can strive to improve the mental
health of patients worldwide.
Depression: clinical signs and symptoms
Depression most commonly presents with two main symptoms: a
deep feeling of sadness (depressed mood) and a loss of interest or
pleasure in regular activities (anhedonia).9 However, depression can
manifest itself in a variety of ways, including through psychological,
behavioural, and physical symptoms (Table 1). If untreated,
depression carries a higher risk of suicide.10 The duration and severity
of symptoms, which can persist from weeks to years, determine the
difference between major depressive disorder (MDD) and dysthymia.
MDD is defined by the Diagnostic and Statistical Manual of Mental
Disorders – 5th Edition (DSM-5) as having five or more of the nine
outlined symptoms (Table 2) nearly every day in the same two-week
period, at least one of which must be depressed mood or
anhedonia.9,10 Dysthymia, or chronic depression, is defined as the
persistence of the outlined symptoms of MDD for over two years,
even though patients often present less severely.9,10
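The MDD symptom-count rule above (five of nine symptoms, including at least one core symptom) can be sketched programmatically. The snippet below is purely illustrative and not a clinical tool: the symptom labels are invented for the example, and the DSM-5 duration, impairment, and exclusion criteria are not modelled here.

```python
# Illustrative sketch of the DSM-5 symptom-count rule for MDD:
# five or more of nine symptoms, at least one of which must be
# depressed mood or anhedonia. Duration (two weeks), functional
# impairment, and exclusion criteria are assessed clinically and
# are deliberately not modelled.

CORE = {"depressed_mood", "anhedonia"}
ALL_SYMPTOMS = CORE | {
    "appetite_or_weight_change", "sleep_disturbance",
    "psychomotor_change", "fatigue", "worthlessness_or_guilt",
    "impaired_concentration", "thoughts_of_death",
}

def meets_symptom_criterion(reported: set) -> bool:
    """True if >= 5 of the 9 symptoms are present, including a core one."""
    present = reported & ALL_SYMPTOMS
    return len(present) >= 5 and bool(present & CORE)

print(meets_symptom_criterion(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "impaired_concentration", "worthlessness_or_guilt"}))  # True
print(meets_symptom_criterion(
    {"fatigue", "sleep_disturbance", "impaired_concentration",
     "worthlessness_or_guilt", "psychomotor_change"}))  # False: no core symptom
```

Note that the same counting structure distinguishes a qualifying symptom set from one that, despite five symptoms, lacks a core feature.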
Table 1: Common psychological, behavioural, and physical symptoms associated with depression.10

Psychological symptoms
Depressed mood
Irritability/anxiety/nervousness
Difficulty concentrating
Lack of motivation
Anhedonia: lack of pleasure in doing things
Indecisiveness
Feelings of helplessness
Reduced libido
Sensitivity to rejection and criticism
Perfectionism/obsessiveness

Behavioural symptoms
Crying spells
Interpersonal friction/confrontation
Anger outbursts
Avoiding situations that may cause anxiety
Decreased productivity
Social withdrawal
Substance use/abuse
Avoiding emotional and sexual intimacy
Workaholic tendencies

Physical symptoms
Fatigue
Dull, heavy, or slow feelings in arms or legs
Insomnia or hypersomnia
Change in appetite
Significant weight gain or loss
Erectile dysfunction
Difficulties with sexual arousal
Pains and aches
Muscle tension
Gastrointestinal upset
Heart palpitations

Even when aware of the psychological, behavioural, and physical symptoms, diagnosing depression remains challenging. The stigma
associated with depression, as well as how it varies cross-culturally, are
important factors to consider, as they may impair the willingness of
patients to admit they are experiencing emotional distress.10,11 Since
patients are often reluctant to come forward regarding their
depressed emotions, offering the opportunity for a patient to raise
concerns, and referring them to a psychologist if necessary, is
effective.12-14 By enhancing the quality of communication with
patients who are depressed, HCP–patient interactions can become
more meaningful.13 In addition to the wide range of symptoms,
associated stigma, and reluctance of patients to come forward,
diagnosing depression can be further complicated by its association
with other conditions, in particular chronic physical illnesses that can
manifest with similar symptoms.
Depression, chronic illness, or both?
Chronic physical illnesses, also known as non-communicable diseases,
are not transmitted from person to person, tend to be of long
duration, and usually progress slowly.1 On a global scale, chronic
physical illnesses are responsible for 71% of deaths worldwide, or 41
million deaths annually.1 Based on the World Health Survey (WHS)
conducted by the WHO, which surveyed 250,000 participants from
60 countries, 9-18% of those with one or more chronic physical
illnesses also had depression (Table 3).5 Depression is significantly
more prevalent in patients with a chronic physical illness compared to
those without, and the greatest risk factor for depression is having
multiple comorbid diagnoses.5,15,16 Additionally, comorbidity
decreases health status in elderly, female, less educated, low-income,
and unemployed patient groups.5 Therefore, the comorbidity of
depression and common chronic physical illnesses, like cardiovascular
disease, cancer, respiratory diseases, arthritis, and diabetes, is an
important issue for HCPs to consider when managing patients with
these conditions.10,17
Moreover, depression itself is a risk factor for physical illness.12 Having
depression either on its own or in conjunction with another condition
is significantly associated with poorer health status.12 This is of
particular importance as patients who have depression together with
a chronic physical illness are often left undiagnosed by physicians,
which can lead to worse health outcomes.5,18,19
Diabetes mellitus as a comorbid condition of depression
DM is a chronic physical illness commonly associated with
depression.20-22 In the long term, if not well controlled, DM may lead
to complications that can be a source of significant distress for many
patients.23 The majority of patients with DM have type 2 DM, which
more frequently occurs in overweight or obese patients.24,25
Additionally, the management regimen for diabetes is complex, time
intensive, and can prove emotionally distressing for patients, as many
elements of the treatment are often carried out by the patient
themselves.26 When diagnosed with DM, an individual will be
counselled to adopt significant lifestyle changes including a healthier
diet, engaging in a regular exercise regimen, and counting and
regulating carbohydrates. The comorbidity between diabetes and
depression can often impair adherence to treatment regimens and
compromise glycaemic control, leading to further complications and
poor health outcomes.20,21,27
The prevalence of depression in people with diabetes is three times as
high for patients with type 1 diabetes and twice as high for patients
with type 2 diabetes compared to those without.21,28 This is a major
concern given that depression significantly increases the risk of
mortality in patients with diabetes.22 Furthermore, patients with diabetes
who are under the age of 65, female, or not married are more likely to become
depressed.21,28 Since patients with diabetes often present to a large team of
HCPs multiple times a year, or at the very least for an annual review, there
are many opportunities for engagement, education, and supportive healthcare
interactions.

Table 2: DSM-5 diagnostic criteria for major depressive disorder (MDD).

Major depressive disorder
Having five or more of the following nine symptoms nearly every day in the
same two-week period:
Note: Must include at least one of depressed mood or anhedonia.
1. Depressed mood (subjective OR observed)
2. Anhedonia: loss of interest or pleasure in all, or almost all, activities,
most of the day
3. Change in appetite or significant weight loss or gain
4. Insomnia or hypersomnia
5. Observed psychomotor agitation or retardation
6. Fatigue/loss of energy
7. Feelings of worthlessness or excessive guilt
8. Impaired concentration
9. Thoughts of death or suicidal ideation or attempt

Table 3: The prevalence of comorbid depression among patients with a given
chronic illness was higher than the prevalence of that illness on its own.5

Disease       Prevalence of the disease alone    Prevalence of comorbid depression
Depression    3.2%                               –
Diabetes      2.0%                               9.3%
Asthma        3.3%                               18.1%
Arthritis     4.1%                               10.7%
Angina        4.5%                               15%

Improving mental health: current treatments and increasing awareness
Recovery takes time, but depression is often a treatable illness. Management
options are highly individualised, and vary based on symptom severity.
Patients with depression can receive support at the primary care level, as
well as from support groups in the community. Cognitive behavioural therapy
(CBT), a type of psychotherapy, is a treatment option that focuses on changing
patients’ thoughts and behaviours to help manage depression.29 CBT is a highly
effective treatment modality and significantly reduces relapse rates in
comparison to pharmacological treatments such as selective serotonin reuptake
inhibitors (SSRIs).29 Most patients with depression are managed with a
combination of therapies involving both psychotherapy and pharmacological
treatment.29 In severe cases of depression, or in patients with
treatment-resistant depression, electroconvulsive therapy (ECT) may be
offered.30,31 ECT involves passing small electric currents through the brain
to induce brief seizures. While controversial, ECT is thought to be one of the
most effective treatments for depression.30,31 Additional management options
for depression exist beyond those mentioned here, including a spectrum of
individualised and community-based support programmes.
In addition to the variety of treatment options available, increasing mental
health awareness is particularly important due to the increased likelihood of
depression in patients with a chronic physical illness.5,15,16 Ultimately, the
delivery of mental health interventions should be integrated into all levels
of healthcare provision: on a small scale, with every primary care patient–HCP
interaction, and on a larger scale, through health systems policy
changes.32,33 Currently, global models such as the Chronic Care Model (CCM),
implemented by the WHO, aim to decrease the global burden of chronic illness
by improving the delivery of healthcare to patients with chronic conditions at
many levels.34 Such an approach would prove valuable in improving patient
access to mental health support. It is by increasing awareness of the negative
effects that depression can have on patient health, especially when comorbid
with other chronic illnesses, that strides can be made to improve
multidisciplinary psychological support at the individual, healthcare
organisation, and policy-making levels.

Conclusion
Depression is both a serious illness and an important public health concern.
It is crucial that HCPs are aware of the psychological, behavioural, and
physical symptoms of depression so that they can use each patient interaction
to screen for them. Depression can be difficult to diagnose, as it is often
comorbid with chronic physical illnesses that may present with similar
symptoms. DM is one chronic illness commonly associated with depression, and
the complex interplay between these two illnesses can have lasting
psychological and physical implications. Therefore, it is important that
patients with chronic physical illnesses receive optimal emotional and
psychosocial support to address depression, and to prevent it from resulting
in poorer health outcomes. Finally, advancements in individualised treatment
options are needed to decrease the prevalence and impact of depression.
Through improved awareness of the association between depression and chronic
physical illness, and enhanced mental health support for patients at every
level of care, steps can be taken to decrease the global burden of depression
complicating chronic physical illness.

References
1. World Health Organisation. Depression and Other Common Mental Disorders:
Global Health Estimates. Geneva: World Health Organisation. 2017. [Internet].
Available from: https://www.who.int/mental_health/management/depression/prevalence_global_health_estimates/en/.
2. Ustun TN, Ayuso-Mateos JL, Chatterji S, Mathers C, Murray CJL. Global
burden of depressive disorders in the year 2000. Br J Psychiatry.
2000;184:386-92.
3. Ferrari AJ, Charlson FJ, Norman RE, Patten SB, Freedman G, Murray CJ et al.
Burden of depressive disorders by country, sex, age and year: findings from
the global burden of disease study 2010. PLoS Med. 2013;10(11):e1001547.
4. Ferrari A, Somerville AJ, Baxter A, Norman R, Patten S, Vos T et al. Global
variation in the prevalence and incidence of major depressive disorder: a
systematic review of the epidemiological literature. Psychol Med.
2013;43(3):471-81.
5. Moussavi S, Chatterji S, Verdes E, Tandon A, Patel V, Ustun B. Depression,
chronic diseases, and decrements in health: results from the World Health
Surveys. Lancet. 2007;370:851-8.
6. Li C, Ford E, Zhao G, Ahluwalia I, Pearson W, Mokdad A. Prevalence and
correlates of undiagnosed depression among US adults with diabetes: The
Behavioural Risk Factor Surveillance System, 2006. Diabetes Res Clin Pract.
2009;83(2):268-79.
7. Lecrubier Y. Widespread underrecognition and undertreatment of anxiety
and mood disorders: results from 3 European studies. J Clin Psychiatry.
2007;68(Suppl. 2):36-41.
8. Katon WJ, Simon G, Russo J, Von KM, Lin EH, Ludman E. Quality of
depression care in a population based sample of patients with diabetes
and major depression. Med Care. 2004;42:1222-9.
9. American Psychiatric Association. Diagnostic and Statistical Manual of
Mental Disorders, Fifth Edition (DSM-5). American Psychiatric Association;
Washington, DC, 2013.
10. Cassano P, Fava M. Depression and public health: An overview. J
Psychosom Res. 2002;53(4):849-57.
11. Chowdhury AN, Sanyal D, Bhattacharya A, Dutta SK, De R, Banerjee S et
al. Prominence of symptoms and level of stigma among depressed
patients in Calcutta. J Indian Med Assoc. 2001;99(1):20-3.
12. Pollock K. Maintaining face in the presentation of depression: constraining
the therapeutic potential of the consultation. Health (London). 2007;11:163-80.
13. Gask L, Rogers A, Oliver D, May C, Roland M. Qualitative study of patients’
perceptions of the quality of care for depression in general practice. Br J
Gen Pract. 2003;53:278-83.
14. Beverly E, Ganda O, Ritholz M, Lee Y, Brooks K, Lewis-Schroeder N et al.
Look who’s (not) talking: diabetic patients’ willingness to discuss self-care
with physicians. Diabetes Care. 2012;35(7):1466-72.
15. Katon W, Schulberg H. Epidemiology of depression in primary care. Gen
Hosp Psychiatry. 1992;14(4):237-47.
16. Harpole LH, Williams JW Jr, Olsen MK et al. Improving depression
outcomes in older adults with comorbid medical illness. Gen Hosp
Psychiatry. 2005;27(1):4-12.
17. Chapman DP, Perry GS, Strine TW. The vital link between chronic disease
and depressive disorders. Prev Chronic Dis. 2005;2(1):A14.
18. Andrews G. Should depression be managed as a chronic disease? BMJ.
2001;322(7283):419-21.
19. Luber MP, Hollenberg JP, Williams-Russo P et al. Diagnosis, treatment,
co-morbidity, and resource utilization of depressed patients in general
medical practice. Int J Psych Med. 2000;30(1):1-13.
20. Ciechanowski PS, Katon WJ, Russo JE. Depression and diabetes: impact of
depressive symptoms on adherence, function and costs. Arch Intern Med.
2000;160(21):3278-85.
21. Egede LE, Zheng D, Simpson K. Comorbid depression is associated with
increased health care use and expenditures in individuals with diabetes.
Diabetes Care. 2002;25(3):464-70.
22. Park M, Katon WJ, Wolf FM. Depression and risk of mortality in individuals
with diabetes: a meta-analysis and systematic review. Gen Hosp Psychiatry.
2013;35(3):217-25.
23. Ismail K, Moulton C, Winkley K, Pickup J, Thomas S, Sherwood R et al. The
association of depressive symptoms and diabetes distress with glycaemic
control and diabetes complications over 2 years in newly diagnosed type 2
diabetes: a prospective cohort study. Diabetologia. 2017;60(10):2092-102.
24. Liu N, Brown A, Folias A, Younge M, Guzman S, Close K et al. Stigma in
people with type 1 or type 2 diabetes. Clin Diabetes. 2017;35(1):27-34.
25. Browne J, Ventura A, Mosely K, Speight J. ‘I call it the blame and shame
disease’: a qualitative study about perceptions of social stigma surrounding
type 2 diabetes. BMJ Open. 2013;3(11):e003384.
26. Tanenbaum ML, Kane NS, Kenowitz J, Gonzalez JS. Diabetes distress from
the patient’s perspective: Qualitative themes and treatment regimen
differences among adults with type 2 diabetes. J Diabetes Complications.
2016;30(6):1060-8.
27. Van Bastelaar K, Pouwer F, Geelhoed-Duijvestijn P, Tack C, Bazelmans E,
Beekman A et al. Diabetes-specific emotional distress mediates the
association between depressive symptoms and glycaemic control in type 1
and type 2 diabetes. Diabet Med. 2010;27(7):798-803.
28. Roy T, Lloyd CE. Epidemiology of depression and diabetes: a systematic
review. J Affect Disord. 2012;142(Suppl.):S8-21.
29. Sudak DM. Cognitive behavioural therapy for depression. Psychiatr Clin
North Am. 2012;35(1):99-110.
30. Oremus C, Oremus M, McNeely H et al. Effects of electroconvulsive
therapy on cognitive functioning in patients with depression: protocol for
a systematic review and meta-analysis. BMJ Open. 2015;5(3):e006966.
31. Antunes PB, Rosa MA, Belmonte-de-Abreu PS et al. Electroconvulsive
therapy in major depression: current aspects. Braz J Psychiatr.
2009;31(Suppl. 1):S26-33.
32. Katon W, Guico-Pabia CJ. Improving quality of depression care using
organized systems of care: a review of the literature. Prim Care
Companion CNS Disord. 2011;13(1):PCC.10r01019blu.
33. Von Korff M, Gruman J, Schaefer J et al. Collaborative management of
chronic illness. Ann Intern Med. 1997;127(12):1097-102.
34. Epping-Jordan JE, Pruitt SD, Bengoa R, Wagner EH. Improving the quality of
health care for chronic conditions. Qual Saf Health Care. 2004;13:299-305.
staff review
Abstract
Chronic kidney disease (CKD) is an indolent yet unrelenting condition that affects 10% of the global
population. CKD often progresses to end-stage renal disease (ESRD) and caused 1.2 million deaths in
2017. The prevalence of CKD continues to rise, with increasingly stark health and economic consequences.
Current treatment seeks to delay the onset of ESRD, with kidney transplantation representing the only
potentially curative option. However, donor organ supplies are unpredictable, and as demand continues to
increase, waiting list times are increasingly untenable for late-stage patients. Transplantation is also
associated with complications, immunosuppression, and unending expense. Therefore, alternatives or
adjuncts are desperately needed. One particularly attractive option is the rejuvenation of a patient’s own
failing organs via whole tissue bioengineering. Utilising the techniques of decellularisation, wherein a
surgically excised organ is stripped of its dysfunctional cells ex vivo, leaving an extracellular matrix
scaffold, and recellularisation, wherein the scaffold is repopulated with the patient’s own in
vitro-derived stem cells prior to re-implantation, whole tissue bioengineering may improve patient
outcomes. However, it remains an iterative practice that has yet to achieve translational relevance,
primarily due to the difficulty of applying animal-derived research findings to human clinical practice.
In general, scaling established protocols from animal to human kidneys remains challenging. Due to the
compartmentalised structure of the kidneys, methods of establishing viable cellular growth remain to be
refined to the extent necessary for clinical trials. This review will explore the remaining challenges facing
whole-organ tissue bioengineering in the context of CKD.
Vladimir Djedovic
RCSI medical student
Bioengineered transplants: stem cell reversal of chronic kidney disease
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 96-100.
Introduction to CKD
Chronic kidney disease (CKD) is a complex condition defined by
progressive loss of kidney function over months to years.1 Initially, patients
are asymptomatic or may present with non-specific clinical symptoms
such as fatigue, loss of appetite, or confusion.1 Diagnosis relies on clinical
parameters such as creatinine clearance, reduced estimated glomerular
filtration rate, and altered urinary albumin excretion rate.2 The prime risk
factors for CKD include type 1 or 2 diabetes, a family history of kidney
disease, and hypertension.3 However, it ultimately arises from a complex
interplay of genetic, lifestyle, and environmental factors.3
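As an aside on these clinical parameters: creatinine clearance is classically estimated from serum creatinine using the Cockcroft-Gault equation. The sketch below implements that textbook formula for illustration only; contemporary practice typically reports eGFR via equations such as CKD-EPI, and validated laboratory calculators should always be preferred.

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) via Cockcroft-Gault.

    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for female patients.
    Illustrative only; not a substitute for laboratory-reported eGFR.
    """
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# A 60-year-old, 72 kg man with a serum creatinine of 1.0 mg/dL:
print(round(cockcroft_gault(60, 72, 1.0, female=False), 1))  # 80.0
```

The 72 kg weight is chosen to make the arithmetic transparent: (140 − 60) × 72 / (72 × 1.0) = 80 mL/min, a value within the normal range and well above the thresholds at which CKD is staged.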
At present, CKD affects 750 million people worldwide and caused 1.2 million
deaths in 2017.4,5 As prevalence continues to rise unabated, the economic
implications for international health systems are stark.6,7 For example, CKD
costs the British National Health Service (NHS) £1.5bn, the US Government
$48bn, and the Irish Government €120m annually, representing 6-7% of total
healthcare expenditure.6-8
Current treatments
Initial treatment of CKD consists of adjusting lifestyle habits to manage risk
factors, and focused interventions to preserve nephron function.9 As 80%
of CKD patients have hypertension, this involves limiting sodium and
alcohol intake, increasing exercise, and smoking cessation.10 Further, for
patients with comorbidities such as hyperlipidaemia and type 1 or 2
diabetes mellitus, managing these conditions is a paramount goal of early
therapy.10 When lifestyle modifications are insufficient, pharmacological
intervention commences with angiotensin-converting enzyme (ACE)
inhibitors or angiotensin II receptor blockers, and progresses to
beta-adrenergic receptor blockers, calcium channel blockers, diuretics,
and vasodilators.11 However, these efforts are not curative, and the stated
objective is merely to delay the often inevitable onset of end-stage renal
disease (ESRD).11
ESRD is temporarily treated by replacing renal function through
haemodialysis or peritoneal dialysis.12 The management of water and
electrolyte balance, and the removal of blood-borne waste products such as
creatinine and urea, are well established through these methods, allowing
patients to survive for many years.12 However, the immunomodulatory
and endocrine functions of the kidney are absent and unrestored by
dialysis.13 Quality of life is also diminished due to the frequency of
treatment, as it is generally performed 156 times per annum.12 As a result,
associated costs for haemodialysis and peritoneal dialysis reach up to
$90,000 US or €60,000 annually, per patient, as treatment typically
occurs in an outpatient setting requiring specialised equipment and
personnel.5,6,13,14 Successful haemodialysis also frequently requires the
surgical formation of arteriovenous shunts (sites of unimpeded blood flow
between arteries and veins), which serve as access points to the patient’s
circulation.15 Complications of shunt formation include thrombosis,
ischaemic neuropathy, infection, aneurysm, and lymphoedema, which
further obscure the clinical picture of CKD sufferers.15 Haemodialysis and
peritoneal dialysis are undoubtedly useful tools for preserving the patient’s life
while they await a donor kidney, but this expensive and complication-prone
procedure does not represent a long-term solution; every year spent on
dialysis increases relative risk of all-cause mortality by 7%.16,17
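As an illustrative aside, the arithmetic behind these figures can be sketched in a few lines. The three-sessions-per-week schedule and the multiplicative compounding of the 7% annual risk increase are assumptions made here for clarity, not statements from the cited studies:

```python
# Illustrative sketch only: assumes a conventional three-session-per-week
# haemodialysis schedule, and that the quoted 7% annual increase in
# relative mortality risk compounds multiplicatively.

SESSIONS_PER_WEEK = 3
sessions_per_year = SESSIONS_PER_WEEK * 52
print(sessions_per_year)  # 156, matching the figure quoted in the text

def relative_risk(years_on_dialysis: int, annual_increase: float = 0.07) -> float:
    """Relative risk of all-cause mortality after a given number of
    years on dialysis, under the compounding assumption above."""
    return (1 + annual_increase) ** years_on_dialysis

print(f"{relative_risk(5):.2f}")   # 1.40 after five years
print(f"{relative_risk(10):.2f}")  # 1.97 after ten years
```

Under this reading, a decade of dialysis would nearly double a patient's relative risk of death, which underlines why dialysis is a bridge to transplantation rather than a destination.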
Kidney transplantation
The only potentially curative treatment for CKD is a kidney transplant.18
Ireland performed a record number of organ transplants in 2018.19
However, this occurred in the midst of an increasing patient backlog due
to organ shortages.19 Globally, the supply of donor kidneys is
unpredictable, and patients typically wait over three years to receive an
organ.19 Moreover, once the kidney is transplanted there are considerable
risks and considerations. The immunosuppressive drugs required to prevent
organ rejection, which keep organ-damaging antibody titres within
acceptable ranges, increase susceptibility to post-transplant infection and
malignancy. Concurrently, these medications impose a financial
expenditure of up to €10,000 annually per patient in Ireland and $15,000
in the United States.8,19 While a successful transplant yields an improved
quality of life, most healthcare systems are unable to meet the demand for
organs due to the increasing prevalence of CKD.19 Indeed, 13 patients die
awaiting a donor, while 110 patients are added to transplant lists, each day.19
In this climate, it is clear that the capacity of transplantation and dialysis
procedures alone is insufficient to deal with the growing issue of CKD.
Innovative or supplementary methods of replacing native kidney function
are urgently needed. One promising method may be through
rejuvenating organs using a process called whole organ tissue
bioengineering.20 Whole organ tissue bioengineering – a process by which
a patient’s or donor’s organs are stripped of defective cells and then
repopulated with host-derived stem cells, to restore function – holds the
promise of maximising the potential of transplant surgery by reducing
organ shortages, and by providing a potentially life-saving resource free
of the complications and associated costs that currently plague organ
transplantation in general.20 Fortunately, the means to actualise this
technology are drawing closer.6
Whole organ tissue bioengineering
Whole organ tissue bioengineering is the process of using chemical,
biological, and physical means to regenerate biological tissues.20 Broadly, the
method consists of several steps: the damaged organ is removed from a
patient, stripped of its dysfunctional cells, repopulated with the patient’s own
stem cells, and implanted back into the original patient (Figure 1).20,21
RCSIsmj staff review
Page 98 | Volume 12: Number 1. 2019
Regenerating organs in this manner begins by first creating a tissue
scaffold.20,21 Tissue scaffolds are three-dimensional (3D) extracellular
matrix (ECM) structures containing the complete anatomy, cues, and
homeostatic signals necessary for cellular function, proliferation, and
organ ultrastructure.22 Scaffolds are generated by the process of organ
decellularisation.22 Decellularisation is the gradual removal of cells from an
organ’s ECM backbone with gentle detergents, enzymes, and other stimuli
such as physical agitation.22 The purpose of this process is to generate a
biological ‘blank slate’, which can theoretically support new, healthy cell
growth.22,23 In the decellularised state, organ scaffolds appear as hollow or
translucent versions of the original organ itself.22,23 The methodological
challenges of decellularisation have largely been surmounted, and various
organ-specific scaffold-generating protocols now exist.23 For example, initial
studies in rat kidney decellularisation guided the way to the recent
achievements in viable human kidney scaffold production.24 Recently, even
entire human limbs have been successfully decellularised, leaving the ECM
and anatomical features intact.25
Once a scaffold is cleared of defective cells, the methodological challenge
of re-establishing functional cell growth on the organ begins. Effective
recellularisation is currently the bottleneck for development of
bioengineered organs.26 The process broadly consists of perfusing stem
cells through the acellular scaffold by slowly propelling them through the
preserved vestiges of the vascular framework, or other channels such as
the ureters.26 This is currently accomplished using tissue bioreactors,
devices that both preserve and mature scaffolds, and provide the chemical
and physical regulatory signals required to induce appropriate stem cell
differentiation therein.26 At present, recellularisation faces several key
challenges that present a roadblock to clinical application.
Today’s hurdle – challenges in whole organ tissue bioengineering
Challenges to recellularising the human kidney result from: (1) its
complex structure; and (2) its large size relative to experimental
models.13 The primary difficulties researchers face at present include:
identifying appropriate cells for recellularisation; establishing suitable
blood flow to allow cell growth; and identifying reproducible means of
seeding cells throughout scaffolds.13
The functional unit of the kidney is the nephron. Anatomically, each
nephron contains a proximal and distal tubule, loop of Henle, and
collecting ducts. These components are each lined by regionally
specialised epithelial cells with unique functional properties, a
basement membrane composed of endothelial cells enclosed by
podocytes, and mesangial cells, which exhibit both immune and
structural activities.13 As such, these crucial cell types must be either
produced ex vivo through cell culture methods, or in vivo via the
differentiation of pluripotent stem cells.13
Currently, induced pluripotent stem cells are the most heavily studied
option since, in contrast to differentiated cell lines, they can expand
readily in vivo, do not require costly continual cell culture growth
methods, and have been demonstrated to develop into renal cells in
animal models.13 Importantly, they can be procured autologously from
the patient, bypassing the need for immunosuppression.13 Indeed,
nearly 160 bioengineered tracheas, blood vessels, and bladders have
been successfully transplanted into humans without need for
immunosuppressive medication.28,29 However, these organs all bypass
the need for establishing vascular access within the patient since they
are able to meet cellular metabolic requirements through diffusion due
to their thin-walled structures.28,29
staff review
FIGURE 1: Process of whole-organ tissue bioengineering using de- and recellularisation in a tissue bioreactor. First, the native human kidney is removed from the patient; functionally deficient cells are then removed to generate a tissue scaffold, using chemical reagents, ionic detergents, and physical agitation; next, cells grown in vitro or in vivo are seeded into the scaffold using a tissue bioreactor; finally, the recellularised organ is ready for transplantation.27
In contrast, tissue bioengineering in the human kidney primarily hinges
on establishing sufficient blood perfusion to the organ’s various
structural compartments.30-32 As oxygen diffuses only up to 100-200µm
through tissues, sustainable 3D cell growth is limited to within this
proximity of the capillaries.20 In smaller bioengineered tissues, such as
the trachea, oxygen can diffuse sufficiently throughout the entire
structure, permitting cell growth and in vivo viability.13 In others, such
as bioengineered skin grafts, vascular access can be achieved through
anastomosis with native vasculature due to the sheet-like structure of
the graft itself.13,20 However, for more complex viscera, the presence of
an oxygen saturation gradient that decreases further away from the
capillaries leads to phenotypic differences in cell number, spatial
viability, and maturation.20 For example, as kidneys have a convoluted
and compartmentalised structure with differences in regional blood
flow, less vascularised areas of the organ will have less robust cell growth
and may consequently necrose.13,20 Rat kidney scaffold recellularisation
trials have recently achieved nearly 100% viable cell coverage, leading
to the production of rudimentary urine.33 Yet, human scaffold
repopulation currently remains difficult, partly due to avascular cellular
necrosis resulting from ischaemia and hypoxia.33 However,
recellularisation of kidney scaffolds has been recently scaled up to rhesus
monkeys, representing a major milestone in primate tissue
bioengineering.34 Recent work has also identified the paramount need
to first seed endothelial cells into the scaffolds, as endothelial cells
undergo angiogenesis and generate the capillaries necessary to support
colonisation of further cellular subtypes.13 Recent studies are currently
refining these protocols, but preliminary results indicate that, following
endothelial cell perfusion, admixtures of stem cells are more likely to
implant successfully.35
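The diffusion constraint described above can be made concrete with a minimal sketch. The 200µm cut-off is the upper bound of the range quoted in the text, and the cell positions are hypothetical values chosen for illustration:

```python
# Minimal sketch of the oxygen diffusion limit that constrains 3D cell
# growth in thick scaffolds; the 200 micrometre cut-off is the upper
# bound quoted in the text, and the cell positions are hypothetical.

DIFFUSION_LIMIT_UM = 200  # upper bound of the 100-200 micrometre range

def viable(distance_to_capillary_um: float) -> bool:
    """A seeded cell can only be sustained if it lies within the oxygen
    diffusion limit of a perfused capillary."""
    return distance_to_capillary_um <= DIFFUSION_LIMIT_UM

# Hypothetical distances (micrometres) from the nearest capillary:
distances = [50, 120, 180, 250, 400]
print([viable(d) for d in distances])  # [True, True, True, False, False]
```

Any cell further than the diffusion limit from a perfused capillary cannot be sustained, which is why endothelial seeding and capillary formation must precede colonisation by other cell types.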
As the kidney relies on vascular access for both its glomerular filtration
function and its endocrine role, it is imperative to achieve sufficient cellular
viability through adequate blood flow.13 Complete recellularisation must
also be achieved before the first human trials, because uncovered ECM
components, such as collagen, can activate the coagulation cascade,
leading to thrombosis and tissue damage.13 Achieving 100%
recellularisation is therefore necessary to limit the risk of thrombogenic
events, especially when treating vulnerable
patient populations.13,20 While animal models can currently survive
indefinitely with anti-coagulation therapy, it is essential to limit this risk in
patients with comorbidities that may predispose to vascular events;
indeed, this is a feature of the majority of individuals with CKD.13,20
Consequently, a major challenge in the development of bioengineered
kidneys is identifying appropriate routes for cell perfusion.36 The kidney has
tripartite access for recellularisation due to its complete venous, arterial,
and ureteric architecture. As a result, multiple routes and strategies for
seeding cells currently exist.36 The most commonly used method relies on
impelling cells solely through the renal artery.33,36 This technique has been
used to recellularise rat kidneys sufficiently to produce rudimentary
urine.33,36 At present, this is the subject of much active research.
Conclusions
The growing burden of CKD is driving demand for dialysis and renal
transplantation to unprecedented levels and overwhelming current
capacities. Whole organ tissue bioengineering offers an attractive,
cutting-edge alternative to conventional treatments, and seeks to
overcome the pitfalls of inherent organ shortages, mounting costs, and
unavoidable complications. Indeed, optimism is called for, as a majority of
the main technological hurdles – developing scaffolds, recellularising
animal organs, and demonstrating functionality in vivo – have been
successfully surmounted. It is true that organ engineering remains largely
an iterative practice, requiring continual refinement and troubleshooting,
but that will no doubt remain true long beyond the first human trials.
While three key challenges remain – namely, full organ recellularisation,
codifying seeding methods, and establishing in vivo vascular access –
recent successes suggest that clinical trials are drawing near, with the
potential to revolutionise how we treat CKD.
References
1. Centers for Disease Control and Prevention. National Chronic Kidney Disease
Fact Sheet. 2017. [Internet]. [Accessed 2017 June 7]. Available from:
https://www.cdc.gov/diabetes/pubs/pdf/kidney_factsheet.pdf.
2. Wouters OJ, O’Donoghue DJ, Ritchie J, Kanavos PG, Narva AS. Early chronic
kidney disease: diagnosis, management and models of care. Nat Rev
Nephrol. 2015;11(8):491-502.
3. Kazancioglu R. Risk factors for chronic kidney disease: an update. Kidney Int
Suppl. 2011;3(4):368-71.
4. World Health Organization. Preventing chronic diseases: a vital investment.
2005. [Internet]. Available from:
http://www.who.int/chp/chronic_disease_report/full_report.pdf.
5. Kerr M, Bray B, Medcalf J, O’Donoghue D, Matthews B. Estimating the
financial cost of chronic kidney disease to the NHS in England. Nephrol Dial
Transplant. 2012;27(3):73-80.
6. Honeycutt AA, Segel JE, Zhuo X, Hoerger TJ, Imai K, Williams D. Medical costs
of CKD in the Medicare population. J Am Soc Nephrol. 2013;24(9):1478-83.
7. Houston M. Number of people needing kidney dialysis rises by 50%. The
Irish Times. 2009. [Internet]. Available from: https://www.irishtimes.com/
news/health/number-of-people-needing-kidney-dialysis-rises-by-50-1.719332.
8. Gordon EJ, Prohaska TR, Sehgal AR. The financial impact of
immunosuppressant expenses on new kidney transplant recipients. Clin
Transplant. 2008;22(5):738-48.
9. Xie X, Liu Y, Perkovic V, Li X, Ninomiya T, How W et al. Renin-angiotensin
system inhibitors and kidney and cardiovascular outcomes in patients with
CKD: a Bayesian network meta-analysis of randomized clinical trials. Am J
Kidney Dis. 2015;67(5):728-41.
10. Levey AS, Schoolwerth AC, Burrows NR, Williams DE, Stith KR, McClellan W.
Comprehensive public health strategies for preventing the development,
progression, and complications of CKD: report of an expert panel convened by
the Centers for Disease Control and Prevention. Am J Kidney Dis. 2009;53(3):522-35.
11. Toto RD. Treatment of hypertension in chronic kidney disease. Semin
Nephrol. 2005;25(6):435-9.
12. Rocco MV. More frequent hemodialysis: back to the future? Adv Chronic
Kidney Dis. 2007;14(3):e1-9.
13. Madariaga MLL, Ott HC. Bioengineering kidneys for transplantation. Semin
Nephrol. 2014;34(4):384-93.
14. National Kidney Foundation. Dialysis. 2015. [Internet]. Available from:
https://www.kidney.org/atoz/content/dialysisinfo.
15. Stolic R. Most important chronic complications of arteriovenous fistulas for
hemodialysis. Med Princ Pract. 2013;22(3):220-8.
16. Chertow GM, Johansen KL, Lew N, Lazarus JM, Lowrie EG. Vintage, nutritional
status, and survival in hemodialysis patients. Kidney Int. 2000;57(3):1176-81.
17. Braun L, Sood V, Hogue S, Lieberman B, Copley-Merriman C. High burden
and unmet patient needs in chronic kidney disease. Int J Nephrol Renovasc
Dis. 2012;5:151-63.
18. Organization for Economic Co-operation and Development. OECD health
statistics 2018. 2018. [Internet]. Available from:
http://www.oecd.org/els/health-systems/health-data.htm.
19. National Kidney Foundation. Organ donation and transplantation statistics.
2014. [Internet]. Available from: https://www.kidney.org/news/newsroom/
factsheets/Organ-Donation-and-Transplantation-Stats.
20. Lovett M, Lee K, Edwards A, Kaplan DL. Vascularization strategies for tissue
engineering. Tissue Eng Part B Rev. 2009;15(3):353-70.
21. Caralt M, Uzarski JS, Iacob S, Obergfell KP, Berg N, Bijonowski BM et al.
Optimization and critical evaluation of decellularization strategies to develop
renal extracellular matrix scaffolds as biological templates for organ
engineering and transplantation. Am J Transplant. 2015;15(1):64-75.
22. Jain A, Bansal R. Applications of regenerative medicine in organ
transplantation. J Pharm Bioallied Sci. 2015;7(3);188-94.
23. Gilbert TW. Strategies for tissue and organ decellularization. J Cell Biochem.
2012;113(7):2217-22.
24. Ott HC. Perfusion decellularization of discarded human kidneys: a valuable
platform for organ regeneration. Transplantation. 2015;99(9):1753.
25. Gerli MFM, Guyette JP, Evangelista-Leite D, Ghoshhajra BB, Ott HC. Perfusion
decellularization of a human limb: a novel platform for composite tissue
engineering and reconstructive surgery. PLoS One. 2018;13(1):e0191497.
26. Scarritt ME, Pashos NC, Bunnell BA. A review of cellularization strategies for
tissue engineering of whole organs. Front Bioeng Biotechnol. 2015;3:43.
27. Destefani AC, Sirtoli GM, Nogueira BV. Advances in the knowledge about
kidney decellularization and repopulation. Front Bioeng Biotechnol. 2017;5:34.
28. Orlando G, Soker S, Stratta RJ. Organ bioengineering and regeneration as
the new Holy Grail for organ transplantation. Ann Surg. 2013;258(2):221-32.
29. Orlando G, Di Cocco P, D’Angelo M, Clemente K, Famulari A, Pisani F.
Regenerative medicine applied to solid organ transplantation: where do we
stand? Transplant Proc. 2010;42(4):1011-3.
30. Folkman J, Hochberg M. Self-regulation of growth in three dimensions.
J Exp Med. 1973;138(4):745-53.
31. DesRochers TM, Palma E, Kaplan DL. Tissue-engineered kidney disease
models. Adv Drug Deliv Rev. 2014;0:67-80 [published online].
32. Rosines E, Johkura K, Zhang X et al. Constructing kidney-like tissues from
cells based on programs for organ development: toward a method of in vitro
tissue engineering of the kidney. Tissue Eng Part A. 2010;16(8):2441-55.
33. Song JJ, Guyette J, Gilpin G, Gonzalez G, Vacanti JP, Ott HC. Regeneration
and experimental orthotopic transplantation of a bioengineered kidney.
Nat Med. 2013;19(5):646-51.
34. Nakayama KH, Batchelder CA, Lee CI, Tarantal AF. Decellularized rhesus
monkey kidney as a three-dimensional scaffold for renal tissue engineering.
Tissue Eng Part A. 2009;16(7):2207-16.
35. Xu Y, Yan M, Gong Y, Chen L, Zhao F, Zhang Z. Response of endothelial cells
to decellularized extracellular matrix deposited by bone marrow
mesenchymal stem cells. Int J Clin Exp Med. 2014;7(12):4997-5003.
36. Remuzzi A, Figliuzzi M, Bonadrini B, Silvani S, Azzollini N, Nossa R et al.
Experimental evaluation of kidney regeneration by organ scaffold
recellularization. Sci Rep. 2017;7:43502.
staff review
Abstract
Life-threatening cardiac arrhythmias are rare but serious conditions that affect children and adults alike.
Implantable cardioverter-defibrillators (ICDs) have proven an effective means of preventing sudden
cardiac death (SCD) secondary to arrhythmias, but studies regarding their use in children remain limited.
There are many considerations surrounding paediatric ICD use that may significantly impact quality of
life. Physical complications manifest in numerous forms, including inappropriate shocks, infections, and
higher frequency of device replacement. Psychosocial effects, such as depression and anxiety, and financial
repercussions may be overlooked, but are essential concerns. Potential solutions have been proposed and
developed to reduce the burden of ICDs in paediatric patients, ranging from conservative to invasive
procedures. Further studies must be conducted to clarify the effectiveness and potential negative effects of
such interventions when compared to the current risk–benefit profile of paediatric ICDs.
Jeffrey Lam
RCSI medical student
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 101-106.
Introduction
Palpitations are one of the most common
presenting complaints for patients in hospitals,
outpatient clinics, and specialist care centres.1,2
The primary complaint of roughly 16% of presenting
patients, palpitations arise from a wide range of underlying
aetiologies.2 Common
causes include cardiac arrhythmias, psychosomatic
disorders such as anxiety, medication use, caffeine
consumption, and illicit substance use, among
many others.1,2 While palpitations are often benign,
they may be due to an underlying life-threatening
condition, such as an arrhythmia that could result in
debilitating palpitations, syncope, or even death.3
Cardiac arrhythmia severity and risk of adverse
health effects range from mild, benign, and
The big and the small of it: paediatric implantable cardioverter-defibrillators
occasional discomfort to significant risk of sudden cardiac death (SCD).3
Technological advances have led to the development of numerous
strategies to combat these life-threatening conditions.
An implantable cardioverter-defibrillator (ICD) is a small battery-powered
medical device that is typically implanted subcutaneously on the anterior
chest wall below the left clavicle. First developed for clinical use in 1980,
ICDs are considered the standard first-line management for primary
prevention of SCD in at-risk patients, and secondary prevention of SCD in
patients with a history of ventricular fibrillation (VF) or ventricular
tachycardia (VT).4-9 SCD is mainly caused by sustained VF or VT, as such
arrhythmias may result in sudden cessation of cardiac activity.10
SCD secondary to cardiac arrest is among the leading causes of death
worldwide and accounts for over 50% of all cardiovascular deaths.11-13
Cardiac arrests that occur outside of the healthcare setting have an
estimated overall survival rate of only 10%.12 VF accounts for most
sudden cardiac arrest episodes and SCD, with survival declining by
nearly 10% for every minute the patient is in VF.11,13,14 Survivors of a VF
episode have a recurrence rate of up to 30%, and thus ICDs may have
an important role in secondary prevention of SCD.14,15 A meta-analysis of
three randomised trials found that ICDs led to a 50% reduction in
relative risk of arrhythmic death.16
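As a rough illustration of the time pressure these figures imply, the "nearly 10% per minute" decline can be modelled as a simple linear decrease. The linear form is an assumption made here for clarity; the cited studies do not specify the functional relationship:

```python
# Rough illustration: models the quoted ~10%-per-minute decline in VF
# survival as a linear decrease. The linear form is an assumption made
# here for clarity, not a model taken from the cited studies.

def survival_after(minutes: float, initial: float = 1.0,
                   decline_per_min: float = 0.10) -> float:
    """Estimated probability of survival after a given time in VF."""
    return max(0.0, initial - decline_per_min * minutes)

for m in (0, 3, 5, 10):
    print(m, round(survival_after(m), 2))
# Survival falls from 1.0 at collapse to roughly 0.5 by five minutes and
# approaches zero by ten minutes, underscoring why an implanted device
# that shocks within seconds is so much more effective than external
# defibrillation arriving minutes later.
```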
While the uses, effectiveness, indications, and complications of adult ICDs
have been investigated extensively, studies on paediatric ICDs remain
limited.17,18 Existing paediatric studies have found ICDs to have significant
physical, psychosocial, and financial impacts on a child’s quality of life.
Alternatives to paediatric ICDs have been proposed, but further
comparative studies are required.
What is an ICD?
An ICD is composed of a pulse generator and one to three leads for
electrodes that regulate pacing and defibrillation.5,6 The pulse generator
contains a battery, capacitors for charge storage, microprocessors for
rhythm analysis, and memory chips for data storage.5 Pacing and
defibrillation electrodes are typically located on a single lead, with
transvenous leads as the present standard.5,6 Defibrillation leads contain
coils for heat dissipation during discharges to the myocardium, as well
as bipolar electrodes for ventricular pacing and sensing.5,6
While an ICD may look similar to an artificial cardiac pacemaker, there
are key differences. Artificial pacemakers are implantable medical
devices that automatically regulate heart rate, especially in patients with
bradycardia.19 Bradycardia is defined as a resting heart rate of less than
60 beats per minute.20 In contrast, ICDs act as safeguards against
potential cardiac arrest by delivering high-energy shocks when
arrhythmias such as VF or VT are detected.21 Contemporary ICDs often
have the functional capabilities of both pacemakers and defibrillators.6,22
ICD indications in children
A study in the United States found the estimated annual incidence of
SCD in children to be roughly one in 100,000.23,24 As a result, children
and patients with congenital heart disease constitute fewer than 1% of
all ICD recipients.9,25,26 The low volume of paediatric ICD implants
results in minimal available data regarding benefits following ICD
placement, as most paediatric ICD studies are retrospective, with small
sample sizes, and conducted at individual hospitals.25,27 However,
studies show that ICD use in children is increasing.23,25,27 The average
age of children who receive ICDs varies, but typically ranges from 12 to
14 years of age.23,27-29 Indications for ICDs in children include a variety
of pathologies and conditions; one study found that 46% of paediatric
ICD patients had congenital heart disease, 23% had cardiomyopathy,
and 31% had primary electrical diseases, such as congenital long QT
syndrome.26 Idiopathic VF and a family history of SCD were also found
to be indications for ICDs in children.17,27
Physical complications of paediatric ICDs
In paediatric medicine, anatomical differences prevent the effective use
of transvenous ICD technology designed for adults.27,30 Studies have
found that there are greater instances of inappropriate shock therapy
and lead-related complications within the paediatric population, with
lead failure being a significant cause of complications and
FIGURE: An implantable cardioverter-defibrillator (ICD), showing the pulse generator and leads positioned within the heart.
malfunction.17,26,27,31-33 Infections and higher ICD replacement frequency
are also important complications to consider.
Inappropriate shocks
Supraventricular tachycardia and sinus tachycardia during physical activity
have been found to be the primary causes of inappropriate shock
delivery.34,35 To avoid inappropriate shock discharge, it is crucial for ICDs to
be able to differentiate between supraventricular tachycardia,
physiological sinus tachycardia (e.g., normal exercise), and life-threatening
VT. Treadmill exercise testing after ICD implantation may be an effective
means of identifying the appropriate high heart rate zone within a
controlled setting.18 It is suggested that the higher incidence of paediatric
ICD complications may be due to increased physical activity alongside
significant physiological growth occurring throughout childhood.9,36
Interestingly, growth as measured by height and weight changes is a
strong predictor of lead failure.31 Children will also live with their ICD for
longer periods of time compared to adults, inherently giving rise to an
increased likelihood of complications, such as lead failure and infection.
Infections
In addition to inappropriate shocks, one study found paediatric
ICD infection rates to be higher than those in adults, even though
implantation was performed by the same surgeons, in the same facility, over the same time period.18
Overall infection rates following ICD implantation range from 1-8%.37
Infections may occur during the implantation procedure, during
subsequent surgical procedures involving the ICD, or if parts of the ICD
erode through the skin over time.37 Infection of the ICD leads can also
occur via haematogenous spread of a distant infection.37 Paediatric ICD
infections were found to be more common in patients with a congenital
heart disease or defect.37 The majority of infections are superficial and thus
are successfully treated with antibiotics, but more severe infections require
device removal.37,38 It is thought that the higher infection rates may be
associated with suboptimal wound care and an insufficient rest period
prior to returning to physical activity.26,37
Higher frequency of ICD replacement
Paediatric ICDs are associated with a higher frequency of ICD
replacement.17,26,31 Dislodgment, haemothorax, pneumothorax, and superior
vena cava syndrome are additional acute complications that contribute to
paediatric morbidity.31 One study found that 26% of paediatric ICD
patients required reintervention in the form of ICD extraction or
replacement as a result of an underlying issue.39 In contrast, another study
found that 18.3% of adult ICD patients required replacement due to
device fault.40 Ultimately, small patient body size and rapid somatic growth
pose unique challenges for ICD placement in children; further work to
develop strategies to minimise these complications in children is required.
Psychosocial effects in children
Although paediatric ICDs are effective life-saving devices, the
psychological implications must be considered during patient
management. In the setting of SCD primary prevention, asymptomatic
patients and their families may find the diagnosis of a potentially
life-threatening condition to be particularly distressing, especially in
children who have a poor understanding of the overarching situation.41
Likewise, secondary prevention of VT or VF recurrence is undoubtedly
stressful following a recent traumatic episode.41 Some children and
adolescents report a feeling of helplessness and loss of control during ICD
placement, as the decision to have an ICD is essentially non-negotiable –
it is a matter of life or death.42
The fear of unpredictable inappropriate shock episodes may have a
significant psychological impact. A study that interviewed adolescents
with ICDs found that following ICD placement, their concept of
normalcy had changed; they subsequently always felt “slightly short of
normal”.42 Delivery of an ICD shock can be a distressing and physically
painful experience that leaves patients feeling frightened and anxious.42
As a result, many children with ICDs develop anxiety, with one study
reporting the development of school phobias and post-traumatic stress
disorder requiring administration of antidepressant and anxiolytic
medications.17,18,33 Some paediatric patients develop transient
depression, especially shortly following the ICD placement
procedure.41,42 Social isolation, learning difficulties, and sleep
disturbances are also prevalent in this patient demographic.33,41,43
Professional psychological support should be considered for paediatric
ICD patients, both before and after implantation, as part of the
long-term multidisciplinary management plan.17,43
In addition, paediatric ICD patients often report changes to quality of life.
As a physically active demographic, many patients stated that they were
required to immediately cease participation in nearly all competitive
sports.42 Athletic sports involving physical contact are typically
discouraged.44 Sports have innumerable positive effects on child social
development, health, and even cognitive development; abstaining from
such activities may yield unintentional social isolation.41,45-47 There are
reports of harassment and bullying from peers as a result of the ICD,
further pushing paediatric ICD patients to social isolation.42 Furthermore,
many paediatric ICD patients state that they feel overprotected due to
consistent monitoring by family, friends, and teachers.42 It is evident that
the psychosocial implications of paediatric ICDs are significant and
numerous, and must challenge healthcare providers to seek solutions to
these complex consequences.
The burden of cost
In addition to the physical and psychosocial effects, the significant financial
implications of paediatric ICDs must also be considered. One American study estimated the total ICD cost per patient at US$433,000 in 2006, including the costs of implantation, management, and complications such as device malfunction.48 Adjusted for inflation, this equates to more than US$540,500 in 2017. Furthermore, it must be considered that
the process of ICD implantation may reduce the patient’s quality of life.49
In Ireland and the UK, many insurance companies provide coverage for
ICD and pacemaker implantation.50 While insurance companies cover
specific cases of paediatric ICD implantation deemed medically necessary,
some situations are considered investigational and thus costs may not be
covered.51-54 Insurance coverage varies between the United States, UK, Ireland, and Canada, and further discrepancies arise for paediatric ICDs, which are far less common than adult ICDs. Given the high costs and the precarious nature of some insurance schemes, a paediatric ICD may simply be unaffordable for some families.
Looking to the future – potential alternatives to ICDs
In light of these considerable and varied challenges, many families may
seek alternatives to ICDs. ICD implantation in children is more complex
than in adults, often owing to children's smaller venous vessels, which put them at risk of occlusion.55 The subcutaneous ICD (SICD) is a recent
development that may be used as an alternative to traditional ICDs.56 This
is a particularly attractive option for children because SICDs do not require
a transvenous approach during lead implantation, nor do they require a
sternotomy.27,57 Avoiding a transvenous approach obviates the need to implant ICD leads within the child's smaller venous vessels. Adult studies have found SICDs to be as effective as transvenous ICDs at defibrillation.58 However, the SICD is larger than
conventional ICDs and thus may be less comfortable when implanted in
children with smaller body sizes.27 Despite this, SICDs remain a viable
alternative for children in comparison to traditional ICDs.
Alternatives to ICDs include anti-arrhythmic medications, ablation surgery,
and cardiac transplantation.6 Compared to treatment with anti-arrhythmic
medications, quality of life in adults remains unchanged or is improved
with ICD treatment.59 However, no such research on paediatric ICD
patients is currently available.59 Unfortunately, data on alternatives to paediatric ICDs are limited; this remains a niche area in urgent need of further research and innovation.
Conclusion
Ultimately, the decision to undergo ICD implantation is complicated and
the risks of SCD, implant complications, comorbidities, and decreased life
expectancy must be considered.8 While an ICD can prevent SCD caused
by VF or VT, its drawbacks must be thoroughly assessed prior to
implantation, especially in children. Children must live with the ICD for
longer, and will undergo significant growth that may increase the risk of
lead-related complications. Anxiety and depression are common in
paediatric ICD patients, and must be considered as part of the
management process. The costs of ICD implantation and maintenance are
substantial, and may place families in difficult financial situations. However,
the effectiveness of ICDs in preventing SCD is well established, and for many patients there is no viable alternative to implantation. While
some alternatives to paediatric ICDs exist, further research and developments
are needed to reduce complications and improve quality of life.
References
1. Zimetbaum PJ, Aronson MD, Melin JA. Overview of palpitations in adults.
UpToDate. 2016. [Internet]. [updated 2016; cited 2018 Jul 20]. Available
from: https://www.uptodate.com/contents/overview-of-palpitations-in-
adults?search=arrhythmia&source=search_result&selectedTitle=3~150&usa
ge_type=default&display_rank=3.
2. Raviele A, Giada F, Bergfeldt L, Blanc JJ, Blomstrom-Lundqvist C, Mont L et
al. Management of patients with palpitations: a position paper from the
European Heart Rhythm Association. EP Europace. 2011;13(7):920-34.
3. Gale CP. Assessment of palpitations. BMJ. 2016;352:h5649.
4. Mirowski M, Reid PR, Mower MM, Watkins L, Gott VL, Schauble JF et al.
Termination of malignant ventricular arrhythmias with an implanted
automatic defibrillator in human beings. N Engl J Med. 1980;303:322-4.
5. DiMarco JP. Implantable cardioverter-defibrillators. N Engl J Med.
2003;349:1836-47.
6. Ganz LI, Piccini J, Downey BC. Implantable cardioverter-defibrillators:
overview of indications, components, and functions. UpToDate. 2018.
[Internet]. [updated 2018; cited 2018 Jun 9]. Available from:
https://www.uptodate.com/contents/implantable-cardioverter-defibrillators
-overview-of-indications-components-and-functions?search=implantable%
20cardioverter%20defibrillator&source=search_result&selectedTitle=1~150
&usage_type=default&display_rank=1.
7. Myerburg RJ, Reddy V, Castellanos A. Indications for implantable
cardioverter-defibrillators based on evidence and judgment. J Am Coll
Cardiol. 2009;54(9):747-63.
8. Merchant FM, Quest T, Leon AR, El-Chami MF. Implantable
cardioverter-defibrillators at end of battery life. J Am Coll Cardiol.
2016;67(4):435-44.
9. Blom NA. Implantable cardioverter-defibrillators in children. Pacing Clin
Electrophysiol. 2008;31:S32-4.
10. Podrid PJ, Olshansky B, Manaker S, Downey BC. Pathophysiology and
etiology of sudden cardiac arrest. UpToDate. 2018. [Internet]. [updated
2018; cited 2018 Jun 10]. Available from: https://www.uptodate.com/
contents/pathophysiology-and-etiology-of-sudden-cardiac-arrest?search=
ventricular%20fibrillation&source=search_result&selectedTitle=2~150&us
age_type=default&display_rank=2.
11. Estes NA. Predicting and preventing sudden cardiac death. Circulation.
2011;124(5):651-6.
12. Al-Khatib SM, Stevenson WG, Ackerman MJ, Bryant WJ, Callans DJ, Curtis
AB et al. 2017 AHA/ACC/HRS guidelines for management of patients with
ventricular arrhythmias and the prevention of sudden cardiac death. J Am
Coll Cardiol. 2017. [Epub ahead of print].
13. Rea T, Page RL. Community approaches to improve resuscitation after
out-of-hospital cardiac arrest. Circulation. 2010;121:1134-40.
14. Chen Q, Kirsch GE, Zhang D, Brugada R, Brugada J, Brugada P et al.
Genetic basis and molecular mechanism for idiopathic ventricular
fibrillation. Nature. 1998;392:293-6.
15. Brugada J, Brugada R, Brugada P. Right bundle branch block, ST segment
elevation in leads V1-V3: a marker for sudden death in patients without
demonstrable structural heart disease. Circulation. 1997;96:1151.
16. Connolly SJ, Hallstrom AP, Cappato R, Schron EB, Kuck KH, Zipes DP et al.
Meta-analysis of the implantable cardioverter defibrillator secondary
prevention trials. Eur Heart J. 2000;21(24):2071-8.
17. Eicken A, Kolb C, Lange S, Brodherr-Heberlein S, Zrenner B, Schreiber C et
al. Implantable cardioverter defibrillator (ICD) in children. Int J Cardiol.
2006;107(1):30-5.
18. Stefanelli CB, Bradley DJ, Leroy S, Dick II M, Serwer GA, Fischbach PS.
Implantable cardioverter defibrillator therapy for life-threatening arrhythmias
in young patients. J Interv Card Electrophysiol. 2002;6:235-44.
19. Das G, Carlblom D. Artificial cardiac pacemakers. Int J Clin Pharmacol Ther
Toxicol. 1990;28(5):177-89.
20. Olshansky B, Gopinathannair R. Bradycardia. BMJ Best Practice. [Internet].
Available from: https://bestpractice.bmj.com/topics/en-gb/832.
21. Pinski SL, Fahey GJ. Implantable cardioverter-defibrillators. Am J Med.
1999;106:446-58.
22. Kang S, Snyder M. Overview of pacemakers and implantable cardioverter
defibrillators (ICDs). University of Rochester Medical Center Health
Encyclopedia. [Internet]. Available from:
https://www.urmc.rochester.edu/encyclopedia/content.aspx?contenttypeid
=85&contentid=p00234.
23. Jin BK, Bang JS, Choi EY, Kim GB, Kwon BS, Bae EJ et al. Implantable
cardioverter defibrillator therapy in pediatric and congenital heart disease
patients: a single tertiary center experience in Korea. Korean J Pediatr.
2013;56(3):125-9.
24. Cross BJ, Estes NA, Link MS. Sudden cardiac death in young athletes and
nonathletes. Curr Opin Crit Care. 2011;17(4):328-34.
25. Burns KM, Evans F, Kaltman J. Pediatric ICD utilization in the United States
from 1997-2006. Heart Rhythm. 2011;8(1):23-8.
26. Berul CI, Van Hare GF, Kertesz NJ, Dubin AM, Cecchin F, Collins KK et al.
Results of a multicenter retrospective implantable cardioverter-defibrillator
registry of pediatric and congenital heart disease patients. J Am Coll
Cardiol. 2008;51(17):1685-91.
27. Griksaitis MJ, Rosengarten JA, Gnanapragasam JP, Haw MP, Morgan JM.
Implantable cardioverter defibrillator therapy in paediatric practice: a
single-centre UK experience with focus on subcutaneous defibrillation.
Europace. 2013;15:523-30.
28. Wilson WR, Greer GE, Grubb BP. Implantable cardioverter-defibrillators in
children: a single-institutional experience. Ann Thorac Surg.
1998;65:775-8.
29. Corcia MCG, Sieira J, Pappaert G, de Asmundis C, Chierchia GB, La Meir
M et al. Implantable cardioverter-defibrillators in children and adolescents
with Brugada syndrome. J Am Coll Cardiol. 2018;71(2):148-57.
30. Stephenson EA, Batra AS, Knilans TK, Gow RM, Gradaus R, Balaji S et al. A
multicenter experience with novel implantable cardioverter defibrillator
configurations in the pediatric and congenital heart disease population. J
Cardiovasc Electrophysiol. 2006;17(1):41-6.
31. Alexander ME, Cecchin F, Walsh EP, Triedman JK, Bevilacqua LM, Berul CI.
Implications of implantable cardioverter defibrillator therapy in congenital
heart disease and pediatrics. J Cardiovasc Electrophysiol. 2004;15(1):72-6.
32. McLeod KA. Cardiac pacing in infants and children. Heart. 2010;96:1502-8.
33. Harkel ADJT, Blom NA, Reimer AG, Tukkie R, Sreeram N, Bink-Boelkens
MTE. Implantable cardioverter defibrillator implantation in children in The
Netherlands. Eur J Pediatr. 2005;164(7):436-41.
34. Korte T, Köditz H, Niehaus M, Paul T, Tebbenjohanns J. High incidence of
appropriate and inappropriate ICD therapies in children and adolescents
with implantable cardioverter defibrillator. Pacing Clin Electrophysiol.
2004;27(7):924-32.
35. Gradaus R, Wollmann C, Köbe J, Hammel D, Kotthoff S, Block M et al.
Potential benefit from implantable cardioverter-defibrillator therapy in
children and young adolescents. Heart. 2004;90:328-9.
36. Sanatani S, Cunningham T, Khairy P, Cohen MI, Hamilton RM, Ackerman
MJ. The current state and future potential of pediatric and congenital
electrophysiology. J Am Coll Cardiol. 2017;3(3):195-206.
37. Baddour LM, Epstein AE, Erickson CC, Knight BP, Levison ME, Lockhart PB
et al. Update on cardiovascular implantable electronic device infections
and their management. Circulation. 2010;121:458-77.
38. Cohen MI, Bush DM, Gaynor JW, Vetter VL, Tanel RE, Rhodes LA. Pediatric
pacemaker infections: twenty years of experience. J Thorac Cardiovasc
Surg. 2002;124:821-7.
39. DeWitt ES, Triedman JK, Cecchin F, Mah DY, Abrams DJ, Walsh EP et al.
Time dependence of risks and benefits in pediatric primary prevention
implantable cardioverter-defibrillator therapy. Circ Arrhythm Electrophysiol.
2014;7:1057-63.
40. Gould PA, Krahn AD. Complications associated with implantable
cardioverter-defibrillator replacement in response to device advisories.
JAMA. 2006;295(16):1907-11.
41. Dunbar SB, Dougherty CM, Sears SF, Carroll DL, Goldstein NE, Mark DB et
al. Educational and psychological interventions to improve outcomes for
recipients of implantable cardioverter defibrillators and their families.
Circulation. 2012;126:2146-72.
42. Ziegler VL, Nelms T. Almost normal: experiences of adolescents with
implantable cardioverter defibrillators. J Spec Pediatr Nurs.
2009;14(2):143-51.
43. Koopman HM, Vrijmoet-Wiersma CMJ, Langius JND, van den Heuvel F,
Clur SA, Blank CA et al. Psychological functioning and disease-related
quality of life in pediatric patients with an implantable cardioverter
defibrillator. Pediatr Cardiol. 2012;33:569-75.
44. Dimsdale C, Dimsdale A, Ford J, Shea JB, Sears SF. My child needs or has
an implantable cardioverter-defibrillator; what should I do? Circulation.
2012;126:e244-7.
45. Felfe C, Lechner M, Steinmayr A. Sports and child development. PLoS
One. 2016;11(5):e0151729.
46. Light RL. Children’s social and personal development through sport: a case
study of an Australian swimming club. J Sport Soc Issues.
2010:34(4);379-95.
47. Bailey R. Physical education and sport in schools: a review of benefits and
outcomes. J Sch Health. 2006;76(8):397-401.
48. Feingold B, Arora G, Webber SA, Smith KJ. Cost-effectiveness of
implantable cardioverter-defibrillators in children with dilated
cardiomyopathy. J Card Fail. 2010;16(9):734-41.
49. Sanders GD, Hlatky MA, Owens DK. Cost-effectiveness of implantable
cardioverter defibrillators. N Engl J Med. 2005;353:1471-80.
50. GloHealth. List of Cardiac Procedures. 2012. [Internet]. Available from:
https://www.123.ie/health-insurance-quote/cardiac-procedures.pdf.
51. Blue Cross Blue Shield of Massachusetts Association. Medical Policy –
Implantable Cardioverter Defibrillator. [Internet]. Available from:
https://www.bluecrossma.com/common/en_US/medical_policies/070%20I
mplantable%20Cardioverter%20Defibrillator%20prn.pdf.
52. EmblemHealth. Implantable Cardioverter Defibrillators. 2018. [Internet].
Available from:
https://www.emblemhealth.com/~/media/Files/PDF/_med_guidelines/MG
_Implantable_Cardioverter_Defibrillators.pdf.
53. PriorityHealth. Cardioverter Defibrillators. 2017. [Internet]. Available from:
https://www.priorityhealth.com/-/media/priorityhealth/documents/medical
-policies/91410.pdf?la=en.
54. Gateway Health Plan. Subcutaneous Implantable Cardioverter Defibrillator.
2017. [Internet]. Available from:
https://www.gatewayhealthplan.com/Portals/0/med_drug_policies/MC_O
H_Subcutaneous%20ICD.pdf.
55. Bové T, François K, Caluwe WD, Suys B, De Wolf D. Effective cardioverter
defibrillator implantation in children without thoracotomy: a valid
alternative. Ann Thorac Surg. 2010;89:1307-9.
56. Adragão P, Agarwal S, Barr C, Boersma L, Brock-Johanssen J, Butter C et al.
Worldwide experience with a totally subcutaneous implantable
defibrillator: early results from the EFFORTLESS S-ICD Registry. Eur Heart J.
2014;35(25):1657-65.
57. Jarman JWE, Lascelles K, Wong T, Markides V, Clague JR, Till J. Clinical
experience of entirely subcutaneous implantable cardioverter-defibrillators
in children and adults: cause for caution. Eur Heart J. 2012;33(11):1351-9.
58. Bardy GH, Smith WM, Hood MA, Crozier IG, Melton IC, Jordaens L et al.
An entirely subcutaneous implantable cardioverter defibrillator. N Engl J Med. 2010;363:36-44.
59. Sears SF, Conti JB. Implantable cardioverter-defibrillators for children and
young adolescents: mortality benefit confirmed – what’s next? Heart.
2004;90(3):241-2.
staff review
Abstract
Cardiovascular disease (CVD) is the leading cause of death worldwide. Advances in interventional
cardiology and cardiothoracic surgery, coinciding with an increasing number of patients with complex
comorbidities, have led to new hybrid treatments for cardiovascular disease. These hybrid procedures can
be used to treat coronary artery disease (CAD), arrhythmias, and valvular heart disease. The approaches aim
to utilise minimally invasive procedures with reduced complications, while maximising improved patient
outcomes. Hybrid coronary revascularisation (HCR) involves performing both coronary artery bypass
grafting (CABG) and percutaneous coronary intervention (PCI), or stenting of the coronary vessels, to treat
multi-vessel coronary disease. Patients with concomitant valvular and coronary artery disease can be treated
with a hybrid valve and PCI via minimally invasive procedures. Advantages include lower rates of cardiac
and cerebrovascular events, shortened hospital stays, and reduced blood transfusion rates. The hybrid maze
procedure combines epicardial thoracoscopic ablation surgery and endocardial catheter ablation.
Advantages of this hybrid maze procedure include improved patient recovery times, reduced mortality, and
increased success rates compared to either cardiology interventions or cardiothoracic surgery alone.
However, hybrid cardiovascular procedures are performed in only a limited number of facilities, and no
standardised methods exist at present to determine which patients will most benefit from these procedures.
Randomised controlled trials need to be performed to maximise patient benefit. While challenges remain,
these pioneering hybrid procedures offer a remarkable combination of the most effective, lowest risk, and
highest quality technologies and techniques from interventional cardiology and cardiothoracic surgery
alike. Such hybrid interventions deliver improved patient outcomes and success rates, particularly for
high-risk patients with comorbidities.
Laura Staunton
RCSI medical student
An evolving strategy for cardiovascular intervention: a hybrid approach
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 107-112.
Introduction
Cardiovascular disease (CVD) is the leading cause of mortality worldwide
and is responsible for 17.7 million deaths per annum.1 CVD includes
coronary artery disease (CAD), valvular heart disease, arrhythmias, and
congenital heart conditions.1 With the increasing prevalence of CVD, great
advances have evolved in the fields of interventional cardiology and
cardiothoracic surgery. As interventional cardiology strives to become
more invasive, cardiothoracic surgery becomes less so.2-5
Invasive cardiology procedures, including cardiac catheterisation and
angioplasty, have revolutionised the treatment of CVD.6 Since the first
percutaneous coronary intervention (PCI) in humans in 1977,
technology and expertise have continuously improved.6 In 2018,
60% of patients presenting with acute coronary syndrome (ACS)
underwent a PCI procedure.6
From the development of cardiopulmonary bypass (CPB) in 1953,
to the advancement of minimally invasive robotic-assisted techniques,
cardiac surgery has undergone a revolution of its own.4,5 With the use
of the da Vinci™ Surgical System to perform totally endoscopic
robotic coronary artery bypass grafting (TECAB), as well as atrial
fibrillation ablation, the specialty has been transformed.4
Subsequent to the significant advances in each of these two fields,
hybrid approaches to patient cardiac care have emerged, combining
the technology and skills from the catheterisation lab with the
expertise of the operating theatre.2,7-10 Hybrid procedures were
initially performed in the 1990s to accommodate the increasing
number of patients with CVD presenting with complex
comorbidities.2,7-10 Such strategies include hybrid coronary
revascularisation, hybrid valve/PCI intervention, and hybrid maze for
treating atrial fibrillation.2,7-10 These techniques combine minimally invasive surgery with stent technology to effectively treat patients with CAD or valvular disease, allowing high-risk patients with comorbid conditions to be treated successfully.
These procedures have varying success rates, but continue to
challenge boundaries and advance the fields of cardiology and
cardiothoracic surgery.
Hybrid coronary revascularisation
PCI is the standard of care for unstable angina, non-ST-elevation myocardial infarction (NSTEMI), and ST-elevation myocardial infarction (STEMI), ideally performed within 90 to 120 minutes of presentation.6,11 However, CAD is often multi-vessel and frequently occurs alongside comorbidities. Coronary artery bypass grafting (CABG) is
the gold standard for treating multi-vessel CAD, due to increased
survival rates, lower risk of MI, and improved angina relief. CABG also
reduces the need for repeat revascularisation compared to
PCI.2,3,5,7,8,12 Typically, the CABG technique involves grafting of the left
internal mammary artery (LIMA) to the left anterior descending (LAD)
artery, using either on-pump or off-pump techniques.13 The on-pump
CABG procedure involves median sternotomy with CPB and aortic
clamping of an arrested heart.14,15 A newer technique, performed
without the use of CPB, is the off-pump CABG, which utilises
specialised instruments to stabilise the beating heart.14,15 On-pump
CABG is more widely used, as the off-pump procedure is highly
specialised, more technically challenging, and can lead to incomplete
revascularisation, which would require a repeat procedure.15,16 Many
other risks are associated with CABG, including increased incidence of
atrial fibrillation and stroke.2,17
PCI has been shown to be superior for non-LAD revascularisation.
Over time, stenting has progressed from utilising bare metal stents to
inserting drug-eluting stents (DES), showing reduced complications,
morbidity, and mortality.3,6 More recently, some studies have reported
that the use of second-generation DES can be as effective as CABG for
multi-vessel coronary disease.2,12 At present, however, CABG remains
the gold standard procedure for multi-vessel revascularisation.14,15
With improved minimally invasive CABG and stent technology, hybrid coronary revascularisation (HCR), which combines LIMA-LAD grafting with DES for non-LAD vessels, has been shown to be highly effective, especially for high-risk patients with comorbidities.2,5,7,8,12,17
Staging
All hybrid procedures involving cardiothoracic surgery and
interventional cardiology must be staged. Staging refers to the
procedures occurring either on the same day in a hybrid suite, or
separated by time (days or weeks) and space (i.e., in the operating
theatre and catheterisation lab).2,5,7,8,12,17 A one-stage HCR approach
involves a hybrid suite with an operating team and cardiology team
performing the revascularisation on the same day.2,5,7,8,12,17
Advantages of a one-stage HCR procedure include reduced resources,
procedure time, and length of patient hospital stay, and fewer
postoperative complications.2,5,7,8,12,17 However, it is not without its
limitations. Dual anti-platelet therapy is required during a HCR
procedure, which can increase bleeding risk.2,5,7,8,12,17
Hypercoagulability has been associated with minimally invasive
CABG; therefore, stent thrombosis is a risk when PCI is performed at
the same time in a one-stage HCR procedure.2,5,7,8,12,17 The optimal protocol for one-stage HCR, including timing, dual anti-platelet dosing, and heparin reversal, has yet to be established.2,5,7,8,12,17
A two-stage hybrid approach for coronary revascularisation involves
performing PCI first, followed by the LIMA-LAD procedure, or vice
versa, on different days.2,5,7,8,12,17 Due to risk of bleeding from
anti-platelet therapy used in PCI, and the increased risk of
cardiovascular events, PCI followed by LIMA-LAD is rarely
performed.2,5,7,8,12,17 The majority of HCR procedures employ a
two-stage approach, with the LIMA-LAD procedure performed first in
the operating theatre, followed by PCI in the catheterisation lab days
or weeks later.2,5,7,8,12,17 Advantages include a reduction in bleeding
risk, and visualisation of the LIMA-LAD graft during angiography,
which can confirm patency.2,5,7,8,12,17
Outcomes of HCR
Current analysis of HCR procedures reveals conflicting evidence, with
some research reporting a lower rate of cardiac and cerebrovascular
events compared to CABG alone, while other studies report a similar
rate.5,12,17 Confirmed advantages include shortened hospital stays and
ventilator times, and decreased blood transfusion rates, thereby
shortening the overall recovery period.5,12,17 Disadvantages include
higher costs and longer operation times.5,12,17 As HCR has been
performed on a limited number of patients to date, and there is a
shortage of randomised controlled trials, additional evidence is
required to determine whether HCR can become the mainstay of
multi-vessel coronary disease management.5,12,17
Hybrid valve/PCI intervention
Valve surgery
Valvular heart disease is highly prevalent worldwide, and can arise via
damage following rheumatic fever, congenital cardiac defects, infection,
or valvular calcification.18 Most commonly affected are the mitral and
aortic valves.18 Mitral regurgitation occurs in approximately 19% of the population, but typically does not have a significant impact on patient mortality; it can occur secondary to cardiomyopathy or heart failure.18 Aortic valve calcification has a significant impact on patient
mortality, and in high-income countries, its incidence is increasing due to
the ageing population.18 Calcified aortic valves occur in 40% of the
population over 75 years of age, and can lead to aortic stenosis requiring
valve replacement surgery.18
Cardiac valve surgery can be performed either using minimally invasive
techniques, or via median sternotomy, which requires CPB.2,19-22 Aortic and
mitral valve disease are treated surgically with either replacement or
repair.2,19-22 Upper hemi-sternotomy is performed for aortic valve surgery,
while right mini-thoracotomy or lower hemi-sternotomy are the minimally
invasive techniques used for mitral valve surgery.2,19 There are numerous
advantages to minimally invasive valve surgery, including shorter hospital
stays, faster recovery times, and improved patient outcomes.2,19
Concomitant valve and coronary artery disease
Up to 50% of patients undergoing valve surgery also present with severe
CAD.10,19-21 Currently, the surgical treatment of choice is an open
procedure employing median sternotomy, in which the CABG and valve
replacement are performed at the same time.10,19 This high-risk
procedure involves CPB and aortic clamping, the timing of which is
crucial to prevent cardiac ischaemia.10,19,22 This procedure is associated
with high mortality rates and long hospital stays.10,19,22 If valve surgery is
required post CABG in a separate procedure, morbidity and mortality
significantly increase due to a number of complications including
adhesions, scarring, and vascular or graft injury.10,19,22
Hybrid procedure
An alternative to the surgical procedures of CABG and valve repair or
replacement is to perform a hybrid procedure in which the CABG is
replaced with PCI; this would be performed prior to the valve
surgery.2,9,19-22 This hybrid valve/PCI procedure uses minimally invasive
surgical techniques to transform a high-risk operation into two
lower-risk procedures.10,19-22 Protection of the myocardium from
ischaemia is achieved by performing PCI prior to valve surgery, which
restores blood flow to the heart.21 Encouragingly, this intervention
results in lower rates of morbidity and mortality for patients who have
had previous cardiac surgery.10,19-22
Staging
In line with HCR, staged approaches are also used for the hybrid valve/PCI
intervention. A one-stage procedure involves performing PCI prior to valve
surgery in a hybrid suite on the same day.20 Advantages include reduced
resources, time, length of stay, and postoperative complications.23 Studies
have shown a higher risk of acute kidney injury (AKI) when these
procedures are performed in the same admission, however.10,19 Other
limitations include staff and hybrid suite availability.20 In addition, PCI
procedures require anti-platelet therapy to reduce the risk of stent
thrombosis.2,5,7,8,12 It has been hypothesised that performing the valve
surgery within six hours of PCI will reduce bleeding risk, as the anti-platelet
agent will not have taken full effect; this highlights the importance of a
hybrid suite to carry out the procedure.2
A two-stage procedure involves performing PCI days or weeks prior to
valve surgery.20 Advantages include a reduced risk of AKI, no anti-platelet
requirements on the day of surgery, and reduced postoperative
complications.10,19 To decrease the risk of AKI, an interval of three weeks
has been suggested between procedures.10,19 Currently, the hybrid
approach of valve surgery/PCI would only be beneficial in those patients
who do not have significant LAD disease.10,19 The two-stage approach is
the preferred hybrid procedure, due to the lower risk of AKI and the fact
that a hybrid suite is not required.2,9,10,19
Outcomes of hybrid valve/PCI
Research shows that hybrid valve/PCI procedures have considerable
advantages for both first-time cardiac operations and patients who have
had previous cardiac surgery.10,19-22 Advantages of the hybrid approach
to valve/PCI include reduced operation times, decreased need for blood
transfusions, fewer cardiac and cerebrovascular adverse events, and
improved mortality rates.10,19-22 As with other hybrid procedures,
randomised controlled trials remain limited and will be required to
determine the optimal approach to hybrid valve/PCI intervention and its
potential for widespread application.10,19-22
Hybrid maze
Atrial fibrillation
Atrial fibrillation (AF) occurs when the atria contract rapidly and chaotically, producing unsynchronised contraction and an irregularly irregular ventricular rhythm; it carries a high risk of stroke.10,24 Patients diagnosed with AF who are
resistant to anti-arrhythmic drugs (AADs) are recommended for
pulmonary vein isolation catheter ablation or surgical
procedures.25-27 Atrial fibrillation can occur as: paroxysmal AF
(poAF), which is intermittent and usually resolves in 48 hours;
persistent AF (pAF), which persists >48 hours; or longstanding
persistent AF (LSpAF), which lasts ≥1 year.24 A successful endpoint
for either surgical or catheter-based treatment is achievement of
bidirectionally blocked signal conduction into the pulmonary veins,
or termination of AF.25-27
Endovascular catheter ablation
Endovascular catheter ablation for AF is performed by an
electrophysiologist, and has varying degrees of success due to the
use of different procedures to isolate the pulmonary veins from the
left atrium.26,27 Once isolated via a catheter inserted through the
femoral vein, radiofrequency is applied at the pulmonary vein
ostium.26,27 A linear lesion is created in the hope that the irregular signal conduction will be blocked, thus terminating the AF.25-27
Paroxysmal AF can be effectively treated using this technique, with
an 80% success rate.28 However, for pAF, endovascular ablation has
much lower success rates as it involves a more challenging
procedure.28 Durable linear lesions are more difficult to achieve, and irregular conduction outside of the pulmonary veins can lead to failure of endovascular treatment.27,28
Surgical management of AF
Surgical procedures have high success rates for poAF.29 Current
procedures include Cox Maze IV, thoracoscopic ablation, and the da Vinci™ robotic surgical system, which combines mitral valve
repair with cryomaze.11,24,26,30 Cox Maze IV is the gold standard
surgical procedure for AF, and combines surgical ablation and
incisional techniques, either via median sternotomy or right
mini-thoracotomy.30 However, Cox Maze IV is a complex procedure
with varying long-term success rates (66-96% of patients remain free
of atrial fibrillation after five years), and it is heavily dependent on
surgical skill.29,30 It is performed with CPB and has moderate
effectiveness for treating pAF and LSpAF.24,29,30 Thoracoscopic surgical
ablation eliminates the need for CPB, has improved morbidity and
mortality rates, and is successful in 70-87% of patients with poAF.29
The ability of surgical ablation to create transmural lesions is
limited.28,29 Linear lesions, which can be crucial to achieving
bidirectional block, are particularly challenging at the mitral
isthmus.26,28 These factors contribute to surgical ablation’s low
success rates in treating pAF and LSpAF.24,28,29
RCSIsmj staff review
Page 110 | Volume 12: Number 1. 2019
Hybrid maze
The hybrid maze procedure, which combines epicardial
thoracoscopic ablation surgery and endocardial catheter ablation,
is an effective alternative to the current gold standard for AF.29
The advantage of combining endocardial catheter ablation with
surgical ablation is that surgically inaccessible lesions can be
created.29 Delivering currents via catheter is also more effective for
transmural lesions in thicker tissue areas compared to surgical
ablation alone.29
Staging
A one-stage or two-stage approach can be adopted. The one-stage
approach involves surgical ablation followed by catheter ablation on
the same day.10,24,29 A one-stage procedure allows shorter patient
stays, and permits assessment of conduction gaps during the same
admission.10,24,29 A two-stage procedure involves performing catheter
ablation weeks to months after the surgical ablation.29 In this case,
stabilisation of the epicardial lesions can occur prior to catheter
ablation; therefore, success rates for the two-stage procedure are
higher, suggesting that this is the optimal hybrid procedure going
forward.10,29
Outcomes of hybrid maze
The fundamental basis of the hybrid maze procedure is its efficacious
ability to generate transmural lesions through epicardial and
endocardial approaches.2,10,26,28 Hybrid maze procedures result in
increased rhythm control for both poAF and pAF compared to
individual surgical or catheter-based approaches.10,29 Combining
endocardial and epicardial strategies allows bidirectional blockage
and AF termination, with similar results achieved for paroxysmal AF
and pAF.10,26,28 The risk of short-term severe complications is low and
comparable to thoracoscopic surgical ablation alone.10 As this
procedure is relatively new, there are limited long-term follow-up
studies. Short-term studies report a 78-100% success rate for pAF and
LSpAF without anti-arrhythmic drugs (AADs).10,28 After a three-year follow-up,
Maesen et al. reported success rates of 80% without AADs for poAF
and LSpAF, and a 95% return to sinus rhythm with repeat catheter
ablation and AADs.26 Randomised controlled trials are currently
unavailable for the hybrid maze procedure, and comparative studies
will also be required to determine whether this might present a new
gold standard treatment option for drug-resistant AF.10
Conclusion
Hybrid approaches to cardiovascular intervention include many novel
techniques, with the scope of clinical relevance ranging from
arrhythmias to coronary revascularisation. These techniques have
evolved due to the increasing number of high-risk patients presenting
with comorbid conditions. Techniques such as hybrid coronary
revascularisation, hybrid valve/PCI intervention, and the hybrid maze
procedure have shown increased safety and efficacy, especially in
high-risk individuals. Due to increasing rates of success that have been
demonstrated using such hybrid cardiovascular interventions, it is not
surprising that physicians and surgeons are beginning to adopt these
approaches.
However, many challenges persist, including the need for operator
training and optimal staging, access to technology, high costs, and
uncertain availability of appropriate facilities and hybrid
suites.2,6,8,10,12,20,26 At present, no standardised methods
exist to determine which patients will most benefit from these
procedures, and few, if any, randomised controlled trials have been
performed. A hybrid approach relies heavily on interdisciplinary
teamwork.2,8 Effective collaboration and ingenuity between
interventional cardiologists and cardiac surgeons is necessary for
these hybrid procedures to become successful and conventional.
Is hybrid cardiovascular intervention the way forward? Only time
will tell.
References
1. World Health Organization. Cardiovascular diseases (CVDs). 2017. [Internet].
[cited 2018 Sep 10]. Available from:
http://www.who.int/en/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds).
2. Byrne JG, Leacche M, Vaughan DE, Zhao DX. Hybrid cardiovascular
procedures. JACC Cardiovasc Interv. 2008;1(5):459-68.
3. Papadopoulos K, Lekakis I, Nicolaides E. Outcomes of coronary artery
bypass grafting versus percutaneous coronary intervention with
second-generation drug-eluting stents for patients with multivessel and
unprotected left main coronary artery disease. SAGE Open Med.
2017;5:2050312116687707.
4. Bush B, Nifong LW, Chitwood WRJ. Robotics in cardiac surgery: past,
present, and future. Rambam Maimonides Med J. 2013;4(3):e0017.
5. Sardar P, Kundu A, Bischoff M, Chatterjee S, Owan T, Nairooz R et al.
Hybrid coronary revascularization versus coronary artery bypass grafting in
patients with multivessel coronary artery disease: a meta-analysis. Catheter
Cardiovasc Interv. 2018;91(2):203-12.
6. Bhatt DL. Percutaneous coronary intervention in 2018. JAMA.
2018;319(20):2127-8.
7. Misumida N, Moliterno DJ. Hybrid coronary revascularization: time for a
new comparator? Catheter Cardiovasc Interv. 2018;91(2):213-4.
8. Papakonstantinou NA, Baikoussis NG, Dedeilias P, Argiriou M, Charitos C.
Cardiac surgery or interventional cardiology? Why not both? Let’s go
hybrid. J Cardiol. 2017;1(69):46-56.
9. Santana O, Xydas S, Williams RF, LaPietra A, Mawad M, Rosen GP et al.
Outcomes of a hybrid approach of percutaneous coronary intervention
followed by minimally invasive aortic valve replacement. J Thorac Dis.
2017;9(7):569-74.
10. Jiang YQ, Tian Y, Zeng LJ, He SN, Zheng ZT, Shi L et al. The safety and
efficacy of hybrid ablation for the treatment of atrial fibrillation: a
meta-analysis. PLoS One. 2018;13(1):e0190170.
11. Mancone M, van Mieghem NM, Zijlstra F, Diletti R. Current and novel
approaches to treat patients presenting with ST elevation myocardial
infarction. Expert Rev Cardiovasc Ther. 2016;14(8):895-904.
12. Harskamp RE, Zheng Z, Alexander JH, Williams JB, Xian Y, Halkos ME et al.
Status quo of hybrid coronary revascularization for multi-vessel coronary
artery disease. Ann Thorac Surg. 2013;96(6):2268-77.
13. Karthik S, Fabri BM. Left internal mammary artery usage in coronary artery
bypass grafting: a measure of quality control. Ann R Coll Surg Engl.
2006;88:367-9.
14. Shekar PS. On-pump and off-pump coronary artery bypass grafting.
Circulation. 2006;113(4):51-2.
15. Blackstone EH, Sabik JF III. Changing the discussion about on-pump versus
off-pump CABG. N Engl J Med. 2017;377:692-3.
16. Dieberg G, Smart NA, King N. On- vs. off-pump coronary artery bypass
grafting: a systematic review and meta-analysis. Int J Cardiol.
2016;233:201-11.
17. Dong Li, Kang Y-K, Xiang-guang A. Short-term and mid-term clinical outcomes
following hybrid coronary revascularization versus off-pump coronary artery
bypass: a meta-analysis. Arq Bras Cardiol. 2018;110(4):321-30.
18. Coffey S, Cairns BJ, Iung B. The modern epidemiology of heart valve
disease. Heart. 2015;102(1):1-2.
19. Santana O, Funk M, Zamora C, Escolar E, Lamas GA, Lamelas J. Staged
percutaneous coronary intervention and minimally invasive valve surgery:
results of a hybrid approach to concomitant coronary and valvular disease.
J Thorac Cardiovasc Surg. 2012;144(3):634-9.
20. George I, Nazif TM, Kalesan B, Kriegal J, Yerebakan H, Kirtane A et al.
Feasibility and early safety of single-stage hybrid coronary intervention and
valvular cardiac surgery. Ann Thorac Surg. 2015;99(6):2032-7.
21. Lu JC, Shaw M, Grayson AD, Poullis M, Pullan M, Fabri BM. Do beating
heart techniques applied to combined valve or graft operations reduce
myocardial damage? Interact Cardiovasc Thorac Surg. 2008;7(1):111-5.
22. Byrne JG, Leacche M, Unic D, Rawn JD, Simon DI, Campbell D et al.
Staged initial percutaneous coronary intervention followed by valve
surgery (hybrid approach) for patients with complex coronary and valve
disease. J Am Coll Cardiol. 2005;45(1):14-8.
23. Bundhun PK, Sookharee Y, Bholee A, Huang F. Application of the SYNTAX
score in interventional cardiology: a systematic review and meta-analysis.
Medicine (Baltimore). 2017;96(28):e7410.
24. Pinho-Gomes AC, Amorim MJ, Oliveira SM, Leite-Moreira AF. Surgical
treatment of atrial fibrillation: an updated review. Eur J Cardiothorac Surg.
2014;46(2):167-78.
25. Kim J, Desai S, Jadonath R, Beldner SJ. The importance of bidirectional
block during pulmonary vein isolation. Pacing Clin Electrophysiol.
2013;36(5):143-5.
26. Maesen B, Pison L, Vroomen M, Luermans JG, Vernooy K, Maessen JG et
al. Three-year follow-up of hybrid ablation for atrial fibrillation. Eur J
Cardiothorac Surg. 2018;53(1):26-32.
27. Oral H, Knight BP, Tada H, Özaydin M, Chugh A, Hassan S et al.
Pulmonary vein isolation for paroxysmal and persistent atrial fibrillation.
Circulation. 2002;105:1077-81.
28. Pison L, La Meir M, Van Opstal J, Baauw Y, Maessen J, Crijns HJ. Hybrid
thoracoscopic surgical and transvenous catheter ablation of atrial
fibrillation. J Am Coll Cardiol. 2012;60(1):54-61.
29. Wang PJ. Hybrid epicardial and endocardial ablation of atrial fibrillation: is
ablation on two sides of the atrial wall better than one? J Am Heart Assoc.
2015;4(3):e001893.
30. Ruaengsri C, Schill MR, Khiabani AJ, Schuessler RB, Melby SJ, Damiano RJ.
The Cox-Maze IV procedure in its second decade: still the gold standard?
Eur J Cardiothorac Surg. 2018;53(1):19-25.
staff review

Six degrees of genetic identification
Abstract
Human sequencing tools and population genomics are becoming increasingly available commercially.
Advertised as fun and informative, direct-to-consumer genomic kits have become
commonplace in households across the world, and have helped to educate populations that were previously
unaware of genetic testing. Analysis of an individual’s genome can be an intriguing experiment for some,
or medically necessary for others; in either case, genetic testing is no longer confined to medical
laboratories. The expansion of commercial genetic sequencing companies pushes boundaries, posing many
questions and risks. Commercial accessibility can mean that the process of informed consent is not
rigorously upheld and patient understanding may not be uniform, resulting in potential confusion and
upset. Without stringent safeguarding, information security could pose another risk for users whose genetic
sequence may be stored in databases and sold to third-party companies in the name of profit. While it is
clear that strict regulations need to be placed on this growing field to protect patient identity and medical
information, the expansion of this scientific discipline has therapeutic potential. Improved genetic data
mining and family pedigree genealogy research is helping to identify new therapeutic targets for many
diseases, and holds promise for identifying new drug targets. If the human genome represents the
fundamental unit of ‘who we are’, should it be allowed to become a commercial enterprise, or must an
individual’s genetics be safeguarded?
Katie Nolan
RCSI medical student
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 113-117.
Introduction
The Human Genome Project and the rapidly
advancing ability to sequence individual human
genomes have opened the floodgates for commercial
utilisation of genetic home testing kits. Genetic
testing, which was once a medical and scientific
diagnostic tool, is now advertised and sold
commercially online through private companies.
While there is inherent benefit to advancing
technology and expanding the reaches of healthcare,
direct-to-consumer (DTC) genetic testing has yet to
overcome some very real and potentially devastating
pitfalls before it can be seen as sound medical and
legal practice. For a generation that uploads a vast
amount of personal and private information to the internet through apps
and online profiles, is uploading one’s genetic code for public consumption
merely the next step in digitising the human existence? While DTC genetic
testing allows everyone to explore their genetic history, increases genetic
awareness, and encourages people to take a proactive role in their own
healthcare decisions, it is also a new health promotion frontier, the
potential drawbacks of which have yet to be fully identified.1 Rapid
advancements in DTC genetic testing make it easier than ever to discover
information about ourselves and our ancestors – those that are known to
us, and those as yet unknown. However, the increasing amount of
information provided by genetic profiling and DNA mining raises
considerable personal and social implications, including ancestry inference,
discovery of misattributed paternity, and potential genetic discrimination.2,3
A study published in 2018 by Ambry Genetics identified a 40% false
positive rate among the results of DTC genetic tests analysed in their labs.4
Results such as these indicate the importance of confirmatory testing and
interpretation by medical professionals following DTC genetic testing.
While the possibility of identifying ‘risk genes’ holds promise for future
targeted therapies and a better understanding of disease mechanisms
through population genetics, the lack of confidentiality surrounding these
results necessitates attention toward ethical considerations such as
genomic discrimination and social stigmatisation.
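The intuition behind confirmatory testing can be sketched with a simple Bayesian calculation. The numbers below are hypothetical, chosen only to illustrate the general point: when a variant is rare in the population, even a test with seemingly excellent accuracy yields a surprisingly low positive predictive value, which is why clinical confirmation of DTC results matters.

```python
# Toy illustration (hypothetical numbers): why a raw positive from a
# screening-style genetic test warrants clinical confirmation.
# Bayes' theorem: PPV = P(variant truly present | test positive).

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive result reflects a true variant."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A rare variant carried by 1 in 1,000 people, assessed with 99%
# sensitivity and 99.9% specificity:
ppv = positive_predictive_value(0.001, 0.99, 0.999)
print(f"PPV = {ppv:.0%}")  # roughly half of all positives are false alarms
```

With these assumed figures, about half of the reported positives would be false, broadly consistent in spirit with the high false-positive rates observed when DTC results are re-analysed in clinical laboratories.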
Informed consent and the consumer
The broad consent obtained when an individual donates blood, saliva, or
urine to biobanks often means that users consent to their samples being
stored and used in genetic research. However, the range of uses covered
under this broad consent is vast, and people may be unaware of the full
scope of possibilities. Firstly, the understanding and recall of consent by
research participants has been shown to vary, and participants’ perceptions
of the harms or risks involved are also varied, with many studies using different
consent models for biobanking samples.5 A Canadian study examining
informed consent for online sexually transmitted infection (STI) screening
identified that participants had differing understandings of the various
components of informed consent, but that patients with prior testing
experience had a greater understanding of informed consent.6 The study
also identified the importance of webpage design in informing patients, and
the need to highlight consent pages to prevent users from clicking onto the
next webpage too quickly.6 Another study, which stratified participants
based on race and previous biobank participation, found that the broad
consent model was the most preferred model, followed closely by a
“study-specific” research model where consent is required for each
subsequent research project that utilises a patient’s sample.7 Participants
highlighted receiving sufficient information, being asked for permission
regarding the use of their samples, and provision of general and specific
information on secondary research as important factors for their consent
model preference.7 It is still unclear whether broad consent or a
study-specific model is advantageous, both for participants and for ethics
committees that must approve the work.8 Indeed, researchers have
examined the use of online dynamic consent models that allow ongoing
communication between researchers and research participants; however,
this model may require researchers to re-examine their relationship with
participants.9,10 There is evidence that patients are happy to give broad
consent for sample biobanking, in part due to a subjective understanding of
the samples’ potential use, and also for prosocial reasons such as altruism
and gratitude.11 Analysis of the ethics surrounding biospecimens has
endorsed the broad consent model as being adequate.12,13
One of the most famous examples of the importance of informed consent
in research is that of Henrietta Lacks. Henrietta Lacks was diagnosed with
cervical cancer in the United States in 1951, and her aggressive cancerous
cells were grown in culture by researchers hoping to use the thriving cell
line for research.14 The cells, termed HeLa cells after their owner, are
commonplace in labs all over the world; many companies and institutions
have profited from Ms Lacks’s contribution. While informed consent was
not required for research in the 1950s, this story has demonstrated the
importance of ownership and consent surrounding biospecimens, and the
associated implications for generations of family members into the
future.14 In 2013, Landry et al. published the entire genome sequence of
Henrietta Lacks, taken from her cells, without the consent of her family.15
While not illegal, this action had huge implications for members of her
family and exposed them on a level most would deem unimaginable.16
When family or distant relatives use DTC genetic testing and have their
genome sequenced, the story of Henrietta Lacks is a reminder that this can
have far-reaching implications for other members of that family.
Another area of concern is the possibility of minors using these ancestry
services and discovering misattributed paternity.17 In an era when
knowledge of family medical history is of great personal benefit, disclosure
of misattributed paternity is not necessarily a negative aspect of DTC
genetic testing, but it does have far-reaching social and psychological
implications. In addition to the personal implications for the minor, it also
affects the privacy of other individuals and their extended families. In an
analysis of 43 DTC genetic testing companies, one study found no specific
guidelines relating to participation by minors and, furthermore, the
companies were found not to address sensitive information, such as
misattribution of paternity, in their policies or terms of service.18 Situations
such as these make it clear that, in certain instances, a broad consent
model may not provide all the relevant information to a participant,
especially a minor. This is one area where biobanking regulation may
require further scrutiny, and the implementation of additional safeguards
to ensure protection for all participants and consumers.
Information safety and security
Security concerns about genetic testing arise when DNA samples volunteered
for one purpose are used without explicit consent for other purposes. For
example, police have recently used DNA samples submitted for ancestry or
genealogy purposes to identify DNA collected from a series of linked crimes
involving rape and murder.19 The previously unidentified Golden State Killer
was identified when DNA submitted by relatives was partially matched to
DNA collected from the crime scenes.19 This use of online DNA and ancestry
databases by police calls into question the accessibility and indeed linkability
of genetic information, and how vulnerable society is to a new form of online
scrutiny, without knowledge or explicit consent.
For example, MyHeritage, a family tree and genealogy platform from
Israel, recently reported a data breach experienced in October 2017.20
A security researcher discovered that email and password data of 92 million of its
users had been hacked. While the only stolen information was log-in
details, and the company assured users that no family tree or DNA data
was accessed, the event does raise concerns over the security and usability
of this personal genetic information.20 Informed consent is an individual
task, but when informed consent involves the use of DNA and genetic
information, there is perhaps cause to include broader family members
and biological relatives in the consent process. A system of generational
consent may be required to ensure ethical practice when isolating and
analysing genetic samples, in order to prevent inadvertent privacy issues.21
With recreational genetics becoming commonplace across the world, it is
important that companies have safeguards and policies in place for dealing
with sensitive information that may be derived from their products.
The Ethical, Legal, and Social Implications (ELSI) research programme, set
up at the dawn of the Human Genome Project to protect genetic
information, may not be followed or adequately upheld by privately
owned genetics companies.22,23 The question is whether newer personal
genetics companies have the potential to abuse the genetic information
they become privy to in the future. The rapidly expanding opportunities
in personal genetics necessitate a review and revision of regulatory
guidelines to ensure that these newer commercial entities uphold the
confidentiality of this sensitive information.
Contributions to science and research
Genealogy databases are also an extremely useful tool for scientific
research. Access to worldwide genealogical information can allow
researchers to perform familial aggregation studies to look for genetic
linkages in particular diseases, or assess geographical shifts in disease in
relation to population dynamics. Genealogical inference is a tool used by
researchers to establish patterns of gene inheritance and genotype
profiling across populations. Data mining and computational tools can
assist in this analysis of both genetic information and familial pedigree
data. Analysis of familial clustering patterns for specific diseases can help
researchers to confirm familial inheritance patterns, and high-risk pedigrees
can be further analysed for genes of interest and genetic pathways.24 A
major challenge to the accuracy of this research is genetic recombination,
along with single nucleotide polymorphisms, both of which can impair
researchers’ ability to analyse variation at a particular locus.25
Research using companies such as Ancestry.com has used genealogical
data to investigate large-scale population dynamics and migration
patterns among US populations.26 This work looks at identity-by-descent
inference, which can combine inheritance patterns with demographic
data to reveal common ancestry. While this work is ongoing, it has allowed
researchers to identify clusters with disease-risk variants.26 On large-scale
analysis, these genome-wide association studies (GWAS) allow researchers
to identify multiple loci that correspond to disease phenotypes, which
could introduce new research targets for treatments.27,28 This emerging
facet of research can identify relatedness in diseases where a familial
component may not have been previously identified. Analysis of the Utah
Population Database genealogical data has identified high-risk familial
clusters with a predisposition to developing spinal cord tumours.29
Identifying high-risk pedigrees is the first step in identifying novel gene
variants that may contribute to tumour development.29 This research is
also very important for establishing new treatment targets in rare diseases
that are not yet fully understood. Amyotrophic lateral sclerosis (ALS) is a
fatal neurodegenerative disease with no successful treatment options and
little understanding of its pathological mechanisms. Project MinE is an
international research collaboration that includes ALS researchers in
Ireland, which aims to identify and characterise ALS using large-scale
GWAS data from thousands of ALS patients.30 While large-scale studies like
this may take time to yield results, they involve a wealth of genetic
information and allow researchers to examine population disease
dynamics as a whole rather than in parts.
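At its core, a GWAS asks, locus by locus, whether an allele is more common in affected individuals than in controls. A minimal sketch of that single-locus test, using entirely made-up allele counts, is a Pearson chi-square statistic on a 2x2 case-control table; the function and numbers below are illustrative only, not the pipeline used by any particular study cited here.

```python
# Toy sketch of the per-locus test behind a GWAS: a Pearson chi-square
# statistic on a 2x2 table of allele counts (cases vs. controls).
# All counts are hypothetical, for illustration only.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n          # expected count under no association
        stat += (obs - expected) ** 2 / expected
    return stat

# Risk-allele counts: 60/100 alleles in cases vs. 40/100 in controls.
stat = chi_square_2x2(60, 40, 40, 60)
print(stat)  # 8.0 -> exceeds 3.84, the p < 0.05 cut-off at 1 degree of freedom
```

Because a real GWAS repeats this kind of test across hundreds of thousands or millions of loci, genome-wide significance demands far stricter thresholds (conventionally p < 5 x 10^-8) to control for multiple testing.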
Warning! The need for regulation
The simplicity of DTC genetic tests lies in the fact that they do not require
a medical professional to perform the test or deliver the results. In Ireland,
which has no defined legislation for DTC genetic testing, this alone
presents many ethical, legal, and regulatory issues. Furthermore, this
format does not provide any advice or direct counselling for the patient
who may receive test results they do not fully understand, or may
misinterpret, results which often have weighty medical and psychosocial
consequences.31 New studies are examining the importance of
concomitant genetic counselling for patients receiving genomic risk
assessment results online. These studies have shown that genetic
counselling is important for a patient’s understanding of their disease risk
and genetic susceptibility, as well as to help patients gain personal utility
from the results.32,33 In the United States, the Food and Drug
Administration (FDA) has repeatedly placed restrictions on DTC home
testing kits and the companies that provide them, in an effort to prevent
unnecessary medical decisions being made by uninformed or panicked
members of the public.34,35 Ethical analysis of commercial genetic testing
websites and marketing strategies demonstrates that companies use
medical language to establish trust in the product; they foster a sense of
personal empowerment in the individual through the use of their
products, and they appeal to a consumer’s sense of personal
responsibility.36 Furthermore, third-party interpretation websites, which
seek to help consumers to understand the results of their DTC genetic
testing, add another level of complexity to the ethical and privacy issues
surrounding DTC genetic testing.37
Finally, additional concern lies in the transparency (or lack thereof) of
personal genetics companies’ research activities. Some companies have
been shown to store consumer data for unspecified future research, or for
shared research with third-party companies. While all data is anonymised,
there is little information regarding the ability to re-identify or trace an
individual’s samples, and the possible implications of such events. Without
proper legislation and monitoring of such systems, the process of informed
consent cannot be validated, as neither consumers nor the companies
involved have complete control of the future fate of the data.38
Conclusion
Whether one considers DTC genetic testing to be a novel birthday gift
idea, or a way to explore heritage and health, it is clear that the scope and
reach of this platform is ever expanding. As a research tool, it holds
promise to provide a deeper understanding of genetic linkages within
populations and across generations. Therapeutically, DTC genetic testing
could yield further insight into disease mechanisms and genetic
inheritance, and could pave the way for developing novel disease
therapies. As a commercial medical platform, it offers the public a learning
resource and an appealing introduction to genetics and health promotion.
However, as DTC genetic testing evolves, it will require an expanding body
of ethical guidance and legislation to ensure the safety and privacy of its
customers. Furthermore, more stringent genetic counselling and
education is required to provide consumers with up-to-date and accurate
methods to interpret their results and guide their subsequent healthcare.
With all great advances comes great responsibility; with proper scrutiny and
oversight, DTC genetic testing could revolutionise how the medical
community promotes, distributes, and advances healthcare.
References
1. Khoury MJ. Centers for Disease Control and Prevention. Direct to consumer
genetic testing: think before you spit. 2017. [Internet]. Available from:
https://blogs.cdc.gov/genomics/2017/04/18/direct-to-consumer-2/.
2. Allyse MA, Robinson DH, Ferber MJ, Sharp RR. Direct-to-consumer testing 2.0:
emerging models of direct-to-consumer genetic testing. Mayo Clin Proc.
2018;93(1):113-20.
3. Puryear L, Downs N, Nevedal A, Lewis ET, Ormond KE, Bregendahl M et al.
Patient and provider perspectives on the development of personalized
medicine: a mixed-methods approach. J Community Genet. 2018;9(3):283-91.
4. Tandy-Connor S, Guiltinan J, Krempely K, LaDuca H, Reineke P, Gutierrez S et
al. False-positive results released by direct-to-consumer genetic tests
highlight the importance of clinical confirmation testing for appropriate
patient care. Genet Med. 2018. [Epub ahead of print].
5. D’Abramo F, Schildmann J, Vollmann J. Research participants’ perceptions
and views on consent for biobank research: a review of empirical data and
ethical analysis. BMC Med Ethics. 2015;16:60.
6. Gilbert M, Bonnell A, Farrell J, Haag D, Bondyra M, Unger D et al. Click yes
to consent: acceptability of incorporating informed consent into an
internet-based testing program for sexually transmitted and blood-borne
infections. Int J Med Inform. 2017;105:38-48.
7. Brown KM, Drake BF, Gehlert S, Wolf LE, DuBois J, Seo J et al. Differences in
preferences for models of consent for biobanks between black and white
women. J Community Genet. 2016;7(1):41-9.
8. Chin WW, Wieschowski S, Prokein J, Illig T, Strech D. Ethics reporting in
biospecimen and genetic research: current practice and suggestions for
changes. PLoS Biol. 2016;14(8):e1002521.
9. Budin-Ljosne I, Teare HJ, Kaye J, Beck S, Bentzen HB, Caenazzo L et al.
Dynamic consent: a potential solution to some of the challenges of modern
biomedical research. BMC Med Ethics. 2017;18(1):4.
10. Thiel DB, Platt J, Platt T, King SB, Fisher N, Shelton R et al. Testing an online,
dynamic consent portal for large population biobank research. Public Health
Genomics. 2015;18(1):26-39.
11. Richter G, Krawczak M, Lieb W, Wolff L, Schreiber S, Buyx A. Broad consent
for health care-embedded biobanking: understanding and reasons to donate
in a large patient sample. Genet Med. 2018;20(1):76-82.
12. Warner TD, Weil CJ, Andry C, Degenholtz HB, Parker L, Carithers LJ et al.
Broad consent for research on biospecimens: the views of actual donors at
four U.S. medical centers. J Empir Res Hum Res Ethics. 2018;13(2):115-24.
13. Grady C, Eckstein L, Berkman B, Brock D, Cook-Deegan R, Fullerton SM et al.
Broad consent for research with biological samples: workshop conclusions.
Am J Bioeth. 2015;15(9):34-42.
14. Beskow LM. Lessons from HeLa cells: the ethics and policy of biospecimens.
Annu Rev Genomics Hum Genet. 2016;17:395-417.
15. Landry JJ, Pyl PT, Rausch T, Zichner T, Tekkedil MM, Stutz AM et al. The
genomic and transcriptomic landscape of a HeLa cell line. G3 (Bethesda).
2013;3(8):1213-24.
16. Skloot R. The immortal life of Henrietta Lacks, the sequel. The New York Times.
2013. [Internet]. Available from: https://www.nytimes.com/2013/03/24/
opinion/sunday/the-immortal-life-of-henrietta-lacks-the-sequel.html.
17. Hercher L, Jamal L. An old problem in a new age: revisiting the clinical
dilemma of misattributed paternity. Appl Transl Genom. 2016;8:36-9.
18. Moray N, Pink KE, Borry P, Larmuseau MH. Paternity testing under the cloak
of recreational genetics. Eur J Hum Genet. 2017;25(6):768-70.
19. Ducharme J. A DNA site helped authorities crack the golden state killer case.
Here’s what you should know about your genetic data privacy. Time. 2018.
[Internet]. Available from:
http://time.com/5257474/golden-state-killer-genetic-privacy-concerns/.
20. Reuters. Security breach at MyHeritage website leaks details of over 92
million users. Reuters: Thomson Reuters. 2018. [Internet]. Available from:
https://www.reuters.com/article/us-myheritage-privacy/security-breach-at-my
heritage-website-leaks-details-of-over-92-million-users-idUSKCN1J1308.
21. Wallace SE, Gourna EG, Nikolova V, Sheehan NA. Family tree and ancestry
inference: is there a need for a ‘generational’ consent? BMC Med Ethics.
2015;16(1):87.
22. Caulfield T, Chandrasekharan S, Joly Y, Cook-Deegan R. Harm, hype and
evidence: ELSI research and policy guidance. Genome Med. 2013;5(3):21.
23. Pullman D, Etchegary H. Clinical genetic research 3: Genetics ELSI (ethical,
legal, and social issues) research. Methods Mol Biol. 2015;1281:369-82.
24. Scholand MB, Coon H, Wolff R, Cannon-Albright L. Use of a genealogical database
demonstrates heritability of pulmonary fibrosis. Lung. 2013;191(5):475-81.
25. Mirzaei S, Wu Y. RENT+: an improved method for inferring local genealogical
trees from haplotypes with recombination. Bioinformatics. 2017;33(7):1021-30.
26. Han E, Carbonetto P, Curtis RE, Wang Y, Granka JM, Byrnes J et al. Clustering
of 770,000 genomes reveals post-colonial population structure of North
America. Nat Commun. 2017;8:14238.
27. Pickrell JK, Berisa T, Liu JZ, Segurel L, Tung JY, Hinds DA. Detection and
interpretation of shared genetic influences on 42 human traits. Nat Genet.
2016;48(7):709-17.
28. Szulc P, Bogdan M, Frommlet F, Tang H. Joint genotype- and ancestry-based
genome-wide association studies in admixed populations. Genet Epidemiol.
2017;41(6):555-66.
29. Spiker WR, Brodke DS, Goz V, Lawrence B, Teerlink CC, Cannon-Albright LA.
Evidence of an inherited predisposition for spinal cord tumors. Global Spine
J. 2018;8(4):340-4.
30. Project MinE ALS Sequence Consortium. Project MinE: study design and pilot
analyses of a large-scale whole-genome sequencing study in amyotrophic
lateral sclerosis. Eur J Hum Genet. 2018;26(10):1537-46.
31. de Paor A. Direct to consumer genetic testing – law and policy concerns in
Ireland. Ir J Med Sci. 2017;187(3):575-84.
32. Sweet K, Sturm AC, Schmidlen T, McElroy J, Scheinfeldt L, Manickam K et al.
Outcomes of a randomized controlled trial of genomic counseling for
patients receiving personalized and actionable complex disease reports. J
Genet Couns. 2017;26(5):980-98.
33. Chung MW, Ng JC. Personal utility is inherent to direct-to-consumer
genomic testing. J Med Ethics. 2016;42(10):649-52.
34. Baird S. Don’t try this at home: the FDA’s restrictive regulation of home
testing devices. Duke Law J. 2017;67(2):383-426.
35. Seward B. Direct-to-consumer genetic testing: finding a clear path forward.
Ther Innov Regul Sci. 2017;52(4):482-8.
36. Schaper M, Schicktanz S. Medicine, market and communication: ethical
considerations in regard to persuasive communication in direct-to-consumer
genetic testing services. BMC Med Ethics. 2018;19(1):56.
37. Badalato L, Kalokairinou L, Borry P. Third party interpretation of raw genetic
data: an ethical exploration. Eur J Hum Genet. 2017;25(11):1189-94.
38. Niemiec E, Howard HC. Ethical issues in consumer genome sequencing: use
of consumers’ samples and data. Appl Transl Genom. 2016;8:23-30.
staff review RCSIsmj
Volume 12: Number 1. 2019 | Page 117
Abstract
Recent advances in neonatal and perinatal medicine (NPM) have greatly improved the odds for
premature infants. With the advent of neonatal intensive care units (NICUs), infants born today at 24-25
weeks of gestation have a substantially increased chance of survival. Over the past few years, the limits of viability have
been further challenged, aiming to push the boundary of survivability down to 22-23 weeks of gestation.
These advances come with controversies and concerns regarding long-term neurodevelopmental and
learning disabilities. The decision to stop intensive support in futile circumstances requires the involvement
of family members and a multidisciplinary team of healthcare providers who may not share the same views,
therefore necessitating a collaborative, interdisciplinary care framework to promote communication and
transparency in healthcare decision-making. This balanced approach requires consideration of clinical
evidence while respecting the personal, cultural, and religious views of the family. Additionally, the growing
influence of social media has a profound impact on healthcare information, public perception, and care
expectations. Supporting premature infants, and caring for the prematurity-related morbidities and
unique developmental needs that follow, carries a substantial financial burden. In high-income countries, these costs
are under constant scrutiny, as financial pressures become entangled with questions about quality of life,
pain, and suffering. In contrast, restricted resources in low-income countries mean that care is largely
limited to treating full-term infants and providing only essential care to support a healthy pregnancy. The
incidence of preterm birth in low-income countries can be reduced through better maternal education,
family planning, extended breastfeeding, optimal birth spacing, preventive health measures, and
hygiene strategies.
Monika Cieslak
RCSI medical student
Against all odds: ethical dilemmas in modern peri- and neonatal medicine
Royal College of Surgeons in Ireland Student Medical Journal 2019; 1: 118-123.
RCSIsmj
Volume 12: Number 1. 2019 | Page 119
Introduction
Dedicated to the care of the most vulnerable newborns, neonatology
has emerged as one of the newest branches of medicine. Historically,
only the fittest and full-term infants had a chance to survive.1 The first
basic incubators were installed at the Paris Maternity Hospital in 1880.2
In the 1950s, Dr Virginia Apgar created the eponymous newborn
assessment score, which is still used worldwide today.3 In the following
years, better understanding of neonatal pathophysiology led to the
development of interventions aimed at reducing morbidity.3 This
included the use of oxygen to care for hypoxic neonates and the
introduction of ventilators generating positive pressure for infants in
respiratory distress.4,5 In 1975, the first American neonatologists were
certified to provide specialised care to infants, especially those
born prematurely at less than 37 weeks of gestation.6
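The Apgar assessment mentioned above sums five clinical signs (appearance, pulse, grimace, activity, and respiration), each scored 0-2, into a total out of 10. A minimal sketch of that arithmetic follows; the function name and interface are illustrative only, not a clinical tool:

```python
def apgar_score(appearance, pulse, grimace, activity, respiration):
    """Sum the five Apgar criteria, each scored 0-2, for a total of 0-10."""
    components = (appearance, pulse, grimace, activity, respiration)
    if any(c not in (0, 1, 2) for c in components):
        raise ValueError("each Apgar component must be scored 0, 1, or 2")
    return sum(components)

# A vigorous newborn scoring 2 on every criterion totals 10.
apgar_score(2, 2, 2, 2, 2)  # → 10
```

In practice the score is assigned at set intervals after birth (commonly one and five minutes) and interpreted clinically, not mechanically.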
In recent years, emerging technologies and innovative therapies have
led to remarkable advances in neonatal and perinatal medicine
(NPM). Healthcare professionals are increasingly able to provide
effective care to prematurely born infants, as well as term infants born
under unfavourable circumstances or with complex conditions.
Advances in respiratory therapy, such as high-frequency ventilators,
positive-pressure ventilation devices, laboratory
micro-methods, imaging techniques, and real-time monitoring of
relevant physiological parameters, have all reduced infant morbidity
and mortality.7,8 In addition, better hygiene, parenteral nutrition, and
effective infection control methods have allowed for the achievement
of outcomes that were far from possible merely a decade ago.9 And
yet, not all battles to support the life and health of prematurely born
infants can be won.
Redefining the limits of viability
Infants born at less than 28 weeks of gestation are described as
“extremely low gestational age” (ELGA).7 The dramatic progress in
NPM now enables physicians to effectively support ELGA infants born
at 24-25 weeks of gestation.10 Over the past few years, the boundaries
of viability have been further challenged by outcome reports from
centres with sophisticated neonatal intensive care programmes.11 This
includes promising results from Japan and Germany, where infants
born at 22-23 weeks were provided life-saving care; this gestational
age was previously considered “periviable”.11 However, prior to
altering existing guidelines and recommendations, clinicians must
take into consideration the long list of clinical, ethical, cultural, and
religious variables that play a significant role when it comes to
performing medical interventions for these vulnerable patients.12,13
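The gestational-age thresholds used in this discussion can be gathered into a simple lookup. The sketch below is purely illustrative; the categories overlap in practice (a periviable infant is also ELGA), and the function is a hypothetical summary of the definitions cited above, not a clinical classification:

```python
def gestational_category(weeks):
    """Classify gestational age at birth (completed weeks) using the
    thresholds discussed in the text: periviable at 22-23 weeks, ELGA
    below 28 weeks, preterm below 37 weeks. Illustrative only."""
    if weeks < 22:
        return "below the current limits of viability"
    if weeks < 24:
        return "periviable (22-23 weeks)"
    if weeks < 28:
        return "extremely low gestational age (ELGA, <28 weeks)"
    if weeks < 37:
        return "preterm (<37 weeks)"
    return "term"
```

For example, `gestational_category(25)` falls into the ELGA band, while the infants in the Japanese and German reports cited above would fall into the periviable band.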
Infants born at the borders of viability typically endure highly
dynamic, rapidly changing clinical circumstances and numerous
complications, requiring the utilisation of an array of medical
resources. The full implications of these intensive supportive measures
may not be known for days, weeks, or perhaps longer.9 In some
circumstances, medical interventions may only temporarily postpone
death, at the cost of severe suffering; at other times, they
may lead to survival with lifelong morbidity and hardship for the
patient and their family.14 The inability to accurately predict
long-term complications in individual ELGA survivors increases
prognostic uncertainty and therefore the moral distress involved in
making life or death decisions for ELGA infants.15 For this reason,
determining the best course of action while achieving consensus
regarding treatment, at times when outcomes are uncertain, poses
major ethical dilemmas.16 If available medical interventions fail, the
decision to withdraw care can be very complex, multidimensional,
and fraught with ethical turmoil.17
Decision-making: the role of the healthcare team
In many modern neonatal programmes, these exceedingly difficult
situations tend to be approached on a case-by-case basis, using an
individualised prognostic strategy.18 A team approach is utilised to
steer the direction of medical and nursing care offered to support
premature infants. Those involved in the decision-making process
include parents, family members, physicians, nurses, allied
professionals, and often hospital ethics committees.12 The inherent
difficulty with team-based decision-making comes from the fact that
survival and disability are often viewed very differently among
healthcare providers, and even families.19 At times, physicians and
nurses may have contrasting views regarding how and by whom
end-of-life decisions should be made.12 The key difference between
the two groups seems to be a function of the professional role played by
each, rather than differences in ethical reasoning or moral
motivation.20 In general, physicians bear the weight of clinical
decision making.20 Nurses frequently experience conflict regarding
healthcare decisions, and may implement actions that they perceive
as morally wrong.20
Despite the statistics available addressing prematurely born infants as a
population, outcomes for each specific infant are unknown. Frequently,
the most controversial decision to be made is whether or not to initiate
resuscitation efforts or withdraw care.14 Furthermore, decisions are not
only based on acute care, but require the additional consideration of
other factors such as neurodevelopmental sequelae.10 Structural
abnormalities of the brain can persist into childhood, adolescence, and
adulthood.9 Advanced brain imaging has therefore become a standard
of care to predict prognosis and unify a treatment plan by the
healthcare team.9 Decision-making in NPM continues to be an area of
ethical debate and clinical complexity, and a better collaborative
interdisciplinary care framework is required to promote ongoing
discourse in order to reduce moral distress.21
Decision-making: the role of parents and society
In addition to helping to guide NPM interventions and management,
a team-based approach to decision-making is also required upon
recognising the futility of care. Such circumstances occur when the
outlook for an infant’s survival is extremely grim, or when survival
invariably carries devastating disability or a poor quality of life
burdened by significant physical discomfort and suffering.9 Typically, the NPM team takes into
account existing clinical evidence, but must balance this with the
wishes of the family, respecting their personal, cultural and religious
values.16 The decision to limit or withdraw life support is very difficult
for all involved; it is thus essential for healthcare professionals and
families to engage in effective, mutually respectful communication
during this highly emotional time.22,23
In cases of disagreement about the direction of care and support for the
infant, it has been suggested that there should be an option in place
for a family to appeal the clinical decision.24 This is exemplified by
recent controversies, such as the case of Liverpool’s Alfie Evans in early
2018.24 Alfie had a severe undiagnosed neurodegenerative disorder
and had been ventilated in intensive care for 16 months before his
physicians determined that continuing treatment would be futile and
would only prolong his suffering.24 Although his parents wanted life
support to continue, a judge ruled that further medical treatment failed
to respect Alfie’s autonomy.24 His case brought to light the ever-present
debates regarding the value of life, the rights of parents versus the
state, and in what circumstances discontinuing life support could be
interpreted as terminating innocent life. While the ethical principles are
guiding pillars that healthcare providers frequently turn to when faced
with difficult decisions, these tenets do not always align: how does one
weigh autonomy against beneficence, or non-maleficence?
Conflicting views on what is deemed a reasonable level of care and
intervention for prematurely born infants are often deeply influenced
by cultural beliefs and societal differences. This is further
complicated by a new influence that did not exist even a generation
ago: social media. Social media enables easy access to medical
literature, communication with healthcare providers, interprofessional
co-operation in clinical management, and promotion of public
health.25 People commonly use social media and web searches to
explore health-related issues.26 Nevertheless, internet data may
contain incorrect information that contradicts present-day,
established medical understanding.27 The media is also capable of
spreading sensational stories, often conveyed without trusted
background information or objectivity. This may fuel unreasonable
healthcare expectations and demands, and play a role in the push for
intensifying life-sustaining supports or extending futile treatments
and care.26,28 Such ‘public pressure’ may place physicians and other
allied professionals in an unnecessarily difficult position. In contrast, a few
examples exist where extremely premature infants have been
resuscitated against parental wishes and survived with severe
neurological abnormalities.18 Finding a balance between these two
extremes, and navigating the ever-expanding social media landscape
and its associated pressures, will continue to be a challenge in the future.
Neonatal medicine in developed countries: societal
priorities and costs
In most high-income countries, there is an understandable
expectation for governments and regulatory bodies to ensure a
caring, compassionate approach to the weak and vulnerable
members of society. This includes not only seniors and those living
with disability, but infants as well. In most cases, financial cost is
considered secondary to other values.16 It is inherent within most
societies to consider the health and well-being of children as a
necessary and worthy expense, a form of investment into the future
of the country.
Nevertheless, the cost of caring for premature infants is substantial,
especially for those born below 28 weeks of gestation, or born
weighing less than 500g.7 A specialised neonatal intensive care unit
(NICU) has become the standard in many hospitals, offering a level of
support unheard of 20-30 years ago. All of this comes at a
considerable expense to taxpayers. In Canada, preterm births
accounted for approximately 8% of pregnancies between 2009 and
2010, with CAN $8bn in associated costs.29 Expenditures associated
with prematurity-associated morbidities and interventions contribute
to an annual average hospitalisation cost of CAN $201,250 per
infant.30,31 Additionally, these expenses often escalate with increased
resource utilisation as patients grow into adulthood.32 Long-term
impairment, particularly neurological or cognitive, is a concern for
many healthcare providers and families of prematurely born infants.19
The association of prematurity with learning and motor disabilities,
and visual and hearing impairments, means that these infants require
specialised neurodevelopmental follow-up for years to come.9 Beyond
impacting on the lives of these infants, these cases often create a
burden for the family, as they are associated with significant
caregiving responsibilities that are often lifelong.32 Families often
require substantial support for many years, either through financial
aid from the government or through services already in high demand.
As a result, intensive neonatal interventions must be balanced by an
informed consideration of the long-term individual, familial, and
societal costs of such efforts.
Neonatal medicine in developing countries: financial
burden and logistical barriers
In low-income countries, childhood mortality has fallen dramatically
over the past few decades, but neonatal death rates are still
disproportionately high, accounting for 40% of total mortality in
children less than five years of age.33 Infants born at 28 weeks of
gestation in low-income countries have a 90% mortality rate within
the first few days of life, in contrast to 10% mortality in high-income
countries.34 In these resource-poor areas, 50% of all infants born at or
below 32 weeks of gestation die due to lack of basic cost-effective
care.34 Preventable maternal infections, such as malaria or syphilis,
have now been linked to higher rates of stillbirths.1 Low-income
countries still have limited means to support full-term infants who
require specialised care involving costly equipment, medications, or
procedures. A large proportion of neonatal deaths result from birth
asphyxia, with an estimated 10 million newborns requiring some level
of intervention to establish breathing and circulation at
birth.35 Unfortunately, the ability to support newborns, especially
those born at extremely low gestational ages, is not universal, due to
lack of finances, resources, facilities, and trained personnel.
One of the initiatives aimed at reducing neonatal mortality associated
with asphyxia and respiratory complications in low-income countries
has been to improve neonatal resuscitation knowledge and skills
among providers.36 A low-cost and low-tech simulation-based
educational resuscitation programme, known as Helping Babies
Breathe (HBB), was developed to support this strategy.36 After a
successful pilot study, a large-scale implementation of HBB in
Tanzania over a period of three years was reported in 2017.36 This
noteworthy initiative provided healthcare policy makers with valuable
information and a wealth of data, but the impact of improving
knowledge and skills on health outcomes, most notably mortality, still
requires further evaluation.36
In the meantime, the reality in many low-income countries is that
scarce resources must be prudently allocated to areas of healthcare
benefiting term, or near-term, newborns. Currently, the primary focus
is on lowering maternal deaths associated with childbearing,
preventing infections, and providing basic nutrition, clean water, and
vaccinations.1 In the current climate, with limited or non-existent
options to offer modern life-saving support for premature infants, the
best strategy is to prevent or minimise the incidence of premature
deliveries.34 A notable statistical observation is that for
neonatal and infant mortality, the risk of death decreases with
increasing birth interval lengths up to 36 months, at which point the
risk plateaus.37 Additionally, as there is a clear risk pattern of childhood
malnutrition with short birth intervals, there are recommendations for
counselling women on optimal birth spacing, which is currently
understood to be between 36 and 59 months.37 Better maternal
education, family planning, healthcare access, proper use of
contraceptives, and extended breastfeeding leading to lactation
amenorrhoea (temporary postnatal infertility) can increase the
inter-pregnancy interval.38 Factors contributing to decreased
inter-pregnancy spacing include poverty, cultural and religious
attitudes towards sex, and lack of contraceptive use.39
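The interval findings above can be condensed into a short, illustrative summary function. The thresholds (risk falling up to 36 months, recommended spacing of 36-59 months) come from the surveys cited in the text; the function itself, its name, and its phrasing are a hypothetical sketch for counselling-style guidance, not a validated tool:

```python
def birth_interval_guidance(months):
    """Summarise the cited birth-interval findings: neonatal and infant
    mortality risk falls as the preceding birth interval lengthens up to
    36 months, then plateaus; counselling currently recommends spacing
    of 36-59 months. Illustrative sketch only."""
    if months < 36:
        return "shorter than recommended; mortality risk still falling with longer spacing"
    if months <= 59:
        return "within the recommended 36-59 month window"
    return "longer than the recommended window; no further risk reduction expected"
```

For example, an 18-month interval would be flagged as shorter than recommended, consistent with the malnutrition risk pattern for short birth intervals described above.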
Specific risk factors associated with preterm delivery include
primiparity, single marital status, low frequency of antenatal visits,
antenatal infection, previous child death, and poor maternal
education.40 To address this, the WHO’s antenatal guidelines to
prevent preterm birth include counselling on healthy diet,
counselling on tobacco and substance use, and management of key
risk factors such as infection.34 It is estimated that by implementing
fundamental cost-effective care, more than three-quarters of
preterm infant lives could be saved in low-income countries.34 Given
the circumstances, is it more ethical to direct finances towards
improving survival among term and near-term infants in
low-income countries? Determining whether international financial
support should be directed to these infants versus the extremely
preterm infants with uncertain outcomes in high-income countries
is an ongoing global ethical dilemma.
Conclusion
It is inevitable that healthcare professionals face ethical challenges
almost daily in their practice. In contrast to paediatric or adult
medicine, the principles of biomedical ethics are not as easily
applied in neonatal and perinatal medicine. Autonomy bears
significant weight in medical decision-making. However, an infant is
unable to voice an opinion, and beneficence and non-maleficence
are often impossible to reconcile at the time of intensive
neonatal decision-making. Collaborative and integrative measures
in neonatology have resulted in a nearly 50% decline in morbidity
in infants born at less than 29 weeks of gestation,41 but even the
most cutting-edge clinical advances are unable to resolve the
weighty ethical dilemmas and disparities surrounding neonatal life
saving across the globe. Future healthcare providers must approach
these quandaries by remaining unbiased, clinically astute and,
above all, respectful of parental wishes, culture, religion, and spiritual
needs.
References
1. Global Burden of Disease Child Mortality Collaborators. Global, regional,
national, and selected subnational levels of stillbirths, neonatal, and
under-5 mortality, 1980-2015: a systematic analysis for the Global Burden
of Disease Study 2015. Lancet. 2016;388(10053):1725-74.
2. Baker JP. The incubator and the medical discovery of the premature infant.
J Perinatol. 2000;20(5):321-8.
3. Apgar V. A proposal for a new method of evaluation of the newborn
infant. Originally published in July 1953, volume 32, pages 250-259.
Anesth Analg. 2015;120(5):1056-9.
4. Silverman WA. Prematurity and retrolental fibroplasia. Sight Sav Rev.
1969;39(1):42-6.
5. Delivoria-Papadopoulos M, Swyer PR. Assisted ventilation in terminal
hyaline membrane disease. Arch Dis Child. 1964;39:481-4.
6. Fanaroff A, Martin R, Walsh M. Fanaroff and Martin’s Neonatal-Perinatal
Medicine. (10th ed.). Elsevier Saunders, 2015.
7. Berrington J, Ward Platt M. Recent advances in the management of infants
born <1,000g. Arch Dis Child. 2016;101(11):1053-6.
8. Papa F, Rongioletti M, Ventura MD, Di Turi F, Cortesi M, Pasqualetti P et al.
Blood cell counting in neonates: a comparison between a low volume
micromethod and the standard laboratory method. Blood Transfus.
2011;9(4):400-6.
9. Glass HC, Costarino AT, Stayer SA, Brett CM, Cladis F, Davis PJ. Outcomes
for extremely premature infants. Anesth Analg. 2015;120(6):1337-51.
10. Ceriani Cernadas JM. The limits of viability in preterm infants, a growing
ethical dilemma. Arch Argent Pediatr. 2018;116(3):170-1.
11. Patel RM, Rysavy MA, Bell EF, Tyson JE. Survival of infants born at
periviable gestational ages. Clin Perinatol. 2017;44(2):287-303.
12. Bucher HU, Klein SD, Hendriks MJ, Baumann-Holzle R, Berger TM, Streuli JC et
al. Decision-making at the limit of viability: differing perceptions and opinions
between neonatal physicians and nurses. BMC Pediatr. 2018;18(1):81.
13. Lorch SA, Enlow E. The role of social determinants in explaining
racial/ethnic disparities in perinatal outcomes. Pediatr Res.
2016;79(1-2):141-7.
14. Liu J, Chen XX, Wang XL. Ethical issues in neonatal intensive care units.
J Matern Fetal Neonatal Med. 2016;29(14):2322-6.
15. Coughlin KW, Hernandez L, Richardson BS, da Silva OP. Life and death
decisions in the extremely preterm infant: What happens in a level III
perinatal centre? Paediatr Child Health. 2007;12(7):557-62.
16. Einaudi MA, Gire C, Auquier P, Le Coz P. How do physicians perceive
quality of life? Ethical questioning in neonatology. BMC Med Ethics.
2015;16:50.
17. Mahgoub L, van Manen M, Byrne P, Tyebkhan JM. Policy change for
infants born at the “cusp of viability”: a Canadian NICU experience.
Pediatrics. 2014;134(5):e1405-10.
18. Nadroo AM. Ethical dilemmas in decision making at limits of neonatal
viability. J IMA. 2011;43(3):188-92.
19. Dupont-Thibodeau A, Barrington KJ, Farlow B, Janvier A. End-of-life
decisions for extremely low-gestational-age infants: why simple rules for
complicated decisions should be avoided. Semin Perinatol.
2014;38(1):31-7.
20. Oberle K, Hughes D. Doctors’ and nurses’ perceptions of ethical problems
in end-of-life decisions. J Adv Nurs. 2001;33(6):707-15.
21. Browning DM, Solomon MZ, Initiative for Pediatric Palliative Care (IPPC)
Investigator Team. The Initiative for Pediatric Palliative Care: an
interdisciplinary educational approach for healthcare professionals. J
Pediatr Nurs. 2005;20(5):326-34.
22. Abe N, Catlin A, Mihara D. End of life in the NICU. A study of ventilator
withdrawal. MCN Am J Matern Child Nurs. 2001;26(3):141-6.
23. Burt RA. Resolving disputes between clinicians and family about “futility”
of treatment. Semin Perinatol. 2003;27(6):495-502.
24. Wilkinson D, Savulescu J. Alfie Evans and Charlie Gard – should the law
change? BMJ. 2018;361:k1891.
25. Levac J, O’Sullivan T. Social media and its use in health promotion. Revue
interdisciplinaire des sciences et de la santé – Interdisciplinary Journal of
Health Sciences. 2016. [Internet]. [cited 2018 July 23]. Available from:
https://uottawa.scholarsportal.info/ojs/index.php/RISS-IJHS/article/view/1534.
26. de Martino I, D’Apolito R, McLawhorn AS, Fehring KA, Sculco PK,
Gasparini G. Social media for patients: benefits and drawbacks. Curr Rev
Musculoskeletal Med. 2017;10(1):141-5.
27. Ghenai A. DH’17: Proceedings of the 2017 International Conference on
Digital Health. New York, USA; Association for Computing Machinery, 2017.
28. Heath S. How healthcare bias affects patient-provider
relationships. 2017. [Internet]. [cited 2018 July 24]. Available from:
https://patientengagementhit.com/news/how-healthcare-bias-affects-p
atient-provider-relationships.
29. Lim G, Tracey J, Boom N, Karmakar S, Wang J, Berthelot JM et al. CIHI
survey: hospital costs for preterm and small-for-gestational age babies in
Canada. Healthc Q. 2009;12(4):20-4.
30. Johnson TJ, Patel AL, Jegier BJ, Engstrom JL, Meier PP. Cost of morbidities
in very low birth weight infants. J Pediatr. 2013;162(2):243-49.e1.
31. Muraskas J, Parsi K. The cost of saving the tiniest lives: NICUs versus
prevention. Virtual Mentor. 2008;10(10):655-8.
32. Johnston KM, Gooch K, Korol E, Vo P, Eyawo O, Bradt P et al. The
economic burden of prematurity in Canada. BMC Pediatr. 2014;14:93.
33. Global Health Observatory. Global Health Observatory (GHO) data. 2016.
[Internet]. [cited 2018 August 1]. Available from: http://www.who.int/gho/en.
34. World Health Organisation. Preterm birth. 2018. [Internet]. [cited 2018
June 18]. Available from:
http://www.who.int/news-room/fact-sheets/detail/preterm-birth.
35. Wall SN, Lee SE, Niermeyer S, English M, Keenan WJ, Carlo W et al. Neonatal
resuscitation in low-resource settings: what, who, and how to overcome
challenges to scale up? Int J Gynaecol Obstet. 2009;107(Suppl. 1):S47-62, S3-4.
36. Arlington L, Kairuki AK, Isangula KG, Meda RS, Thomas E, Temu A et al.
Implementation of “Helping Babies Breathe”: a 3-year experience in
Tanzania. Pediatrics. 2017;139(5):e20162132.
37. Rutstein SO. Effects of preceding birth intervals on neonatal, infant and
under-five years mortality and nutritional status in developing countries:
evidence from the demographic and health surveys. Int J Gynaecol Obstet.
2005;89(Suppl. 1):S7-24.
38. Ganatra B, Faundes A. Role of birth spacing, family planning services, safe
abortion services and post-abortion care in reducing maternal mortality.
Best Pract Res Clin Obstet Gynaecol. 2016;36:145-55.
39. Conde-Agudelo A, Belizan JM. Maternal morbidity and mortality associated
with interpregnancy interval: cross sectional study. BMJ.
2000;321(7271):1255-9.
40. Prazuck T, Tall F, Roisin AJ, Konfe S, Cot M, Lafaix C. Risk factors for
preterm delivery in Burkina Faso (west Africa). Int J Epidemiol.
1993;22(3):489-94.
41. Shah PS, McDonald SD, Barrett J, Synnes A, Robson K, Foster J et al. The
Canadian Preterm Birth Network: a study protocol for improving
outcomes for preterm infants and their families. CMAJ Open.
2018;6(1):E44-E9.
perspective
Introduction
Regular engagement in physical exercise has been shown to vastly
improve both mental and physical health. Exercise is used as a form
of therapy in many disorders due to its ability to regulate mood and
sleep patterns, improve cognitive function, reduce neurocognitive
and metabolic diseases, and increase overall well-being.1 By
increasing levels of serotonin, brain-derived neurotrophic factor
(BDNF), and dopamine, as well as hormones such as adrenaline and
noradrenaline, exercise improves the brain’s ability to regulate mood
and sleep patterns.1 It has been shown that those who participate
regularly in exercise tend to sleep better at night and feel more
energetic throughout the day.1 Unfortunately, most young adults do
not engage in the level of exercise recommended for their age
group, contributing to increased levels of obesity in children and
young adults.2
Physical health is of particular importance for medical students, as the
positive effects of exercise appear to correlate directly with academic
performance. As such, the benefits of exercise, its influence on brain
neurotransmitters, and its impact on cognitive function in medical
students and young adults warrant discussion.
HARVEEN HAYER and SUKHMANI BENIPAL discuss the neurocognitive
benefits of exercise, particularly for medical students.
How fitness can fuel the brain
Modulation of neurotransmitters through exercise
Adrenaline and noradrenaline are the hormones whose concentrations
have been shown to increase the most dramatically in response to
exercise (1.5 to >20 times resting concentrations).3 After being released
from neurons into the synaptic cleft, noradrenaline and adrenaline bind
to appropriate α- and β-adrenergic receptors to mediate their effects.
High levels of noradrenaline released into the synaptic cleft have a
positive effect on attention, while low levels are strongly associated
with depression.4 Additionally, noradrenaline is responsible for memory
retrieval and stress resistance due to enhanced neuronal adaptation,
while adrenaline can mediate similar effects on stress adaption and
memory consolidation via β-adrenergic receptors.3,5 Lastly, the increase
in adrenaline and noradrenaline during exercise results in
cardiovascular and respiratory adjustments, such as an increased heart
rate. This positive chronotropic effect supplies more oxygen to working
muscles and promotes energy release from cells. This prompts
psychological changes as well, explaining why exercise of low to
moderate intensity lasting more than 20 minutes can decrease fatigue.6
The neuronal circuitry within the brain plays a critical role in
controlling cognitive function. Catecholamines such as dopamine, along with
BDNF, are released during exercise, resulting in improved verbal
memory.7 The interaction of dopamine with its receptors activates the
reward system via the nucleus accumbens and the amygdala,
resulting in pleasure sensations, and via the dorsolateral prefrontal
cortex, resulting in enhanced working memory.8,9 Working memory
processes visual and auditory stimuli until they are translated from
short-term memory into long-term memory. As previously
mentioned, adrenaline also enhances long-term memory storage
through the activation of peripheral β receptors.5 Neurogenesis, or
the creation of new neurons, is promoted by the protein fibronectin
type III domain-containing protein 5 (FNDC5).5 During exercise,
FNDC5 is released into the bloodstream, where it
stimulates the production of BDNF, which in turn triggers the growth
of neurons and their associated circuitry, resulting in improved
cognitive function.10
Additionally, serotonin (5-HT) has been proven to play important roles
in cognitive function. While serotonin is typically known for its
antidepressant and anxiolytic properties, it plays a supplementary role
in learning and memory.11 Exercise has been shown to improve
cognitive performance due to the downregulation of 5-HT1A receptors
and upregulation of 5-HT2A receptors in cognition-related areas of the
brain.11 Serotonin also collaborates with the BDNF pathway to regulate
and improve neurogenesis, neuronal plasticity, and neural survival.8,9
Neurocognition, mood disorders, and addiction:
the benefits of exercise
Depression and anxiety
Exercise has been successfully utilised as a form of therapy for many
illnesses such as anxiety, depression, Parkinson’s disease (PD), and
dementia. On the contrary, sedentary behaviour has been directly
linked to an increase in anxiety disorders.12 Some researchers have
shown that exercise is just as effective as antidepressants when treating
mild to moderate depression.13 Exercise can also prevent individuals
from relapsing into depressive and/or anxious states by serving as a
form of distraction. In the distraction hypothesis, exercise allows
individuals to reserve time for themselves during which they can be free
from negative thoughts.14 In addition to distraction, exercise is also
involved in the release of the neurotransmitter β-endorphin.
β-endorphin is released primarily from the anterior pituitary gland, where it
is derived from the precursor molecule pro-opiomelanocortin (POMC).15 During
exercise of sufficient intensity, the circulating levels of β-endorphin
increase, resulting in a marked increase in energy and mood.15 Through
both distraction and the release of endorphins, exercise can help to
relieve stress, reduce its impact, and enhance overall well-being,
thereby easing the symptoms of depression. Specifically in anxiety
disorders, exercise has been linked to increasing alpha wave brain
activity, resulting in reflexive relaxation and reduction of anxiety.3,15
Neurocognitive disorders
In addition to mood disorders, regular physical activity is associated
with a reduced risk of neurocognitive diseases such as PD and
dementia.15 Physical exercise, through the neurotransmitters
mentioned above, enhances cognitive processes controlled by the
prefrontal cortex such as scheduling, planning, and working
memory.15 The prefrontal cortex is particularly susceptible in
neurocognitive diseases. Furthermore, the use of physical exercise in
patients with PD has been studied as a non-pharmaceutical
therapeutic approach to avoid the high side effect profile that is
associated with pharmaceutical treatment.11 Overall, studies have
shown that exercise has the capability to improve executive function
in patients with mild to moderate PD.11
Substance abuse
Physical activity is inversely correlated with substance use disorders,
and is increasingly used as a form of adjunctive therapy in these
situations.16,17 All drugs of abuse activate the mesolimbic dopamine
system, and either directly or indirectly increase dopamine
transmission in the nucleus accumbens.18 In prolonged drug use,
there is a build-up of dopamine, due to drug interactions with
dopamine transporters and receptors, which ultimately diminishes
the natural release of dopamine. Exercise can effectively restore
changes in the neuroplasticity of the brain, acting on the mesolimbic
reward pathway through normalised dopamine release.19 Similar to its
use in mood disorders, a secondary benefit of the use of exercise in
substance abuse disorders is the introduction of exercise as a
distraction from cravings to prevent relapse.
Physical exercise in young adults
With clear evidence of the impact of exercise on mood regulation and
chronic disease prevention, young adults, and particularly medical
students, should engage in regular physical activity to avoid the
detrimental effects of stress on the mind. Many risk factors contribute to a
decreased quality of life in medical students, including increased workload,
long shifts in the hospital, and recurrent stressful exam preparation.20
However, due to high stress levels, lack of time, and lack of motivation,
medical students often find it hard to adopt healthy exercise habits.21
Colleges, universities, and medical schools play a tremendous role in
promoting physical health in students. Increasing awareness about the
beneficial impacts of exercise, and making workout classes and gym
hours easily accessible to students, are valuable and effective ways to
improve exercise adherence. Beyond student well-being, it is essential
that future physicians understand and adopt healthy lifestyles in order
to promote good health practices for their patients.21
According to World Health Organisation (WHO) recommendations,
young adults should partake in moderate physical activity for at least
150 minutes per week.22 This averages just over 20 minutes per day;
however, students may find it difficult to allot even this amount of
time within their busy schedules. Nevertheless, compared to those who are
physically inactive, regular exercisers are more likely to
respond appropriately to and cope with high levels of stress.22
Exercise has also been proven to increase levels of REM sleep.1 The
role of exercise in the sleep-wake cycle is paramount, as a lack of
quality sleep is linked to poor academic performance and excessive
daytime sleepiness.1,20 Therefore, exercise is a tool that can be used by
young adults, specifically medical students, to supplement their
productivity and alleviate stressors.
Recommendations for fellow students
Medical students are recommended to engage in exercise as part of
preventive health behaviour. If young adults fall into a sedentary
pattern and practise an unhealthy lifestyle, including smoking or
excessive drinking, this behaviour can continue into middle or late
adulthood.23 Although it may be difficult to adhere to an active
lifestyle amid the demanding schedule of medical school, the
cognitive benefits of exercise should be discussed and promoted as an
additional incentive for students, beyond just the physical benefits.
Finding a balance between studying and keeping physically active
is crucial to avoid problems such as burnout and fatigue. Minor
changes in day-to-day activity, such as walking or biking instead of
motor transport, can make a significant difference. Consistent effort
to engage in physical activity can increase adherence until exercise
eventually becomes a routine part of daily life. This increase in
physical activity is proven to increase cognition, induce changes in
working memory, and increase the brain’s processing capacity, all
of which directly correlate with the amount of energy and focus
students have for their academic work.20
Conclusion
With a growing body of literature addressing the effects of exercise
on mental health, the myriad positive effects of exercise are
undeniable. Predominantly, exercise is immensely beneficial for an
individual’s overall cognitive functioning, mood, and sleep cycles.
Through its impact on neurotransmitters such as dopamine,
serotonin, and noradrenaline, exercise is able to modulate brain
function and improve cognitive performance. Exercise can also serve
as a form of preventive therapy for neurocognitive disorders; those
who regularly participate in physical activity have a reduced risk of
developing diseases such as Parkinson’s and dementia. As an outlet
from negative thoughts and stressful environments, exercise can
greatly assist medical students in clearing their heads and
replenishing their mental energy. As research continues to enhance
our knowledge, the roles and benefits of exercise will continue to
expand within the realm of medicine as a form of both preventive
and maintenance therapy.
References
1. Gerber M, Brand S, Herrmann C, Colledge F, Holsboer-Trachsler E, Pühse
U. Increased objectively assessed vigorous-intensity exercise is associated
with reduced stress, increased mental health and good objective and
subjective sleep in young adults. Physiol Behav. 2014;135:17-24.
2. Daugherty PL. The effects of three exercise intensities on the alpha brain
wave activity of adult males. Dissertation. 1986. [Internet]. Available from:
https://commons.lib.niu.edu/handle/10843/16167.
3. Zouhal H, Jacob C, Delamarche P, Gratas-Delamarche A. Catecholamines and
the effects of exercise, training and gender. Sports Med. 2008;38(5):401-23.
4. Norepinephrine. Caam.rice.edu. 2019. [Internet]. [cited 2019 January 15].
Available from: https://www.caam.rice.edu/~cox/wrap/norepinephrine.pdf.
5. Roozendaal B, McGaugh JL. Memory modulation. Behav Neurosci.
2011;125(6):797-824.
6. Loy BD, O’Connor PJ, Dishman RK. The effect of a single bout of exercise
on energy and fatigue states: a systematic review and meta-analysis.
Fatigue: Biomedicine, Health & Behavior. 2013;1(4):223-42.
7. Winter B, Breitenstein C, Mooren FC, Voelker K, Fobker M, Lechtermann A
et al. High impact running improves learning. Neurobiol Learn Mem.
2007;87(4):597-609.
8. Tanaka S. Dopaminergic control of working memory and its relevance to
schizophrenia: a circuit dynamics perspective. Neuroscience.
2006;139(1):153-71.
9. Berridge KC, Kringelbach ML. Pleasure systems in the brain. Neuron.
2015;86(3):646-64.
10. Sleiman SF, Henry J, Al-Haddad R, El Hayek L, Haidar EA, Stringer T et al.
Exercise promotes the expression of brain derived neurotrophic factor
(BDNF) through the action of the ketone body β-hydroxybutyrate. eLife.
2016;5:e15092.
11. Lin T-W, Kuo Y-M. Exercise benefits brain function: the monoamine
connection. Brain Sci. 2013;3(1):39-53.
12. Allen MS, Walter EE, Swann C. Sedentary behaviour and risk of anxiety: a
systematic review and meta-analysis. J Affect Disord. 2019;242:5-13.
13. Blumenthal JA, Babyak MA, Doraiswamy PM, Watkins L, Hoffman BM,
Barbour KA et al. Exercise and pharmacotherapy in the treatment of major
depressive disorder. Psychosom Med. 2007;69(7):587-96.
14. Mikkelsen K, Stojanovska L, Polenakovic M, Bosevski M, Apostolopoulos V.
Exercise and mental health. Maturitas. 2017;106:48-56.
15. Hillman CH, Erickson KI, Kramer AF. Be smart, exercise your heart: exercise
effects on brain and cognition. Nat Rev Neurosci. 2008;9(1):58-65.
16. Taylor CB, Sallis JF, Needle R. The relation of physical activity and exercise
to mental health. Public Health Rep. 1985;100(2):195-202.
17. Linke SE, Ussher M. Exercise-based treatments for substance use disorders:
evidence, theory, and practicality. Am J Drug Alcohol Abuse. 2015;41(1):7-15.
18. Annett S. Drugs of Abuse [unpublished lecture notes]. Royal College of
Surgeons in Ireland; notes provided at lecture given on October 1, 2018.
19. Zhou Y, Zhao M, Zhou C, Li R. Sex differences in drug addiction and
response to exercise intervention: From human to animal studies. Front
Neuroendocrinol. 2016;40:24-41.
20. El Hangouche AJ, Jniene A, Aboudrar S, Errguig L, Rkain H, Cherti M et al.
Relationship between poor quality sleep, excessive daytime sleepiness and
low academic performance in medical students. Adv Med Educ Pract.
2018;9:631-8.
21. Rao CR, Darshan B, Das N, Rajan V, Bhogun M, Gupta A. Practice of
physical activity among future doctors: A cross sectional analysis. Int J Prev
Med. 2012;3(5):365-9.
22. World Health Organisation. Prevalence of insufficient physical activity.
2018. [Internet]. [Accessed 2018 October 16]. Available at:
http://www.who.int/gho/ncd/risk_factors/physical_activity_text/en/.
23. Lau JS, Adams SH, Irwin Jr CE, Ozer EM. Receipt of preventive health
services in young adults. J Adolesc Health. 2013;52(1):42-9.
Donate or do not?
MATTHEW PATEL looks at evidence-based strategies
to increase rates of organ donation.
Introduction
Organ transplantation is the standard of care for patients with
end-stage organ failure.1 A rapid increase in the number of patients
requiring organ transplantation, without an increase in the number of
organ donors, has resulted in long transplant waiting lists and a
scenario where patients often die before receiving a transplant.2
Organ donation rates remain frustratingly low, despite
encouragement for donation from governments, the public, and
healthcare professionals.3
Efforts to increase organ donation have largely failed, due to a lack of
theoretical framework and standardisation.4 Expanding the scope of
organ donation requires a multidisciplinary approach, grounded in
ethics, considering the multitude of factors and stakeholders
involved.4,5 Evidence-based strategies involving legislation, education,
donation co-ordinators, hospital committees, and population-based
donor programmes have the potential to increase organ donation rates
and eliminate the disconnect between organ supply and demand.
Legislative strategies
Changing legislation from an opt-in system, where citizens must
register to donate their organs, to an opt-out system, where
individuals are automatically regarded as donors and must convey
their objection to donating, has been shown to successfully increase
organ donation rates.6-8 An opt-out donor strategy can also be
justified using tacit and normative consent.9
By default, opt-out legislation registers individuals without a
preference regarding donation as organ donors, and has been shown
to increase family consent rates when an individual’s donation wishes
are unknown.9 This strategy also increases donation among the
young, by enabling the procurement of organs from those who had
not registered to donate prior to an untimely death.8 Opt-out
legislation leads to the belief that the government recommends that
its citizens become donors, altering public opinion and influencing
decision-making.6 Perhaps most effectively, opt-out legislation
removes the barrier between an intention to become an organ donor
and the act of legally doing so, providing significant support for this
model.4,6
Opt-out legislation is supported by medical professionals, who believe
it is the most effective method of increasing donation rates, as well as
the majority of the public, who support the policy if it is ethically
implemented.4,8 Furthermore, a ‘soft opt-out’ system, present in most
countries with this legislation, allows family wishes to override the
legislation, eliminating potential legal or ethical issues.4,6,7 The effect
of opt-out legislation is modulated by the public’s awareness of the
legislation, their knowledge surrounding organ donation, and their
level of trust in the healthcare system, thus making public education
and action plans crucial in order to ensure optimal impact.6
Consequently, opt-out legislation can only be effective in increasing
organ donation rates if a platform of social support for this change in
legislation is established, through developing strong public trust in
the healthcare system and implementing widespread public
awareness campaigns.8
Ireland stands as one of the few countries in the European Union that
does not currently have opt-out legislation. However, this could
change in 2019 as the Irish cabinet will consider the Human Tissue
Bill, proposing opt-out legislation.10 If the Bill is passed, the deceased’s
family will still be involved in the decision and their wishes will be
respected.10
The role of education
Knowledge about organ donation and the donation process is a
strong predictor of willingness to be a donor.11 Predictably, a lack of
knowledge about organ donation results in decreased donation
rates.1 Education is a proven strategy, and increases family consent
rates and donor conversion rates by 75%.2,7,12-14 If skilfully
implemented, education compares favourably to other strategies, is
cost-effective, and can be easily implemented.15
Knowledge about donation lessens the fear and sense of uncertainty
surrounding donation, while also dispelling myths and clarifying
misconceptions.2 Education should also focus on the positive impacts
of donation, such as saving a life and immeasurably aiding a family.16
Successful programmes must cover a wide variety of topics and
should be longitudinal, since organ donation is time sensitive and
potential donors are more likely to consent if they have seen
information recently and repeatedly.7 Examples of effective education
methods include home-based video programmes and outreach
efforts involving the mass media, schools, places of religious worship,
and hospitals.2,7,12-14
Informative outreach campaigns have a major impact on minority
and religious populations, where there is a need for cultural, religious,
language, and communication sensitivity. One strategy to educate
minority groups is to collaborate with donation co-ordinators of
specific ethnic backgrounds who can target certain populations using
programmes in multiple languages.17 As all major religions accept
organ donation and leave this decision to the individual, religious
leaders, using a faith-based approach, can inform individuals and
communities on the religious views of donation to help quell
misconceptions.12,13
Healthcare staff at transplant centres have specialised training and
play critical roles in identifying potential donors and determining their
donation preferences.5 While healthcare providers are involved in
various processes of organ donation and believe it is within their duty
of care, many are uncomfortable performing tasks related to
donation.2,14 It is therefore vital to educate healthcare professionals
regarding strategies for initiating donation conversations; by
enhancing their readiness to identify potential donors and provide
information related to consent, rates of organ donation will
rise.3,15,18,19 Increasing healthcare professionals’ confidence in
completing tasks associated with the donation process will strengthen
patient trust and help to overcome donation barriers.2,16 There are no
reports of reduced patient satisfaction when healthcare providers
initiate educational discussions about organ donation, thus
eliminating the worry that informing patients about donation could
lead to dissatisfaction.2
Healthcare staff education should begin in training programmes and
continue throughout practice, where it has the greatest impact.16
Staff with key roles in the donation process should receive mandatory
specialised training to ensure that they can provide information on all
aspects of donation.20
Population-based strategies
Increasing donor registration increases organ donation rates by
enlarging the donor pool.7 Low donor registration rates are attributed
to non-cognitive factors, which are intricately linked to an individual’s
willingness to donate organs.13 Non-modifiable, non-cognitive factors
include age, ethnicity, gender, and religion, with low organ donation
rates seen in older, minority, and highly religious populations.1,11,16,21
Modifiable non-cognitive factors include medical mistrust, a sense of
fear and anxiety about organ donation, and a reluctance to
contemplate one’s own death or discuss one’s wishes after death with
relatives.13 Non-cognitive organ donation perceptions can be
positively influenced through education and awareness campaigns.
Cognitive factors include donor socioeconomic status, organ
donation perceptions, and financial incentives for donors.16 Financial
incentives to increase donor registration include direct payment for
organs, reimbursement for lost income, and payment of donor
funeral expenses.7,16,22 However, introducing financial incentives for
organ donation can create ethical dilemmas, despite well-established
direct payment for plasma donations in the USA.23 In countries with
opt-in legislation, the challenges of registration are often cited as a
disincentive.6 Thus, a simplified registration process is needed to
increase donor registration rates.7
The majority of organ donors are cadaveric donors who have been declared brain
dead.24 Broadening the donor pool through the introduction of
different donor types, such as living donors and non-heart-beating
donors, has increased donation rates.1,3 These donor eligibility
amendments have been publicly well received, with greater
willingness for donation after cardiac death.3 The non-medical public
generally understands cardiac death better than brain death,
explaining increased willingness to donate a family member’s organs
after cardiac death.3 The use of living donors also has many
advantages, including shorter wait times and hospital stays, and
improved post-transplant outcomes.18
Donation co-ordinators
Donation co-ordinators also play a crucial role in increasing donation
rates.1 They ensure that all potential donors are identified and
monitored, and provide education sessions and family support
throughout the donation process.4,20 Donation co-ordinators
synchronise tasks and aid communication between the emergency,
intensive care unit (ICU), and surgical transplant teams.20
Organ donation rates are greater when co-ordinators, rather than the
healthcare provider, connect with families regarding donation. This is
explained by the separation between the notification of brain death
(relayed by the healthcare practitioner) and request for donation,
carried out by the co-ordinator.11,18,20 Donation co-ordinators from a
variety of backgrounds can conduct culturally sensitive discussions with
families and patients, further increasing donation rates.21 Co-ordinators
act as a neutral third party to provide family support and education,
and have a unique role in the effort to increase organ donation rates.20
Hospital-based strategies
Hospitals play a vital role in the donation process during the organ
procurement stage.22 For hospitals to have a positive impact on
donation, there must be a formal organ procurement programme
that supports staff and donors, and promotes a positive attitude
towards organ donation.25
Hospitals specialising in transplant services with neurosurgical and
emergency departments see a greater number of potential donors,
leading to a higher organ procurement rate compared to general
hospitals.5,22 Resource limitations necessitate that organ donor
programmes be allocated at regional transplant centres and large
centres with neurosurgical programmes, matching resources to
potential donors.22 However, general hospitals can increase their organ
donation rates by providing staff training and implementing best
practices for donation.5,25 Increasing ICU bed capacity positively affects
a hospital’s donation rate by expanding the facilities available to support
potential donors and ensure that their organs remain viable for transplant.14
Administrative barriers, such as poor communication between healthcare
teams involved in donor management or improper identification of
potential donors, negatively affect organ donation rates.19 However,
sharing donor management techniques and creating additional transplant
centres with specialised units can help to overcome this.
Conclusion
Increasing organ donation rates is critical to saving the lives of
patients with end-stage organ failure. Intensified attention to this
issue has increased the number of potential donors who ‘convert’ and
become true registered donors, and has increased the number of
organs procured per donor.12 Evidence-based approaches to increase
organ donation include legislative change to an opt-out system,
liberalising eligibility criteria, and increasing the population of
potential donors.4,6-8 Education regarding organ donation fosters
discussion, dispels myths, enhances public trust, and promotes the
benefits of donation.2 Working to change non-cognitive donor
factors, such as mistrust in the healthcare system and a sense of
anxiety regarding donation, can further increase recruitment of
potential donors. Hospitals can work towards increasing donation
rates by sharing donation best practices, providing training,
promoting a positive attitude towards donation among staff, and
hiring donation co-ordinators.5,25 For maximal effect, efforts to
overcome barriers to organ donation should combine the multitude
of evidence-based approaches.
References
1. Brown CV, Foulkrod KH, Dworaczyk S, Thompson K, Elliot E, Cooper H et
al. Barriers to obtaining family consent for potential organ donors. J
Trauma. 2010;68(2):447-51.
2. Thornton JD, Sullivan C, Albert JM, Cedeno M, Patrick B, Pencak J et al.
Effects of a video on organ donation consent among primary care patients:
a randomized controlled trial. J Gen Intern Med. 2016;31(8):832-9.
3. Bastami S, Matthes O, Krones T, Biller-Andorno N. Systematic review of
attitudes toward donation after cardiac death among healthcare providers
and the general public. Crit Care Med. 2013;41(3):897-905.
4. Rudge CJ, Buggins E. How to increase organ donation: does opting out
have a role? Transplantation. 2012;93(2):141-4.
5. Redelmeier DA, Markel F, Scales DC. Organ donation after death in
Ontario: a population-based cohort study. CMAJ. 2013;185(8):E337-44.
6. Shepherd L, O’Carroll RE. Awareness of legislation moderates the effect of
opt-out consent on organ donation intentions. Transplantation.
2013;95(8):1058-63.
7. Salim A, Malinoski D, Schulman D, Desai C, Navarro S, Ley EJ. The
combination of an online organ and tissue registry with a public education
campaign can increase the number of organs available for transplantation. J
Trauma. 2010;69(2):451-4.
8. Ugur ZB. Does presumed consent save lives? Evidence from Europe. Health
Econ. 2015;24(12):1560-72.
9. Saunders B. Normative consent and opt-out organ donation. J Med Ethics.
2010;36(2):84-7.
10. McGee H. Cabinet to finally consider ‘opt-out’ organ donation law. The
Irish Times. 2019. [Internet]. [cited 2019 January 3]. Available from:
https://www.irishtimes.com/news/health/cabinet-to-finally-consider-opt-out
-organ-donation-law-1.3746347.
11. Wakefield CE, Reid J, Homewood J. Religious and ethnic influences on
willingness to donate organs and donor behavior: an Australian
perspective. Prog Transplant. 2011;21(2):161-8.
12. Davis BD, Norton HJ, Jacobs DG. The Organ Donation Breakthrough
Collaborative: has it made a difference? Am J Surg. 2013;205(4):381-6.
13. O’Carroll RE, Foster C, McGeechan G, Sandford K, Ferguson E. The “ick”
factor, anticipated regret, and willingness to become an organ donor.
Health Psychol. 2011;30(2):236-45.
14. Cohen B, Wight C. A European perspective on organ procurement: breaking
down the barriers to organ donation. Transplantation. 1999;68(7):985-90.
15. Whiting JF, Kiberd B, Kalo Z, Keown P, Roels L, Kjerulf M. Cost-effectiveness
of organ donation: evaluating investment into donor action and other
donor initiatives. Am J Transplant. 2004;4(4):569-73.
16. Rasiah R, Manikam R, Chandrasekaran SK, Naghavi N, Mubarik S, Mustafa
R et al. Deceased donor organs: what can be done to raise donation rates
using evidence from Malaysia? Am J Transplant. 2016;16(5):1540-7.
17. Biehl M, Kashyap R, McCormick JB. Education of minorities to increase
organ donation consent rate. Crit Care Med. 2013;41(10):e298-9.
18. Roels L, Smits J, Cohen B. Potential for deceased donation not optimally
exploited: donor action data from six countries. Transplantation.
2012;94(11):1167-71.
19. Shaheen FA, Souqiyyeh MZ. Increasing organ donation rates from Muslim
donors: lessons from a successful model. Transplant Proc. 2004;36(7):1878-80.
20. Friele RD, Coppen R, Marquet RL, Gevers JK. Explaining differences
between hospitals in number of organ donors. Am J Transplant.
2006;6(3):539-43.
21. Tong A, Chapman JR, Wong G, Josephson MA, Craig JC. Public awareness
and attitudes to living organ donation: systematic review and integrative
synthesis. Transplantation. 2013;96(5):429-37.
22. Howard DH, Siminoff LA, McBride V, Lin M. Does quality improvement
work? Evaluation of the Organ Donation Breakthrough Collaborative.
Health Serv Res. 2007;42(6Pt1):2160-73; discussion 2294-323.
23. Wigmore SJ, Forsythe JL. Incentives to promote organ donation.
Transplantation. 2004;77(1):159-61.
24. Domen R. Paid-versus-volunteer blood donation in the United States: a
historical review. Transfus Med Rev. 1995;9(1):53-9.
25. Dalley K. Improving organ donation rates – what can be done? Nurs Crit
Care. 2008;13(5):221-2.
Introduction
Cigarette smoking is the leading cause of preventable death and
disability worldwide.1 Electronic nicotine delivery systems (ENDS),
also known as e-cigarettes or ‘vapes’, have been gaining significant
traction and momentum in two main populations: adults attempting
smoking cessation, and adolescents experimenting with nicotine
products.1,2 ENDS are battery-operated devices that form a vapour
by heating liquids containing nicotine, flavouring, and other
chemicals; this vapour is then inhaled by the user.3 This led to the term
‘vaping’ to describe the act of using an ENDS. There are three
generations of ENDS, with each improving on its predecessor by
offering the user new and enticing features such as rechargeability,
compact size, and personalised touches.3
While ENDS are less harmful than combustible cigarettes (CCs), which
contribute to the deaths of two in three long-term users, this does not
mean that ENDS are safe.4 Inaccurate perceptions of ENDS can lead to
harmful behaviour in those attempting to quit smoking and may even
encourage non-smokers to use them.2 ENDS entered the market without
government regulation as they were produced by small companies, and
they are currently not approved by the United States (US) Food and Drug
Administration (FDA) for smoking cessation.5
Since ENDS entered the market in the US and Europe in 2006,
their surging popularity raises questions regarding long-term safety
compared to CCs.4,6 As future healthcare workers, it is vital to be informed
about the risks associated with e-cigarettes in order to appropriately
counsel and care for patients seeking advice regarding these devices.
Adverse effects of ENDS
While no studies have yet established the long-term effects of vaping,
it is generally accepted that ENDS are less harmful than traditional
smoking.7 Propylene glycol/glycerol are commonly used ingredients in
ENDS liquid. A recent study reported that glycol compounds formed during
vapourisation of ENDS liquid generate pulmonary irritants, and that heated,
aerosolised propylene glycol may form propylene oxide, a potential
carcinogen; however, there has been little research into the overall safety
or carcinogenic effects of these ingredients.8,9 The levels of toxic and probable
carcinogens vary depending on which ENDS device and liquid are used.10
As a result, there is significant unpredictability regarding potential side
effects associated with various ENDS products. This further highlights the
need to standardise the ingredients included in these products in order to
protect user health and safety. Of note, the Centers for Disease Control
and Prevention (CDC) website clearly states that “E-Cigarettes are not
safe for youth, young adults, pregnant women or adults who do not
currently use tobacco products”,11 and the American Academy of
Pediatrics states that “nicotine may disrupt the formation of circuits in the
brain that control attention and learning”.12
ENDS also carry a risk of burns or other injuries due to device malfunction.
This is owing to the combustible nature of the battery-powered device,
which may result in burns to the thigh, inner groin, or hand during use or
following incorrect storage of the product in users’ pockets.13 On occasion,
nicotine exposure via routes other than inhalation, such as oral, parenteral,
or skin contact, has been known to cause seizures, anoxic brain injury,
lactic acidosis, and even death.13
Cross-sectional and longitudinal analyses have shown that symptoms
of depression were associated with ENDS use.14 One study explored a
dose-dependent relationship between ENDS use and depression, and
revealed that increasing nicotine concentrations within the product
were positively associated with depressive symptoms.14 This is
especially concerning given that many ENDS users are teenagers, and
this sudden increase in ENDS use may lead to an increased incidence
of depression in this population. Low self-esteem is one of the factors
associated with increased vulnerability to depression.15 This is of
particular concern in adolescents, who often become more
self-conscious about their image and how they are perceived by peers.
One study found that although 63% of individuals had accurate harm
perceptions of ENDS use relative to CCs, only 9% accurately identified
nicotine as one of the less harmful chemicals found in
cigarettes.2 A separate study reported that a number of the compounds
used in ENDS liquid had warning codes classified according to the
Globally Harmonized System (GHS) of Classification and
Labelling of Chemicals.16 Forty-one of the most common compounds
used had warning codes, while 11 had danger codes, which further
demonstrates the harmful nature of compounds used in ENDS.16
Additionally, non-smokers exposed to second-hand vapour from
ENDS were found to have tobacco-specific nitrosamine in their
urine.17 This highlights that e-cigarettes affect not only the user, but
indeed those around them as well, similar to CCs. Furthermore, the
effect of flavourings on respiratory function is still unknown.18
What are ENDS used for?
Smoking cessation
ENDS have the ability to replicate the pharmacological and
sensorimotor aspects of CCs.4 As a result, one of the most commonly
reported reasons for using ENDS is to aid smoking cessation.1,4 The
majority of studies analysing ENDS use for smoking cessation have found
no conclusive evidence to confirm or deny their effectiveness; therefore,
ENDS cannot be recommended as a tool to aid those attempting to quit.1
Two separate studies have shown that ENDS users were twice as likely
to still be smoking at the time of follow-up compared to
non-users.19,20 Patients report using ENDS to manage nicotine
cravings and withdrawal; however, the majority of individuals
continue to use both products. One study suggested that ENDS users
were more nicotine dependent than non-users;21 this may be
explained, in part, by the variable nicotine content in ENDS products
and users’ belief that ENDS are less harmful than CCs, leading to
increased ENDS use. Despite decreased CC use, low levels of cigarette
smoking are still associated with significant health risk.21 In addition,
the nicotine contained in ENDS is vaporised and thus absorbed by the
body within seconds, similar to CCs and significantly faster than
nicotine replacement products intended to aid smoking cessation
such as chewing gum or patches.3,4 As a result, ENDS have potent
addictive properties, which are most pronounced in teenagers.3,5
Weight loss
Nicotine, as found in both CC and ENDS products, has
appetite-suppressant effects and can help curb cravings for sweet
foods.22 Smoking has been shown to increase resting energy
expenditure and induce an acute anorexic effect, with increasing doses
of nicotine being positively associated with satiety.23,24 In addition, the
behaviour of smoking e-cigarettes, like traditional cigarettes, can be
used as a distraction from or as a substitute for eating.22
Unique to ENDS is a wide array of flavoured vaping liquids, some of
which mimic high-calorie or high-fat foods and beverages that an
individual may crave; this may help those attempting to lose weight to
comply with a restricted diet.25 Some companies specifically market
ENDS for weight loss. For example, Vapor Diet bills itself as “USA’s hottest diet,”
enticing customers to purchase ENDS liquid in indulgent flavours such
as pizza, cupcakes, and cocktails, alongside phrases such as “zero
calories, zero guilt, tastes like you are eating your favourite foods”.25
Vaportrim encourages users to “inhale flavour, curb cravings, lose
weight”.25 These novel and unique features may be enticing to those
looking for an alternative to the traditional methods of weight loss.
The influence of e-cigarettes on young people
Pepper et al. reported that the most common reasons adolescents
experimented with ENDS were curiosity (54.4%), appealing flavours
(43.8%), and the influence of family or friends (31.6%).26 Peers are
the major source for ENDS procurement in teenagers.27 By design,
ENDS are easier to use and share among friends compared to CCs,
further contributing to their rising popularity.28 In fact, teenagers
account for over 70% of ENDS sales in the US.29
In particular, the willingness of middle school and high school
students to experiment with ENDS was aided by the product’s easy
availability at malls and convenience stores.26 Adolescents are
increasingly exposed to ENDS in various locations including petrol
stations, shopping centres, magazines, and online resources.28 ENDS
companies have also commenced aggressive marketing campaigns
on popular websites such as Facebook, Instagram, and YouTube.30
This highlights the importance of closely monitoring how marketing
affects ENDS use, and the influence of social media on young people.
Other factors that drive vaping include ENDS being perceived as
“cool” and being associated with a positive social image through
experimenting with flavours or performing “smoke tricks”.26
Younger individuals are also at higher risk of being misinformed about
the harms associated with nicotine and ENDS. Indeed, young people
believe ENDS are less harmful, cheaper, and better smelling than
CCs.26 There are no ashes or ashtrays to clean up, and because the
odour does not linger, a vaping habit can easily be hidden from
parents and teachers.26 While family use of ENDS is associated with
more accurate harm perceptions, it is also associated with increased
likelihood of ENDS use among young people.2
Krishnan-Sarin et al. discovered that many adolescents reported ENDS use
as their first tobacco product.28 ENDS are the leading tobacco product
used by young people in the US, with over 80% of these users claiming
they were initially drawn to the product by the wide variety of liquid
flavours.25,27 However, ENDS are perceived as being less satisfying than
CCs.26 Alarmingly, one study found that, upon follow-up, users who
previously used nicotine-free or low-dose nicotine ENDS fluid were now
using higher doses of nicotine, suggesting that they may be on a path
towards nicotine dependence.28,31 This further perpetuates the concern
that ENDS may be a gateway to traditional smoking.
Current guidelines and safeguards
At present, one must be at least 18 years of age to legally purchase ENDS
cartridges in the UK.32 The European Union (EU) limits the amount of
nicotine contained in each ENDS device, but the US does not, meaning
that a varying amount of nicotine may be contained in each device.33
However, following pressure from the FDA, Juul Labs (the company that
manufactures the popular Juul e-cigarettes) announced that it would
stop its social media promotions and suspend sales of certain flavours.34
In addition, the EU and Canada restrict ENDS advertising to young
people; the EU requires graphic health warnings, while Canada bans
confectionary ENDS flavours in an attempt to deter adolescents.35
Conclusion
Nearly everywhere you look – an ad on your phone, a poster at the bus
stop, a newspaper headline – the subject of ENDS will appear.
E-cigarettes are undeniably popular, as both a topic for debate and a
gadget for smoking, and there is a growing divide between ENDS
supporters and critics. Federal and local policies addressing adolescents’
access to ENDS, as well as programmes to prevent an exponential rise
in ENDS use among young people, are urgently needed.28 The rapid
emergence of the ENDS industry must be met with an informed public
health response. Research on product design, exposure to toxins and
carcinogens, metabolic effects, initiation of young people, and the role
of ENDS in smoking cessation is needed to counteract deceptive
marketing and inform regulatory changes.30
Concurrently, medical education should be updated to reflect and
accommodate the ever-changing trends in society and healthcare. As new
products and habits gain popularity, there must be concomitant
amendments to the medical curriculum to prepare future healthcare
professionals for scenarios they may encounter in everyday clinical practice.
References
1. Unger M, Unger DW. E-cigarettes/electronic nicotine delivery systems: a word
of caution on health and new product development. J Thorac Dis.
2018;10(Suppl. 22):S2588-S2592.
2. East K, Brose LS, McNeill A, Cheeseman H, Arnott D, Hitchman SC. Harm
perceptions of electronic cigarettes and nicotine: A nationally representative
cross-sectional survey of young people in Great Britain. Drug Alcohol Depend.
2018;192:257-63.
3. Grana R, Benowitz N, Glantz SA. E-cigarettes: a scientific review. Circulation.
2014;129(19):1972-86.
4. Cahn Z, Siegel M. Electronic cigarettes as a harm reduction strategy for
tobacco control: a step forward or a repeat of past mistakes? J Public Health
Policy. 2011;32(1):16-31.
5. Ebbert JO, Agunwamba AA, Rutten LJ. Counseling patients on the use of
electronic cigarettes. Mayo Clin Proc. 2015;90(1):128-34.
6. Hajek P, Etter JF, Benowitz N, Eissenberg T, McRobbie H. Electronic cigarettes:
review of use, content, safety, effects on smokers and potential for harm and
benefit. Addiction. 2014;109(11):1801-10.
7. Britton J, Arnott D, McNeill A, Hopkinson N. Nicotine without smoke –
putting electronic cigarettes in context. BMJ. 2016;353:i1745.
8. Erythropel HC, Jabba SV, DeWinter TM, Mendizabal M, Anastas PT, Jordt SE et
al. Formation of flavorant-propylene glycol adducts with novel toxicological
properties in chemically unstable e-cigarette liquids. Nicotine Tob Res. 2018;Oct 18.
9. Laino T, Tuma C, Moor P, Martin E, Stolz S, Curioni A. Mechanisms of propylene
glycol and triacetin pyrolysis. J Phys Chem A. 2012;116(18):4602-9.
10. Kosmider L, Sobczak A, Fik M, Knysak J, Zaciera M, Kurek J et al. Carbonyl
compounds in electronic cigarette vapors: effects of nicotine solvent and
battery output voltage. Nicotine Tob Res. 2014;16(10):1319-26.
11. Centers for Disease Control and Prevention. About Electronic Cigarettes
(E-Cigarettes). 2018. [Internet]. [updated 2018 October 23]. Available from:
https://www.cdc.gov/tobacco/basic_information/e-cigarettes/
about-e-cigarettes.html.
12. American Academy of Pediatrics. Section on Tobacco Control [press release].
Itasca, Illinois, 2018.
13. National Academies of Sciences. Public Health Consequences of E-Cigarettes.
Stratton K, Kwan LY, Eaton DL (eds.). Washington, DC; The National
Academies Press, 2018: 774p.
14. Wiernik E, Airagnes G, Lequy E, Gomajee R, Melchior M, Le Faou AL et al.
Electronic cigarette use is associated with depressive symptoms among
smokers and former smokers: Cross-sectional and longitudinal findings from
the Constances cohort. Addict Beh. 2018;90:85-91.
15. Masselink M, Van Roekel E, Oldehinkel AJ. Self-esteem in early adolescence as
predictor of depressive symptoms in late adolescence and early adulthood:
the mediating role of motivational and social factors. J Youth Adolesc.
2018;47(5):932-46.
16. Girvalaki C, Tzatzarakis M, Kyriakos CN, Vardavas AI, Stivaktakis PD, Kavvalakis
M et al. Composition and chemical health hazards of the most common
electronic cigarette liquids in nine European countries. Inhal Toxicol. 2018:1-9.
17. Martinez-Sanchez JM, Ballbe M, Perez-Ortuno R, Fu M, Sureda X, Pascual JA
et al. Secondhand exposure to aerosol from electronic cigarettes: pilot study
of assessment of tobacco-specific nitrosamine (NNAL) in urine. Gac Sanit. 2018.
18. Barrington-Trimis JL, Samet JM, McConnell R. Flavorings in electronic
cigarettes: an unrecognized respiratory health hazard? JAMA.
2014;312(23):2493-4.
19. QuickStats: Cigarette smoking status among current adult e-cigarette users,
by age group – National Health Interview Survey, United States, 2015.
MMWR Morb Mortal Wkly Rep. 2016;65(42):1177.
20. Borderud SP, Li Y, Burkhalter JE, Sheffer CE, Ostroff JS. Electronic cigarette use
among patients with cancer: characteristics of electronic cigarette users and
their smoking cessation outcomes. Cancer. 2014;120(22):3527-35.
21. Inoue-Choi M, Liao LM, Reyes-Guzman C, Hartge P, Caporaso N, Freedman
ND. Association of long-term, low-intensity smoking with all-cause and
cause-specific mortality in the National Institutes of Health-AARP Diet and
Health Study. JAMA Intern Med. 2017;177(1):87-95.
22. Kovacs MA, Correa JB, Brandon TH. Smoking as alternative to eating among
restrained eaters: effect of food prime on young adult female smokers. Health
Psychol. 2014;33(10):1174-84.
23. Hofstetter A, Schutz Y, Jequier E, Wahren J. Increased 24-hour energy
expenditure in cigarette smokers. N Engl J Med. 1986;314(2):79-82.
24. Jessen A, Buemann B, Toubro S, Skovgaard IM, Astrup A. The
appetite-suppressant effect of nicotine is enhanced by caffeine. Diabetes Obes
Metab. 2005;7(4):327-33.
25. Kong G, Morean ME, Cavallo DA, Camenga DR, Krishnan-Sarin S. Reasons for
electronic cigarette experimentation and discontinuation among adolescents
and young adults. Nicotine Tob Res. 2015;17(7):847-54.
26. Pepper JK, Ribisl KM, Emery SL, Brewer NT. Reasons for starting and stopping
electronic cigarette use. Int J Environ Res Public Health. 2014;11(10):10345-61.
27. Jamal A, Gentzke A, Hu SS, Cullen KA, Apelberg BJ, Homa DM et al. Tobacco
use among middle and high school students – United States, 2011-2016.
MMWR Morb Mortal Wkly Rep. 2017;66(23):597-603.
28. Krishnan-Sarin S, Morean ME, Camenga DR, Cavallo DA, Kong G. E-cigarette
use among high school and middle school adolescents in Connecticut.
Nicotine Tob Res. 2015;17(7):810-8.
29. Bright J. The price of cool: a teenager, a Juul and nicotine addiction. The New
York Times, November 16, 2018.
30. Noel JK, Rees VW, Connolly GN. Electronic cigarettes: a new ‘tobacco’
industry? Tob Control. 2011;20(1):81.
31. Siqueira LM. Nicotine and tobacco as substances of abuse in children and
adolescents. Pediatrics. 2017;139(1).
32. Rough E, Barber S. The regulation of e-cigarettes. House of Commons Library.
2017:4.
33. European Commission. Revision of the Tobacco Products Directive. 2016.
[Internet]. Available from:
http://ec.europa.eu/health/tobacco/products/revision/index_en.htm.
34. Kaplan S, Hoffman J. Juul suspends selling most e-cigarette flavors in stores.
The New York Times, November 13, 2018.
35. Prochaska JJ. The public health consequences of e-cigarettes: a review by the
National Academies of Sciences. A call for more research, a need for
regulatory action. Addiction. 2018; Oct 22.
Introduction
Indoleamine 2,3-dioxygenase (IDO) is an enzyme that initiates
catabolism of the essential amino acid tryptophan. Early research on
IDO has focused on its role in preventing maternal T cell rejection of
the foetus in pregnancy by inducing T cell tolerance of the allogeneic
foetus.1 When applied to the study of immuno-oncology, it has been
observed that IDO is overexpressed in multiple tumour groups,
including ovarian cancer, colon cancer, and melanoma. Effectively,
cancer cells can manipulate IDO activity to promote local tolerance
and enable immune evasion.2 A limited number of IDO inhibitors
have been discovered. One IDO-1 inhibitor (epacadostat) is currently
in phase III trials for malignant melanoma.2 In research preceding this
study, novel compounds possessing IDO inhibitory activity were
discovered, as depicted in Figure 1. The R1 and R2 moieties were
explored chemically and a number of analogues prepared. The aim
of this study was to assess the inhibition rates of these compounds
on the IDO enzyme and their anti-proliferative effects on cancer cell
lines.
Methods
Enzymatic assays were performed to establish the inhibition rates of
the compounds on IDO during in vitro tryptophan metabolism. For
the cell-based analysis, epidermoid carcinoma cells were first
cultured and then treated with the test inhibitors. Following
treatment, cell viability assays were performed to determine the
cytotoxic and anti-proliferative effects of each compound on the
cancer cell lines.3
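A cell-based readout of this kind is usually summarised as an IC50, the inhibitor concentration that halves viability. As a minimal sketch (not the study's actual analysis – the data, concentrations, and parameter values below are invented), a four-parameter logistic curve can be fitted to viability measurements:

```python
# Hypothetical sketch of IC50 estimation from a cell viability assay.
# All numbers are synthetic; none come from the study described above.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Percent viability (vs untreated control) at increasing inhibitor doses (µM).
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
viability = hill(conc, bottom=5.0, top=100.0, ic50=2.0, slope=1.2)

# Bounds keep IC50 positive so the fractional power stays well defined.
params, _ = curve_fit(hill, conc, viability, p0=[1.0, 95.0, 1.0, 1.0],
                      bounds=([0.0, 50.0, 1e-3, 0.1],
                              [50.0, 150.0, 1e3, 10.0]))
bottom, top, ic50, slope = params
print(f"estimated IC50 = {ic50:.2f} µM")
```

On noiseless synthetic data the fit simply recovers the parameters used to generate it; with real assay data, replicates and residual checks would be needed. The study's observation that the IC50 was "not reached" corresponds to a viability curve that never falls to 50% within the tested concentration range.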
Results
The enzymatic assays demonstrated satisfactory IDO inhibition rates
in three of the new compounds tested. It was hypothesised that the
cell-based assays would demonstrate corresponding anti-proliferative
effects on the cell lines; however, the IC50 was not reached in the
cell-based assays for the inhibitors tested. Since IDO inhibition was
achieved with three different compounds in the enzymatic assays, it
is presumed that the viability of the cell cultures was suboptimal, thus
accounting for the inconsistent findings in the cell-based studies.
Conclusion
Three novel IDO enzyme inhibiting compounds were discovered,
with a proven satisfactory inhibition rate on the metabolism of
tryptophan in vitro. Further cell-based studies are needed in order to
ascertain the anti-proliferative effects of these compounds and to
expand upon the role of IDO inhibition therapy in immuno-oncology.
Acknowledgements
The author wishes to acknowledge the RCSI Research Summer School
for the funding provided to undertake this research.
abstract
Novel developments in cancer immunotherapy: an analysis of compounds that inhibit indoleamine 2,3-dioxygenase
Claire Conlon,1 Xu Lumei,2 Prof. Chunhua Qiao2
1RCSI medical student, 2College of Pharmaceuticals, Soochow University, China
FIGURE 1: Replica of IDO inhibitor under investigation.
References
1. Munn DH, Zhou M, Attwood JT, Bonarev I, Conway SJ, Marshall B et al.
Prevention of allogeneic fetal rejection by tryptophan catabolism. Science.
1998;281(5380):1191-3.
2. Yue EW, Sparks R, Polam P, Modi D, Douty B, Wayland B et al. INCB24360
(epacadostat), a highly potent and selective indoleamine-2,3-dioxygenase
1 (IDO1) inhibitor for immuno-oncology. ACS Med Chem Lett.
2017;8:486-91.
3. Van Meerloo J, Kaspers GJ, Cloos J. Cell sensitivity assays: the MTT assay.
Methods Mol Biol. 2011;731:237-45.
abstract
Introduction
Chronic wound infections fail to proceed through the normal phases
of healing in an orderly and timely manner. Evidence suggests that
chronic wound infections often involve biofilms produced by
Staphylococcus aureus and Pseudomonas aeruginosa.1 Bacterial biofilms
are innately resistant to many host immune responses and various
antibiotics. Cold air plasma (CAP) may be an effective therapeutic
alternative to eradicate biofilms within chronic wounds and accelerate
the tissue repair process.2 This study aimed to investigate the efficacy
of CAP for eradicating bacterial biofilms grown under simulated
wound conditions.
Methods
Biofilms of methicillin-sensitive S. aureus (MSSA) strain SH1000,
methicillin-resistant S. aureus (MRSA) strain USA300, and a P. aeruginosa
(PA) strain were grown in simulated wound fluid composed of 50%
foetal calf serum and 50% physiological NaCl in 0.1% peptone water.
The strains were grown in a 96-well microtitre plate at 37°C for either 24
or 48 hours. Each test well was preconditioned with collagen prior to
inoculation. Polymicrobial biofilms composed of MSSA plus PA, or MRSA
plus PA, were grown under the same conditions. Biofilms were stained
with crystal violet, and the absorbance of stained adhered cells was
measured at 492nm. Biofilms were either left untreated (control) or
treated with CAP for 90 seconds, ciprofloxacin (2mg/ml) for 24 hours,
or 30% ethanol (positive control) for 24 hours. Following treatments, a
resazurin assay was used to determine biofilm viability.
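Viability reductions of this kind are expressed relative to the untreated control wells: resazurin fluorescence is proportional to metabolically active cells, so the treated signal is compared to the control signal. A minimal sketch follows (the fluorescence readings are invented, chosen only so that the MSSA figures reproduce the reductions reported in the Results):

```python
# Illustrative calculation, not the study's actual data pipeline.
def percent_reduction(treated_rfu, control_rfu):
    """Viability reduction (%) of a treated biofilm vs untreated control."""
    return 100.0 * (1.0 - treated_rfu / control_rfu)

control = 50_000.0        # mean fluorescence, untreated MSSA biofilm (invented)
cap_treated = 15_000.0    # mean fluorescence after 90 s CAP (invented)
cipro_treated = 38_500.0  # mean fluorescence after 24 h ciprofloxacin (invented)

print(f"CAP:           {percent_reduction(cap_treated, control):.0f}% reduction")
print(f"ciprofloxacin: {percent_reduction(cipro_treated, control):.0f}% reduction")
```

With these invented readings the calculation yields the 70% (CAP) and 23% (ciprofloxacin) reductions reported for MSSA biofilms below.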
Results
All bacterial strains were biofilm positive under these growth
conditions. However, polymicrobial biofilms required longer growth
incubation. CAP treatment for 90 seconds resulted in a 70% and 53%
reduction in MSSA and MRSA biofilm viability, respectively. In
comparison, a 23% and 33% reduction in MSSA and MRSA biofilm
viability, respectively, was seen following treatment with
ciprofloxacin. PA biofilms, mixed biofilms of MSSA plus PA, and
biofilms of MRSA plus PA were more susceptible to treatment with
ciprofloxacin than CAP.
Conclusion
CAP may be an effective alternative in the treatment of chronic
wound infections when MSSA or MRSA is the causative pathogen.
Ciprofloxacin appears to be more effective when P. aeruginosa
biofilms are involved. However, further investigations are required to
determine the benefits of using CAP to treat chronic wound infections
over routinely used antibiotics.
Eradicating bacterial biofilms with cold air plasma under simulated wound conditions
Elaine Tan,1 Dr Stephen Daniels,2 Prof. Hilary Humphreys,1,3 Dr Niall Stevens1
1Department of Clinical Microbiology, RCSI Education & Research Centre, Beaumont Hospital, Dublin
2National Centre for Plasma Science and Technology, Dublin City University
3Department of Microbiology, Beaumont Hospital, Dublin
References
1. Serra R, Grande R, Butrico L, Rossi A, Settimio UF, Caroleo B et al. Chronic
wound infections: the role of Pseudomonas aeruginosa and Staphylococcus
aureus. Expert Rev Anti Infect Ther. 2015;13(5):605-13.
2. Tanaka H, Nakamura K, Mizuno M, Ishikawa K, Takeda K, Kajiyama H et
al. Non-thermal atmospheric pressure plasma activates lactate in Ringer’s
solution for anti-tumor effects. Sci Rep. 2016;6:36282.
abstract
Introduction
Cystic fibrosis (CF) is characterised by recurring pulmonary microbial
colonisation and neutrophil-mediated inflammation. This results in
formation of thick dehydrated mucus secretions, which impair mucociliary
clearance and obstruct airways, forming a breeding ground for
pathogens, namely Pseudomonas aeruginosa and Staphylococcus aureus.
However, the neutrophil-mediated inflammation fails to effectively
eradicate these microbes from the CF lung.1 It is hypothesised that this is
due to abnormal pH regulation within phagosomes in CF, causing
sustained alkalinisation following phagocytosis, which impairs bactericidal
activity.2 The aim of this study was to investigate the impact of this
environment on the antimicrobial activity of primary granule proteins,
α-defensin and neutrophil elastase (NE), against P. aeruginosa and S. aureus.
Methods
The effect of pH on α-defensin and NE activity was assessed using
bactericidal assays at various pH levels. P. aeruginosa (PAO1) was treated
with 2.5µg/ml or 10µg/ml of α-defensin, or 10µg/ml of NE, for 20
minutes at 37°C. S. aureus (8325-4) was treated with α-defensin under the
same conditions. As NE is only effective against gram-negative bacteria, S.
plating the samples in triplicate onto lysogeny broth (LB) agar and
incubated overnight. Colony-forming units (CFUs) were counted and
percentage survival was calculated relative to controls at each pH tested. The
effect of pH on NE activity was determined using a fluorescence resonance
energy transfer (FRET) assay. Statistical significance was determined using a
one-way ANOVA and Bonferroni’s post-test in GraphPad Prism, version 5.0.
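The survival calculation and the across-pH comparison described above can be sketched as follows. This is a hypothetical illustration: the CFU counts are invented, and scipy's one-way ANOVA stands in for the GraphPad Prism analysis (the Bonferroni post-test is omitted):

```python
# Invented CFU counts illustrating percent survival and a one-way ANOVA
# across pH levels; not the study's actual data.
from scipy.stats import f_oneway

def percent_survival(treated_cfu, control_cfu):
    """Survival (%) of treated bacteria relative to the untreated control."""
    return 100.0 * treated_cfu / control_cfu

# Triplicate CFU counts (invented) for P. aeruginosa + α-defensin.
control = [200, 210, 190]
survival_by_ph = {
    5.5: [percent_survival(t, c) for t, c in zip([20, 30, 25], control)],
    6.5: [percent_survival(t, c) for t, c in zip([120, 110, 130], control)],
    7.5: [percent_survival(t, c) for t, c in zip([180, 190, 175], control)],
}

f_stat, p_value = f_oneway(*survival_by_ph.values())
print(f"one-way ANOVA: F={f_stat:.1f}, p={p_value:.4f}")
```

With survival clearly lower at pH 5.5 in these invented counts, the ANOVA rejects the null hypothesis that mean survival is equal across pH levels.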
Results
The antimicrobial effect of α-defensin is optimal at a low pH (5.5) (n=3,
p<0.0023). α-defensin was more effective against P. aeruginosa than
against S. aureus (Figures 1 and 2). Interestingly, NE showed significant
activity at a more alkaline pH, between 8.0 and 6.5 (n=3, p=0.0001), but
had no antimicrobial effect on P. aeruginosa.
Discussion
This study found α-defensin to be highly effective against P. aeruginosa at
an acidic pH of 5.5. However, the alkaline pH of the CF phagosome does
not support α-defensin antimicrobial activity. S. aureus remains resistant to
α-defensin even at increased dosage. While NE is most active at the
present CF phagosome pH, it had no bactericidal effect on P. aeruginosa.
These findings demonstrate the significance of CF phagosomal pH on
bactericidal activity; future therapeutic strategies should consider
optimising neutrophil pH for enhanced antimicrobial activity.
An aberrant phagosomal pH impairs neutrophil microbial killing in cystic fibrosis
Saw Le Er (Stephanie), Karen McQuillan, Dr Emer Reeves
Royal College of Surgeons in Ireland
References
1. Rada B. Interactions between neutrophils and Pseudomonas aeruginosa in
cystic fibrosis. Pathogens. 2017;6(1):10.
2. Browne N, White MM, Milner M et al. Reduced HVCN1 in neutrophils of
individuals with cystic fibrosis alters phagosomal pH and degranulation.
Am J Resp Crit Care Med. 2017;195:A1272.
3. Moore G, Knight G, Blann A. Haematology. New York; United States, 2010.
FIGURE 1: Survival of Pseudomonas aeruginosa (PAO1) treated with α-defensin (2.5µg/ml) at varying pH levels.
FIGURE 2: Survival of Staphylococcus aureus (8325-4) treated with α-defensin (10µg/ml) at varying pH levels.
abstract
Introduction
Over-encapsulation is a technique used to conceal the identity of a
tablet for blinding in randomised controlled trials. It involves inserting
the tablet into an opaque capsule shell along with a backfill excipient
to prevent rattling.1 Clinical investigators must provide evidence that
over-encapsulation does not alter stability for the duration of the
product’s shelf life.2 This study assessed the effect of
over-encapsulation on long-term product storage and active
medication release from modified penicillin V tablets.
Methods
Penicillin V tablets were over-encapsulated in hard gelatin capsules
with a set quantity of backfill talc powder. Batches of unmodified
tablets (positive control) and over-encapsulated tablets (OETs,
modified tablets) were stored for 0, 1, 2, 3, and 6 months (Time 0, 1,
2, 3, 6) at accelerated storage conditions (40 ± 2°C, relative humidity:
75 ± 5%), or for 0 and 12 months in ambient storage conditions.3
Disintegration and dissolution testing was performed according to
the European Pharmacopoeia. OETs modified with hydroxypropyl
methylcellulose (HPMC) backfill were used as a negative control.
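Dissolution results of this kind are judged against acceptance criteria at fixed time points. A minimal sketch of such a check follows; the profiles and the "≥90% released by 30 minutes" rule are invented for illustration and are not the European Pharmacopoeia procedure itself:

```python
# Illustrative acceptance check for a dissolution profile; the rule and
# the percent-release values are invented, loosely mirroring the ranges
# reported in the Results below.
def meets_release_spec(profile, time_min=30, threshold=90.0):
    """profile: dict mapping time (minutes) -> cumulative % released."""
    return profile.get(time_min, 0.0) >= threshold

unmodified = {5: 62.0, 15: 88.0, 30: 99.0}        # invented values
over_encapsulated = {5: 1.0, 15: 42.0, 30: 95.0}  # invented values

for name, profile in [("unmodified", unmodified),
                      ("OET", over_encapsulated)]:
    status = "meets" if meets_release_spec(profile) else "fails"
    print(f"{name}: {status} the 30-minute release rule")
```

Both invented profiles pass the 30-minute rule even though the over-encapsulated profile lags at earlier time points, which parallels the lag in release reported between unmodified tablets and OETs.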
Results
There was 60-90% release of penicillin V during dissolution testing
between 5 and 15 minutes from Time 0, 1, and 2 unmodified tablets
stored under accelerated conditions. There was a lag in release
between unmodified tablets and OETs. Release from the Time 0, 1,
and 2 OETs was nominal at 0 and 5 minutes, 30-50% at 15 minutes,
and >90% at 30 minutes. There was complete release of penicillin V
from both unmodified tablets and OETs stored for 12 months.
Interestingly, penicillin release was slower from unmodified tablets
compared to OETs at 15 minutes. Disintegration times for both
unmodified tablets and OETs were acceptable for compendial
standards at 5 and 11 minutes, respectively.
Discussion
Penicillin V release from unmodified tablets and OETs under
accelerated storage conditions was not significantly affected
according to regulatory specification. Release was more gradual from
the Time 0, 1, and 2 OETs compared to OETs stored for 12 months.
However, compendial release specification was met for both
formulations. It is possible that, under certain conditions, the slight
difference in release results in a lag time to onset of action. This study
shows, for the first time, that inappropriate selection of high-viscosity
backfill impedes optimal release of penicillin V from an
over-encapsulated formulation, highlighting the importance of
backfill selection as a critical formulation parameter.
Effect of long-term storage on disintegration and dissolution of over-encapsulated penicillin V tablets
Joseph Saleh,1 Dr Sam Maher,2 Dr Abel Wakai,3 Mr John C. Hayden2
1RCSI pharmacy student
2Department of Pharmacy, Royal College of Surgeons in Ireland
3Beaumont Hospital, Dublin
References
1. Wan M, Orlu-Gul M, Legay H, Tuleu C. Blinding in pharmacological trials:
the devil is in the details. Arch Dis Child. 2013;98:656-9.
2. Huynh-Ba K, Aubry A. Analytical development strategies for comparator
products. Am Pharm Rev. 2003;6(2):92-7.
3. Boland F, Quirke M, Gannon B et al. The Penicillin for the Emergency
Department Outpatient treatment of CELLulitis (PEDOCELL) trial: update
to the study protocol and detailed statistical analysis plan (SAP). Trials.
2017;18(1):391.