
The Future of Biomedical Science

Mezencev R.

Science has to be understood in its broadest sense,

as a method for comprehending all observable reality,

and not merely as an instrument for acquiring

specialized knowledge.

Dr. Alexis Carrel (1873-1944)

Mezencev R. The Future of Biomedical Science. In: Hulín I, Ostatníková D, Mezencev R. et al. On the Scientific Observation in

Medicine. Bratislava, Slovakia: AEPress, Ltd.; 2015: Chapter 25 (in press); ISBN 978-80-89678-07-5

This text will discuss current trends and future directions in biomedical sciences. In an attempt

to provide a well-supported analysis of this topic I reviewed recent achievements and current

issues in some areas of biomedical sciences and extrapolated this information to predict the

future. As much as I tried to provide an objective and generalizable prediction of the future

trends in biomedical sciences, my analysis is somewhat subjective and limited mostly to my

area of expertise, which includes pharmacology, medicinal chemistry, experimental oncology,

nanoscience and nanotechnology.

Current biomedical sciences display specific trends that are likely to continue at least for some

time in the future. These trends include, among others, (i) the use of methods that generate big

data, (ii) experimental methods for analyses of single cells in large cell populations, (iii)

computational modeling of complex biological systems, (iv) integration of the "omics" data, and

(v) advanced understanding of the structure and function of biologically relevant molecules and

their role in health and disease. Considering the progress recently achieved in the field of

nanotechnology, one can safely predict that nanotechnology will play an important future role

in sciences in general and in biomedical sciences in particular. Furthermore, certain trends in

contemporary sciences strongly suggest that boundaries between biomedical sciences,

delimiting one scientific discipline from another, will be less distinct and will possibly disappear

in time. Consequently, multiple fields of biomedical science, as known today, will eventually

converge into a limited number of highly multidisciplinary fields of biomedical science. A

dominant position in biomedical sciences will be assumed by translational health research that


crosses barriers between basic and clinical research and applies findings from basic biomedical

sciences to prevent, predict or cure disease.

Methods that Generate Big Data

Methods that generate big data in biomedical research can be divided into two broad classes:

multiplex assays and high-throughput (or ultra-high-throughput) assays. While multiplex assays

generate much information (datapoints) from one subject or specimen by simultaneous

measurements of many different analytical signals, high-throughput methods generate one or a

few datapoints from many subjects (specimens) analyzed in parallel. Big data are characterized

by the "three Vs": volume (the amount of data), variety (the diversity of data types) and velocity (the speed at which data are generated relative to the speed at which they can be analyzed).

Multiplex and high-throughput methods are becoming an everyday reality in contemporary

sciences owing to the technological advances that brought miniaturization, automation,
integration and the ability to conduct highly parallel experiments in platforms known as a "lab-
on-a-chip". These platforms make it possible to perform multiplex assays with a few specimens or high-

throughput assays with many specimens in small compact chips that contain serially connected

microfluidic units, each of which is dedicated to a specific laboratory operation, such as reagent

storage and release, homogenization, extraction, incubation and detection. Microfluidics, which

made these achievements possible, represents a multidisciplinary field that builds on the

advances in physics, chemistry, biotechnology and engineering with the aim to design and

develop systems for fully automated operations with very small volumes of liquids in micro-

sized channels with typical dimensions of 1-100 µm.

It was not that long ago that experiments were performed in test tubes with volumes of
a few milliliters. As a result, experiments in biomedical sciences were constrained
with respect to the number of specimens that could be processed and analyzed in a single
experiment. These limitations had a negative impact, for example, on the number of compounds that
could be tested for their biological and pharmacological properties. Some 20 years ago

pharmacologists could test only a tiny fraction of the ever-growing number of known natural and

synthetic compounds discovered by the advances in medicinal chemistry, organic synthesis and

sciences focused on natural products. Since then, this situation has changed considerably, as

high-throughput and ultra-high-throughput methods have been introduced and established
in routine biomedical research and development. These methods employ microplates and

nanoplates instead of test tubes, as well as microplate and nanoplate readers for the detection

and measurement of analytical signals, and robotic workstations for plate handling and liquid

dispensing.


The first step from test tubes to microplates was made by the invention of Hungarian physician-

scientist Dr. Gyula Takátsy, who designed the first 96-well plates in 1951. This
innovation, which was originally motivated by the urgent need to increase the throughput of
virological diagnosis during an ongoing influenza epidemic, demonstrated huge potential for
future progress in biomedical sciences, and in 1966 the 96-well microplates became
commercially available in Europe. Thereafter, the development of higher-density microplates
followed: 384-well and 1536-well microplates became available in 1992 and 1996,
respectively, followed by 3456-well microplates. Eventually, 1997 brought the
first nanoplates, with a density of 9,600 wells per plate. The motivation behind the development

of higher density microplates is, at least in part, related to the need to test many compounds

for their biological/pharmacological activity in a highly parallel format. To give an example, the
successful development of one drug is usually preceded by testing some 10,000-20,000
compounds on average (Ooms F. Curr Med Chem 2000; 7: 141-158). In the specific example of

the multikinase inhibitor sorafenib, which is approved by the FDA for the treatment of papillary

and follicular thyroid carcinoma, hepatocellular carcinoma and renal cell carcinoma, the
development of this drug required high-throughput screening of some 200,000
compounds.

Higher densities of wells in microplates and nanoplates resulted in lower requirements for

sample volumes and considerably decreased amounts of compounds needed for biological and

pharmacological assays. While the working volume of a 96-well plate is 25-350 µL, it is reduced to 10-
150 µL in 384-well plates and to 1-15 µL in 1,536-well plates. The 9,600-well plates with a
working volume of 0.2 µL represent true nanoplates. Such small working volumes make it possible to test,
at reasonable concentrations, the biological activity of compounds available only in very limited amounts. One
can expect that in the near future we will use formats that will enable us to perform even more
experiments simultaneously. This can possibly be achieved through a higher density of wells in
nanoplates, for example, through the development of 11,616-well plates with working volumes
of 80-100 nL. Alternatively, higher parallelization of experiments can be achieved in the future

via further development of microfluidic chips.
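To make the scale of this miniaturization concrete, the short Python sketch below compares plate counts and total assay volumes across the formats quoted above; the 100,000-compound screening campaign and the use of the upper working-volume bounds are assumptions made only for illustration.

```python
# Illustrative arithmetic only: plate counts and total assay volume for a
# hypothetical 100,000-compound screen, one well per compound, using the
# upper working-volume bounds quoted in the text.
import math

max_working_volume_ul = {96: 350, 384: 150, 1536: 15, 9600: 0.2}  # wells per plate: µL per well
compounds = 100_000

for wells, vol_ul in max_working_volume_ul.items():
    plates_needed = math.ceil(compounds / wells)
    total_volume_ml = compounds * vol_ul / 1000
    print(f"{wells:>5}-well format: {plates_needed:>5} plates, ~{total_volume_ml:,.1f} mL total assay volume")
```

With these assumptions, moving from the 96-well to the 9,600-well format reduces the total assay volume from tens of liters to tens of milliliters, which is what makes compounds of very limited availability testable at all.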

The development of microplate- and nanoplate-based experimental platforms was contingent

on advances in biological and material sciences. Material sciences contributed to this effort
through the development of materials (plastics) with optimal optical properties and low
adsorption of tested compounds from their solutions onto microplate well surfaces. Likewise,

biological sciences contributed with their insights that allowed sophisticated surface treatment

of microplate wells in order to achieve the optimal growth conditions for cells used in biological

and pharmacological assays, including solid tumor cells that often require high adherence to

the surface, leukemia cells that require little or no adherence to the surface, and cancer stem

cells that require surfaces with ultra-low adherence. The development of higher density


microplates would not have been accomplished without advances in microelectronics and precision
technologies, since these microplates and nanoplates consist of addressed wells whose positions
need to be known exactly, and the readers have to read the signals from these wells with
adequate speed and assign signals to specific wells without interference from signals in other

wells (signal spillover). Furthermore, while 96-well plates (and to some extent also 384-well

plates) could be handled and filled manually, the use of higher-density plates is entirely
dependent on automated liquid-handling and dispensing systems.
The advances in this field are associated with the well-known Moore's law,

according to which the transistor count (density) of integrated circuits doubles every 18-24
months, resulting in an exponential increase in computing performance (Moore's law and
Dennard scaling). This law has correctly described the trend that started in 1965, and its validity is
predicted to last until about 2020, when we are likely to reach the 7 nm physical constraints to
transistor scaling (production of the current 14 nm technology was reached in 2014).
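As a rough illustration of what an 18-24 month doubling period means over the interval discussed above (1965 to about 2020), the sketch below computes the cumulative growth factor; the starting transistor count cancels out of the ratio, so none is assumed.

```python
# Cumulative growth implied by a fixed doubling period (Moore's law);
# purely illustrative arithmetic, not a model of any particular process node.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times the transistor count multiplies over `years`."""
    return 2 ** (years / doubling_period_years)

years = 2020 - 1965
for months in (18, 24):
    factor = growth_factor(years, months / 12)
    print(f"doubling every {months} months -> ~{factor:.1e}-fold growth over {years} years")
```

Even at the slower 24-month pace this amounts to roughly eight orders of magnitude of growth, which is why liquid handling, plate reading and data processing have been able to keep up with ever denser plate formats.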

Microplates and present-day devices for their handling and reading allowed the testing of

enormous numbers of compounds for their biological and pharmacological properties. For

example, the systems for ultra-high-throughput screening are capable of evaluating more than

100,000 compounds per day, and this achievement has considerably changed pharmacology and
medicinal chemistry, but also biomedical sciences in general. In pharmacology and medicinal
chemistry these changes resulted in a new paradigm: unlike in the past, the limitation no
longer lies in the capacity to test prospective biologically active compounds, but rather in the
limited availability of new natural or synthetic compounds that could be used in drug discovery

and development.

The impact of high-throughput and ultra-high-throughput methods on science in general is

related to the generation of big data, which science needs to address and process into new

insights. The increase in big data, and especially in their volume, generates the need to develop,

perfect and master new methods for their analysis, interpretation, storage, accessibility and

verification of their integrity. Additionally, the availability of big data, whether produced by our

own experiments or by other investigators and deposited in big data repositories, induced

critical changes in the scientific method, specifically a shift from generating and testing hypotheses
(the traditional approach) to searching for patterns and trends in big data.

Limited availability of new biologically active compounds for drug discovery and development

was, at least in part, addressed by combinatorial chemistry. Combinatorial chemistry allows us

to keep pace with the huge capacities of the current high-throughput and ultra-high-

throughput assay systems for evaluation of biological activities. Advances in combinatorial

chemistry were made possible by certain scientific breakthroughs that included the

development of Merrifield's automated solid-phase peptide synthesis accomplished in the early


1980s, as well as the development of nucleic acid synthesizers in the early 1990s, and advances in

organic synthesis, automated chemical synthesizers and software for their control. Modern

combinatorial syntheses that resulted from this development represent a molecular evolution

"in vitro", during which various strategies are employed to combine building blocks of new

molecules in a repetitive and systematic way, leading to a huge number of new, diverse

molecules. The methods of combinatorial chemistry represent a paradigm shift and a huge leap

forward from traditional methods of synthesis of new chemical entities, during which each

chemical synthesis resulted in just one or a few new compounds (one-molecule-at-a-time). For

completeness, it is necessary to add that the development of new drugs is limited not only by the
availability of new prospective compounds, but also by current limitations in our

understanding of drug targets.

In the past, a medicinal chemist in the pharmaceutical industry could usually synthesize about

four new potentially active chemical compounds in one month at a cost of about 7,500
USD per compound. In contrast, a medicinal chemist who employs contemporary combinatorial
chemistry can synthesize approximately 3,300 new compounds in one month for 12 USD per
compound. Combinatorial chemistry made it possible for pharmaceutical companies to create
huge collections of new synthetic compounds, also known as "chemical libraries" (~10^5
compounds per library), which are usually stored in 96-well microplates and used for the search

of new lead compounds in drug discovery.
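The scale of this shift is easy to appreciate with some back-of-the-envelope arithmetic based on the figures just quoted; the sketch below treats the ~10^5-compound library as a round 100,000 compounds and ignores everything except chemist time and per-compound cost, so it is an illustration rather than a costing model.

```python
# Back-of-the-envelope comparison using the per-chemist throughput and
# per-compound costs quoted in the text; illustrative arithmetic only.
import math

approaches = {
    "traditional synthesis":   {"compounds_per_month": 4,     "usd_per_compound": 7_500},
    "combinatorial chemistry": {"compounds_per_month": 3_300, "usd_per_compound": 12},
}
library_size = 100_000  # ~10^5 compounds

for name, a in approaches.items():
    chemist_months = library_size / a["compounds_per_month"]
    cost_usd = library_size * a["usd_per_compound"]
    print(f"{name}: ~{chemist_months:,.0f} chemist-months, ~{cost_usd:,} USD")

# Storage in 96-well microplates, one compound per well
print(f"96-well plates needed to store the library: {math.ceil(library_size / 96):,}")
```

Under these assumptions, one-molecule-at-a-time synthesis would require on the order of 25,000 chemist-months and hundreds of millions of dollars for such a library, whereas combinatorial chemistry brings both figures down by several orders of magnitude.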

However, the enthusiasm brought by the seemingly unlimited ability to synthesize huge numbers of
new chemical compounds, and the possibility of their subsequent evaluation by high-throughput
screening to identify suitable drug candidates in a very short time, resulted in an overestimation
of the potential of these methods to bring new drugs to clinical use. In the 1990s, several big

pharmaceutical corporations fell for the misconception that huge chemical libraries of

compounds synthesized by combinatorial chemistry must necessarily contain prospective lead
compounds for drug development, and following this reasoning the pharmaceutical companies
lost their interest in compounds of natural origin and even terminated several programs
focused on natural compounds. However, this trend was soon found to be faulty, and to
this day we do not have any drug that was discovered by high-throughput screening

among compounds synthesized by combinatorial chemistry. Natural compounds (and naturally

inspired compounds) dominate among certain pharmacologic-therapeutic classes of drugs, e.g.

among anti-infective (78%) and anticancer (74%) agents (Rouhi AM. Rediscovering Natural

Products. Chem Eng News 2003; 10: 13). Natural compounds display unique and highly
diverse structures whose biological activity has been to some extent optimized in the course of

evolution. Taken together, the facts stated above strongly suggest that natural compounds will

continue to serve as lead compounds for future drugs, even though in the process of their

optimization we will use some modern methods, including combinatorial chemistry. In other


words, the chemistry of natural compounds and ethnopharmacology will not lose their significance
in future drug discovery (Ortholand JY, Ganesan A. Natural products and combinatorial
chemistry: back to the future. Curr Opin Chem Biol 2004; 8(3): 271-280).

Similar to high-throughput screening of big chemical libraries, current multiplex assay systems

also generate big data. For example, one can think of gene expression chips (DNA microarrays)

for parallel quantification of the expression of a huge number of genes in one sample. A specific
example of this technology, the Human Genome U133 Plus 2.0 DNA microarray from Affymetrix
Inc., allows quantification of the expression of over 47,000 different mRNA molecules transcribed
from 38,500 well-characterized human genes. For this purpose, these microarrays contain more
than 54,000 complex hybridization probe sets on a single chip. Another example of a multiplex
(and to some extent also high-throughput) technology is represented by the "Infinium
HumanMethylation450 BeadChip" from Illumina, Inc., which allows determination of the methylation
status of cytosines at 485,764 precisely mapped positions of the human genome in 12 parallel
specimens of human genomic DNA per single chip. By means of this technology we can describe

the methylation status of the human genome (also known as the DNA methylome) in great detail,

and this technology also generates big data, which we can presently understand and interpret

only to a limited extent. Nevertheless, the future will undoubtedly bring better understanding

of the relationships between methylation status of specific DNA sequences and their biological

or medical consequences. A recently published study reported the use of this method to
identify and evaluate differences in the methylation status of specific DNA sequences

isolated from frontal cortex specimens of patients with schizophrenia and matched healthy

controls. The results of this investigation supported the existence of significant differences

between DNA methylation in patients and controls, as well as the specific involvement of

differential methylation in CpG islands mapping to promoters of 817 genes, including NOS1,

AKT1, DTNBP1, PPP3CC and SOX10, which have been previously associated with schizophrenia
(Wockner LF, Noble EP, Lawford BR, et al. Genome-wide DNA methylation analysis of human
brain tissue from schizophrenia patients. Transl Psychiatry 2014; 4: e339). This specific

example demonstrates the future potential of multiplex methods that map the human "ome"

(e.g. genome, transcriptome, proteome, methylome and metabolome) in research on diseases

whose molecular pathology has not yet been elucidated.

Big data generated by high-throughput or multiplex methods have some unusual features and

complexities, and their processing and interpretation require advanced mathematical and

statistical methods that are currently still under development and improvement. The new and

improved methods are necessary for (i) identification of systemic errors (bias) in big data, (ii)

data normalization that allows mutual comparisons of big data, (iii) data visualization, and (iv)

data mining that employs statistical, informatics and machine learning tools to recognize

patterns and identify trends in big data.
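As a concrete, if simplified, illustration of points (ii) and (iv) above, the Python sketch below normalizes a synthetic expression matrix and then clusters the specimens without any a priori hypothesis; the data are randomly generated, and the numbers of specimens, genes and clusters are arbitrary choices made for the example.

```python
# A minimal sketch of normalization (ii) and data mining (iv) on synthetic
# expression data; numpy and scipy are assumed to be available.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(seed=0)
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(12, 500))  # 12 specimens x 500 genes

# (ii) normalization: log-transform, then z-score each gene across specimens
logged = np.log2(expression + 1.0)
z = (logged - logged.mean(axis=0)) / logged.std(axis=0)

# (iv) data mining: unsupervised hierarchical clustering of the specimens
tree = linkage(z, method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print("specimen cluster assignments:", clusters)
```

Real pipelines of course add bias diagnostics, more careful normalization (e.g. quantile normalization) and visualization, but the basic pattern is the same: transform the data so that specimens are comparable, and then look for structure without a predefined hypothesis.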


Big data generated by multiplex methods are gradually assuming their role in contemporary
diagnostics, especially in the case of highly heterogeneous diseases whose classification into

meaningful therapeutic and prognostic classes is very complex. One example of a large and

heterogeneous group of diseases is represented by lymphomas. The first attempts towards a

diagnostic classification of lymphomas, which date back to the 1960s, were based solely on the

histological evaluation of tissue sections and they suffered from very low classification

resolution. For instance, the Rappaport classification of lymphomas could distinguish only 9

types of non-Hodgkin lymphomas (NHL), and its clinical relevance was rather limited

considering the fact that the same classes defined by Rappaport included various lymphomas

with very distinct biology and clinical course. During the next 40 years of lymphoma research,

the classification of lymphomas evolved into the current WHO lymphoma classification system from

2008, which distinguishes more than 60 distinct non-Hodgkin lymphomas based on their

histology, cell origin (B/T/NK), immunophenotyping, cytogenetics, clinical data and case history.

Nevertheless, a growing body of new findings indicates that even this modern and

comprehensive system is unlikely to be the last word in the classification of lymphomas. Various

multiplex research methods, e.g. whole genome expression analysis, may contribute to the

modifications, changes and subsequent improvement of this existing classification. For

instance, differential diagnosis between Burkitt lymphoma (BL) and diffuse large B-cell

lymphoma (DLBCL), which require different therapeutic approaches, is not always possible using
the criteria defined in the WHO 2008 classification. This is due to the fact that the translocation
t(8;14)(q24;q32), which is found in the majority of BL cases and considered to be a
pathognomonic anomaly and a diagnostic biomarker for BL, can also be found in about 5-10%
of DLBCL cases. Since DLBCL is diagnosed about 20 times more often than BL, the
probability that a case of an aggressive B-cell lymphoma with t(8;14)(q24;q32) positivity is
DLBCL will be very high (about 33-50%; a simple Bayesian calculation sketched after this paragraph makes this explicit). In these cases, the differential diagnosis based on gene

expression profiling demonstrated promising potential to correctly classify diagnostically

unclear cases between the BL and the DLBCL classes. More specifically, molecular profiling of

lymphomas using Human Genome U133 Plus 2.0 microarrays identified, by statistical methods,
a molecular signature of 217 out of 38,500 profiled genes that allows for proper

diagnostic classification of these two types of non-Hodgkin lymphomas in cases that could not

be distinguished by other diagnostic criteria. Whole-genome multiplex methods that include

gene expression analysis by microarrays or RNAseq, copy number analysis by comparative

genomic hybridization, and next generation DNA sequencing (exome or whole-genome) will

likely become routinely used tools not only in biomedical research but also in clinical

applications. Integration of big data generated by these methods with the information on the

clinical course and response to treatment will most probably contribute to the development of

personalized medicine, which will identify and use therapies optimized to individualized

molecular profiles of specific patients.
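The Bayesian calculation referred to above is sketched here. The inputs are the approximate values given in the text (DLBCL roughly 20 times more common than BL, t(8;14) positivity in about 5-10% of DLBCL and in the large majority of BL); the result should be read as an order-of-magnitude illustration, since it depends on exactly which values within these ranges are assumed.

```python
# Bayes' theorem for the differential diagnosis discussed above;
# inputs are the approximate figures quoted in the text, used illustratively.
def p_dlbcl_given_t814(prevalence_ratio: float, pos_rate_dlbcl: float, pos_rate_bl: float) -> float:
    """P(DLBCL | t(8;14)+) for a given DLBCL:BL prevalence ratio and positivity rates."""
    weight_dlbcl = prevalence_ratio * pos_rate_dlbcl
    weight_bl = 1.0 * pos_rate_bl
    return weight_dlbcl / (weight_dlbcl + weight_bl)

# ~20:1 prevalence, 5% positivity in DLBCL, near-universal positivity in BL
print(round(p_dlbcl_given_t814(20, 0.05, 1.0), 2))  # -> 0.5
```

In other words, with these assumptions roughly half of the t(8;14)-positive aggressive B-cell lymphomas would nevertheless be DLBCL, which is exactly why the translocation alone cannot settle the diagnosis and additional criteria such as gene expression profiling are needed.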


Current focus on the generation and analysis of big data brings new needs and requirements

imposed on practicing scientists. In the not so distant past, the majority of the data generated

by biomedical experiments rarely required more than the Student t-test and ANOVA to test for

statistical significance of differences between experimental and control groups; however,

biomedical science working with big data requires ever more sophisticated mathematical and

statistical methods. Owing to the high-throughput and multiplex experimental methods in

biomedical sciences we have already accumulated large and information-rich datasets that

have been analyzed and interpreted only to a limited extent so far. In other words, we still
have not mined the gold from the data that have already been collected.

While new methods for data processing are being developed, new big data continue to be

produced and their flux will likely increase in the future. Consequently, the future use of high-

throughput and multiplex methods and the surplus of big data will likely cause a shift in

qualifications needed for biomedical scientists. In the future, we may need fewer

"experimentalists" who generate the data and, conversely, we may need more computational

biomedical scientists who can extract relevant biomedical insight from accumulated big data.

Methods that generate big data are gradually changing the scientific method. Contemporary

science is hypothesis-driven, since it explains and predicts phenomena using hypotheses as its

working tool. Hypotheses are produced as tentative and plausible answers to specific questions,

which are subsequently tested by appropriate experimental or observational methods. The gold

standard in present-day scientific inquiry is known as "strong inference", which is based
on the parallel evaluation of a series of alternative hypotheses suggested to explain a certain
phenomenon using a series of crucial experiments. Ideally, each of these crucial experiments
should be able to rule out (falsify) one of these alternative hypotheses (if it is indeed false), and
the single hypothesis that remains unfalsified is considered to be supported (Platt JR.

Strong Inference: Certain systematic methods of scientific thinking may produce much more

rapid progress than others. Science 1964; 146(3642): 347-353). This approach reminds us of the

famous quotation attributed to Sherlock Holmes: "...when you have excluded the impossible,

whatever remains, however improbable, must be the truth." (Doyle AC. The Adventure of the

Beryl Coronet. Strand Magazine, 1892). Intriguingly, this hypothesis-driven paradigm appears to
be changing, at least in part, towards the analysis of big data without a priori formulated hypotheses

(data-driven research) and this trend will likely continue and become more prevalent in future

biomedical sciences. Thus, science returns in a way to its descriptive past, when scientists

selected objects of their interests and probed or otherwise examined them by any means

available to them (without formulating and testing hypotheses) in order to extract as much

information as possible. Modern data-driven research also starts from big data collected

without a priori formulated hypotheses and attempts to uncover insight hidden in these data

and by doing so contribute to the advances of science.


Experimental Methods for Analysis of Single Cells in Populations

Research focused on whole populations of cells, instead of single cells forming these

populations, has been historically the typical approach in biomedical sciences and this approach

is still prevalent at the present time. However, since many cell populations are highly

heterogeneous and in many cases biologically relevant cells are "overshadowed" by large

subpopulations of other cells, the examination of heterogeneous cell populations using

methods that detect weighted averages of signals from individual cellular subpopulations has

limited resolution. As a result, these methods often fail to reveal information that is necessary

for advanced understanding of many diseases on a cellular and molecular level. A typical

example of a disease in which a key role is played by small populations of special cells is
Hodgkin lymphoma (HL). In the two major types of Hodgkin lymphoma (NLPHL and cHL), the truly
malignant cells are the special cells (Hodgkin and Reed/Sternberg cells, and LP cells) that
represent only 1-5% of all cells forming the bulk of the tumor tissue; the majority of the tumor
mass-forming cells represent a complex mixture of lymphocytes, plasma cells, histiocytes,
eosinophils and other non-malignant cells. Quite predictably, it would not be possible to
understand the molecular pathology of Hodgkin lymphomas without isolation and characterization
of these special cells, nor would it be possible to identify important molecules relevant to

their diagnostics and targeted therapy, such as the membrane receptor CD30 in classical

Hodgkin lymphoma (cHL).

Cellular heterogeneity of solid tumors and leukemias implies the necessity to examine distinct

subpopulations of cancer cells or individual cells, which has become more apparent since 1997,

when the evidence for the existence of tumor-initiating cells (or cancer stem cells) was reported

for the first time (Bonnet D, Dick JE. Human acute myeloid leukemia is organized as a hierarchy

that originates from a primitive hematopoietic cell. Nature Medicine 1997; 3: 730–737). In this
seminal paper, cancer stem cells were identified as a tiny but critically important
subpopulation of acute myeloid leukemia (AML) cells representing <0.02% of the total peripheral
blast cell population. Cancer stem cells were later isolated from most solid tumors,

including mammary carcinomas, glioblastoma, colorectal carcinoma, as well as pancreatic and

prostate adenocarcinomas. Research focusing on cancer stem cells has high priority, because

these cells display self-renewal potential, capacity to differentiate into different cell lineages,

high invasive potential, high tumorigenicity, as well as resistance against cancer chemotherapy

and radiotherapy. Since chemotherapy and radiation therapy are less effective against cancer

stem cells than against their more differentiated and biologically less relevant progeny, cancer

stem cells are considered to be the insidious subpopulation of malignant cells that is

responsible, at least in part, for the disease recurrence even after complete clinical or


pathologic response to cancer chemotherapy and/or radiotherapy has been achieved

(Mezencev R, Wang L, McDonald JF. Identification of inhibitors of ovarian cancer stem-like cells

by high-throughput screening. J Ovarian Res 2012; 5(1): 30). Insight from the scientific research

on special cell subpopulations and on individual cells has already brought about fundamental

changes in our understanding of various cancers and this trend is likely to continue in the

future. The high priority assigned to single-cell research, analyses and technologies is
supported by the fact that the U.S. National Institutes of Health dedicated in 2012 more than 90
million dollars to funding the Single Cell Analysis Program (SCAP) with the aim of examining the
unique properties of individual cells and their relationship to disease. The majority of projects
funded by the SCAP use molecular profiling of single cells in order to identify special

cells, or evaluate the molecular and functional changes in single cells induced by disease,

environmental changes or effects related to tissue architecture. For instance, one project

supported by the SCAP program attempts to produce spatial maps reflecting transcriptional

heterogeneity and diversity of the visual, pre-frontal and temporal cortex, with a focus
on RNA expression profiles (mRNAs, miRNAs, piRNAs) in 10,000 spatially defined single cells.
A comprehensive spatial map of the single-cell transcriptome in the human cerebral cortex will

unquestionably produce big data that can substantially contribute to our understanding of

normal and pathological processes in the central nervous system and to the discovery of new

diagnostic and prognostic markers of brain diseases.

Single cell analysis is an important approach to study biological processes that display an

asynchronous character, which means that the cells in a population do not undergo specific

biological processes as one cohort, but distinct cells display distinct phenotypes corresponding

to different stages of some biological process. Examples of known asynchronous processes

include the differentiation of precursor cells of oligodendrocytes, B-lymphocytes and

osteoblasts, and the future use of single cell analysis will likely reveal new asynchronous
biological processes that could not be discovered when biological processes were
examined on the level of whole cell populations. These single cell analysis methods will, more
often than today, be based on non-destructive physical methods that allow evaluation of the
status of single cells in real time and separation of cells meeting predefined criteria from other cells
for use in further experiments. For instance, the differentiation of mesenchymal stem cells

to osteoblasts is accompanied by changes in the actin cytoskeleton and consequently by

changes in the mechanical properties of single cells that can be identified by atomic force
microscopy (AFM). It has recently been shown that changes in the mechanical properties of

single cells are a much better marker of osteoblastic differentiation than the previously used

protein markers BSP and OCN that had been detected by immunofluorescence on the whole

cell population level. A new microfluidic platform that is presently under development will

allow separation of cells from heterogeneous populations based on their mechanical
compliance (deformability) and facilitate advanced research in various biomedical fields,


including experimental oncology, because special subpopulations of cells with
higher deformability (elasticity) have demonstrated higher tumorigenicity, invasive and
migratory potential, and ability to form metastases than mechanically less compliant cells from

the same population (Xu W, Mezencev R, Kim B. et al. Cell Stiffness Is a Biomarker of the

Metastatic Potential of Ovarian Cancer Cells. PLoS ONE 2012; 7: e46609). Another example of

an innovative analysis of single cells by physical methods focuses on monitoring the

morphology of magnetically labeled single cells during rotation in a magnetic field

(magnetorotation). These cells become magnetic after the endocytic uptake of

superparamagnetic nanoparticles and their rotation speed and morphological changes on a

single cell level are subsequently evaluated by analysis of microscopic images. Morphological

changes are further interpreted in the context of cell viability (viable cells/apoptotic

cells/necrotic cells) and cell phenotype - epithelial vs mesenchymal (Elbez R, McNaughton BH,

Patel L. et al. Nanoparticle Induced Cell Magneto-Rotation: Monitoring Morphology, Stress and

Drug Sensitivity of a Suspended Single Cancer Cell. PLoS ONE 2011; 6(12): e28475). This very
promising method, which will likely enhance our insight into special cells (e.g.
circulating cancer cells) and their fate upon environmental changes (e.g. the effect of
anticancer agents), is currently under development funded by the Innovative Molecular Analysis
Technologies (IMAT) program of the U.S. National Cancer Institute (NCI).

Physical and physicochemical analytical techniques found their way to biomedical research and
medical applications long after they were firmly established in chemistry. This is due to

the fact that technologies available earlier were adequate for the analysis of the chemical

composition and structure of molecules in pure compounds, or noncomplex mixtures, but they

were inadequate for the analyses of complex biological systems that contain many structurally

diverse molecules present in very different concentration ranges. The situation has, however,
improved owing to technological advances that improved the detection limits and
dynamic range of these methods. This progress would not have been achieved without new
algorithms and information systems that allowed processing of big data and deconvolution of complex
analytical signals typically generated by physical and physicochemical methods when applied to
biological systems. The enormous advantage of these methods is their ability to analyze
biological systems without the need for prior knowledge or assumptions about their composition. For

example, histopathological examination of tissue sections by means of immunohistochemistry

(IHC) requires that a scientist or a clinical pathologist a priori selects antigens that he or she

aims to detect or quantify in the examined tissue sections and subsequently he or she needs to

apply one or a few specific probes (antibodies) for low throughput detection of one or a few

selected analytical targets. While this method can be, to some extent, modified to reach high-

throughput format by means of "tissue microarrays" and detect in parallel hundreds of antigens

or other targets in a given specimen, this innovation does not address all the limitations of

traditional IHC. Nevertheless, current trends in biomedical sciences imply that future


biomedical research, diagnostics and therapeutic monitoring will routinely employ mass

spectrometry imaging that will allow spatial mapping of qualitative and quantitative

information on endogenous biomolecules (proteins, peptides and low-molecular-weight
metabolites), as well as drugs and their metabolites, in tissues and organs. Likewise, current

trends allow us to safely predict that biomedical sciences will more widely utilize other

physicochemical methods that generate information-rich analytical signals when applied to

biological systems without the need for specific probes whose selection requires certain prior

knowledge or assumptions on the specimen composition (e.g. Raman spectroscopy).

A unique position among methods for single cell analysis is assumed by high-content analysis

(HCA). This method is based on a parallel analysis of multiple quantitative parameters in many

single cells by means of automated microscopy and image processing, which generates data

with temporal and spatial resolution on the cellular and subcellular level. Evaluation of many

parameters on the single cell level is advantageous when compared to traditional methods of

cell biology and pharmacology, which usually evaluate a single parameter (e.g. metabolic

activity of viable cells) on the whole population of cells without single cell resolution.

Substantial progress achieved in this field recently resulted in the development of high-content

analysis platforms in a high-throughput format (many specimens analyzed in parallel), which
represent a multiplex and at the same time high-throughput method, also known as "high-
content screening" (HCS). HCS generates huge multidimensional and information-rich data. For

example, parallel screening of the effect of several thousands of prospective anticancer agents

from a chemical library on cancer cells in vitro generates for each and every tested compound

many single-cell parameters for many cells (e.g. spatially-mapped protein expression,

mitochondrial membrane potential or metabolic activity) at several timepoints. The enormous

volume of these data necessitates the use of high performance analytical instruments and

information technology in the HCS systems. Nevertheless, the wealth of information generated

by the HCS systems gives them a competitive advantage over HTS (high-throughput
screening), and HCS will likely replace HTS in future drug discovery.
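To give a feel for why HCS output qualifies as big data, the sketch below simply multiplies out the dimensions of a hypothetical screen; the counts of compounds, cells, features and timepoints are assumptions chosen to illustrate the scaling, not figures from any specific platform.

```python
# Order-of-magnitude estimate of HCS data volume; all sizes are hypothetical.
n_compounds  = 5_000   # compounds screened
n_cells      = 1_000   # cells analyzed per compound and timepoint
n_features   = 50      # single-cell parameters extracted from images
n_timepoints = 4       # imaging timepoints

measurements = n_compounds * n_cells * n_features * n_timepoints
print(f"derived single-cell measurements: {measurements:,}")           # 1,000,000,000
print(f"as 64-bit floats: ~{measurements * 8 / 1e9:.0f} GB (before the raw images)")
```

Even this modest hypothetical screen yields on the order of a billion derived measurements, and the underlying microscopy images are larger still, which is why HCS platforms are inseparable from high-performance computing and storage.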

Advanced understanding of the structure of biological systems and its associations
with disease and the development of new therapeutic approaches

This subsection starts with a short anecdote depicting a real incident: Valery Soyfer, who is now

a professor of molecular genetics at George Mason University in Manassas, Virginia, visited
in 1956 a well-known professor of agricultural sciences, a member of the Academy of Sciences
and at the same time a staunch denier of Mendel-Morgan genetics, Trofim Denisovich Lysenko.
During this visit, Soyfer, who was a student at the time, informed Professor Lysenko about an article


published in 1953 by Watson and Crick in the journal Nature titled "A structure for deoxyribose

nucleic acid". However, Professor Lysenko was not impressed and concluded that "this is not

biology but just some kind of chemistry". This short anecdote demonstrates a somewhat

extreme case of ignorance towards the chemical structure of biologically relevant molecules.

However, while reality in biomedical (or agricultural) sciences is no longer as bad as in this

story, we can still identify many gaps and much ignorance with respect to the consideration of

chemical structure in some fields of biomedical sciences. These gaps persist in spite of the fact

that without understanding the structure of DNA we would not be able to understand its

replication and the molecular basis of inheritance and genetic variability. Likewise, without

insight into the structure of proteins we would not understand the pathogenesis of many

diseases, including sickle cell anemia and transmissible spongiform encephalopathies (e.g.

Creutzfeldt-Jakob disease and kuru). An immense contribution to the advancement of

biomedical sciences was brought by determination of the structure of the prokaryotic ribosome

at high resolution of 3 Å (0.3 nm), for which the Nobel Prize for Chemistry was awarded to Ada

Yonath, Venkatraman Ramakrishnan and Thomas A. Steitz in 2009. The prokaryotic ribosome is

a structurally complex organelle that consists of 3 molecules of RNA and about 55 molecules of

proteins. The detailed description of its structure by the scientists named above led to an
unexpected discovery that protein synthesis in the ribosome is not catalyzed by a
proteinaceous enzyme, as would be expected, but the addition of amino acid units to growing
peptide chains is in fact catalyzed by RNA; that is, the ribosome works in the same way as a

ribozyme (ribonucleic acid enzyme). Moreover, the detailed structural information on

complexes formed between ribosome and 20 different ribosome-interacting antibiotics

enhanced our understanding of molecular mechanisms responsible for their antibacterial

activity and drug resistance. Another example of a success story in structural biology is the

recently reported elucidation of the molecular structure of the cleaved envelope protein (Env)

of the HIV-1 virus at a resolution of 5.8 Å, which will likely facilitate structure-based rational
vaccine development in the future. We can reasonably expect that future structural biology will

successfully determine complicated structures of other protein complexes and elucidate

mechanisms of their assembly in vivo, conformational states, and structural changes in

response to cellular environment or while performing their biological functions. A great

challenge to structural biology is posed by the eukaryotic chromosome, the nuclear pore complex
(NPC), the spliceosome, and various membrane proteins, but their structures, including their
functional interpretation, will eventually be determined in the future.

Slightly more than 10 years ago, the general public but also many scientists cheerfully

welcomed the determination of the complete sequence of human DNA as the beginning of a

colossal breakthrough in biomedical sciences. The Human Genome Project (HGP), a great

exploration that cost about 3 billion dollars, culminated in a ceremonial announcement in 2000
that the draft of the human genomic DNA sequence was completed. Even earlier, in 1999 the


director of the National Human Genome Research Institute, Dr. F.S. Collins (presently the director of the
U.S. National Institutes of Health), proclaimed that the results achieved by the HGP would by 2010

undoubtedly produce tests for genetic risks and personalized methods to prevent and cure the

majority of diseases of civilization, including cardiovascular disease and cancer. However, while

the sequence of human genomic DNA contributed to several important discoveries, including

the discovery of the biological significance of "junk DNA", the expectations of gigantic outcomes of
the HGP proved to be overly optimistic and unrealistic. In fact, knowledge of the human
genome sequence has not yet generated any breakthrough in diagnostics or in therapeutic

approaches to common diseases. The reason behind this lack of success lies, at least in part, in the
complexity of the relationships between gene sequences and their biological or medical
consequences, which is often complicated by the influence of other genes and gene-
environment interactions. For far too many genes we still do not know how changes in
their DNA sequences influence gene expression at the mRNA or protein level, how these changes

affect the structure of encoded proteins, and how the structural changes in proteins translate

to functional changes and their biomedical consequences.

The complexity of DNA sequence-function relationships was further supported by the

unexpected discovery of the biological consequences of synonymous substitutions.

Synonymous mutations represent DNA sequence variants, in which one or more nucleobases in

protein-coding gene regions (exons) are replaced by other nucleobases, resulting in

synonymous codons that code for the same amino acids in encoded proteins. As a result,

synonymous substitutions do not change amino acid sequences of encoded proteins.

Consequently, these substitutions would be traditionally considered as silent mutations

(without phenotypic effects), since they do not change the primary structure of encoded

proteins and, according to traditional views, the primary structure of proteins (the sequence of

amino acids in polypeptide chains) uniquely determines their secondary and tertiary structures

and consequently also their biological properties. One would only expect that synonymous
substitutions may induce some changes in gene expression due to codon usage bias, that is,
the preference of an organism for specific codons over other synonymous codons, and this may
result in an altered rate of translation of proteins encoded by less frequently used codons.

Nevertheless, the biological consequences of synonymous substitutions have been reported to

be much more profound, with the surprising finding that P-glycoprotein (P-gp, also known as

multidrug resistance protein MDR1) encoded by an ABCB1 gene carrying 2-3 synonymous mutations
displays considerably different properties than P-gp encoded by the wild-type ABCB1 gene, even

though both P-glycoproteins have exactly the same amino acid sequences. This paradoxical

discovery can be explained by differences in the conformations (spatial arrangements of

polypeptide chains) of these P-glycoproteins with identical primary structures. This was caused

by unusual codons originating from the synonymous substitutions. While they coded for the

same amino acids as their more frequent synonyms, they changed the rate of translation of


mRNA into protein, and since translation is coupled to protein folding (the concept of co-
translational folding), these proteins with identical sequences folded differently and assumed
considerably different conformations, which affected their properties and function (Kimchi-
Sarfaty C, Oh JM, Kim IW, et al. A "silent" polymorphism in the MDR1 gene changes substrate

specificity. Science 2007; 315: 525-528).
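The point about synonymous variants can be made concrete in a few lines of code. The toy open reading frame below is hypothetical (it is not the ABCB1 sequence) and the codon table is limited to the codons actually used; the example only shows that a DNA sequence carrying synonymous substitutions still translates into an identical protein.

```python
# Toy demonstration that synonymous substitutions leave the protein unchanged;
# the sequences are invented and the codon table covers only the codons used.
CODON_TABLE = {
    "ATG": "M", "GGT": "G", "GGC": "G", "ATT": "I", "ATA": "I",
    "CCG": "P", "CCA": "P", "TAA": "*",
}

def translate(cds: str) -> str:
    """Translate a coding DNA sequence codon by codon."""
    return "".join(CODON_TABLE[cds[i:i + 3]] for i in range(0, len(cds), 3))

wild_type = "ATGGGTATTCCGTAA"
variant   = "ATGGGCATACCATAA"   # three synonymous substitutions

assert wild_type != variant                          # the DNA differs
assert translate(wild_type) == translate(variant)    # the protein does not
print(translate(wild_type))                          # -> MGIP*
```

What the code cannot show, and what the MDR1 study demonstrated, is that two such DNA sequences may nevertheless be translated at different speeds, so that the chemically identical polypeptides fold into different conformations.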

This example illustrates the complex relationships between DNA sequence and biological or

medical consequences, but also emphasizes the key role played by molecular structure and

particularly by conformation in biological properties and functions of biologically relevant

molecules. While the traditional view considered the primary structure of proteins to be a major (if
not the sole) determinant of their native 3D structures (Anfinsen's dogma) and consequently
of their properties and biological functions, we now know that these relationships are
much more complex and that the 3D structure of many proteins is determined also by the
action of other special proteins (protein chaperones) and by the influence of the intracellular
environment (macromolecular crowding). The enormous complexity of this problem will have

to be addressed by structural and functional genomics in the future.

Structural genomics focuses on the prediction and/or determination of the 3D structures of all

proteins encoded by the genome, which makes it different from traditional structural biology

that focuses on the determination of structures of selected functionally important proteins.

Thus, structural biology starts with known functions of specific proteins and by determination

of their structures attempts to elucidate the mechanisms involved in their biological functions,

and modulate their activity for therapeutic purposes. In contrast, structural genomics starts

from DNA sequences and using the combination of bioinformatics tools, computational

modeling and various experimental methods, structural genomics tries to determine the

structures of all proteins, and subsequently predict and validate their biological functions.

Structural genomics in the post-genomic era is expected to fulfill at least some of the outcomes

that the Human Genome Project hoped to accomplish.

It is estimated that the number of protein coding genes in the human genome is 22,500±2,000

and that these genes code for about 300,000 different proteins, which according to some

predictions include some 3,000 proteins that play a critical role in human disease and some

3,000 proteins that could be modulated for therapeutic purposes by low-molecular-weight

compounds. The overlap between these two groups represents approximately 1,500

"druggable" proteins that can serve as prospective targets for the discovery and development

of new drugs. On the other hand, the estimated number of proteins with presently known 3D

structure (including fragments) is only about 10,200 (of which only 5,580 proteins at X-ray

resolution <2.5 Å). Furthermore, only about 500 drug targets have been structurally

characterized and 300 drug targets are known to be modulated by currently approved drugs.


Nevertheless, we can expect that more drug targets and more drugs that modulate the activity

of these targets will be discovered and characterized in the future.

Elucidation of the structure of all human proteins, expected to be achieved by structural

genomics, will considerably shape future biomedical sciences. This achievement will

substantially contribute to the structure-driven rational drug design of many new drugs;

however, knowledge of the structure of most (or all) human proteins will still not be enough to

comprehend all molecular processes involved in human health and disease. Likewise, this

achievement will not address the problems of diseases in which non-druggable proteins are

known to play important roles. Therapeutic targeting may not be straightforward even in many

cases of druggable proteins, since they function in interaction networks of various complexities.

Disabling one or a few components of these networks may, under some circumstances, be
compensated by activation of other network components, which rewires signal transduction

pathways. This concept can be illustrated by one specific mechanism of resistance of malignant

melanoma cells against the new targeted drug vemurafenib. This drug inhibits the enzymatic

activity of the oncogenic serine/threonine protein kinase B-Raf, which carries the missense mutation
V600E and relays signals that support the survival of melanoma cells. Inhibition of this mutant

B-Raf enzyme usually triggers death of melanoma cells; however, B-Raf removed from the

network of interacting components may be functionally replaced by increased activity of

PDGFRβ (platelet-derived growth factor receptor β) that activates an alternative pro-survival

pathway for melanoma cells. As a result, we can expect more focus on "network pharmacology"

in the future, aiming to attack disease networks at the systems level, that is, targeting functional

modules of several interacting network components (interactome subnetworks) rather than

targeting single network components.
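As a minimal illustration of this "rewiring" argument, the sketch below encodes a toy signaling graph loosely inspired by the vemurafenib example; the nodes and edges are illustrative rather than a curated pathway map, and the networkx library is assumed to be available.

```python
# Toy signaling network: removing a single drugged node (B-Raf) leaves an
# alternative route to the pro-survival output, while removing a shared
# downstream module does not. Edges are illustrative only.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("growth_signal", "BRAF_V600E"), ("BRAF_V600E", "MEK"), ("MEK", "ERK"),
    ("growth_signal", "PDGFRB"), ("PDGFRB", "RAS"), ("RAS", "ERK"),  # bypass branch
    ("ERK", "survival"),
])

print(nx.has_path(G, "growth_signal", "survival"))   # True: untreated network

G.remove_node("BRAF_V600E")                          # single-target inhibition
print(nx.has_path(G, "growth_signal", "survival"))   # still True, via PDGFRB

G.remove_node("ERK")                                 # hitting a shared downstream module
print(nx.has_path(G, "growth_signal", "survival"))   # False
```

The network pharmacology argument is essentially that therapeutic strategies should be evaluated against the whole graph, looking for combinations of nodes whose removal actually disconnects the disease-relevant output, rather than against a single component in isolation.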

Unfulfilled expectations from the Human Genome Project are also a consequence of the fact that

many human diseases, or predispositions to diseases, do not solely depend on gene sequence

variants discovered by genomics, but they are also influenced by epigenetic changes that

cannot be identified from DNA sequence data. Obviously, genome sequence data are not

enough, and this conclusion can be exemplified by the fact that cancer cell resistance to

traditional cytotoxic drugs, as well as to modern targeted anticancer therapeutics, is more often
associated with changes in gene expression than with changes in the DNA sequence of specific
genes. The inherent complexity and importance of these relationships imply that future

biomedical science will attempt to uncover these relationships to a much greater detail using

systems biology approaches and integration of genomics data with other information-rich big

data produced by "omics" sciences. Among them one can specifically mention transcriptomics

(expression of mRNAs, miRNAs and other RNAs on the whole genome scale), proteomics

(expression of all proteins), epigenomics (e.g. methylation of DNA on the genome-wide scale)

and metabolomics (concentration of all low-molecular-weight metabolites). Interpreting


associations among these integrated sets of diverse big data combined with clinical data will in

all probability make it possible to identify the molecular profiles of diseases and to find better

diagnostic methods and therapeutic interventions even for highly heterogeneous diseases and

diseases that have so far resisted our efforts to uncover their etiology and molecular

pathogenesis.

Development and use of more advanced computational and experimental

models of human disease

The biological and biomedical research community is traditionally divided into two major

groups: experimentalists (also known as "wet-lab scientists"), who formulate and test

hypotheses by means of experimental methods, and computational scientists, who search for

patterns in existing data or develop new methods for data analysis (e.g. bioinformaticians,

pharmacoinformaticians, systems biologists, computational biologists and biomathematicians).

Due to substantial differences between their methods and approaches, the experimentalists

and the computationalists evolved into very different communities that are distinctly separated

from each other. Experimentalists have been, more often than not, skeptical about the models

built by computational scientists, arguing that the immense complexity of biological systems
cannot, in their view, be adequately reflected by computational models. This skepticism was

somewhat justified in the past considering the low volume of available data, limited biological

insight and lack of the powerful tools needed for data analysis, which limited the capabilities of

computational science to build robust models of biological systems. On the other hand, this

skepticism was in part fuelled by the gaps in training of experimental scientists in mathematics

and statistics, and this was often the major reason behind their inability to understand the scientific

value of computational models. As a result, experimentalists could rarely benefit from the

computational models or use them to support their own research efforts in model-driven discovery.

It is an often neglected fact that experimental biomedical sciences also use models as their major working tool. Like other models, experimental models are just simplified representations of real-world objects, phenomena or processes. Thus,

experimental models are not fundamentally different from computational models in that they

are both incomplete representations true only to a limited extent and their complexity and

explanatory or predictive power depend on their underlying assumptions and selection of

features considered relevant for representing the real world. In addition, computational

models (not unlike experimental models) have been evolving and improving since their early

times and that is why the sophisticated contemporary computational models of biological


systems should not be dismissed on the grounds of the low performance of their earlier

versions.

It needs to be mentioned, for the sake of fairness, that many older experimental models would

be found grossly inadequate nowadays. This can be illustrated using an example of the early

models of cancer cells used in the 1940s and 1950s, which were represented by respiratory-deficient mutants of yeast cells and, even more surprisingly, respiratory-deficient bacteria (prokaryotic

organisms!). Selection of these models was supported by an assumption that the Warburg

effect (metabolism of glucose by glycolysis without progressing through the citric acid cycle and

respiratory chain even in the presence of oxygen) represents the crucial difference between

cancer and non-cancer cells and that its cause lies in the defective respiratory chain in

mitochondria. We now know that this assumption does not hold, since the Warburg

effect has been shown to result from oncogenic signaling that up-regulates specific components

of the glycolytic pathway and glucose transporters, and not from mutations in the genes coding for

components of the respiratory chain. Moreover, these old models were highly inadequate due

to the fact that they mimicked phenotypes of cancer cells only in a very limited context of some

metabolic similarities. Thus, these models of cancer cells were incredibly distant from the real-

world objects that they tried to represent; nevertheless, at that time better models were not

available, since in vitro culture of cancer cells was accomplished for the first time in 1951 from

cells isolated from a clinical specimen of a cervical adenocarcinoma case (the well-known HeLa

cells from the patient Ms. Henrietta Lacks).

Present-day experimental models represent malignant tumors much more realistically than the

inadequate models discussed above; nevertheless, they still offer different levels of complexity

and external validity, i.e. the ability to generalize findings from these models to the real world.

The level of complexity of each model depends on how comprehensively features of the real-world object were selected and built into the model. For example, a very simple model of a human disease, high-grade serous ovarian carcinoma (HGSOC), is a culture of an ovarian cancer cell line derived from a clinical case of HGSOC (e.g. OVCAR-4) growing in an

appropriate growth medium on the surface of a tissue culture-treated flask. This 2D cell culture

represents some but certainly not all features of HGSOC; a somewhat better representation of HGSOC is an in vitro model based on ovarian cancer cells growing in suspension as

tumor spheroids. This spheroid model brings an additional complexity by reflecting to some

extent the realistic interactions between cancer cells. Furthermore, when the spheroids reach a certain size (> 200 µm), this model also represents biologically and clinically relevant hypoxic and necrotic central tumor regions and peripheral regions with higher proliferative activity and an

invasive front. More sophisticated 3D models, which employ the co-culture of HGSOC cells and

other non-malignant cells (e.g. normal fibroblasts) in vitro, also reflect the interactions between

cancer cells and the tumor-associated stromal cells. Since these heterotypic cell interactions


play a critical role in the pathogenesis of malignant diseases, models that reflect these

interactions are more realistic than less complex 3D models based on axenic (pure) cultures of

cancer cells. In vivo models of HGSOC are more advanced than the in vitro models discussed

above, but they substantially differ in their complexity and the extent to which they reflect the modeled disease. The simplest in vivo model is represented by hollow fibers filled with a

suspension of HGSOC-derived cells and implanted into the peritoneal cavities of laboratory

mice. This model is still notably distant from human ovarian cancer; nevertheless, it allows preliminary pre-clinical testing of new anticancer agents, because it reflects, to a

certain degree, host pharmacokinetic compartments and allows the study of drug efficacy as

well as drug absorption, distribution, metabolism, elimination and toxicity (ADMET). A

somewhat more realistic model of the HGSOC is based on human ovarian cancer cells

implanted subcutaneously into immunocompromised mice (tumor xenografts). These

ectopic tumors grow as solid non-metastasizing subcutaneous masses and as such they do not

mimic the growth pattern of human ovarian carcinoma, but they are still more realistic than the

hollow fiber model, since they include cell-cell homotypic and heterotypic interactions, tumor

angiogenesis, a hypoxic core and necrotic areas, and to a varying extent they resemble HGSOC in

their histological architecture and expression of relevant proteins. As previously mentioned,

these subcutaneous xenografts are ectopic and they do not reflect the anatomic context of the

real-world primary tumor site and tumor dissemination; therefore, they lack the benefits of

orthotopic models, in which tumors grow in their natural anatomical sites and interact with

adjacent tissues in a way more similar to the real disease. An example of an orthotopic model

of the HGSOC is represented by human ovarian cancer cells (e.g. Hey-A8 cells) implanted into

the peritoneal cavity of immunocompromised mice (e.g. an inbred strain of mice with severe combined immunodeficiency, NOD.CB17-Prkdcscid/J). Models like this one realistically reflect

human disease with respect to peritoneal dissemination, growth of solid tumor masses in the

abdominal cavity, peritoneal carcinomatosis, formation of malignant ascites and abdominal distension; however, while these models are appropriate for many research applications, they

are in many ways different from HGSOC. First, these models represent xenotransplanted

human cancer cells growing in animals where human signaling molecules relevant for the

modeled disease are not available. Second, these animals are immunocompromised and,

depending on the animal strain, they lack one or more types of immune cells (lymphocytes, NK-

cells, macrophages), which are known to be involved in host-tumor interactions and play a

role in the pathogenesis of the human disease. And lastly, these models do not mimic the

natural mechanisms involved in malignant transformation and early disease development.

These problems could be addressed by a model that would employ natural disease in animals

that would be highly similar to HGSOC; however, with the notable exception of the chicken

(Gallus gallus domesticus), epithelial ovarian cancer has not yet been identified as a natural

disease in any other animal species. Since a model based on aging hens would be impractical


for ovarian cancer research, considerable effort has been dedicated to the development of

modern murine models of the HGSOC that would reflect many molecular, morphological and

clinical aspects of the human disease with high veracity. Among the most recent advances in

ovarian cancer model development, the model based on transgenic mice with a double knock-out of the DICER and PTEN genes (DICER-PTEN DKO) proved to be very promising. Mice with these

genetic lesions consistently develop ovarian carcinoma that is very similar to HGSOC not

only in its histological pattern and clinical course, but in part also in its molecular profile and

cellular origin of the malignant disease. Recent advances in ovarian cancer research produced a

growing body of evidence that HGSOC arises from the fallopian tube secretory epithelial cells

and not from the ovarian surface epithelial cells, as previously believed. And since the

DICER-PTEN DKO mice develop early serous carcinomas in the fallopian tube that subsequently

spread, envelop the ovaries, and metastasize through the peritoneal cavity, this animal model is consistent with the most recent insight into the cell of origin of HGSOC and mimics the course of the human disease with high veracity, including the early stages of disease development.

The purpose of this discussion of various models of high-grade serous ovarian cancer (HGSOC) was to illustrate the intricacies and complexities that we face en route to robust

models of human disease. Disease models represent the major working tool of biomedical

sciences and their evolving complexity reflects our growing insight and refined understanding

of a disease at various hierarchical levels ranging from molecular pathology to disease ecology

on the population and community scale. Consequently, models of human disease will evolve

and future biomedical science will use more advanced models than we use nowadays. For

example, the most advanced murine model of the HGSOC discussed in the previous paragraph

is most definitely not the last word on that matter and one can reasonably expect that more

veracious murine models of the HGSOC will be developed in the future. For instance, future

models may feature the same genetic lesions as the human disease, including mutations of the TP53,

BRCA1 and/or BRCA2 genes and recurrent copy number variations of several genes but, unlike

some existing models, not somatic mutations that are atypical for the human disease. In

addition, murine genes relevant for the disease should be replaced by human genes in these

advanced models of HGSOC, and so they would likely be based on humanized mice carrying

functioning human genes.

Many computational (in silico) models in contemporary biomedical sciences have reached a high

degree of predictive and explanatory power. Computational modeling is used as a research

method to address an enormous range of questions in basic and applied biomedical sciences.

For instance, one can build an epidemic model describing the transmission of a communicable

disease to estimate the vaccination coverage needed to prevent sustained human-to-human

transmission in a given population. Likewise, one can model the growth of solid tumors,

response of tumors to therapeutic interventions, interactions between drugs and drug targets,


to name a few applications of computational modeling in biomedical sciences. The cutting edge of computational modeling is represented by attempts to model the distribution of all molecules and

their interactions in cells of whole organisms in order to improve our understanding of the

biological mechanisms and functioning of living systems. This type of modeling is a typical

working tool of systems biology that attempts to explain the functioning of living things

by considering complex interactions among various components within biological systems (genes,

RNAs, proteins, metabolites, cells, tissues etc.). This holistic approach of contemporary systems

biology is conceptually opposite to the traditional reductionist approach that attempts to

explain the behavior of complex biological systems through the studies of their isolated

components (individual genes, metabolites etc.). This reductionist approach is based on an

assumption that the explanatory power of the system components is sufficient to explain the whole system and, unlike systems biology, it rarely employs computational modeling. While we have

to admit that reductionist biomedical research produced many spectacular discoveries in the

past, its limitations became more apparent with the growing body of biomedical knowledge

and accumulation of questions that could not be answered without insight into the complexity

of real-world systems. Knowledge that developed from studies on single components of

complex systems is considerably limited. Even the synthesis of knowledge summarizing what is

known about parts of the system cannot explain the system as a whole, because the system is

always more than just a sum of its parts. Systems display properties that result from

interactions among their components and these "emergent properties" cannot be examined on

the level of system components as they only manifest on higher hierarchical levels. Thus, the

reductionist approach in biomedical sciences could not answer certain questions that were not

amenable to research using simple experimental models and had to be addressed on the

systems level. For example, the complex relationships between mutations of the TP53 gene, expression of the p53 protein and the sensitivity of cancer cells to anticancer drugs require systems-level examination of the influence of many other genes and complex associations among DNA

damage, cell cycle arrest and the induction of apoptosis in cancer cells. Current trends in

biomedical research strongly suggest the future role of systems biology, which is expected to

dominate over the reductionist approaches in future biomedical sciences. This is facilitated,

among other things, by an increasing availability of big "omics" data produced by multiplex and

high-throughput methods. Big data generated by various methods will be integrated and used,

together with the improving mechanistic insight, to build models that explain properties of

biological systems and predict their behavior upon various perturbations, such as those induced

by mutations, environmental changes or therapeutic interventions. These models will be used

for simulations whose results will be experimentally validated and subsequently used for

iterative building of more advanced models. As an example of a highly sophisticated systems

biology model one can mention a computational model of a cell of Mycoplasma genitalium, a

human pathogen etiologically associated with urethritis, cervicitis and probably also with pelvic


inflammatory disease (PID). This pathogen represents a living organism with the smallest

known genome, which, at a length of 0.58 Mbp, contains 525 genes that encode 475

proteins. The model, which was built on 1900 experimental parameters previously published in

900 scientific articles, consists of 28 interacting sub-modules, each of which independently

models key biological processes that include DNA replication and repair, ribosome assembly,

transcription, translation, and cytokinesis. Using this model one can simulate the dynamics of

important cellular processes, such as cell growth and division, but also predict the distribution of all biomolecules at 1-second intervals. The validity of this model is supported by its ability to

estimate with adequate accuracy cell doubling times, cell composition, replication of cellular

organelles, genome-wide gene expression, and to identify the genes essential for growth and

replication of cells. Perhaps the most significant achievement brought about by this model is

the generation of knowledge that was not available before, such as new findings on the

metabolic regulation of the M. genitalium cell cycle, the dynamics of DNA-protein associations and the

kinetics of DNA replication (Karr JR, Sanghvi JC, Macklin DN et al. A whole-cell computational

model predicts phenotype from genotype. Cell 2012; 150: 389-401).
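As a purely illustrative sketch of the modular architecture described above, the following Python fragment lets several independent sub-modules read and update a shared cell state once per simulated second; the module names, state variables and rate constants are invented for illustration and do not reproduce the published M. genitalium model.

cell_state = {"time_s": 0, "dna_copies": 1.0, "mrna": 10.0, "protein": 100.0}

def replication(state):
    # DNA replication sub-module: slow duplication of the chromosome
    state["dna_copies"] *= 1.00001

def transcription(state):
    # transcription sub-module: mRNA production proportional to gene dosage
    state["mrna"] += 0.01 * state["dna_copies"]

def translation(state):
    # translation sub-module: protein production proportional to mRNA level
    state["protein"] += 0.05 * state["mrna"]

sub_modules = [replication, transcription, translation]

for _ in range(3600):              # simulate one hour in 1-second steps
    for module in sub_modules:     # each sub-module updates the shared state independently
        module(cell_state)
    cell_state["time_s"] += 1

print(cell_state)

The published model couples 28 far more detailed sub-models in an analogous time-stepped loop, which is what makes its second-by-second predictions of molecular composition possible.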

Highly sophisticated models, like the one introduced above, are built by computational

biologists specialized in the modeling of complex biological systems and it will probably remain

so in the future. Nevertheless, the continuing growth of the volume of experimental data and

ever-increasing complexity of biological concepts are likely to force future experimentalists to build in silico models as tools to understand their data and generate new hypotheses for

experimental validation. This has already become feasible thanks to the increasingly available

and user-friendly modeling software. An example of a relatively simple mathematical model,

which immensely contributed to the interpretation of experimental data, is a model of tumor

growth based on the kinetics of proliferation of hierarchically organized cancer cells, specifically

cancer stem cells, more differentiated transit-amplifying cells and terminally differentiated cells

(Molina-Peña R, Álvarez MM. A Simple Mathematical Model Based on the Cancer Stem Cell

Hypothesis Suggests Kinetic Commonalities in Solid Tumor Growth. PLoS ONE 2012; 7(2):

e26233).

This compartmental pseudo-chemical mathematical model of tumor growth described the proliferation and differentiation of different types of cancer cells and allowed the derivation of non-trivial and somewhat unexpected conclusions about the roles of different subpopulations of

cancer cells in driving tumor growth. For instance, this model implies that therapeutic targeting

of cancer stem cells, while definitely superior to an unselective targeting of bulk tumor cells,

cannot be a generally effective therapeutic avenue without simultaneous targeting of transit-

amplifying (progenitor) cancer cells. Furthermore, the model suggests an even less intuitive

therapeutic approach: stimulation of the proliferation of transit-amplifying cells, which would result in tumors richer in these cells relative to cancer stem cells and eventually in decreased tumor


mass. This surprising finding was supported by another mathematical model that found the

most aggressive tumor growth at a certain "optimal" rate of proliferation of progenitor cells

and decreased tumor growth if the rate of proliferation of these cells was higher or lower than

the optimal rate. The fact that there is an optimal rate of proliferation of progenitor cells, which

results in the most aggressive disease progression, is rather counterintuitive, since one would expect that tumor growth increases with the increasing rate of proliferation of all the

different malignant cell types forming the tumor.
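To make the structure of such hierarchical growth models concrete, the following Python sketch integrates a generic three-compartment system of cancer stem cells (S), transit-amplifying cells (A) and terminally differentiated cells (D); the equations and rate constants are illustrative assumptions in the spirit of the models discussed above, not the exact formulation of either cited model.

import numpy as np
from scipy.integrate import odeint

def tumor(y, t, p_s, a, p_a, d_a, m):
    S, A, D = y
    dS = p_s * S                    # net self-renewal of cancer stem cells
    dA = a * S + (p_a - d_a) * A    # influx from stem cells, amplification, differentiation out
    dD = d_a * A - m * D            # accumulation and loss of terminally differentiated cells
    return [dS, dA, dD]

t = np.linspace(0, 100, 1000)
trajectory = odeint(tumor, [1.0, 0.0, 0.0], t, args=(0.02, 0.1, 0.3, 0.2, 0.05))
total_tumor_cells = trajectory.sum(axis=1)   # total cell number as a proxy for tumor mass
print(total_tumor_cells[-1])

Varying the transit-amplifying proliferation rate p_a in such a sketch while tracking the total cell number is the type of in silico experiment that leads to the counterintuitive conclusions about an "optimal" progenitor proliferation rate described above.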

Similar to the problem discussed above, modeling of tumor response to anticancer treatment

by oncolytic viruses leads to some counterintuitive predictions that would be difficult, if not

impossible, to make without computational modeling. Oncolytic viruses display anticancer

properties through multimodal mechanisms, which include a direct cytocidal effect due to their

selective replication in cancer cells without harming normal tissue. An example of an oncolytic

virus is represented by Talimogene laherparepvec, a modified herpes simplex type 1 virus (HSV-

1), which is currently under review by the U.S. Food and Drug Administration for the treatment

of patients with regionally or distantly metastatic melanoma. Mathematical modeling allows

the identification of optimal conditions for the eradication of tumors by oncolytic viruses and

the results of a simulation performed by one of these models suggest the existence of an

optimal cytolytic activity that leads to the highest reduction of tumor mass, while lower or higher cytolytic activity can lead to disease progression.
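One generic form of such tumor-virus models (the equations and symbols below are an illustrative assumption, not the specific model referred to above) tracks uninfected tumor cells U, infected tumor cells I and free virus particles V:

\begin{aligned}
\frac{dU}{dt} &= rU\left(1 - \frac{U+I}{K}\right) - \beta U V,\\
\frac{dI}{dt} &= \beta U V - \delta I,\\
\frac{dV}{dt} &= p I - cV,
\end{aligned}

where r and K describe tumor growth, \beta the infection rate, \delta the cytolytic (lysis) rate, p the rate of virus production by infected cells and c the clearance of free virus. Because each infected cell produces virus for an average time of 1/\delta, increasing the cytolytic activity in a model of this structure kills infected tumor cells faster but also curtails virus amplification and spread; this trade-off is the intuition behind an intermediate, optimal cytolytic activity.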

An impressive scientific work that exemplifies the modern scientific approach combining in vitro

and in vivo experiments, multiplex gene expression profiling, and the use of computational

models, was published by Michael J. Lee et al. from Massachusetts Institute of Technology and

Harvard University. This paper demonstrated and molecularly interpreted the synergistic

effects of a combination chemotherapy in which a targeted anticancer drug was followed by the subsequent administration of a conventional (cytotoxic) drug in triple-negative breast cancers (TNBC), while the same combination was antagonistic in other subtypes of

breast cancer. Findings reported in this paper are consistent with the fact that the mode of

action of various drugs cannot be fully explained by single drug-target interactions. Instead, it

has to be interpreted on a higher hierarchical level that considers the status of the whole

network of interacting molecules involved in the transduction of signals from the extracellular

to the intracellular environment (network pharmacology). The first administered (targeted) anticancer drug inhibited the activity of the EGFR protein, which re-wired signal transduction

pathways bringing cells to a new state in which they were more vulnerable to the cytotoxic

action of the second anticancer drug (Lee MJ, Ye AS, Gardino AK et al. Sequential application of

anticancer drugs enhances cell death by rewiring apoptotic signaling networks. Cell 2012; 149:

780-794). This research employed various approaches that are expected to dominate future


biomedical sciences: systems biology, the use of computational models, analysis of "omics"

data and the use of in vivo models of human disease.

Nanoscience and nanotechnology

An essay about the future of science would be incomplete without mentioning nanoscience,

nanotechnology and their possible role in the future. Nanotechnology is considered by some to

be the driving force behind a new technological revolution in the same way as the steam engine

drove the industrial revolution in the 1750s, steel, oil and electricity triggered the second industrial revolution in the late 1800s, and digital logic circuits set off the digital revolution in the late 1950s.

Nanoscience/nanotechnology is a cross-disciplinary field that involves physics, chemistry,

materials science, biology, medicine and engineering to study and use materials with particle sizes between 1 and 100 nm in at least one dimension. Particles of that size, exemplified by some

macromolecules and many viruses, possess unique mechanical, optical, electrical and chemical

properties not displayed by particles above nanoscale.

These special properties reflect quantum mechanical properties of electrons confined in

nanoparticles with dimensions approaching the electron wavelength (quantum size effects),

while these properties do not manifest in materials composed of larger particles. In addition to

quantum size effects, unique behavior of nanoparticles is also induced by surface effects, which

reflect the considerable increase in the fraction of atoms located on the surface of particles as their size

decreases. Consequently, reduction of particle size to nanoscale does not represent "simple

miniaturization" and nanoscale materials are much different from microscale materials that do

not display these fascinating properties.
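The origin of quantum size effects can be illustrated with the textbook particle-in-a-box estimate for an electron of mass m_e confined to a region of length L (a simplified, generic picture rather than a quantitative model of any particular nanomaterial):

E_n = \frac{n^2 h^2}{8 m_e L^2}, \qquad n = 1, 2, 3, \ldots

Because the allowed energies scale as 1/L^2, shrinking the particle widens the spacing between electronic levels, which is why, for example, the emission of the semiconductor quantum dots mentioned below can be tuned simply by changing the particle size.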

Nanomaterials are also peculiar in that their properties can be finely tuned by changing their

particle size. This tunability of properties can be demonstrated by particle size-controlled

fluorescence of semiconductor nanocrystals, also known as "quantum dots", which can be used

to identify and track these particles, for instance, when used as special probes to explore

biological systems. Another unique property of some nanomaterials is the ability of their

particles to self-assemble in a way similar to self-assembly of ribosomes, spliceosomes and

some other megamolecular complexes. This special property of nanoscale systems supports

the futuristic concept of molecular nanotechnology (MNT), which is expected to control and

manipulate molecules to develop new smart materials, build molecular computers and possibly

even nanorobots using top-down or bottom-up fabrication.

The concept of molecular nanotechnology dates back to the visionary lecture "There's Plenty of

Room at the Bottom" given by Nobel Prize-winning physicist Richard Feynman at the California


Institute of Technology in December 1959. In this lecture Feynman envisioned positionally-

controlled mechanosynthesis guided by direct manipulation of individual atoms by nanoscale

machines, and even outlined some early ideas that were later developed by Robert A. Freitas Jr. into the concept of medical nanorobots. These microscale machines composed of nanoscale components may give us the future capability to perform preventive, curative and reconstructive

interventions at the cellular and molecular levels. For instance, an artificial red blood cell was

proposed by Freitas as a spherical 1 µm device featuring glucose-powered pumps able to

deliver 236 times more oxygen per unit volume than natural erythrocytes (Freitas RA Jr.

Exploratory design in medical nanotechnology: a mechanical artificial red cell. Artif. Cells Blood

Substit. Immobil. Biotechnol. 1998; 26: 411-430). Moreover, the self-assembly of nanostructures has been experimentally demonstrated through the biologically-controlled assembly of quantum dot

nanowires on the template of M13 bacteriophage (Mao C, Flynn CE, Hayhurst A. et al. Viral

assembly of oriented quantum dot nanowires. PNAS 2003; 100: 6946-6951).

These technologies and the unique properties of nanomaterials suggest a wide range of future applications to problems that cannot be solved by currently available means (e.g. the continuing growth of computing performance beyond 2020, when Moore's law is expected to reach its end).

Due to its extensive and diverse nature, a summary of the current research and development in

the field of medical nanotechnology (nanomedicine) is far beyond the scope of this chapter.

Nevertheless, I will present three specific examples of nanotechnology under development for

the future treatment of cancer. All three methods are based on the selective uptake of

special nanomaterials by tumor tissue and/or cancer cells followed by selective destruction of

tumors with little or no damage to adjacent tissues.

Nanoparticles with a largest dimension of 10-400 nm (optimally 25-200 nm) can selectively pass through tumor vasculature, which often displays a discontinuous endothelium and therefore a porous

(leaky) character. Due to the absence of functioning lymphatic networks in tumor tissues, these

nanoparticles are not efficiently drained and accumulate in the interstitial space of tumor

tissue. This process, known as the enhanced permeability and retention effect (EPR), can be

utilized for passive targeting of tumors by diagnostic or therapeutic nanoparticles. Furthermore,

nanoparticles with specially treated surfaces can be selectively concentrated on the cell surface

or taken up by cancer cells via various mechanisms (e.g. receptor-mediated endocytosis), and

upon this internalization they can exert their therapeutic effects (active targeting).

A therapeutic approach based on passive targeting can be exemplified by the use of gold

nanoshells that upon intravenous administration selectively accumulate in tumor tissues of

experimental animals via the EPR mechanism. Upon exposure to near-infrared (NIR) laser light, these particles produce heat and eradicate tumors by thermal ablation with no significant

damage to adjacent tissues (O'Neal DP, Hirsch LR, Halas NJ et al. Photo-thermal tumor ablation


in mice using near infrared-absorbing nanoparticles. Cancer Lett. 2004; 209: 171-176). Later

experiments also demonstrated the feasibility of surface modification of these nanoshells by

immunoconjugation with anti-HER2 antibodies to target HER2-positive breast cancer cells more

selectively, which represents a combined passive and active targeting of cancer cells by

nanoparticles.

Another example of a study that demonstrates active targeting (without passive targeting) used

magnetic nanoparticles whose surface was functionalized with YSA-peptide, which allowed their

specific uptake by EphA2-positive ovarian cancer cells disseminating in the peritoneal cavity of

experimental animals. Upon uptake of these nanoparticles, the cancer cells became magnetic

and could be removed by a strong magnetic field, which substantially decreased the risk of

disease progression (Scarberry KE, Mezencev R, McDonald JF. Targeted removal of migratory

tumor cells by functionalized magnetic nanoparticles impedes metastasis and tumor

progression. Nanomedicine 2011; 6: 69-78).

The third method, which demonstrates the development of nanoparticle-based cancer therapy,

is based on core/shell nanogels loaded with therapeutic molecules of siRNA that specifically

knock down the expression of the EGFR gene, which is important for the proliferation and

survival of certain types of cancer cells. These nanogels have versatile surface chemistry that

allows their bioconjugation with various ligands for more specific targeting of cancer cells. For

instance, bioconjugation with YSA-peptide allowed selective uptake of nanogels by EphA2-

positive cancer cells, which was followed by the intracellular release of encapsulated siRNA

molecules and the down-regulation of the cancer-relevant EGFR gene via the RNA interference mechanism (RNAi). Upon down-regulation of the EGFR gene, many types of cancer cells stop

proliferating, die or become more sensitive to conventional anticancer drugs. In general, cancer

cells "addicted" to a specific oncogene usually undergo cell death when the oncogene protein

becomes unavailable due to, for example, specific knock-down by siRNAs. This fact makes the

use of siRNAs highly promising for future treatment of cancers; however, the use of siRNA

molecules in human medicine would not be possible without nanotechnology, as naked siRNA

molecules not supported by nanoparticles are sensitive to degradation by serum nucleases,

undergo rapid renal clearance and are unable to cross cell membranes. Thus, nanotechnology makes the

future use of highly promising therapeutic RNAs (e.g. siRNAs and miRNAs) possible.

The examples discussed above demonstrate only a tiny fraction of the new opportunities

brought to medical applications by nanotechnology. Nanotechnology will unquestionably bring

enormous advances to the prevention, diagnosis and therapy of various human diseases in the future.


Translational Health Research

Translational health research is a highly emphasized endeavor in contemporary biomedical

sciences. Considering its current scope and the amount of funding allocated to translational

health research, one can safely predict that this approach will dominate in biomedical sciences

in the future. This can be convincingly illustrated, for instance, by the Clinical and Translational

Science Award (CTSA) program. The CTSA program was launched in 2006 and is currently administered by the National Center for Advancing Translational Sciences (NCATS) within the U.S. National Institutes of Health. The CTSA budget, which in 2012 amounted to

almost 574 million USD, reflects the large scale of the translational research in the United

States. In addition, translational health research in the United States is also funded by other

sponsors, which adds additional resources and implies the high priority assigned to this research

effort.

Translational research can be defined as an integration of basic and clinical biomedical research

that aims to produce discoveries in basic sciences, which display diagnostic or therapeutic

potential, and move them to clinical trials and eventually to clinical applications. While such

integration may appear obvious and trivial at first sight, the reality is much more complex. This

is caused in part by complex administrative and regulatory issues, but also by considerable

differences in culture, mindset, research approaches and policies adopted by communities

involved in basic and clinical biomedical research. Basic scientists may see their hypothesis-

driven research as fundamentally more rigorous than goal-directed applied research in a clinical

setting. And indeed, many discoveries, which later moved to clinical applications, were

produced by hypothesis-oriented and curiosity-driven scientists who were not trying to invent

new applications. For example, the Nobel Prize Laureate Thomas Steitz, who co-discovered the

structure of the ribosome, which has huge clinical implications, clearly stated that "the only kind of

translation I have worked on is that orchestrated by the ribosome". By stating that, this highly accomplished scientist implied his adherence to basic science and his reluctance to be associated with fashionable translational research. On the other hand, clinical researchers

may believe that their work is somehow superior and more relevant to human health and

disease. Basic scientists are sometimes discouraged from moving to translational research,

because they may find it difficult to publish translational research articles in some highly

recognized journals that prefer rigorous hypothesis-driven basic science over goal-directed

applied science. At the present time, these perceived differences may be slowing the progress

of translational research to some extent; however, the experience from institutions with

successfully implemented translational research programs justifies an optimistic prediction that

the cultural differences between basic and clinical research communities will be overcome and

eventually disappear in time. The integration of these two communities will produce


translational researchers who will be able to develop and employ basic and applied scientific

perspectives embracing both hypothesis testing and goal-oriented research.

It may seem challenging for individual scientists, or even scientific teams, to develop the capabilities needed to conduct translational research; nevertheless, the history of science

provides many inspiring examples of those who demonstrated the vision and capabilities to see

across many disciplines and perform basic and clinical research from bench-to-bedside and

back. To name a few, the list of early translational health scientists includes Professor Louis

Pasteur (chemist), Dr. Paul Ehrlich (physician and public health officer), Dr. Jonas Salk (virologist

with medical training), Mr. Denis Burkitt (surgeon) and Dr. Barnett Rosenberg (biophysicist).

Indeed, Professor Pasteur stated that “there is science and the applications of science, bound

together as the fruit is bound to the tree that bore it", which implies that, while he saw the

differences between science and its applications, he saw them lying on a continuum.

Considering his combined goal-oriented and hypothesis-driven research on rabies, Professor

Pasteur certainly pioneered translational research long before it became a fashionable

buzzword embraced by today’s media.

While the first two named legendary scientists worked in the relatively distant past when the

body of scientific knowledge was relatively limited, major contributions of the other three

named translational researchers date back to the more recent 1950s, 1960s and 1970s,

respectively. Among them, the somewhat less widely known, but equally inspiring

accomplishment of Dr. Rosenberg started with biophysical experiments, which examined the

effects of electric current on the growth of bacterial cultures and eventually resulted in

the discovery, pre-clinical and clinical evaluation of cisplatin, a spectacular anticancer drug that

saved untold numbers of patients with testicular germ cell tumors (TGCT) and provided clinical

benefits to millions of patients with ovarian cancers, cervical cancers, lung cancers, head and

neck cancers, and lymphomas.

Past translational research, exemplified by Dr. Rosenberg and the story of cisplatin, often

involved a serendipitous discovery that was readily moved to clinical applications by talented

individuals who were ready to identify the translational potential of their basic research

discoveries and cross the boundaries between basic and clinical research. However, in future

translational health research, the translational potential will most likely be considered at the

early stage of any biomedical research (at its basic side) and projected into research aims,

strategies and approaches.

A specific example of research with translational potential is represented by the use of

bioinformatics, pharmacoinformatics and available "omics" datasets (e.g. transcriptomics)

aiming to identify and subsequently evaluate prospective anticancer drugs among hundreds of

existing drugs, which have been approved and used for the treatment of other diseases. Drugs


identified by these "omics" methods as possibly also having anticancer properties may, upon

confirmation of their predicted activity in vitro and in vivo, rapidly enter clinical trials, since

their ADMET data have been previously collected during lengthy pre-clinical evaluation, and extensive clinical experience with these drugs has already accumulated, albeit in a different

setting of clinical use. This prior experience considerably simplifies administrative and

regulatory processing and approval of new clinical trials, since these drugs have already been

found safe for use in human medicine.
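As a schematic illustration of one common computational idea behind such transcriptomics-driven drug repositioning, often referred to as signature reversal, the following Python sketch ranks drugs by how strongly their expression signatures anti-correlate with a disease signature; the gene names, drug names and numerical values are hypothetical, and this is not the specific method used in the desipramine study cited below.

import numpy as np
from scipy.stats import spearmanr

genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"]   # hypothetical gene panel
disease_signature = np.array([2.1, 1.8, -0.9, 1.5, 2.4])     # disease vs. normal log fold changes

# Hypothetical drug-induced expression signatures over the same genes
drug_signatures = {
    "drug_A": np.array([-1.9, -1.2, 0.7, -1.1, -2.0]),   # tends to reverse the disease signature
    "drug_B": np.array([1.5, 0.9, -0.4, 1.2, 1.7]),      # tends to mimic the disease signature
}

# Score each drug by the rank correlation of its signature with the disease signature
scores = {}
for drug, signature in drug_signatures.items():
    rho, _ = spearmanr(disease_signature, signature)
    scores[drug] = rho

# Drugs with the most negative correlation are the top repositioning candidates
for drug, score in sorted(scores.items(), key=lambda item: item[1]):
    print(drug, round(score, 2))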

This approach, known as drug repositioning (or drug repurposing), has already resulted in the approval and successful use of some drugs in new indications. For example, finasteride, a drug

that was originally developed and used for the treatment of benign prostatic hyperplasia, was

later repurposed for the treatment of androgenic alopecia (male pattern baldness), and

evaluated also as a prospective chemopreventive agent in prostate cancer. Another example

that illustrates drug repositioning is represented by the tricyclic antidepressant desipramine, which has been used in the USA since 1964 for the treatment of depressive disorders. While newer generations of antidepressants have almost completely replaced desipramine in modern clinical

psychopharmacology, the trends identified in transcriptomics data for cancer cells exposed to

various drugs suggested that desipramine may exert activity against some malignancies.

Subsequently, after pre-clinical validation using in vitro and in vivo models, desipramine

entered a phase II clinical trial for the treatment of small cell lung cancer (Jahchan NS, Dudley JT,

Mazur PK et al. A drug repositioning approach identifies tricyclic antidepressants as inhibitors of

small cell lung cancer and other neuroendocrine tumors. Cancer Discov 2013; 3: 1364-1377).

Importantly, drug repositioning is just one of many possible examples of translational

health research.

The progressing integration of basic and clinical biomedical sciences will bring new translational research centers that will have to be staffed with investigators capable of working in a highly

multidisciplinary area, which will include chemistry, molecular and cell biology, pharmacology,

immunology, bioinformatics, biostatistics, molecular pathology, systems biology and various

clinical disciplines. Compartmentalization of biomedical sciences, even at their basic side, will

most likely be reduced and boundaries delimiting one discipline from another will disappear in

time, because they historically developed from past limitations that did not allow seeing

knowledge in its interdisciplinary context.

Another historical reason that forced compartmentalization of science can be found in

substantial difficulties that accompanied the use of various methods in scientific practice. These

difficulties considerably limited the range of experimental or computational methods that could

be mastered by individual scientists and applied to solve their specific scientific problems. In

spite of the continuously growing body of scientific findings and increasing complexity of their


interpretations, recent technological advances paradoxically simplified the use of highly

sophisticated scientific methods, which thus became more available and user-friendly (albeit

not trivial). This development made it possible for individual practicing scientists to master, at the user level, several methods across the historically defined boundaries of various scientific

disciplines. In contrast to past times, when the majority of advanced experimental

equipment required highly trained and dedicated users, including in-house support staff, the

situation is considerably different nowadays, when individual scientists can use, if the need

arises, many sophisticated methods and generate data across various disciplines of biomedical

science (for example flow cytometers, diagnostic MRI systems, next generation sequencing

systems for exome and transcriptome sequencing and liquid-handling workstations for high-

throughput experiments). At the present time, scientists in academia who work in the field of

drug discovery and development not only design, synthesize and determine the structures of

prospective new drugs, but also evaluate their pharmacological activity and toxicity using cell

cultures in vitro and animal models in vivo, which was essentially unthinkable in the past,

forcing medicinal chemists to outsource tests of compounds to more biologically oriented

researchers.

Thus, the blurring of boundaries between biomedical sciences has become possible owing to the

interdisciplinary context of new discoveries and availability of user-friendly advanced

experimental systems. As a result, these boundaries, which were delineated due to past

limitations, are now losing their justification, and biomedical sciences seem to converge into

a smaller number of highly multidisciplinary sciences. This development will possibly help us

overcome obstacles caused by over-specialization and over-compartmentalization of

biomedical science, which resulted in the undesirable phenomenon that scientists “came to

know more and more about less and less” (quote by Dr. John Higginson (1922-2013), the first

director of the International Agency for Research on Cancer).

In conclusion, recent advances in those biomedical sciences, in which I had the privilege and

pleasure to work, suggest that their future will be increasingly based on big data produced by

modern multiplex and high-throughput assay systems. These methods will generate big data

often resolved on a temporal scale for the description of the dynamics of biological systems, and on

a spatial scale to map the data on a cellular or subcellular level, allowing correlations between

biologically relevant properties and special cell subpopulations. Big data generated by various

experimental platforms will reflect various hierarchical levels of biological systems (genome,

epigenome, transcriptome, proteome, metabolome and interactome) that will be integrated

and interpreted by means of data mining in order to uncover their biological meaning and

relevance with respect to the prevention, diagnosis and treatment of human disease. These

interpretations will serve as the basis for generating new hypotheses that will be validated by

more advanced experimental models and used as explanations for mechanisms of biologically


and medically relevant processes, sometimes as inputs for various computational models.

Research aims and priorities will often be defined with consideration of their translational potential, which will integrate basic and clinical research on new methods of prevention, diagnosis

or treatment of human disease in a continuous bench-to-bedside but also bedside-to-bench

and back again process. Basic and clinical research will become more integrated and various

biomedical sciences will amalgamate into more multidisciplinary and less compartmentalized

disciplines.

We know for sure, however, that science will not be able to explain and predict everything even in the future. While answering certain questions will always lie outside the bounds of science, the

immense complexity of the world and biological systems in particular also puts constraints on

scientific questions that can be answered at any point in time. For example, as of today we

cannot conclusively differentiate perimenopausal women who will benefit from hormone replacement therapy (HRT) from those who will experience side effects such as vascular events

and breast cancer. Likewise, we presently do not know which factors determine the long-term

survival observed in about 3% of men of the same age group diagnosed with advanced

pancreatic adenocarcinoma, which is otherwise a highly malignant disease with an almost invariably

poor prognosis. These and similar questions may be answered in the future, once our understanding of the underlying complexities improves; it is just hard to predict when this is going to happen.

In 1963 a group of distinguished French intellectuals (which included Jean-Paul Sartre) suggested a full-scale scientific assault on cancer using worldwide resources and a budget of just

0.5% of the total military expenditures of the USA, USSR, Great Britain and France, which at

that time represented about 700 million USD per year. However, this enthusiastic proposal,

which materialized only to a limited extent, was doomed to fail. Since it was inspired by the

Manhattan project, which produced the first atomic bomb in less than 4 years at the cost of 20

billion USD, it was inherently built on the wrong assumption that a well-funded crash program in cancer research must necessarily succeed in finding a cure for cancer in a short time. The

assumption was wrong because scientific research is not the same thing as technological

development. In the case of the Manhattan Project, basic research and the revolutionary discoveries

needed for its success had already been accomplished by the time the project was initiated,

and only advanced development and production stages remained to be completed (for

illustration: Klaproth discovered uranium in 1789, Chadwick discovered the neutron in 1932, Dempster discovered uranium-235 in 1935, Hahn, Meitner and Frisch discovered nuclear fission in 1938, Szilárd and Fermi demonstrated the nuclear chain reaction in 1939, Seaborg discovered plutonium in 1940, and the Manhattan Project ran between 1942 and 1945).

In contrast, finding simple solutions for many human diseases resists such an approach

regardless of funding and workforce, and each discovery just opens the door to a much vaster and more


complex world. Certainly, this is not only a problem of cancer research. Another appealing

example is American trypanosomiasis (Chagas disease). While we know more and more about

the fascinating and incredibly complex biology of its etiological agent Trypanosoma cruzi, we still have just two old drugs for the treatment of Chagas disease, nifurtimox since the 1960s and benznidazole since the 1970s, both of which have very limited therapeutic potential. Each

advancement in biomedical research, exemplified above by the research on cancer and Chagas

disease, seems to be just another brick in a mysteriously designed construction and none of us

can say how many bricks that construction is going to require. Unlike development, research

follows its own course and discoveries seldom obey project timelines and predefined

milestones. We can expect spectacular discoveries and paradigm shifts in future biomedical

sciences, but they will be rare in comparison with the enormous amount of unspectacular and

publicly underappreciated spadework that will always be needed to lay out roadways and put

up road signs on what is perhaps the longest journey with no end in sight.

i Dedicated to my Mother Ľuboslava Mezencevová, M.D.