Generation of Digital Phantoms of Cell Nuclei and
Simulation of Image Formation in 3D Image Cytometry
David Svoboda, Michal Kozubek, Stanislav Stejskal
Abstract: Image cytometry still faces the problem of the quality of cell image analysis results. Degradations caused by cell preparation, optics, and electronics considerably affect most 2D and 3D cell image data acquired using optical microscopy. That is why image processing algorithms applied to these data typically offer imprecise and unreliable results. As the ground truth for given image data is not available in most experiments, the outputs of different image analysis methods can be neither verified nor compared to each other. Some papers solve this problem partially with estimates of ground truth by experts in the field (biologists or physicians). However, in many cases, such a ground truth estimate is very subjective and strongly varies between different experts. To overcome these difficulties, we have created a toolbox that can generate 3D digital phantoms of specific cellular components along with their corresponding images degraded by specific optics and electronics. The user can then apply image analysis methods to such simulated image data. The analysis results (such as segmentation or measurement results) can be compared with ground truth derived from input object digital phantoms (or measurements on them). In this way, image analysis methods can be compared with each other and their quality (based on the difference from ground truth) can be computed. We have also evaluated the plausibility of the synthetic images, measured by their similarity to real image data. We have tested several similarity criteria such as visual comparison, intensity histograms, central moments, frequency analysis, entropy, and 3D Haralick features. The results indicate a high degree of similarity between real and simulated image data. © 2009 International Society for Advancement of Cytometry
Key terms: digital phantom; synthetic image; procedural texture; point spread function; fluorescence optical microscope; 3D image cytometry
Current biomedical research relies strongly on computer-based evaluation, as the
vast majority of commonly used acquisition techniques produce large numbers of
numerical or visual data. Each individual technique (optical microscopy, PET, MRI,
CT, ultrasound, etc. (1)) has its advantages that make it suitable for use in selected
fields of research. However, each technique also has its own drawbacks (blur, noise,
various aberrations) (2–4). In this sense, one should keep in mind that biomedical
data supplied by scientific or diagnostic instruments always suffer from some imper-
fection and therefore cannot be directly used for further analysis, evaluation or mea-
surement. To guarantee more accurate results, some preprocessing is required.
Let us focus on the area of image reconstruction. Here, the search for a ground truth approximation is called image restoration (5–7), and it constitutes the best possible method for image recovery. Restoration proceeds in reverse chronological order; that is, the image defect caused by the first phenomenon in the acquisition process must be eliminated last, and likewise for the second, third, and subsequent defects. As soon as the captured image is restored, it
can be further submitted to measurement or segmentation algorithms. If all the
defects and the noise were removed during image restoration, we should get the
Centre for Biomedical Image Analysis, Botanicka 68a, Faculty of Informatics, Masaryk University, 602 00 Brno, Czech Republic
Received 16 May 2008; Revision Received 10 October 2008; Accepted 3 February 2009
Additional Supporting Information may be found in the online version of this article.
Contract grant sponsor: Ministry of Education of the Czech Republic; Contract grant numbers: LC535, 2B06052.
*Correspondence to: D. Svoboda, Centre for Biomedical Image Analysis, Botanicka 68a, Faculty of Informatics, Masaryk University, 602 00 Brno, Czech Republic
Email: [email protected]
Published online 16 March 2009 in Wiley InterScience (www.interscience.wiley.com)
DOI: 10.1002/cyto.a.20714
© 2009 International Society for Advancement of Cytometry
Original Article
Cytometry Part A 75A: 494–509, 2009
original unaffected ground truth image data as well as derived
ground truth analysis results. In practice, this is not possible.
The task is too complex, so the imperfections are diminished rather than removed entirely. Moreover, due to the nonexistence of ground truth image data, it is difficult to judge
whether the restoration has been done correctly and hence
whether the final evaluation is valid.
Let us avoid searching for inverse sequences and instead
try to follow the chronological order, that is, the acquisition
process. First, let us create a digital phantom of an object that
we want to observe. Then let us submit this object to all the
phenomena that appeared during the real acquisition. The
final synthetic image will be very similar to the image acquired
under real conditions. In this way, we design a simulator: a
tool for digital phantom generation and simulation of its
acquisition. A user can then use the simulator to evaluate the
correctness of various image restoration (or image analysis)
algorithms by comparing their results with ground truth
image data (or ground truth analysis results) derived from the
digital phantom (see Fig. 1).
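The quality-control loop just described can be sketched in a few lines. The Dice coefficient used here is our own illustrative choice of quality measure, and the generate/degrade/segment callbacks are hypothetical placeholders, not part of any published toolbox:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity between two binary masks: 1.0 for identical masks,
    0.0 for disjoint ones."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def evaluate(segmenter, generate_phantom, degrade, n_trials=10):
    """Score a segmentation method against phantom-derived ground truth."""
    scores = []
    for _ in range(n_trials):
        truth = generate_phantom()      # binary ground-truth phantom mask
        image = degrade(truth)          # simulated acquisition of the phantom
        scores.append(dice(segmenter(image) > 0, truth > 0))
    return float(np.mean(scores))
```

Two competing algorithms can then be ranked by their mean scores over the same set of phantoms, which is exactly what the missing ground truth prevents on real data.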
It is noteworthy that the quality of each simulator
depends strongly on the demands of its designer. The simulator can be simple or very sophisticated. At present, simulators appear in nearly every field of biomedical research.
Although many authors designed their simulators as compact
toolboxes, the simulation process can always be split into three
principal parts: (I) digital phantom object generation, (II)
simulation of signal transmission, and (III) simulation of sig-
nal detection and image formation.
(I) Phantom Generation
Macroscopic objects like the kidneys, heart, brain, mus-
cles, or blood vessels are examples of objects that are relatively
easy to model, as their shape and behavior are well under-
stood. The problem is to choose the model that fits the
observed data to the greatest extent possible. Generation of
the static models has already been studied extensively when
creating the reference phantom image for the brain (8–12).
Jensen et al. proposed simple models of the kidneys and the
fetus (13). When handling these types of image data, no move-
ment is expected. Completely different object types represent
the heart or blood vessels, where activity is studied and kinetic
models (14–17) are needed.
In contrast with the macroscopic world, when handling
microscopic objects like cells, the main issue is that their com-
ponents and DNA structure cannot be simply observed with
the naked eye. The observer needs a special technique in order
to acquire at least some visual draft of the studied objects. As
there are many unwanted phenomena causing image imper-
fection in this case, the final image is strongly affected by these
degradations, and some amount of image restoration is
required. Unfortunately, the process of restoration is not
exactly reciprocal to acquisition; one must admit that the
restored image is an estimate rather than the original image
data. That is why no one knows the exact shape of microscopic
structures; hence, there is no ground truth describing the exact
shape of these objects. For many microscopic objects, either
they are expected to have a very simple structure, or their
shape is deliberately simplified.
Regarding the simplest point-like objects such as FISH
(18,19) spots, Grigoryan et al. proposed a simulation toolbox
for generating large sets of spots (20). Each spot was repre-
sented by a sphere randomly placed in 3D space. Two indivi-
dual spheres were allowed to overlap, but only under specific
conditions. Manders et al. also addressed the issue of virtual
spot generation (21). They verified a novel region-growing
segmentation algorithm over a large set of Gaussian-like 3D
objects arranged in a grid.
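A test set in the spirit of Manders et al. can be sketched as follows; the grid spacing and spot width are illustrative values, not those of the original study:

```python
import numpy as np

def gaussian_spot(shape, center, sigma):
    """One Gaussian-like 3D spot, the kind of synthetic object used to
    exercise spot-segmentation algorithms."""
    z, y, x = np.ogrid[0:shape[0], 0:shape[1], 0:shape[2]]
    cz, cy, cx = center
    return np.exp(-((x - cx)**2 + (y - cy)**2 + (z - cz)**2) / (2 * sigma**2))

def spot_grid(shape=(32, 64, 64), step=16, sigma=2.0):
    """Arrange well-separated spots on a regular grid."""
    img = np.zeros(shape)
    for cz in range(step // 2, shape[0], step):
        for cy in range(step // 2, shape[1], step):
            for cx in range(step // 2, shape[2], step):
                img += gaussian_spot(shape, (cz, cy, cx), sigma)
    return img
```

Because every spot position and intensity is known exactly, any detector run on such a volume can be scored against perfect ground truth.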
When creating cell-like or nucleus-like objects, algorithms typically use the most popular shapes, such as circles and ellipses in 2D and spheres and ellipsoids in 3D space. To check the
quality of the new cell nuclei segmentation algorithm, Lockett
et al. (22) generated a set of artificial spatial objects in the
shape of curved spheres, ellipsoids, discs, bananas, satellite
discs, and dumbbells. Lehmussola et al. (23) designed a more
complex simulator called SIMCEP that could produce large
cell populations. However, the toolbox was designed for 2D
images only, and its extension to higher dimensions was not
straightforward. Later, Svoboda et al. (24) extended this model
to manipulate fully 3D image data, but with a limited number
of generated objects.
Up to now, the majority of authors have focused on the
design of spots or nuclei. Recently, Zhao et al. (25) designed
an algorithm for generating an entire 2D digital cell phantom.
Here, they modeled the whole cell, including the nucleus, pro-
teins, and cell membrane, using machine learning from real
data. This is a different approach than that of other studies
that use basic shapes and their deformations. The advantage
of the machine learning approach is that it might extract a
more precise shape model from real data, but the disadvantage
is that the model cannot be described in precise mathematical
terms.
Although the previously mentioned papers and articles
were interested in individual object modeling, Graner et al.
(26) focused on generating large cell populations. They
adopted the statistical large-Q Potts model to simulate the
reorganization of uniformly distributed cell-like objects to
guarantee the natural shape and distribution of the cells.
(II) Signal Transmission
The second stage covers the period during which a signal
is transmitted through the environment. One of the most typi-
cal environment characteristics is the impulse response of the
system, often called the point spread function (PSF) (2),
which is common in optical microscopy as well as in PET and
ultrasound imaging. This is the pivotal phenomenon drasti-
cally affecting the quality of the final results, and it is usually
simulated (27) by convolving the incoming signal with the
given PSF. However, the vast majority of the authors reduced
this simulation to the simple Gaussian kernel (20,21,28,29), as
it is commonly understood to be a good approximation of any
PSF. This phenomenon is not the only one affecting the transmitted signal. In optical microscopy, for example, an incorrect
position of the light source, uneven illumination (24), chro-
matic (30) and monochromatic aberrations (31), or some
reflection or refraction on lens surfaces may create some arti-
facts as well. One should keep in mind that signal reflection
and refraction happen in other modalities as well. Ultrasound
imaging is an example of a technique using these phenomena.
(III) Signal Detection and Image Formation
The final stage corresponds to the detection of the signal
with the device sensors and its conversion to the digital repre-
sentation, which naturally has some problems. The use of
sensors introduces Poisson noise (22,23,27,28), which is an
artifact typically seen even with the naked eye. Aside from
Poisson noise, the A/D converter and amplification electronics
introduce a certain level of additive white Gaussian noise
(22,29). If the equipment is not properly cooled, the level of
dark current noise increases. The CCD detectors, for example,
are well known for their fixed-pattern noise and blooming
effect. Finally, one should keep in mind that the arrangement
of the signal sensors determines the final image sampling.
Examples of Simulators
Depending on the acquisition device, simulators can eas-
ily be split into several groups: optical microscopy, CT, PET,
MRI, and ultrasound imaging (see Table 1).
Figure 1. Quality control (QC) in biomedical image data processing and the advantage of using simulators rather than expert know-
how for this purpose. From the image processing point of view, ground truth derived from expert knowledge is imprecise and labori-
ous to obtain, whereas phantom-based ground truth is exact and easy to generate in large quantities. On the other hand, from the
biological point of view, expert-based approximate ground truth might be more precise because it corresponds to real objects, not
synthetic digital phantoms. In this article, however, we concentrate on image processing QC, not QC related to modeling a biological
system.
In the area of ultrasound imaging, Jensen et al. (13,39)
designed a very complex and flexible engine called Field II,
based on manipulation with a 3D linear acoustic field. An
alternative approach is Ultrasim (38), which could generate at
most 2D data. Unlike other authors, Wojcik et al. (41)
designed a nonlinear model of acoustic wave propagation in
tissues. In the field of elastography, D’hooge (37) focused par-
ticularly on simulating and tracking heart movement.
In the MRI area, Cocosco et al. designed a web interface
called BrainWeb (36), which also included a very useful brain
phantom database (11). Waks designed a complex cardiac
motion simulator (14), focusing on tagged MRI data.
Ma (28) found MRI phantom data very useful when
designing a PET simulator of brain images. In CT imag-
ing, Rosenberg designed a general purpose CT simulation
toolbox (34).
In optical microscopy, great progress in simulating image
data has been made mainly during the past 3 years, during
which several papers (20,23,24,33) have given straightforward
recipes or even offered useful software toolboxes. However, the development of simulators in this area is still in its early stages; hence, great progress can be expected.
In this article, we present a novel fully 3D technique
simulating the whole process of image acquisition from optical
microscopes. The whole process is split into three main independent consecutive sub-processes: digital phantom generation, simulation of signal transmission, and simulation of signal detection and image formation; this division also outlines the structure of the Method section. The subsequent section presents the results, and the figures illustrate the quality of our method. In the conclusion, we discuss the goals that we have achieved.
REFERENCE DATA
Before starting to develop any simulation toolbox, one
should have expert knowledge or at least access to a suffi-
ciently large database of real reference data. Such a source of
information is very important: digital phantom generation is
pointless without knowledge of the nature and the structure of
the images of real objects. In particular, this knowledge is
helpful when tuning the final simulator behavior.
Equipment
In this article, real image data were acquired using a Zeiss Axiovert 200M optical microscope with a Yokogawa CSU-10 confocal unit and an Andor iXon 887 back-illuminated EMCCD camera connected to a standard PC. Concerning the microscope settings, the alpha Plan-Fluar objective (100×/1.45 NA) was used.
For the time being, the simulator (developed in GNU
C/C++) can imitate an optical system comprising four different CCD cameras (see Table 2) and four optical microscopes
(see Table 3) with two types of confocal units and various
objectives. In general, there is no restriction put on the whole
optical system; that is, the component list can be extended
Table 1. An overview of the recent toolboxes and papers dealing with the generation of phantoms of biomedical objects and
their acquisition
OBJECT | TECHNIQUE | TOOLBOX NAME | AUTHOR(S) | SIMULATED STAGES | DIMS | REFERENCE | YEAR
Spots | Fluorescence microscopy | – | Manders et al. | I,III | 3D | (21) | 1996
Spots | Fluorescence microscopy | – | Grigoryan et al. | I,II,III | 3D | (20) | 2002
Nuclei | Fluorescence microscopy | – | Lockett et al. | I,III | 3D | (22) | 1998
Nuclei | Fluorescence microscopy | – | Solorzano et al. | I,II,III | 3D | (29) | 1999
Nuclei | Fluorescence microscopy | – | Svoboda et al. | I,II,III | 3D | (24) | 2007
Cells | Fluorescence microscopy | SIMCEP | Lehmussola et al. | I,II,III | 2D | (23,32) | 2005
Cells | Fluorescence microscopy | – | Dufour et al. | I,II,III | 3D+time | (33) | 2003
Cells | Fluorescence microscopy | – | Zhao et al. | I | 2D | (25) | 2007
* | Fluorescence microscopy | SVI Huygens | Voort et al. | II,III | 3D | (27) | 1995
Cells | – | CompuCell3D | Graner and Glazier | I | 3D+time | (26) | 1992
* | CT | CTSIM | Rosenberg | II,III | 2D | (34) | 2002
Brain | PET | PETSIM | Ma et al. | II,III | 3D | (28) | 1993
Brain | MRI | – | Collins et al. | I | 3D | (12) | 1998
Brain | MRI | – | Tofts et al. | I | 3D | (8) | 1997
Heart motion | MRI | – | Waks et al. | I,II,III | 3D | (14) | 1996
Brain | MRI | – | Rexilius et al. | I | 3D | (9,10) | 2003
Brain | MRI | BrainWeb | Kwan et al. | I,II,III | 3D | (35,36) | 1996
Tissue motion | Ultrasound | – | Schalaikjer et al. | I | 3D | (16,17) | 1998
Heart motion | Ultrasound | – | Rabben et al. | I | 3D | (15) | 1993
Heart motion | Ultrasound | – | D'hooge et al. | I,II,III | 2D | (37) | 2003
* | Ultrasound | Ultrasim | Holm | II,III | 2D | (38) | 2001
* | Ultrasound | Field II | Jensen et al. | I,II,III | 3D | (13,39,40) | 1996
* | Ultrasound | – | Wojcik et al. | II,III | 3D | (41) | 1997
arbitrarily. The only requirement is that each component has
to be described properly.
We evaluated the results on two servers. The first had two Intel Core 2 Quad 2.0 GHz processors, 8 GB RAM, and the SUSE Linux operating system. The second had an Intel Core 2 Quad 2.4 GHz processor, 4 GB RAM, and the Microsoft Windows Server 2003 operating system. We programmed the evaluation in Matlab (MathWorks) with the DIPimage toolbox (Delft University of Technology, Netherlands).
Biological Material
Microspheres. TetraSpeck™ microspheres (Molecular Probes, Eugene, OR) of 0.1 μm (T-7279), 0.2 μm (T-7280), and 0.5 μm (T-7281) diameter were used in this study. Each
microsphere is stained with four different fluorescent dyes
simultaneously: blue (365/430 nm), green (505/515 nm),
orange (560/580 nm), and dark red (660/680 nm). In this
study, we observed only the green channel. For illustration, see
Figure 2, in which the given 3D image data are visualized.
HL-60. One cell line representing tissue cultures was used:
human promyelocytic leukemia cells HL-60 (European Collection of Cell Cultures) maintained between 1 and 5 × 10⁵ cells/ml (see Fig. 3). These cells grow in RPMI-1640 medium containing 10% fetal bovine serum, penicillin (100 U/ml), and streptomycin (100 mg/ml) at 37°C in a 5% CO₂ atmosphere.
Dense suspension of cells in PBS (50–100 μl) was dropped onto poly-L-lysine coated microscopic slides. After attachment to the slide (about 15 min), cells were fixed with 3.7% paraformaldehyde in 250 mM HEPEM (RT; 12 min) and washed in 1× PBS (3 × 5 min). After washing, chromatin in the cell nuclei was stained with DAPI.
Granulocytes. Granulocytes are a category of terminally dif-
ferentiated white blood cells characterized by the presence of
granules in their cytoplasm. Granulocytes are also called poly-
morphonuclear leukocytes (PML) because of the varying
shapes of the nucleus. Human leukocytes that contain granulocytes (see Fig. 4) were isolated from heparinized peripheral blood of healthy donors using density gradient sedimentation composed of Telebrix (Guerbert, France) and a 3.8% solution of dextran (Sigma) in a 2:8 ratio. Granulocytes
were isolated from the upper layer with leukocytes using His-
topaque-1077 double gradient density centrifugation (Sigma).
Residual erythrocytes were removed by an erythrocyte lysis
buffer. Concerning fixation and staining of the granulocytes,
the same method was used as in the case of HL-60 cells.
METHOD
Following the introduction, the whole toolbox is split into
three independent parts: digital phantom generation, signal
transmission, and signal detection and image formation. Each
one represents a specific stage appearing during the final image
synthesis. The text of this section will follow this structure.
Digital Phantom Generation
Initially, the toolbox selects the type of the object to be
simulated. Currently, the toolbox can simulate three types of
objects.
HL-60. In the experiments using the HL-60 cell line, investi-
gators are usually interested in studying the cell nucleus that
occupies most of the cell volume. Therefore, the following text
will focus on modeling nucleus shape and texture.
When simulating the appearance of the HL-60 standard
cell line, we can presume the shape of the initial object to be
spherical, as this object type is topologically equivalent to a
sphere. However, the basic objects like spheres or ellipsoids are
too simple and regular. Because the aim is to simulate real
Table 2. List of simulated cameras
VENDOR CAMERA NAME
Andor iXon 887 BI
Photometrics CoolSNAP HQ
Photometrics Quantix KAF-1400
Princeton Instruments MicroMax 1300-YHS
Table 3. List of simulated microscopes
VENDOR MICROSCOPE NAME OPTIONAL EXTENSION
Zeiss Axiovert 200M CSU-10 confocal unit
Zeiss Axiovert 100S CARV confocal unit
Leica DMRXA CSU-10 confocal unit
Leica DMIRE2 –
Figure 2. A sample image containing microspheres of 0.5 μm diameter with peak excitation/emission wavelengths of 560/580 nm. This 3D figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
objects, a certain amount of irregularity is required. For this
purpose, we used the PDE-based method to distort the object
shape. The idea is based on viewing the object boundary as a
deformable surface. The deformation is realized with fast level
set methods (42) using artificial noise (43) as a speed function
(44). An example of such a deformation process result can be
seen in Figures 5a and 5b.
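A crude stand-in for this deformation step, substituting a spectral low-pass noise field for the fast level-set evolution, is to perturb the sphere's radius with smooth random noise before thresholding. All sizes and amplitudes below are illustrative:

```python
import numpy as np

def bumpy_sphere(shape=(48, 48, 48), radius=16.0, amplitude=3.0, seed=0):
    """Binary mask of a sphere with an irregular boundary. This mimics the
    irregular result of the PDE-based deformation, not the level-set method
    itself."""
    rng = np.random.default_rng(seed)
    z, y, x = np.indices(shape)
    c = np.array(shape) / 2.0
    r = np.sqrt((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2)
    # Smooth random field: white noise with high frequencies zeroed out.
    f = np.fft.fftn(rng.standard_normal(shape))
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(s) for s in shape], indexing="ij")
    f[np.sqrt(kz**2 + ky**2 + kx**2) > 0.1] = 0
    smooth = np.real(np.fft.ifftn(f))
    smooth *= amplitude / (np.abs(smooth).max() + 1e-12)
    return r < radius + smooth
```

The result is topologically still a sphere, as the text requires, but with the boundary irregularity of a real nucleus.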
Besides the shape, the texture of the nucleus image profile
reveals important information about the cell activity. In each
stage of the cell cycle, chromatin has different properties, and
hence when stained, it looks different. For these purposes, the
study and the measurement of heterogeneity of chromatin is
an important task (45). Essentially, there are two ways to gen-
erate synthetic texture: algorithms for texture synthesis (46–
48) and methods for procedural texture modeling (43). Here,
we decided to use the latter one. The texture function is
defined as a sum of several Perlin noise functions (43):

  Texture(x, y, z) = Σ_{i=0}^{N-1} noise((x, y, z) · b^i) / a^i    (1)

where b controls the flickering of the texture, a is responsible for smoothness, and N controls whether the result is still coarse (for N < 5) or fine enough (see Fig. 5c). However, certain nucleus parts may not contain chromatin and hence may remain
unstained. These locations are either left blank (without any
texture) or defined as very dark. The latter case corresponds to
an unwanted staining effect. The nucleoli (see Fig. 3) might be
an example of such an object type that typically appears as a
dark (not stained) place in the image of a nucleus. It was dis-
covered empirically that there is only one nucleolus per
healthy nucleus in human cells. As for cancerous cells, there
might be more than one nucleolus. The shape of such a
nucleolus is mostly spherical or slightly deformed. Because of
this property, its generation follows the same idea as the gen-
eration of the whole HL-60 nucleus.
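Equation (1) can be sketched as follows; the hash-based value noise below is a cheap stand-in for Perlin's gradient noise, which the toolbox itself uses:

```python
def value_noise(x, y, z):
    # Deterministic hash-based stand-in for Perlin's 3D noise, in [0, 1).
    n = int(x * 157 + y * 311 + z * 673)
    n = (n << 13) ^ n
    return ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 0x80000000

def texture(x, y, z, N=6, a=2.0, b=2.0):
    # Fractal sum of Eq. (1): sum_{i=0}^{N-1} noise((x, y, z) * b^i) / a^i.
    return sum(value_noise(x * b**i, y * b**i, z * b**i) / a**i
               for i in range(N))
```

With a = b = 2, each octave adds detail at twice the frequency and half the amplitude, so increasing N beyond roughly 5 refines the texture without changing its coarse structure.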
Granulocyte. A granulocyte is a type of cell containing a
nucleus with highly condensed chromatin. Such a nucleus
typically has three to five lobes connected by very narrow,
barely perceivable channels. The shape of each lobe resembles
a slightly deformed sphere. Concerning the inner structure,
each lobe contains a cavity, as the chromatin is mostly concen-
trated in the nucleus periphery. Similarly to the HL-60 cell
line, we are mainly studying the nucleus context, and hence
Figure 3. An example of four real HL-60 cell nuclei stained with DAPI. White arrows show the location of one selected nucleolus. Each 3D
figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
the digital phantom designed in the following text is expected
to represent only the nucleus.
First of all, note that the lobes are distributed within a
very small area and are sometimes even tightly pressed to
each other. When generating the granulocyte structure, a cen-
ter of mass is either given by the user or generated randomly
within the image space. Subsequently, the position of each
lobe follows the Gaussian distribution with a mean in the
given center and a very low standard deviation. Because of
the similarity of the basic lobe profile with the HL-60 cell nu-
cleus, the same approach as in the previous paragraph was
used to generate each lobe. Finally, the lobes were connected
with spatial cubic splines simulating the channels between the
individual lobes. The simulation of chromatin concentration
near the nucleus periphery was managed with a distance
transform weighted by the procedural texture (see Fig. 6c).
Microsphere. Microspheres are spherical objects of known
diameter without any texture or distortion. Hence, a simple
sphere with a given diameter can represent each digital bead
phantom. The microspheres are typically used when calibrat-
ing the optical system. Here, they can be used to verify the cor-
rectness of the selected PSF and hence the plausibility of the
final synthetic image.
Notation. To summarize the previous paragraphs, the genera-
tion of any digital phantom can be described simply as:
  Iphantoms = Ibackground + Σ_{p∈P} MakePhantom(p)    (2)
where Ibackground is an initial empty image with the nonspecific
staining effect, and P is a multiset of all the available phantom
types. When staining the objects using fluorescent dyes, some
parts of the specimen or cover glass may unwillingly be
stained. This phenomenon manifests itself as a barely percepti-
ble veil covering the whole glass. Finally, MakePhantom(.) is
any function responsible for creation of a selected digital
phantom. As three types of objects are currently ready for
simulation, the variable p can be substituted for HL-60, granu-
locyte, or microsphere.
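Equation (2) amounts to summing independently generated phantoms over a faint background veil; `make_phantom` below is a hypothetical callback standing in for the toolbox's MakePhantom(.) function, and the background level is illustrative:

```python
import numpy as np

def compose_phantoms(shape, make_phantom, phantom_types, background_level=10.0):
    """Eq. (2): I_phantoms = I_background + sum over p of MakePhantom(p)."""
    # Constant veil simulating the nonspecific staining of the cover glass.
    image = np.full(shape, background_level)
    for p in phantom_types:          # p is "HL-60", "granulocyte", ...
        image += make_phantom(p)
    return image
```

Each phantom is kept separately as well, since the per-object masks are precisely the ground truth that later evaluation needs.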
Signal Transmission
Light conditions. In traditional light microscopy, both laser
and mercury lamps are used for visualization purposes, but
neither type can spread light within the specimen uniformly.
Figure 4. An example of four real granulocyte nuclei stained with DAPI. Each 3D figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
Therefore, the distribution of the light intensity is assumed to
be quadratic (49) over each xy-slice:
  Ilight(x, y) = a₀ + a₁x + a₂y + a₃xy + a₄x² + a₅y²    (3)

where the vector a = (a₀, a₁, a₂, a₃, a₄, a₅) defines the shape of a quadratic surface, typically widely opened. In some cases, there
may even be two quadratic profiles in the image. This happens
if the mercury lamp arc and its mirrored image are located far
from each other. Very often, the quadratic surface peak is not
centered with respect to the image center. This is usually the
case if the camera or lamp is shifted relative to optical axis of
the microscope.
This phenomenon makes objects that are placed farther
from the optical axis look darker than the others. Cropping
such an object from this large image usually leads to neglect-
ing this effect because within this small sub-image the effect is
hardly recognized with the naked eye. Submitting such an
image to the discrete Fourier transform (DFT), one can reveal
the uneven illumination again, though this is not seen with
the naked eye. The explanation is based on DFT properties.
Because the Fourier domain is strictly periodic, one should
keep in mind that the right side of an image is virtually joined
with the left side and the upper side with the lower one. If
there is at least a small amount of light gradient across the
image, due, for example, to uneven illumination, the left side
does not join the right side smoothly. This phenomenon man-
ifests itself as a centered cross in the Fourier domain. The
stronger the light gradient, the stronger the cross appears (see
Fig. 7).
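The effect is easy to reproduce numerically: even a gradient of half a grey level per pixel concentrates spectral energy along the image axes. The image size and gradient below are purely illustrative:

```python
import numpy as np

# A nearly flat image with a slight horizontal intensity gradient.
img = np.fromfunction(lambda y, x: 100.0 + 0.5 * x, (64, 64))
spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
# For this horizontal gradient, the energy lies on the central row of the
# spectrum (the horizontal arm of the "cross"); off-axis bins are near zero.
```

The cross is therefore a sensitive detector of uneven illumination even when the gradient is invisible to the naked eye.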
When eliminating this phenomenon, a quadratic surface
is typically fitted to the original image. The result is then
obtained as the pixel-wise ratio of the original image and the
given surface. Because in the case of simulation we solve the
inverse problem, we can apply the estimated quadratic surface
to the given image simply by multiplying.
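For a single xy-slice, evaluating the surface of Eq. (3) and applying it by multiplication can be sketched as below; the normalization by the surface maximum is our assumption, chosen so that simulated intensities only decay away from the illumination peak:

```python
import numpy as np

def illumination_surface(shape, a):
    """Quadratic illumination profile of Eq. (3):
    I_light(x, y) = a0 + a1*x + a2*y + a3*x*y + a4*x**2 + a5*y**2."""
    a0, a1, a2, a3, a4, a5 = a
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return a0 + a1*x + a2*y + a3*x*y + a4*x**2 + a5*y**2

def apply_uneven_illumination(image, a):
    """Simulate uneven illumination on one xy-slice by multiplication."""
    light = illumination_surface(image.shape, a)
    return image * (light / light.max())
```

Restoration performs the opposite operation, dividing the acquired image by the fitted surface.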
Figure 5. HL-60 nucleus digital phantom generation: (a) the original ellipsoid representing the rough primitive mask of a generated cell nu-
cleus and (b) the same object after the 3D PDE-based deformation. (c) Object equipped with texture defining the internal structure. (d) The
image after passing through the optical system. Each 3D figure consists of three individual images: the top-left image contains a selected
xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonalslice planes are shown with ticks.
Impulse response
Each optical system can be described by a point spread
function (PSF) that is the impulse response of this system
determining the amount and the characteristics of image blur.
The PSF can either be measured empirically (real PSF) or esti-
mated (theoretical PSF (50)). The theoretical PSF is usually
based on prior knowledge of the optical system properties
(confocal or wide-field microscope, objective, wavelength,
etc.). Here, we used a real PSF. The PSF was subsequently used
as a convolution kernel applied to the given image (see Fig. 5d
or 6d).
Notation
The consecutive processes of this stage can be described
formally as follows:
  Iblurred = (Iphantoms · Ilight) * PSF    (4)

where Iphantoms is the image containing pure digital phantoms, and Ilight is the image representing the light decay effect. The operator '·' corresponds to pixel-wise multiplication, and the operator '*' corresponds to convolution.
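Equation (4) can be sketched with an FFT-based convolution. Circular (wrap-around) convolution is used here for brevity; a faithful simulator would pad the volume before convolving with the measured PSF:

```python
import numpy as np

def transmit(phantoms, light, psf):
    """Eq. (4): I_blurred = (I_phantoms . I_light) * PSF.
    Pixel-wise multiplication followed by 3D convolution with the PSF."""
    signal = phantoms * light                 # the '.' operator
    kernel = np.zeros_like(signal)            # embed PSF in a full-size volume
    kernel[tuple(slice(0, s) for s in psf.shape)] = psf
    return np.real(np.fft.ifftn(np.fft.fftn(signal) * np.fft.fftn(kernel)))
```

With a measured PSF this reproduces the blur of the real optical path; a delta-function PSF leaves the signal unchanged.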
Signal Detection and Image Formation
Signal detector resolution. One should keep in mind that
there are several factors affecting the image resolution. The
signal detector contributes with its pixel size, which, together
with total magnification of the optical system, defines spatial
sampling frequency. In our application, we establish the cor-
rect image resolution based on this information.
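The resulting sampling step in the specimen plane is simply the detector pixel size divided by the total magnification; the numbers below are illustrative, not those of a specific camera:

```python
def lateral_sampling_um(pixel_size_um, total_magnification):
    """Sampling distance in the specimen plane for one detector pixel."""
    return pixel_size_um / total_magnification

# e.g. a 16 um camera pixel behind 100x total magnification samples the
# specimen every 0.16 um:
step = lateral_sampling_um(16.0, 100.0)
```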
Dark current signal. There is a signal generated internally in
the detector even if no photon is coming, for example, in the
darkroom. This phenomenon is called dark current signal and
is linearly proportional to the exposure time. The detector
vendor specifies dark current generated per pixel per unit time
(in electrons/pixel/s). For example, the best CCD cameras gen-
erate less than 0.1 electrons/pixels/s. If this constant is multi-
plied by the exposure (acquisition) time, the total dark current
Figure 6. Granulocyte digital phantom generation: (a) the initial group of ellipsoids representing the rough primitive estimate of lobes; (b) each
lobe slightly deformed; (c) the lobes equipped with cavities and nonhomogeneous structure of chromatin; (d) image after passing through the
optical system. Each 3D figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image correspondsto a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
ORIGINAL ARTICLE
502 Generation of Digital Phantoms of Cell Nuclei and Simulation of Image Formation in 3D Image Cytometry
signal is obtained. This type of signal is strongly dependent on
the detector temperature: the lower the temperature, the lower
the number of unwillingly generated electrons.
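A minimal sketch of this dark current model, assuming the per-pixel electron count fluctuates around the rate × exposure mean according to a Poisson law (the vendor constant and exposure time below are illustrative values):

```python
import numpy as np

def dark_current(shape, rate_e_per_s=0.1, exposure_s=2.0, rng=None):
    """Mean dark signal = rate x exposure time (electrons per pixel);
    the actual per-pixel count is drawn from a Poisson distribution."""
    rng = rng or np.random.default_rng(0)
    mean_electrons = rate_e_per_s * exposure_s   # e.g. 0.1 e/px/s * 2 s
    return rng.poisson(mean_electrons, size=shape).astype(float)

dark = dark_current((256, 256))   # mean close to 0.2 electrons/pixel
```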
Fixed pattern noise. Every CCD chip produces a small num-
ber of hot pixels, which appear as very bright pixels in the final
image matrix, and dead pixels, which appear as very dark pix-
els. The first case can be simulated by filling corresponding
pixel positions with an overwhelming number of electrons. The
dead pixels are simulated as pixel positions with no electrons.
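The hot/dead pixel simulation described above can be sketched as follows; the positions, counts, and full-well value are hypothetical parameters:

```python
import numpy as np

def add_fixed_pattern(electrons, n_hot=5, n_dead=5, full_well=30000, rng=None):
    """Place a few hot pixels (saturated wells) and dead pixels (zero
    electrons) at random but fixed positions: fixed-pattern noise."""
    rng = rng or np.random.default_rng(1)
    out = electrons.copy().ravel()
    idx = rng.choice(out.size, size=n_hot + n_dead, replace=False)
    out[idx[:n_hot]] = full_well     # hot: overwhelming number of electrons
    out[idx[n_hot:]] = 0.0           # dead: no electrons at all
    return out.reshape(electrons.shape)

img = add_fixed_pattern(np.full((64, 64), 100.0))
```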
Quantification uncertainty. The most important noise in
low-light imaging (which is typical for fluorescence micros-
copy) is shot noise, which is usually modeled with a Poisson
distribution. This phenomenon stems from physical laws that
say that the relative measurement error increases as signal in-
tensity decreases. This noise, also known as Poisson noise, is
neither additive nor multiplicative. Its value depends directly
on the input signal at each position. The noise standard deviation equals the square root of the quantified signal (the number of electrons). Hence, the higher the input signal, the lower the relative amount of noise present.
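The inverse relationship between signal level and relative shot noise follows directly from the Poisson model, where the noise level scales as √S and therefore the relative noise as 1/√S:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.array([10.0, 100.0, 10000.0])      # mean electron counts
noisy = rng.poisson(signal).astype(float)      # shot-noise realization

# relative noise level = sqrt(S)/S = 1/sqrt(S): drops as signal grows
relative_noise = np.sqrt(signal) / signal
```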
Readout noise. The signal amplification process produces a
certain amount of readout noise, which is modeled as additive
white Gaussian noise. The amount of this noise is also speci-
fied by the detector vendor (in electrons per pixel). The best CCD cameras generate less than 10 electrons/pixel. Subsequently, an A/D converter is used; that is, the electrons representing the signal are quantized into analog-to-digital units (ADU). Each such ADU defines a particular intensity level in the final digital image.
Notation. To summarize the processes of this stage, we can
write:
I_final = ADC[ g_p( r(I_blurred) + I_d ) + g_g ]   (5)

where r(·) stands for signal sampling, g_p(·) applies the Poisson noise, g_g is the additive Gaussian noise, I_d is the dark current signal including hot/dead pixels, and ADC[·] stands for the quantization process that converts electron units to ADUs. Theoretically, I_blurred should be a continuous signal, and r(·) should be a real sampling function defined by the detector properties. However, we cannot handle a continuous signal. To simulate this, we can up-sample the initial image I_background to a higher resolution. Afterwards, the r(·) function is simply a down-sampling filter. This approach allows generating images with sub-pixel precision.
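Under simplifying assumptions (the input is already discrete, so the sampling step r(·) is omitted; gain, readout noise, and ADU conversion factor are illustrative values), the detection stage of Eq. (5) can be sketched as:

```python
import numpy as np

def detect(blurred, dark, gain=1.0, readout_sigma=10.0, adu_per_e=0.5,
           bit_depth=16, rng=None):
    """Sketch of Eq. (5): Poisson noise applied to signal + dark current,
    additive Gaussian readout noise, then ADC quantization to clipped
    integer ADUs. Sampling r(.) is omitted for already-discrete input."""
    rng = rng or np.random.default_rng(3)
    electrons = rng.poisson(np.maximum(blurred + dark, 0)).astype(float)
    electrons += rng.normal(0.0, readout_sigma, size=electrons.shape)
    adu = np.round(gain * electrons * adu_per_e)
    return np.clip(adu, 0, 2 ** bit_depth - 1).astype(np.uint16)

final = detect(np.full((32, 32), 400.0), dark=np.full((32, 32), 0.2))
```

With a mean of ~400 electrons and 0.5 ADU per electron, the resulting image values cluster around 200 ADU with Poisson-plus-readout scatter.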
RESULTS AND DISCUSSION
Currently, the simulator can generate three different types
of objects. The simplest one is the microsphere, which is usu-
ally used to measure optical system properties. The other two are HL-60 cell nuclei (see Fig. 8) and granulocyte nuclei (see Fig. 9). As these two types of objects are quite complex, one needs a lot of experience to set all the required simulator parameters correctly in order to get the expected shape
and the internal structure. Close collaboration with biologists
helped us tune the parameters of individual methods in the
process of digital phantom generation. In this way, we
obtained the most suitable values. They are part of the source
code, which is freely available under the GNU GPL.
The plausibility of the results can be assessed in many
ways. The most common method for biomedical image data
comparison is visual inspection by an expert. On one hand,
this approach is important for a coarse estimate when decid-
ing whether the given image is visually similar to the class of
real images. On the other hand, it is tedious and cumbersome,
and the human eye may be simply deceived. This is especially
true with spatial (3D) image data.
Another way of comparison is more straightforward,
more exact and faster. The idea is based on image descriptors.
Such a descriptor can, for example, characterize the objects
contained within the image, measure some local texture prop-
erties, or evaluate some simple statistics over the whole image.
In the past, various measures (also called descriptors or fea-
tures) have been introduced, and some of them have been in
principle accepted as a standard. The measures have been
designed either for generic image data (51,52) (especially the
texture measures) or for segmented image data (51,53). The
latter include, for example, the number of objects within an
image, the circularity or size of the objects, or the distance
between the objects. Until recently, the vast majority of
descriptors have been designed and consequently evaluated
only over 2D image data. Lately, due to progress in the devel-
opment of computer hardware, most of the features have been
extended to 3D (54,55).
In this work, we focused on evaluating a selected set of
global texture features that includes 2nd to 6th central
moment (51) and 3D Haralick features (55). We compared the
features computed from real image data with those computed
from simulated image data directly, whereas a recent study
(25) compared the real and simulated data indirectly based on
classification results inferred from the features.
We also studied image intensity histograms and entropy.
Aside from these measurements, we also validated some of the optical system properties with a Fourier analysis (we already discussed this issue in Section 3.2).
Figure 7. Uneven illumination of the specimen that occurs during image acquisition causes the Fourier transform counterpart to contain a cross centered at the position of zero frequency: (a) the original image; (b) its Fourier power spectrum.
Measurements
Central moments. The nth central moment is a statistical measure evaluated over a given discrete random variable:

μ_n = Σ_{i=0}^{L−1} (z_i − m)^n p(z_i)   (6)

where

m = Σ_{i=0}^{L−1} z_i p(z_i)

Here, the variable is marked as z_i and denotes a particular intensity level present in the image. Hence, the sum covers the range of all the image intensity levels (L). Each moment (μ_n) has its own specific meaning. The second moment is known as the variance of the given data. The third moment expresses the skewness of the measured data, and the fourth moment describes the relative flatness.
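Eq. (6) can be computed directly from the normalized intensity histogram; the helper below is an illustrative sketch, not the toolbox's code:

```python
import numpy as np

def central_moments(image, max_order=6, levels=256):
    """Eq. (6): central moments of the normalized histogram p(z_i)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                              # p(z_i)
    z = np.arange(levels)                              # intensity levels
    m = np.sum(z * p)                                  # mean intensity
    return {n: np.sum((z - m) ** n * p) for n in range(2, max_order + 1)}

# sanity check on a two-level image {0, 2}: variance = 1, skewness = 0
moments = central_moments(np.array([[0, 2], [2, 0]]), levels=3)
```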
Entropy. Entropy [see Eq. (7)] comes from information theory and defines the amount of uncertainty (the amount of information) hidden in the measured data. A uniform area has zero entropy, whereas scattered data carry more information and hence can be characterized by a higher entropy level.

H = −Σ_{i=0}^{L−1} p(z_i) log p(z_i)   (7)
Haralick features. The Haralick features (52,55) are one of
the most popular texture feature sets. They are a collection of
statistics calculated from a so-called co-occurrence matrix.
Haralick features include the angular second moment, contrast, correlation, variance, inverse difference moment, etc.
Each of these statistics relates to a certain image feature. For
example, the statistic ‘‘contrast’’ measures the amount of local
variation in an image.
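To make the co-occurrence idea concrete, here is a simplified 2D, single-offset sketch of the "contrast" statistic (the full 3D extension in (55) aggregates many offsets; names and parameters here are illustrative):

```python
import numpy as np

def cooccurrence_contrast(image, offset=(0, 1), levels=4):
    """Build a gray-level co-occurrence matrix for one pixel offset and
    return the Haralick 'contrast' statistic sum_{i,j} (i-j)^2 P(i,j)."""
    dy, dx = offset
    a = image[:image.shape[0] - dy, :image.shape[1] - dx]  # reference pixels
    b = image[dy:, dx:]                                    # offset neighbors
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)             # count pairs
    glcm /= glcm.sum()                                     # -> probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum((i - j) ** 2 * glcm))

checker = np.indices((8, 8)).sum(axis=0) % 2   # 0/1 checkerboard: high contrast
smooth = np.zeros((8, 8), dtype=int)           # uniform: zero contrast
c_checker = cooccurrence_contrast(checker, levels=2)
c_smooth = cooccurrence_contrast(smooth, levels=2)
```

Every horizontal neighbor pair in the checkerboard differs by one level, so its contrast is maximal, while the uniform image scores zero.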
Intensity histograms. The intensity histogram (56) is a graph
specifying how many pixels within a given image have the
same intensity. Typically, the zero (leftmost) position indicates
the number of pixels with the lowest intensity, and the right-
Figure 8. An example of four synthetic images of HL-60 cell line. These images should be compared with those in Fig. 3. Each 3D figure
consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice,and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
ORIGINAL ARTICLE
504 Generation of Digital Phantoms of Cell Nuclei and Simulation of Image Formation in 3D Image Cytometry
most histogram position indicates the number of pixels with
the highest intensity. It is clear that similar images should have
nearly the same histogram. Moreover, rotating or shifting the
objects within the image does not affect the histograms.
Hence, slightly shifted or rotated objects still produce the
same histograms.
Evaluation
It is common to evaluate a large number of features over
each image to get a so-called feature vector. These feature vec-
tors are typically computed for each individual image and sub-
mitted to a suitable decision tool. Such a tool can either be a
classifier (e.g., a neural network, support vector machine, clus-
tering methods) or a statistical method (e.g., distribution plot,
goodness-of-fit test). In this work, we used the latter approach.
To show that the synthetic images produced by the simu-
lation toolbox are almost the same as the real images acquired
from the optical microscope, we evaluated all the introduced
measurements over 500 synthetic images and real images
(individually for each object). Tables 4 and 5 show the results.
To illustrate that both the real and the synthetic data sets come
from populations with a common distribution, we use quan-
tile–quantile plots (57) (see Figs. 10 and 11). Regarding quan-
tile–quantile plots, if the two sets come from a population
with the same distribution, then the points should fall
approximately along the 45-degree reference line. The greater
the deviation from this reference line, the greater the evidence
for the conclusion that the two data sets came from popula-
tions with different distributions.
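The quantile–quantile comparison can be sketched as follows; the two normal samples stand in for a feature measured over real and synthetic images (the data and function name are purely illustrative):

```python
import numpy as np

def qq_points(sample_a, sample_b, n_quantiles=99):
    """Pair up the empirical quantiles of two samples; if both come from
    the same distribution, the points lie near the 45-degree line y = x."""
    q = np.linspace(0.01, 0.99, n_quantiles)
    return np.quantile(sample_a, q), np.quantile(sample_b, q)

rng = np.random.default_rng(4)
real = rng.normal(5.0, 1.0, 500)    # e.g. a feature over 500 real images
synth = rng.normal(5.0, 1.0, 500)   # the same feature over synthetic images
qa, qb = qq_points(real, synth)
max_deviation = float(np.max(np.abs(qa - qb)))
```

Plotting qa against qb (e.g. with matplotlib) would reproduce the kind of plot shown in Figures 10 and 11; a small maximum deviation from y = x supports the common-distribution hypothesis.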
Figure 9. An example of four synthetic images of granulocytes. These images should be compared with those in Fig. 4. Each 3D figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. Three mutually orthogonal slice planes are shown with ticks.
Table 4. Evaluation of central moments and entropy over the real and simulated image data representing the HL-60 cell line

MEASURE    REAL IMAGE              SIMULATED IMAGE
2nd c.m.   0.1304 ± 7.6 × 10^-4    0.1569 ± 5.1 × 10^-4
3rd c.m.   0.0069 ± 1.7 × 10^-5    0.0083 ± 1.9 × 10^-5
4th c.m.   0.0012 ± 0.8 × 10^-6    0.0013 ± 1.0 × 10^-6
5th c.m.   0.0004 ± 1.3 × 10^-7    0.0004 ± 1.7 × 10^-7
6th c.m.   0.0001 ± 1.9 × 10^-8    0.0001 ± 2.4 × 10^-8
Entropy    5.384 ± 0.22            5.682 ± 0.12

Each measure is expressed as a mean value over the whole dataset ± standard deviation.
The individual quantile–quantile plots in Figure 10 show
that the feature vectors for both real and synthetic HL-60 data
follow very similar distributions. In the case of granulocyte
nuclei (see Fig. 11), the same is valid except for the last quan-
tile–quantile plot, which is characterized by a slight deviation
from the reference 45-degree line.
Concerning 3D Haralick texture features, we evaluated 19
features in total. In (55), where the evaluation was done
locally, some texture features were found to be highly corre-
lated. Therefore, only 15 features were considered, and later in
(58), the number of independent features was further reduced
to eight. In our study, we simplified the method presented in
(55) to get global texture features. In that way, we found 9 fea-
tures to be uncorrelated. These features include: sum average,
texture entropy, texture contrast, texture correlation, texture
homogeneity, maximum probability, sum entropy, difference
entropy, and information measure of correlation. There is a
close correspondence between our selection and the previously
(58) mentioned subset of eight Haralick features (six of them
are common to both studies).
Similarly to the evaluation of central moments over the
image data, we generated nine quantile–quantile plots for each
image dataset. For the HL-60 dataset, eight of nine plots exhibited
similarity between individual features for both real and syn-
thetic images. Regarding the granulocyte dataset, five of nine
plots also showed that the feature vectors for both real and
synthetic data follow very similar distributions. The rest of the
quantile–quantile plots did not follow the expected 45-degree
line, which was also the case for the last plot in Figure 11. In
future work, we will search for the differences between the real
and the synthetic data that caused the imperfection of some
quantile–quantile plots.
Figures 12 and 13 show the intensity histograms illustrat-
ing the similarities between the two image datasets. Notice that
the corresponding histogram pairs exhibit a similar shape.
Figure 10. Comparison of descriptors computed from real and synthetic HL-60 images. Quantile–quantile plots illustrate whether the measured datasets come from populations with similar distributions. Generally, if the two sets come from a population with the same distribution, the points should fall approximately along the reference line.
Table 5. Evaluation of central moments and entropy over the real and simulated image data representing granulocytes

MEASURE    REAL IMAGE              SIMULATED IMAGE
2nd c.m.   0.4978 ± 1.2 × 10^-3    0.4667 ± 1.0 × 10^-3
3rd c.m.   0.0140 ± 1.7 × 10^-5    0.0126 ± 3.0 × 10^-5
4th c.m.   0.0031 ± 1.6 × 10^-6    0.0024 ± 1.7 × 10^-6
5th c.m.   0.0012 ± 3.0 × 10^-7    0.0010 ± 4.0 × 10^-7
6th c.m.   0.0004 ± 1.0 × 10^-7    0.0003 ± 1.0 × 10^-7
Entropy    6.134 ± 0.07            6.295 ± 0.13

Each measure is expressed as a mean value over the whole dataset ± standard deviation.
Figure 11. Comparison of descriptors computed from real and synthetic granulocyte images. Quantile–quantile plots illustrate whether the measured datasets come from populations with similar distributions. Generally, if the two sets come from a population with the same distribution, the points should fall approximately along the reference line.
Figure 12. Mean of log intensity histograms for (left) real HL-60 images, (right) synthetic HL-60 images.
Figure 13. Mean of log intensity histograms for (left) real granulocyte images, (right) synthetic granulocyte images.
CONCLUSION
In this article, we presented a comprehensive and efficient tool for generating phantom biomedical images. Furthermore, it simulates the whole process of image acquisition with an optical microscope, from the very beginning when preparing the specimen, to the very end when converting the analog signal into its digital form and storing it to a hard drive. The toolbox currently generates 3D objects but is not limited to them; it can be extended to higher dimensions (time sequences or spectral imaging) and to other types of cells. The whole simulation toolbox can be split into three independent parts: digital phantom generation, signal transmission, and signal detection and image formation.
For the time being, the toolbox generates three types of
objects (HL-60 nuclei, granulocyte nuclei, and microspheres).
In the future, we intend to generate more types of objects: cells
(nucleus, cell shape, and selected sub-cellular components)
and cell clusters that typically appear in tissue scans.
The simulator proposed in this article is freely available
under the GNU GPL at: http://cbia.fi.muni.cz/simulator/.
ACKNOWLEDGMENTS
Thanks are also due to Martin Maska for his implementa-
tion of PDE-based methods, Honza Skalicky for the micro-
scope management, and the HCILAB from the Faculty of
Informatics at Masaryk University for computational support.
We also greatly appreciated the help of Ludvık Tesar from the
Institute of Information Theory and Automation (Czech
Academy of Sciences, Prague) with the 3D Haralick feature
implementation. A talk based on this paper has been given at
the ISAC XXIVCongress in Budapest (May 2008). The simula-
tion toolbox is freely available under GNU GPL at http://cbia.
fi.muni.cz/simulator/.
LITERATURE CITED
1. Wasserman R, Acharya R, Stevens J, Hinojosa C. Biomedical imaging modalities: An overview. In: Singh A, Goldgof D, Terzopoulos D, editors. Deformable Models in Medical Image Analysis. Los Alamitos, CA: IEEE Computer Society; 1998. pp 20–44.
2. Pratt WK. Digital Image Processing. New York: Wiley; 1991. ISBN 0-471-37407-5.
3. Castleman KR. Digital Image Processing. Upper Saddle River, NJ: Prentice Hall Press; 1996.
4. Muller M. Introduction to Confocal Fluorescence Microscopy, 2nd ed. Bellingham, WA: SPIE Press; 2005. 138 p. ISBN 0-8194-6043-5.
5. Jan J. Digital Signal Filtering, Analysis, and Restoration. London: IEE; 2000. ISBN 0852967608.
6. van Kempen G, van Vliet L. The influence of the regularization parameter and the first estimate on the performance of Tikhonov regularized non-linear image restoration algorithms. J Microsc 2000;198:63–75.
7. Verveer PJ. Computational and Optical Methods for Improving Resolution and Signal Quality in Fluorescence Microscopy. PhD Thesis, Technical University Delft, 1998.
8. Tofts P, Barker G, Filippi M, Gawne-Cain M, Lai M. An oblique cylinder contrast-adjusted (OCCA) phantom to measure the accuracy of MRI brain lesion volume estimation schemes in multiple sclerosis. Magn Reson Imaging 1997;15:183–192.
9. Rexilius J, Hahn HK, Bourquain H, Peitgen HO. Ground truth in MS lesion volumetry – a phantom study. In: MICCAI (2). Vol. 2879, Lecture Notes in Computer Science. Berlin: Springer; 2003. pp 546–553.
10. Rexilius J, Hahn HK, Schluter M, Kohle S, Bourquain H, Bottcher J, Peitgen HO. A framework for the generation of realistic brain tumor phantoms and applications. In: MICCAI (2). Vol. 3217, Lecture Notes in Computer Science. Berlin: Springer; 2004. pp 243–250.
11. Aubert-Broche B, Griffin M, Pike GB, Evans AC, Collins DL. Twenty new digital brain phantoms for creation of validation image data bases. IEEE Trans Med Imaging 2006;25:1410–1416.
12. Collins DL, Zijdenbos AP, Kollokian V, Sled JG, Kabani NJ, Holmes CJ, Evans AC. Design and construction of a realistic digital brain phantom. IEEE Trans Med Imaging 1998;17:463–468.
13. Jensen JA, Munk P. Computer phantoms for simulating ultrasound B-mode and CFM images. In: Lees S, Ferrari LA, editors. Acoustical Imaging, Vol. 23. New York: Plenum Press; pp 75–80.
14. Waks E, Prince JL, Douglas AS. Cardiac motion simulator for tagged MRI. In: MMBIA '96: Proceedings of the 1996 Workshop on Mathematical Methods in Biomedical Image Analysis. Washington, DC: IEEE Computer Society; 1996. p 182.
15. Rabben SI, Haukanes AL, Irgens F. A kinematic model for simulating physiological left ventricular deformation patterns – a tool for evaluation of myocardial strain imaging. Honolulu, HI: IEEE Symposium on Ultrasonics; 2003. Vol. 1, pp 134–137. ISBN 0-7803-7922-5.
16. Schlaikjer M, Torp-Pedersen S, Jensen JA. Simulation of RF data with tissue motion for optimizing stationary echo canceling filters. Ultrasonics 2003;41:415–419.
17. Schlaikjer M, Torp-Pedersen S, Jensen J, Stetson P. Tissue motion in blood velocity estimation and its simulation. Sendai, Japan: Proceedings of the 1998 IEEE Ultrasonics Symposium; 1998. p 1495.
18. Netten H, Young IT, van Vliet J, Tanke HJ, Vrolijk H, Sloos WCR. FISH and chips: Automation of fluorescent dot counting in interphase cell nuclei. Cytometry 1997;28:1–10.
19. Kozubek M. FISH imaging. In: Diaspro A, editor. Confocal and Two-Photon Microscopy: Foundations, Applications and Advances. New York: Wiley-Liss; 2001. pp 389–429. ISBN 0-471-40920-0.
20. Grigoryan AM, Hostetter G, Kallioniemi O, Dougherty ER. Simulation toolbox for 3D-FISH spot-counting algorithms. Real Time Imaging 2002;8:203–212.
21. Manders EMM, Hoebe R, Strackee J, Vossepoel AM, Aten JA. Largest contour segmentation: A tool for the localization of spots in confocal images. Cytometry 1996;23:15–21.
22. Lockett SJ, Sudar D, Thompson CT, Pinkel D, Gray JW. Efficient, interactive, and three-dimensional segmentation of cell nuclei in thick tissue sections. Cytometry 1998;31:275–286.
23. Lehmussola A, Selinummi J, Ruusuvuori P, Niemistö A, Yli-Harja O. Simulating fluorescent microscope images of cell populations. In: Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'05); 2005. pp 3153–3156.
24. Svoboda D, Kasík M, Maška M, Hubený J, Stejskal S, Zimmermann M. On simulating 3D fluorescent microscope images. In: Kropatsch WG, Kampel M, Hanbury A, editors. CAIP. Vol. 4673 of LNCS. Berlin: Springer; 2007. pp 309–316.
25. Zhao T, Murphy RF. Automated learning of generative models for subcellular location: Building blocks for systems biology. Cytometry Part A 2007;71A:978–990.
26. Graner F, Glazier JA. Simulation of biological cell sorting using a two-dimensional extended Potts model. Phys Rev Lett 1992;69:2013–2016. Available at: http://www.compucell3d.org/.
27. van der Voort H, et al. SVI Huygens Software. Hilversum, The Netherlands: Scientific Volume Imaging B.V.; 1995. Available at: http://www.svi.nl/.
28. Ma Y, Kamber M, Evans AC. 3D simulation of PET brain images using segmented MRI data and positron tomograph characteristics. Comput Med Imaging Graph 1993;17:365–371.
29. Solorzano COD, Rodriguez EG, Jones A, Pinkel D, Gray JW, Sudar D, Lockett SJ. Segmentation of confocal microscope images of cell nuclei in thick tissue sections. J Microsc 1999;193:212–226.
30. Kozubek M, Matula P. An efficient algorithm for measurement and correction of chromatic aberrations in fluorescence microscopy. J Microsc 2000;3:206–217.
31. Born M, Wolf E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Cambridge, UK: Cambridge University Press; 1999.
32. Lehmussola A, Ruusuvuori P, Selinummi J, Huttunen H, Yli-Harja O. Computational framework for simulating fluorescence microscope images with cell populations. IEEE Trans Med Imaging 2007;26:1010–1016. Available at: http://www.cs.tut.fi/sgn/csb/simcep/tool.html.
33. Dufour A, Shinin V, Tajbakhsh S, Guillen-Aghion N, Olivo-Marin JC, Zimmer C. Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces. IEEE Trans Image Process 2005;14:1396–1410.
34. Rosenberg KM. CTSim – the open source computed tomography simulator. Technical Report; 2002. Available at: http://www.ctsim.org/.
35. Kwan RKS, Evans AC, Pike GB. An extensible MRI simulator for post-processing evaluation. In: Hoehne KH, Kikinis R, editors. Visualization in Biomedical Computing. Vol. 1131 of LNCS. London: Springer-Verlag; 1996. pp 135–140.
36. Cocosco CA, Kollokian V, Kwan RKS, Evans AC. BrainWeb: Online interface to a 3-D MRI simulated brain database. NeuroImage 1997;5(Part 2/4):S425. Available at: http://www.bic.mni.mcgill.ca/brainweb/.
37. D'hooge J, Rabben SI, Claus P, Irgens F, Thoen J, de Werf FV, Suetens P. A virtual environment for the evaluation, validation and optimization of strain and strain rate imaging. Leuven, Belgium: IEEE Symposium on Ultrasonics; 2003. Vol. 2, pp 1839–1842. ISBN 0-7803-7922-5.
38. Holm S. Ultrasim – a toolbox for ultrasound field simulation. In: Nordic Matlab Conference, Oslo, Norway; 2001.
39. Jensen J. Field: A program for simulating ultrasound systems. In: 10th Nordic-Baltic Conference on Biomedical Imaging; 1996. Vol. 4, Part 1, pp 351–353. Available at: http://server.oersted.dtu.dk/personal/jaj/field/.
40. Jensen JA, Nikolov S. Fast simulation of ultrasound images. San Juan, Puerto Rico: IEEE Ultrasonics Symposium; October 2000.
41. Wojcik G, Fornberg B, Waag R, Mould J, Driscoll TA, Nikodym L. Pseudospectral methods for large-scale bioacoustic models. In: IEEE Ultrasonics Symposium; 1997.
42. Nilsson B, Heyden A. A fast algorithm for level set-like active contours. Pattern Recogn Lett 2003;24:1331–1337.
43. Perlin K. An image synthesizer. In: SIGGRAPH '85: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM Press; 1985. pp 287–296.
44. Sethian JA. Level Set Methods and Fast Marching Methods. Cambridge, UK: Cambridge University Press; 1999.
45. Rousselle C, Paillasson S, Robert-Nicoud M, Ronot X. Chromatin texture analysis in living cells. Histochem J 1999;31:63–70.
46. Zhu SC, Wu Y, Mumford D. FRAME: Filters, random fields, and minimax entropy – towards a unified theory for texture modeling. San Francisco, CA: CVPR; 1996. p 686.
47. Nealen A, Alexa M. Hybrid texture synthesis. In: EGRW '03: Proceedings of the 14th Eurographics Workshop on Rendering. Leuven, Belgium: Eurographics Association; 2003. pp 97–105.
48. Portilla J, Simoncelli EP. A parametric texture model based on joint statistics of complex wavelet coefficients. Int J Comput Vis 2000;40:49–70.
49. Klein AD, van den Doel LR, Young IT, Ellenberger SL, van Vliet L. Quantitative evaluation and comparison of light microscopes. In: Optical Investigation of Cells In Vitro and In Vivo. San Jose, CA: Proceedings of SPIE Progress in Biomedical Optics; 1998. Vol. 3260, pp 162–173.
50. van der Voort H, Strasters KC. Restoration of confocal images for quantitative image analysis. J Microsc 1995;178:165–181.
51. Costa LdF, Cesar RM. Shape Analysis and Classification: Theory and Practice. Orlando, FL: CRC Press; 2001.
52. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern 1973;3:610–621.
53. Murphy RF, Velliste M, Porreca G. Robust numerical features for description and classification of subcellular location patterns in fluorescence microscope images. J VLSI Signal Process Syst 2003;35:311–321.
54. Velliste M, Murphy RF. Automated determination of protein subcellular locations from 3D fluorescence microscope images. Washington, DC: 2002 IEEE International Symposium on Biomedical Imaging (ISBI-2002); 2002. pp 867–870.
55. Tesar L, Smutek D, Shimizu A, Kobatake H. 3D extension of Haralick texture features for medical image analysis. In: SPPR '07: Proceedings of the Fourth IASTED International Conference. Anaheim, CA: ACTA Press; 2007. pp 350–355.
56. Sonka M, Hlavac V, Boyle R. Image Processing, Analysis and Machine Vision. London: Chapman and Hall; 1986.
57. Wilk MB, Gnanadesikan R. Probability plotting methods for the analysis of data. Biometrika 1968;55:1–17.
58. Tesar L, Shimizu A, Smutek D, Kobatake H, Nawano S. Medical image analysis of 3D CT images based on extension of Haralick texture features. Comput Med Imaging Graph 2008;32:513–520.