3D Slicer as a platform for the automatic segmentation of real-time MRI data

Bachelor's thesis submitted for the degree Bachelor of Science in the study program Technische Informatik at the Fakultät für Informations-, Medien- und Elektrotechnik of Technische Hochschule Köln

Submitted by: Jonas Levin Weber

Matriculation number: 11134601

Address: Eichstr. 60, 50733 Köln, [email protected]

Submitted to: Prof. Dr. rer. nat. Dipl.-Inform. René Wörzberger

Second examiner: M.Eng. Philipp Rosauer

Köln, 27 March 2022

Declaration of Independent Work

I declare in lieu of oath that I wrote the submitted thesis independently and without outside help, that I used no sources or aids other than those indicated, and that I have marked as such all passages taken literally or in content from the sources used.

Place, date                              Legally binding signature


Abstract

Recent accomplishments in the field of medical imaging methods led to the invention of real-time magnetic resonance imaging (MRI), a method that allows the recording of movies of, for example, the pumping heart. With real-time MRI, new insights into the physiology of the human heart can be gained, but the method comes with a challenge: different from conventional MRI, real-time MRI produces many more images than physicians and researchers can possibly evaluate manually. An ongoing research project at the German Aerospace Center evaluates machine learning methods to allow the automatic segmentation of cardiac real-time MRI images. A part of that project is the development of a software solution that researchers and medical staff can use to access the features of the automatic segmentation.

This work aims at the evaluation of available software tools and frameworks in order to find a suitable tool for the development of the mentioned software solution. To accomplish this, different tools and frameworks are compared in terms of their feature sets, extendability, development workflows and licence restrictions. The Medical Imaging Interaction Toolkit and 3D Slicer are compared in more depth, leading to the selection of 3D Slicer. To prove that 3D Slicer is suitable as a software framework for the real-time MRI software solution, an example extension is developed that implements one of the software requirements. It is concluded that 3D Slicer can be used both to efficiently implement the requirement of the example extension and as the base software framework to implement the other software requirements that are not developed as part of this work.


Kurzfassung

Recent achievements in the field of medical imaging methods led to the invention of real-time magnetic resonance imaging (MRI), a method that enables the recording of movies of, for example, the pumping heart. With real-time MRI, new insights into the physiology of the human heart can be gained, but the method also poses a challenge: unlike conventional MRI, real-time MRI produces far more images than physicians and researchers could possibly evaluate manually. An ongoing research project at the German Aerospace Center evaluates machine learning methods to enable the automatic segmentation of cardiac real-time MRI images. Part of this project is the development of a software solution with which researchers and medical staff can access the features of the automatic segmentation.

The goal of this work is to evaluate available software tools and frameworks in order to find a suitable tool for the development of the mentioned software solution. To this end, different tools and frameworks are compared with regard to their feature sets, extensibility, development workflows and licence restrictions. The Medical Imaging Interaction Toolkit and 3D Slicer are compared in more depth, with 3D Slicer being selected. To show that 3D Slicer is suitable as a software framework for the real-time MRI software solution, an example extension is developed that implements one of the software requirements. It is concluded that 3D Slicer can be used both for the efficient implementation of the requirements of the example extension and as the underlying software framework for the implementation of the other software requirements that were not implemented within the scope of this work.


Contents

List of Figures

List of Tables

List of Acronyms

1 Introduction
  1.1 Motivation
  1.2 Background
  1.3 Outline

2 Domain ‘Real-time Magnetic Resonance Imaging’
  2.1 Nuclear Magnetic Resonance for Medical Imaging
  2.2 The FLASH Method
  2.3 Cardiac Imaging Planes
  2.4 Segmentation of Medical Images

3 Technical Foundations
  3.1 Python
  3.2 The Visualization Toolkit
  3.3 Qt

4 Problem Statement
  4.1 Software Requirements
    4.1.1 Functional Requirements
    4.1.2 Non-functional Requirements
  4.2 Example Workflow
  4.3 Project at the German Aerospace Center

5 Evaluation of Visualization Frameworks
  5.1 Lower Level Frameworks
    5.1.1 VTK
    5.1.2 ITK
  5.2 Visualization Tools for more General Problems
    5.2.1 MATLAB
    5.2.2 ParaView
  5.3 Visualization Tools for Medical Image Data Specifically
    5.3.1 cvi42
    5.3.2 MITK
    5.3.3 3D Slicer
  5.4 Insufficiencies
  5.5 Comparison of MITK and 3D Slicer
    5.5.1 User’s Perspective
    5.5.2 Developer’s Perspective
    5.5.3 Conclusion
  5.6 Existing Extensions in 3D Slicer
    5.6.1 Segmentations
    5.6.2 Segment Editor
    5.6.3 Sequences

6 Solution Concept
  6.1 Overview of a Real-time MRI Dataset
  6.2 Architecture of 3D Slicer
    6.2.1 Model View Controller
    6.2.2 3D Slicer Extensions
  6.3 Medical Reality Modelling Language
    6.3.1 Data Nodes
    6.3.2 Sequence Nodes
    6.3.3 View Nodes
    6.3.4 Display Nodes

7 Proof of Concept
  7.1 Proof of Concept Extension for 3D Slicer
    7.1.1 Usage
    7.1.2 Implementation Details
    7.1.3 Source Code
  7.2 Deployment of a 3D Slicer Extension

8 Results and Discussion
  8.1 Implementation of Requirement R10
    8.1.1 Features
    8.1.2 Loading Times of the Timeline
  8.2 3D Slicer as a Platform

9 Time Planning and Work Packages

10 Outlook
  10.1 Performance Improvements
  10.2 Next Steps in the Overarching Research Project

Bibliography

List of Figures

2.1 Real-time MRI images
2.2 Schematic and examples for the cardiac imaging planes
2.3 Example for medical image segmentation

4.1 A common workflow that the software solution depicts

5.1 An exemplary visualization pipeline depicted as a data flow diagram
5.2 Screenshot of cvi42
5.3 Screenshot of ‘MITK Workbench’
5.4 Screenshot of 3D Slicer

6.1 Screenshots of video editing software
6.2 The thumbnail grid of cvi42
6.3 Architecture of 3D Slicer
6.4 The three anatomical planes used in medical imaging and how they are visualized in 3D Slicer

7.1 The timeline provided by the proof of concept extension
7.2 Directory structure of the proof of concept extension
7.3 Deployment diagram of the deployment of a 3D Slicer extension

8.1 An example segmentation as it is displayed in 3D Slicer and the timeline
8.2 Time measurements of the timeline loading

9.1 Work breakdown structure
9.2 Precedence diagram

List of Tables

4.1 Software requirements

5.1 Summary of the features and insufficiencies of the presented tools and frameworks

8.1 Coverage of the software requirements by 3D Slicer and its extensions

9.1 Effort spent for and degree of fulfillment of the work packages

List of Acronyms

Acronym  Expansion

API      Application programming interface
CLI      Command line interface
CT       Computer tomography
CTK      Common Toolkit
DCMTK    DICOM Toolkit
DICOM    Digital Imaging and Communications in Medicine
DLR      German Aerospace Center
DLR-ME   Institute of Aerospace Medicine
DLR-SC   Institute for Software Technology
FLASH    Fast low-angle shot
GDCM     Grassroots DICOM library
GUI      Graphical user interface
IDE      Integrated development environment
ITK      Insight Toolkit
MATLAB   MATrix LABoratory
MITK     Medical Imaging Interaction Toolkit
MRI      Magnetic resonance imaging
MRML     Medical Reality Modeling Language
MVC      Model View Controller
NIfTI    Neuroimaging Informatics Technology Initiative
NMR      Nuclear magnetic resonance
OSGi     Open Services Gateway initiative
PET      Positron emission tomography
RAM      Random access memory
STL      Stereolithography
VTK      Visualization Toolkit
XML      Extensible Markup Language

1 Introduction

The most efficient way for humans to gain insight into scientific data is by consuming it visually. It is thus necessary for medical imaging data to be visualized because such data are generally large and otherwise difficult or impossible to comprehend. Visualization does not have to be photorealistic, however. It is sufficient to present the data in a way that reveals insight into it and allows the drawing of conclusions about the subject that the data was measured from.

1.1 Motivation

Imaging technology revolutionized medical diagnosis and patient treatment. By literally seeing inside a patient's body, physicians can draw conclusions about a patient's state of health more quickly and more precisely.

The discovery of X-rays by Röntgen in 1895 [1] formed the basis for medical imaging. Later, sonar technology and computer tomography (CT) were invented. CT scans are still widely used today, yet the technology of magnetic resonance imaging (MRI) introduced new ways of imaging the inside of a patient that rely not on X-rays but only on magnets.

The improvement of imaging technologies including CT and MRI enables even better possibilities for diagnosis and patient treatment. Real-time MRI is one such improvement which, instead of single images, produces movies of the inside of a patient's body. This is especially useful for diagnosis and treatment in medical fields that deal with moving parts of the body, like cardiology or orthopedics.

1.2 Background

In short, MRI is the process of generating images of an anatomic structure by measuring the distribution of hydrogen atoms in that structure. Real-time MRI is an improved version of conventional MRI that enables the rapid production of images, as fast as multiple images per second.



Segmentation is the method of labeling regions in an image that belong to a particular feature of interest. In the case of medical image segmentation, for example, such features could be inner organs or bones and their related structures.

In medical practice, segmentation is, among other things, used to analyze images in regard to the functionality and shape of an organ. The results of these analyses can then support diagnosis and treatment.

The images from conventional MRI methods are segmented manually by an expert in the domain, e.g. a physician. Real-time MRI, though, introduces new challenges to these processes. Since the amount of images produced by real-time MRI is multiple orders of magnitude higher than with conventional MRI, new methods have to be considered for the analysis of these images.

At the German Aerospace Center (DLR), machine learning approaches are evaluated for the automatic segmentation of real-time MRI data. The goal of this thesis is to provide a framework for the development of a software application that can incorporate the machine learning approaches into a graphical user interface (GUI). The resulting application is intended to be used by both researchers and medical staff.

1.3 Outline

This work first explains domain-specific terms like real-time MRI and medical image segmentation before defining the problem that shall be solved. The problem definition is supported by a listing of the software requirements that were compiled in cooperation with the stakeholders in the overarching research project. To solve the specified problem, a solution is proposed that is based on the comparison of available tools and software frameworks, which are introduced in Chapter 5. The solution concept is backed by a detailed explanation of the resulting software architecture and the presentation of a proof of concept implementation. Finally, the results are summarized and an outlook is given.

In particular, the chapters contain the following contents:

Chapter 2 - Domain ‘Real-time Magnetic Resonance Imaging’
This chapter introduces the technology of real-time MRI and its application in the medical field.

Chapter 3 - Technical Foundations
Here, the fundamental technologies that were used for the development are presented. These technologies are Python, the Visualization Toolkit (VTK) and Qt.



Chapter 4 - Problem Statement
The goal of this work is explained in more detail in this chapter. The compiled list of software requirements, an example for a workflow that the software implements and a brief overview of the overarching research project can be found in this chapter.

Chapter 5 - Evaluation of Visualization Frameworks
Existing solutions to similar problems are examined. These solutions are grouped by their level of abstraction in regard to medical image visualization. The abstraction ranges from generalized tools and frameworks for image processing to software that is designed for medical image processing specifically. The chapter is concluded with a comparison between the two frameworks 3D Slicer and the Medical Imaging Interaction Toolkit (MITK).

Chapter 6 - Solution Concept
In this chapter, a solution for the development of a software solution in the domain of real-time MRI is proposed. The proposed concept is supported by an explanation of the architecture of 3D Slicer and 3D Slicer extensions as well as the Medical Reality Modeling Language (MRML). To prove the point, an exemplary 3D Slicer extension will be developed.

Chapter 7 - Proof of Concept
After 3D Slicer was proposed as a software development platform, the proof of concept extension in particular is described. This includes a description of the deployment process of 3D Slicer extensions.

Chapter 8 - Results and Discussion
This chapter summarizes the results of this work and explains how the solution concept presented in Chapter 6 succeeded.

Chapter 9 - Time Planning and Work Packages
Here, an overview of the work packages is given that were compiled before the work on the thesis began. This includes estimates of the effort that was devoted to each package.

Chapter 10 - Outlook
The work is concluded with an outlook that explains new questions and problems that arose during the development.


2 Domain ‘Real-time Magnetic Resonance Imaging’

In this chapter, the general domain of this work is explained. The first section introduces nuclear magnetic resonance (NMR), a property of atoms, and MRI, a method based on this property for imaging hydrogen atoms. The next section explains the fast low-angle shot (FLASH) method as a first attempt to reduce the time it takes to generate MRI images; it also introduces an improvement to the FLASH method that enables real-time MRI. After that, the standard cardiac imaging planes and medical image segmentation are explained.

2.1 Nuclear Magnetic Resonance for Medical Imaging

As explained by Roberts [2], NMR is the property of atomic nuclei to resonate with oscillating magnetic fields and, consequently, to emit measurable photons. The photons are emitted while the nucleus returns to its original state after the exposure to the magnetic field. This process is called relaxation. Depending on the frequency at which the magnetic field oscillates, the angle at which it is induced and its surroundings, the relaxation takes more or less time to complete. The relaxation time as well as the emitted photons' frequencies are characteristic of the atom that was exposed to the magnetic field and can be used to determine the kind of atom as well as some of its properties.

In 1973, Lauterbur published the idea of using NMR for generating images of objects [3]. He also proposed an application of this concept: the generation of images of organic structures by specifically measuring the relaxation time of hydrogen atoms inside that structure. This application is also useful for generating images of the inside of the human body because about 62 % of the atoms in humans are hydrogen atoms [4]. Additionally, the distribution of hydrogen atoms in different anatomic structures varies enough that it is possible to distinguish the structures. Today, this application is known as MRI.



2.2 The FLASH Method

It was not possible, though, to produce images of dynamic systems like the pulsating heart because the time it took to generate a single image was too long. To solve this, Frahm et al. [5] from the Max Planck Institute for Biophysical Chemistry in Göttingen proposed the FLASH method for rapid NMR imaging in 1985. The FLASH method essentially reduces the relaxation time by inducing the oscillating magnetic field at a smaller angle than the usual 90°. This allowed a generation time of about 2 s per image. In comparison, early MRI image generation took multiple hours to complete [6]. With additional external tools like an electrocardiogram, it was possible to generate multiple images of periodically moving anatomic structures by always triggering the measurement at the same point in the periodic cycle. The images could then be stitched together, effectively creating an artificial movie of the moving structure. This method is known as ‘CINE’ MRI.

In 2010, Uecker et al. [7] improved the FLASH method to allow real-time MRI. The method is known as FLASH 2 and reduces the measuring time to only 20 ms per image. Accordingly, this method makes it possible to generate 50 images per second, which is enough for the human eye to perceive such an image sequence as continuous motion rather than separate images [8]. This is superior to the ‘CINE’ method, which currently is the standard in cardiac MRI imaging, because the produced images actually represent complete cardiac cycles instead of artificially composed cycles.

As an example, Figure 2.1 shows a set of images that was generated using real-time MRI. The images show one complete cardiac cycle. Starting at the top left image, the heart is at the peak of the systolic phase, which is the phase where blood flows from the heart into the body. The lower right image shows the end of the cardiac cycle, the peak diastole, where blood flows from the body into the heart.
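The relationship between the per-image acquisition times quoted above and the resulting frame rates can be checked with a few lines of Python (an illustration added here, not part of the cited publications):

```python
# Frame rate achievable for a given per-image acquisition time.
def frames_per_second(acquisition_time_ms: float) -> float:
    """Images per second for one acquisition lasting the given milliseconds."""
    return 1000.0 / acquisition_time_ms

# FLASH (1985): about 2 s per image -> 0.5 images per second.
print(frames_per_second(2000))  # 0.5
# FLASH 2 (2010): 20 ms per image -> 50 images per second.
print(frames_per_second(20))    # 50.0
```

At 50 images per second, even a single minute of real-time MRI yields 3000 images, which illustrates why manual evaluation quickly becomes infeasible.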

Figure 2.1: Real-time MRI images with a temporal resolution of 34 ms. Taken from [9].



2.3 Cardiac Imaging Planes

Cardiac images are typically captured in one of four defined image planes:

• short axis view,
• 2-chamber view,
• 3-chamber view or
• 4-chamber view.

The images in Figure 2.1 are short axis views of the heart. Figure 2.2 shows examples of all four named imaging planes.

Figure 2.2: Schematic and examples for the cardiac imaging planes. The left schematic shows how the cardiac imaging planes relate to the heart's anatomy. The four images on the right, from left to right, top to bottom, show examples for the ‘short axis view’, ‘2-chamber view’, ‘3-chamber view’ and ‘4-chamber view’. Left image taken from [10], all four images on the right taken from [11].

Each plane is specified in reference to the long axis (‘horizontal long-axis’ in Figure 2.2) of the left ventricle of the heart [11]. This axis runs through the top of the left ventricle, the apex, and the center of the mitral valve, which is the valve connecting the left atrium and the left ventricle.

The short axis view is then defined as the planes that are perpendicular to the long axis of the left ventricle.

The 4-chamber view defines the planes that contain the long axis of the left ventricle and a line connecting the left and right ventricles. These planes allow a view onto the two atria and two ventricles of the heart simultaneously.

The 2-chamber view specifies the planes that contain the long axis of the left ventricle and that are perpendicular to the planes of the 4-chamber view. This view shows either both the left atrium and left ventricle or both the right atrium and right ventricle of the heart.

Lastly, the 3-chamber view is composed of the planes that contain the apex of the left ventricle and the centers of the mitral and aortic valves.

2.4 Segmentation of Medical Images

It is common practice to segment scientific image data to be able to make qualitative and quantitative conclusions about them. In the case of medical images, one possible segmentation would be the labelling of parts of an image as belonging to specific anatomic structures. For example, a cardiac MRI image could be segmented to determine which regions of the image belong to the left and right ventricles, the left and right atria or the aorta.

As seen in Figure 2.3, a segmentation can be visualized as colored regions in the source image. The example in the figure shows a cardiac MRI image that was segmented using artificial intelligence, specifically a neural network based on nnU-Net [12]. The segmentation identifies the left and right ventricles as well as the cardiac muscle, called the myocardium.


Figure 2.3: An example for medical image segmentation. The images show a short axis view cardiac MRI image; the axes represent pixels. The colors represent the measured intensity of hydrogen: bright colors represent a higher intensity, dark colors a lower intensity. The left image shows the MRI image without any segmentation. The middle image includes the segmentation and the right image shows the segmentation after post-processing. Adapted from [13].

Since MRI data are commonly composed of many images that represent slices of the anatomic structure, additional information can be calculated if multiple image segmentations are combined. In the case of cardiac MRI, the segmented image slices of an entire heart could be used to calculate the volume of the left ventricle and, consequently, the volume of blood that fits inside this ventricle.
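The idea of deriving a volume from stacked segmentations can be sketched in a few lines of Python. This is a simplified illustration, not code from the thesis: the slices are tiny hand-written binary masks, and the pixel spacing and slice thickness are hypothetical values.

```python
# Sketch: estimating a ventricle's volume from segmented slices.
# Each slice is a binary mask (1 = pixel labelled as belonging to the
# ventricle). The voxel dimensions below are hypothetical example values.

def segmented_volume_ml(slices, pixel_spacing_mm=(1.5, 1.5), slice_thickness_mm=8.0):
    """Sum of labelled voxels times the voxel volume, in millilitres."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    labelled = sum(value for mask in slices for row in mask for value in row)
    return labelled * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3

# Two toy 3x3 slices with 4 and 2 labelled pixels respectively.
slices = [
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 0, 0]],
]
print(segmented_volume_ml(slices))  # 6 voxels * 18 mm^3 = 0.108 ml
```

Real segmentations are of course much larger arrays read from the MRI dataset, but the principle, counting labelled voxels and multiplying by the voxel volume, is the same.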


3 Technical Foundations

This chapter presents the tools and frameworks that were used for the development of the software that accompanies this work. These include the programming language Python, the software library VTK and the software development framework Qt.

3.1 Python

Python [14] is a general-purpose interpreted programming language which is widely used in scientific contexts. The language incorporates multiple programming paradigms including object-oriented, procedural and functional. Python is easy to learn, read and write, and the community provides an extensive set of libraries for a variety of use cases. These libraries are listed on the respective platform PyPI [15], which currently indexes more than 350,000 libraries.

Assuming the same level of algorithmic optimization, Python will generally execute an algorithm more slowly than compiled languages like C and C++ [16]. This disadvantage can be traded off against the simplicity of Python programs. Most algorithms can be expressed using only a few lines of Python code while still ensuring a high level of readability.

3.2 The Visualization Toolkit

The Visualization Toolkit (VTK) [17], [18] is an open-source software library for ‘computer graphics, visualization and image processing’ [19, p. 3]. It can be assumed that VTK is the de-facto standard in the field of scientific visualization [20].

As a large software system, VTK is designed with care, yielding easy development workflows once the developer understands the basic principles behind VTK. One of these basic principles is the ‘visualization pipeline’, which defines how scientific data are to be processed before they are displayed on a screen. This concept is not unique to VTK but is used in many implementations of scientific visualization since it is a quite natural approach to the problem [21, pp. 3-5].
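The pipeline idea can be illustrated without VTK itself. The toy classes below are simplified stand-ins for the source → filter → mapper stages, not the VTK API; in real VTK code the same shape is built by connecting classes such as sources, filters and mappers via their input and output ports.

```python
# A minimal sketch of the 'visualization pipeline' idea: data flows from a
# source through filters before being rendered. These classes are
# illustrative stand-ins, not part of VTK.

class Source:
    """Produces raw data (here: a list of scalar samples)."""
    def output(self):
        return [3, 1, 4, 1, 5, 9, 2, 6]

class ThresholdFilter:
    """A filter: consumes upstream output, produces transformed data."""
    def __init__(self, upstream, threshold):
        self.upstream, self.threshold = upstream, threshold
    def output(self):
        return [v for v in self.upstream.output() if v >= self.threshold]

class Mapper:
    """End of the pipeline: turns data into something displayable."""
    def __init__(self, upstream):
        self.upstream = upstream
    def render(self):
        return ' '.join('#' * v for v in self.upstream.output())

# Wire the stages together; data is pulled through the pipeline on demand.
pipeline = Mapper(ThresholdFilter(Source(), threshold=4))
print(pipeline.render())  # '#### ##### ######### ######'
```

The key design property is that each stage only knows its upstream neighbour, so stages can be freely recombined, the same property that makes VTK pipelines composable.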



VTK is developed in C++ but can also be used in Java and Python programs. In the case of Python, the vtk package [22] provides access to almost all classes of VTK, including interfaces to the GUI libraries GTK, Tk, wxWidgets and Qt.

VTK and the visualization pipeline will be explained in more detail in Section 5.1.1.

3.3 Qt

Qt [23] is a framework for the cross-platform development of software applications and user interfaces. The supported platforms include Windows, Linux, macOS, Android and other mobile and embedded systems.

Qt GUIs are composed of ‘widgets’. A widget can be a simple button, a text box or a more complex composition of multiple widgets. This results in a tree-like representation of nested widgets composing the GUI of an application. The widgets respond to input that the user generates using input devices. The operating system passes the user input to Qt, which in turn passes it to the correct widget. The widget is then responsible for handling the event.

Another important feature of Qt is the signals & slots system. This system is used for communication between widgets. If one widget sends a signal, all widgets that have a slot connected to that specific signal receive it and can act accordingly. As a simple example, the signals & slots system can be used to update some text on the screen when the user clicks on a button.

With Qt, a GUI can be defined either declaratively or imperatively. Using the declarative approach, the widgets of the GUI are defined in .ui files. These files are based on the Extensible Markup Language (XML) and can be edited either manually using a text editor or with ‘Qt Designer’, a graphical editor provided by Qt.

With the imperative approach, the GUI's widgets are specified directly in the source code of the application. To accomplish this, objects of the desired widget's class have to be instantiated. By calling the methods of the object and setting its properties, the widget can be further customized.

In practice, albeit dependent on the use case, Qt GUIs are often first designed using the declarative approach in Qt Designer. After that, the widgets are more finely tuned in the source code of the application, taking the imperative approach.

Like VTK, Qt is developed in C++ but also accessible from Python. The Python interfaces to Qt are available through the packages PyQt5 [24] and PySide2 [25], respectively. The differences between PyQt5 and PySide2 are mainly reflected in their licences, as both libraries provide interfaces to almost all classes of Qt.
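At its core, signals & slots is a variant of the observer pattern. The toy Python below, which deliberately does not use Qt so that it is self-contained, mimics the button-updates-label example from the text; in real Qt code the same wiring is a single call such as connecting a button's clicked signal to a label's setText slot.

```python
# Toy re-implementation of the signals & slots idea (not the Qt API):
# a signal keeps a list of connected slots and calls them when emitted.

class Signal:
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

class Button:
    """Stand-in for a Qt button with a 'clicked' signal."""
    def __init__(self):
        self.clicked = Signal()
    def click(self):  # stands in for a user input event delivered by the OS
        self.clicked.emit()

class Label:
    """Stand-in for a Qt label with a text-setting slot."""
    def __init__(self):
        self.text = ''
    def set_text(self, text='button was clicked'):
        self.text = text

button, label = Button(), Label()
button.clicked.connect(label.set_text)  # wire the signal to the slot
button.click()
print(label.text)  # button was clicked
```

The decoupling is the point: the button knows nothing about the label, it only emits its signal, and any number of slots can be connected to it.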


4 Problem Statement

In this chapter, the goal of this work is explained. First, the requirements for the software solution that will be developed are listed. After that, an example of a workflow that represents the primary use case of the software solution is presented. The chapter is concluded with an introduction to the project at DLR to which this work contributes.

4.1 Software Requirements

In order to prepare for the development of a software solution, the software requirements were compiled by interviewing the stakeholders at the Institute of Aerospace Medicine (DLR-ME) [26]. The stakeholders at DLR-ME were chosen because they define the acceptance criteria of the project. Additionally, these stakeholders are experts in the domain of real-time MRI in both research and clinical environments. They will also be the first primary users of the software.

The requirements for the software solution are listed in Table 4.1. For each requirement, a unique identifier, the type of the requirement and a description are given. There are two types of requirements represented: functional and non-functional. In the following, these two types of requirements and the requirements that fall into the respective category are explained.

4.1.1 Functional Requirements

A functional requirement is a requirement that describes some expected behaviour of the software which can be verified with a finite number of test steps [27, secs. 1-3]. This is synonymously known as a feature of the software. The requirements listed in Table 4.1 can be combined into groups depending on their purpose. These groups are data formats, automatic segmentation, visualization features and other requirements. In the following, the groups and the requirements assigned to each group are described.

11

4 Problem Statement

Table 4.1: Software requirements.

#    Type            Description

Data Formats
R1   functional      The software shall display MRI image metadata prior to loading.
R2   functional      The software shall display previews of MRI images prior to loading.
R3   functional      The software shall read MRI images from DICOM files.
R4   functional      The software shall export MRI images to DICOM files.
R5   functional      The software shall export the segmentation results to a file format that is compatible with cvi42.
R6   functional      The software shall export MRI images to NIfTI files.

Automatic Segmentation
R7   functional      The software shall synchronize MRI image sequences with additional measurements.
R8   functional      The software shall segment MRI images automatically.

Visualization Features
R9   functional      The software shall display MRI image slices.
R10  functional      The software shall visualize multiple images of an MRI dataset for an overview.
R11  functional      The software shall playback MRI image sequences as a video.
R12  functional      The software shall display the volume of blood pumped during one cardiac cycle.

Other Requirements
R13  functional      The user shall edit the segmentation result to correct mistakes of the automatic segmentation.
R14  non-functional  The software shall run on the syngo.via OpenApps platform.

The requirements R1, R2, R3, R4, R5 and R6 specify the different data formats that the software solution should be able to read from or write to. These data formats are DICOM and NIfTI as well as a proprietary format unique to cvi42.
R7 and R8 are the requirements regarding the automatic segmentation of MRI datasets. R7 specifically results from the fact that a given cardiac MRI dataset is not ready for analysis immediately after it was measured in the MRI machine. Instead, the dataset needs to be synchronized because the heart is both beating and moving due to breathing during the measurement. This results in image sequences that are correctly ordered in the time domain but not in the space domain since the



heart is moving while the measured slice plane does not.
The requirements R9, R10, R11 and R12 define the required visualization features of the software solution. The MRI images should be displayed as single images (R9, R10) and, because of the additional time domain in real-time MRI datasets, be playable as a video (R11). R10 also takes the time domain into account by specifying that the images should be displayed so that an overview of the dataset can be retrieved. Lastly, requirement R12 defines that the segmentation results should be used to calculate functional and morphological parameters of the heart and eventually display these. Such parameters could be the volume of blood pumped during one cardiac cycle or the distribution of blood flow in different parts of the heart.
Finally, R13 specifies that the user of the software solution must be able to edit the results of the automatic segmentation. Since the segmentation might produce wrong results because the underlying neural network will never be perfect, the user can provide a manual segmentation after investigating the visualizations that are defined in R9, R10, R11 and R12.
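The parameter named in R12 can be computed directly from the segmentation: the stroke volume is the end-diastolic ventricle volume minus the end-systolic one, where each volume is the number of segmented voxels times the voxel size. The following is a minimal sketch of that calculation using toy NumPy masks; the function names and the 1 mm³ voxel size are illustrative assumptions, not part of any of the discussed tools.

```python
import numpy as np

def segment_volume_ml(labelmap, voxel_size_mm):
    """Volume of a binary labelmap in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
    return float(labelmap.sum()) * voxel_mm3 / 1000.0

def stroke_volume_ml(edv_mask, esv_mask, voxel_size_mm):
    """Blood pumped during one cycle: end-diastolic minus end-systolic volume."""
    return (segment_volume_ml(edv_mask, voxel_size_mm)
            - segment_volume_ml(esv_mask, voxel_size_mm))

# Toy masks with 1x1x1 mm voxels: a ventricle at end-diastole (216 voxels)
# and at end-systole (64 voxels).
edv = np.zeros((10, 10, 10), dtype=bool); edv[2:8, 2:8, 2:8] = True
esv = np.zeros((10, 10, 10), dtype=bool); esv[3:7, 3:7, 3:7] = True
print(stroke_volume_ml(edv, esv, (1.0, 1.0, 1.0)))  # 0.152 (ml per cycle)
```

A real implementation would additionally have to pick the end-diastolic and end-systolic frames out of the synchronized sequence (R7) before applying this arithmetic.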

4.1.2 Non-functional Requirements

Non-functional requirements define attributes of the software, e.g. the performance, maintainability, safety or interoperability of the software [27, secs. 1-3]. In the case of the requirements listed in Table 4.1, the only non-functional requirement, R14, is a requirement for the interoperability of the software. It states that the integration of the software solution with the syngo.via OpenApps platform [28] by Siemens Healthcare GmbH should be possible without much additional effort. syngo.via is a platform and software solution by Siemens Healthcare GmbH that integrates medical devices like MRI machines by Siemens. syngo.via OpenApps is another platform that makes medical applications available to syngo.via in an online store where these applications can be downloaded. The applications available on syngo.via OpenApps cover use cases ranging from visualization and diagnostics to patient management and administration in a variety of medical fields like cardiology, neurology and oncology.



4.2 Example Workflow

In this section, the primary workflow that a user of the software solution will follow is explained. As shown in Figure 4.1, the user has a cardiac real-time MRI dataset at hand which they import into the software. The dataset might consist of several thousand images that can be ordered both in relation to their respective capturing time (time domain) and their spatial offset to an arbitrary origin (space domain). After importing the dataset, the user confirms that the imported dataset is correct and, by inspecting its visual representation displayed by the software, that the data do not appear to be damaged.

Figure 4.1: A common workflow supported by the software solution.

The user now wants to segment the images in the dataset but cannot do this manually because of the dataset’s size. Instead, the user initiates the automatic segmentation of the dataset. After the segmentation has been completed, the user must verify that the segmentation was executed correctly. They can do this by again inspecting the visualization of the dataset and the segmentation applied to it. For this interactive step between user and algorithm, an intuitive visualization of the



four-dimensional data is essential. This is especially important as the targeted user is not a software expert but a radiologist, cardiologist or another member of medical staff. If any image’s segmentation appears to be faulty, the user can interact with the software to apply small corrections to the segmentation or create an entirely new one.
Finally, after the user is satisfied with the segmentation, they can export the dataset and the segmentation. The dataset can be exported to DICOM or NIfTI and the segmentation can either be stored in DICOM file(s) or in another file format that is required for cvi42 to be able to import the segmentation.

4.3 Project at the German Aerospace Center

This thesis contributes to a project at DLR. DLR is the research center for aeronautics and space of the Federal Republic of Germany. Despite its name, DLR engages in research on aeronautics, space, energy, transport, security and digitalization [29]. The project is a collaborative effort of DLR-ME [26] and the Institute for Software Technology (DLR-SC) [30]. DLR-ME conducts research with the objective to preserve the health of humans in space, in aviation and on earth. DLR-SC concentrates on research in the field of software engineering with a focus on distributed and intelligent systems, artificial intelligence, embedded systems, visualization and high-performance computing.
The goal of the project is to develop a software solution for the automatic evaluation of cardiac real-time MRI images. A conventional cardiac MRI dataset of about 35 images is typically segmented manually by an expert, e.g. a cardiologist. In comparison, a real-time MRI dataset consists of several thousand images, which is impossible for a human to segment manually. The proposed solution for this problem is to use machine learning to automatically segment the real-time MRI data.
More specifically, the project deals with the diagnostics and therapy of children and teenagers who have univentricular hearts. The evaluation of the MRI data is done by segmenting it and, using the segmentation, by determining morphological as well as functional parameters of the heart. Such parameters could be the quantification of blood flow through the heart or the volume of anatomic parts of the heart, e.g. the ventricles.


5 Evaluation of Visualization Frameworks

Since most of the requirements for the software solution (see Table 4.1) are concerned with the visualization of cardiac real-time MRI data, this chapter introduces both commercially and freely available software tools and frameworks that could potentially be used for this task. The sections in this chapter introduce the software tools and frameworks in decreasing order of abstraction, beginning with the most abstract tools, here called ‘lower level frameworks’. After that, more specialized tools, e.g. with an included GUI, are introduced. Next, specific tools for the visualization of medical data are presented. The tools are then summarized and the insufficiencies of each tool are elaborated. The chapter concludes with a comparison of the two promising tools 3D Slicer and the Medical Imaging Interaction Toolkit (MITK) as well as a listing of useful available extensions of 3D Slicer.

5.1 Lower Level Frameworks

This section introduces tools for the visualization of scientific data with a high level of abstraction. High abstraction is, of course, subjective. Thus, tools with high abstraction in this context are tools that are designed for scientific visualization but only provide the basic building blocks that can be put together to build an application for the task.

5.1.1 VTK

A freely available framework for the visualization of scientific data is VTK. VTK has been under development since 1993, originally by Schroeder, Martin and Lorensen as the accompanying software to their book ‘The Visualization Toolkit: An Object-oriented Approach to 3D Graphics’ [17]. With almost 30 years of development history and about 3500 classes available, VTK is the de-facto standard in the field of software-assisted scientific visualization and the base for other tools and frameworks in this field.



The fundamental building block of VTK is ‘The Visualization Pipeline’ [17, pp. 83-122]. The pipeline is a concept that defines how given scientific input data are to be processed and finally rendered, e.g. to a screen. Refer to Figure 5.1 for an example pipeline.
Each pipeline is a composition of three different types of objects:

• data objects,
• process objects and
• data flow direction.

The data objects contain the data that should be visualized as well as interfaces for access to and modification of the data.
The process objects provide algorithms that can transform data objects, create new data objects that are derived from the given input data objects or convert data objects to graphics primitives for rendering.
The third type of object, data flow direction, indicates the order in which the process objects work on the data objects.
The process objects of the pipeline can further be separated into three different groups:

• sources,
• filters and
• sinks.

The sources take no input but output one or more data objects. Examples of sources are process objects that read data from the file system and convert them to the format needed and understood by VTK.
These data objects are received by the filters, which apply some algorithm to them and output at least one data object. Such algorithms could, for example, be transformations of the data like rotation, translation or scaling. Process objects could also encapsulate more sophisticated algorithms like advanced image processing algorithms or segmentation.
The sinks finally receive the data objects that were emitted by the filters and terminate the visualization pipeline. The sinks can for example generate graphics primitives for rendering or write the data to the file system. Sinks do not produce output data objects.
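The source-filter-sink structure described above can be illustrated with a toy pipeline in plain Python. This is a conceptual sketch only; the class names are invented for illustration and do not correspond to the actual VTK API.

```python
# Toy illustration of the source -> filter -> sink structure of a
# VTK-style visualization pipeline (plain Python, not the VTK API).

class Source:
    """Produces data objects; takes no input (e.g. a file reader)."""
    def produce(self):
        return [1.0, 4.0, 9.0, 16.0]  # stand-in for data read from disk

class ScaleFilter:
    """Transforms incoming data objects (e.g. a scaling transformation)."""
    def __init__(self, factor):
        self.factor = factor
    def process(self, data):
        return [v * self.factor for v in data]

class Sink:
    """Terminates the pipeline (e.g. a renderer or a file writer)."""
    def consume(self, data):
        return "rendered {} values, max={}".format(len(data), max(data))

# The data flow direction: source -> filter -> sink.
pipeline_output = Sink().consume(ScaleFilter(0.5).process(Source().produce()))
print(pipeline_output)  # rendered 4 values, max=8.0
```

In VTK itself the connections are set up declaratively (each stage is told about its input stage) and the pipeline pulls data on demand, but the division of roles is the same.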



Figure 5.1: An exemplary visualization pipeline depicted as a data flow diagram. The pipeline has one source in the top left at ‘CT/MRI Scanner’ and one sink in the center right at ‘Image’. The intermediate ellipses represent filters in the pipeline. The arrows indicate the direction of data flow and are labeled with information about the respective data. Taken from [21, p. 23].

5.1.2 ITK

The Insight Toolkit (ITK) [31] is a framework that complements VTK and is specifically designed and optimized for image analysis. ITK provides algorithms for image processing, segmentation and registration. Image registration is the transformation of multiple images into a common coordinate system [32]. In medical imaging, for example, image registration can be used to combine images of the same anatomic structure from CT, positron emission tomography (PET) and MRI scans into a common coordinate system. This enables the deduction of new information from the analysis of multiple images simultaneously.
ITK is, similarly to VTK, more abstract than other tools. Although both provide a huge set of highly optimized algorithms, new applications have to be built more or less from the ground up, which may not be the best option under all circumstances.
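The idea behind registration can be shown with a deliberately simplified example: find the integer shift that best aligns a moving image with a fixed image by minimizing the sum of squared differences. This toy NumPy search stands in for what ITK does with proper metrics, interpolation and optimizers; the function name and the exhaustive search strategy are illustrative assumptions.

```python
import numpy as np

def register_shift(fixed, moving, max_shift=5):
    """Exhaustively search the integer (dy, dx) shift of `moving` that
    minimizes the sum of squared differences to `fixed`."""
    best_shift, best_ssd = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = float(((fixed - shifted) ** 2).sum())
            if ssd < best_ssd:
                best_shift, best_ssd = (dy, dx), ssd
    return best_shift

fixed = np.zeros((16, 16)); fixed[4:8, 4:8] = 1.0        # a square "anatomy"
moving = np.roll(np.roll(fixed, 2, axis=0), -3, axis=1)  # same anatomy, displaced
print(register_shift(fixed, moving))  # (-2, 3): the shift undoing the displacement
```

Real registration additionally handles sub-pixel transformations, rotation and deformation, and uses gradient-based optimization instead of brute force, but the objective (transform one image so a similarity metric against the other is optimal) is the same.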



5.2 Visualization Tools for More General Problems

The previous section introduced relatively abstract frameworks for the visualization of scientific data. These frameworks support user interaction to some extent but are not designed specifically for it and, thus, lack needed features like pre-filled menus and dialogs.
In this section, the MATrix LABoratory (MATLAB) and ParaView are presented. These tools are designed with an integrated GUI and thus provide more features regarding user interaction.

5.2.1 MATLAB

MATLAB [33] is a commercially available proprietary platform and programming language for numerical analysis and visualization of scientific data. For the visualization of medical image data specifically, the MATLAB Image Processing Toolbox [34] can be used. The toolbox provides algorithms for image analysis, processing, segmentation, registration and visualization. Additionally, the toolbox implements a large portion of the DICOM standard so that image data and metadata from both DICOM files and DICOM (network) services can be acquired and used.
Since MATLAB is also a programming language that can be used directly in the GUI of MATLAB, custom extensions that utilize the functions of other extensions, like those of the Image Processing Toolbox, can be developed for MATLAB.

5.2.2 ParaView

ParaView [21, pp. 717-732] is a platform for data analysis and visualization which is built on VTK and, like VTK, developed by Kitware. ParaView ships with a built-in GUI and focuses on distributed systems that – combined – provide the necessary computing resources for the analysis of large datasets. paraview.org [35] lists several domains that ParaView can be used in: Structural Analysis, Fluid Dynamics, Astrophysics, Climate Science and LiDAR/Point-Cloud. Although not advertised, ParaView can certainly be used for the visualization of medical image data, but this is not its intended purpose. The documentation of ParaView also currently does not mention the word ‘DICOM’, so it can be assumed that working with medical image data in ParaView is not straightforward and alternatives should be consulted.



5.3 Visualization Tools for Medical Image Data Specifically

In addition to the tools presented in the previous sections for more general scientific visualization tasks, this section focuses on tools and software specifically for the processing of medical image data. These tools include the commercially available and proprietary cvi42 and the free and open source tools 3D Slicer and the Medical Imaging Interaction Toolkit (MITK). Screenshots of the GUIs of each software can be seen in Figures 5.2, 5.3 and 5.4, respectively.

5.3.1 cvi42

cvi42 [36] is a commercially available and proprietary software for advanced cardiac image processing, visualization and quantitative evaluation. The software supports reading cardiac CT and MRI images from DICOM files.

Figure 5.2: Screenshot of cvi42. Taken from [37].

cvi42 was first approved for clinical use in the European Union in 2008 [38] and has since become a stable, versatile and feature-rich tool. cvi42 is also already integrated in the syngo.via ecosystem [39]. Yet, cvi42’s source code is not publicly available, so the development of custom extensions is not easily possible.



5.3.2 MITK

MITK [40] is developed by the German Cancer Research Center based in Heidelberg. MITK builds on top of both VTK and ITK and provides a base for the development of interactive medical image processing software. The MITK developers provide pre-built binaries of MITK that also include the GUI version of MITK which is called ‘MITK Workbench’. The pre-built version provides some algorithms for the segmentation, registration and visualization of medical images. Real-time MRI image sequences are not supported in this version.

Figure 5.3: Screenshot of ‘MITK Workbench’.

MITK is designed to be extendable with custom developed extensions. Such extensions are built using the ‘Blueberry Application Framework’, a project that arose during the development of MITK itself. The ‘Blueberry Application Framework’ is built on top of the Common Toolkit (CTK) [41] which is based on the Open Services Gateway initiative (OSGi) specifications [42]. MITK extensions can define so-called ‘Extension Points’ that define the interfaces of MITK they interact with. Using these ‘Extension Points’, the extensions can be loaded lazily, meaning they are only loaded when they are actually required. The extensions can also define services that are registered in MITK’s service registry. Other extensions can then discover these services from the service registry and use them.



5.3.3 3D Slicer

3D Slicer [43] is a software platform for the development of medical and related image research software. It is also built on top of VTK and ITK and adds an extensive set of features for medical image segmentation and visualization as well as the ability to work with image sequences, which is necessary for the real-time MRI application. 3D Slicer is open source and currently has a very permissive license [44].

Figure 5.4: Screenshot of 3D Slicer. Taken from [45].

The software is composed of modules that each cover related use cases like segmentation, image sequences or reading of DICOM images and metadata. Developers can create custom modules for 3D Slicer using either C++, Python or both. A more detailed description of 3D Slicer and the module system will be given in Chapter 7.

5.4 Insufficiencies

This chapter introduced a variety of tools and frameworks for the visualization of scientific data. The tools and frameworks provide a broad set of features that are the result of many years of development and are, thus, highly optimized. Still, the tools do not provide features specifically for the application in real-time MRI. One possible reason for this is that real-time MRI is not yet extensively used in clinical contexts, which in turn is because research and development efforts in the field of real-time MRI have begun only recently.



A summary of the available and unavailable features of each presented tool or framework is given in Table 5.1.

Table 5.1: Summary of the features and insufficiencies of the presented tools and frameworks. ✓ represents a fully met criterion, ✗ represents a criterion that is not met in the respective tool or framework. ‘p’ stands for a partially met criterion.

Criterion/Tool   VTK  ITK  MATLAB  ParaView  cvi42  MITK  3D Slicer
DICOM            p    p    ✓       p         ✓      ✓     ✓
Sequences        ✗    ✗    p       ✗         ✓      ✗     ✓
GUI              p    p    ✓       ✓         ✓      ✓     ✓
Availability     ✓    ✓    ✗       ✓         ✗      ✓     ✓
Extendability    ✓    ✓    ✗       ✓         ✗      ✓     ✓

Of course, more abstract frameworks like VTK and ITK come with features that can be combined to create new software for working with real-time MRI data, but this new software first has to be developed. For example, VTK provides the vtkDICOMImageReader class for reading images from DICOM files, but this class has many limitations [46]. The class makes some assumptions about the data stored in DICOM files that are not applicable to all the forms a DICOM file can have, thus the class does not support reading all DICOM files. More sophisticated tools for reading DICOM files would be required, like the Grassroots DICOM library (GDCM) [47] or CTK, which is based on the DICOM Toolkit (DCMTK) [48]. Consequently, MITK and 3D Slicer, as introduced in Section 5.3.2 and Section 5.3.3 respectively, do depend on DCMTK for DICOM support.
MATLAB, as introduced in Section 5.2.1, falls short because it is not open source and only available via commercial licences. Thus, the development of custom extensions is complicated, leaving other tools more approachable. ParaView was introduced in Section 5.2.2 and could be labeled as the GUI to VTK. Although the development of custom extensions to ParaView is possible, ParaView itself is not designed for the processing of medical image data, so more effort for the development of medical extensions is required.
Finally, cvi42 is definitely the most advanced tool for cardiac image processing while still falling short of support for real-time MRI datasets. Similar to MATLAB, the source code of cvi42 is not publicly available and the software is only available commercially. cvi42 does not support the development of custom extensions.
MITK and 3D Slicer are two promising candidates for the development of a software for the automatic processing of real-time cardiac MRI data. Both tools are open source, have a permissive licence, are designed to be extendable and provide rich GUIs for user interaction.



5.5 Comparison of MITK and 3D Slicer

This section compares MITK and 3D Slicer, first from the user’s perspective and then from the developer’s perspective.

5.5.1 User’s Perspective

From a user’s perspective, MITK and 3D Slicer do not differ too much (see Figure 5.3 and Figure 5.4). The default layouts show three different views of image slices and a 3D view of the respective volume. Both MITK and 3D Slicer allow interaction with the slice views to pan or zoom the view. Both user interfaces are complex and require familiarity to be used efficiently, but this is a consequence of the complex topic of medical image analysis.
MITK provides sophisticated algorithms for image segmentation and registration as well as 3D volume visualization. 3D Slicer currently provides similar functionality to MITK but also additional features for working with image sequences.
Both MITK and 3D Slicer are designed to be extendable, but from a user’s perspective this is only really relevant in the case of 3D Slicer. This is because MITK plugins have to be compiled into the application’s binary, so MITK plugins cannot be installed dynamically at runtime. With 3D Slicer on the other hand, extensions can be installed at runtime using the GUI provided by the built-in ‘Extensions Manager’ extension.
In regard to the visualization of an overview of a real-time MRI dataset as demanded by R10, neither MITK nor 3D Slicer currently provides such a feature. Users can navigate through the third space dimension and the time dimension using sliders or the mouse, but an overview of more than one image at a time is not implemented.

5.5.2 Developer’s Perspective

From a developer’s perspective, MITK and 3D Slicer differ more considerably. First, MITK extensions must be developed using C++ while 3D Slicer extensions can be developed using C++, Python or both. This already limits the number of developers capable of efficiently developing extensions for MITK.
The available application programming interfaces (APIs) in MITK and 3D Slicer do not differ much, though. Both build upon VTK and ITK for visualization algorithms and Qt for GUIs, and developers can use these libraries in their extensions.



For 3D Slicer, the ‘SlicerDebuggingTools’ [49] extension can be installed. This extension provides debugging features for the development of 3D Slicer extensions by integrating into popular integrated development environments (IDEs) like PyCharm, Visual Studio Code, Visual Studio and Eclipse.

5.5.3 Conclusion

3D Slicer and MITK are good candidates for the implementation of the software requirements listed in Table 4.1. From the user’s perspective the frameworks do not differ much, but from the developer’s perspective the difference is more significant. The development of 3D Slicer extensions promises to be more rapid and efficient. Additionally, the existing extensions in 3D Slicer provide more features than the plugins available in MITK. The following parts of this work therefore focus on 3D Slicer.

5.6 Existing Extensions in 3D Slicer

The default build of 3D Slicer already provides a set of extensions that fulfil some of the requirements listed in Table 4.1. In the following, these extensions will be presented.

5.6.1 Segmentations

The ‘Segmentations’ extension [50] can be used to work with segmentations of medical images. Here, each segmentation can be composed of multiple segments, each of which refers to an anatomic structure or region of interest. Using this extension, the segmentations can be represented in different ways [51]:

• binary labelmap,
• fractional labelmap,
• closed surface and
• planar contours and ribbons.

Binary labelmaps store for each pixel in an image whether it belongs to the anatomic structure of interest or not.
Fractional labelmaps function similarly but support a wider range of values to indicate, e.g., the certainty with which a pixel belongs to the anatomic structure of interest.



Closed surfaces represent segmentations by defining the surface boundary of the anatomic structure so that everything inside that boundary belongs to the anatomic structure of interest.
Finally, planar contours and ribbons define a segmentation by rings or ribbons that enclose the anatomic structure of interest in each slice.
Each type of segmentation has different use cases. For example, while binary labelmaps are the easiest segmentations to create manually, they tend to be inaccurate. Closed surfaces are better for 3D visualization but are harder to edit.
With the ‘Segmentations’ extension it is also possible to export segmentations to e.g. NIfTI or stereolithography (STL) files. The ‘Segmentations’ extension can thus be used to implement the requirement R6.
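The difference between the binary and fractional labelmap representations can be sketched with toy NumPy arrays. These arrays are illustrative only and do not use Slicer's data structures; the 0.5 threshold is an assumed choice.

```python
import numpy as np

# A fractional labelmap stores a per-voxel fraction, e.g. partial-volume
# coverage or the certainty of belonging to the structure of interest.
fractional = np.array([
    [0.0, 0.2, 0.9],
    [0.1, 1.0, 0.8],
    [0.0, 0.4, 0.1],
])

# Thresholding converts a fractional labelmap into a binary one, which
# simply marks membership per voxel.
binary = fractional >= 0.5
print(binary.astype(int))
# [[0 0 1]
#  [0 1 1]
#  [0 0 0]]

# The fractional representation yields a sub-voxel volume estimate,
# while the binary representation counts whole voxels.
print(float(fractional.sum()), int(binary.sum()))  # 3.5 3
```

This also hints at why the representations trade off against each other: the binary form is trivial to edit voxel by voxel, while the fractional form preserves more information at structure boundaries.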

5.6.2 Segment Editor

The ‘Segment Editor’ extension [50] is used to create and edit segmentations of medical images. The segmentations can be created or edited manually or semi-automatically [51]. A method of creating or editing a segmentation in 3D Slicer is called a ‘Segment Editor Effect’. Custom effects can be added to 3D Slicer by developers, but some are available in the default version of 3D Slicer.
For manual segmentations, users can draw them into the displayed image slice using a pen or brush tool.
These are examples of semi-automatic segment editor effects:

• grow from seeds,
• fill between slices and
• logical operators.

‘Grow from seeds’ enables segmentations where a given initial segmentation can automatically be grown to fill an entire anatomic structure. To use this, a user has to add a segmentation to the image that lies inside the anatomic structure and then activate the ‘grow from seeds’ effect. The missing parts of the anatomic structure should then automatically be added to the segmentation.
With ‘fill between slices’, an interpolation between multiple given segmentations can be calculated. First, some segmentations of non-neighboring slices have to be added. When the ‘fill between slices’ effect is then activated, the slices between those with given segmentations will also be segmented by interpolating between the given segmentations.
Other useful segmentation effects are the ‘logical operators’. Using these, for example, two segmentations can be combined or added. Alternatively, a new



segmentation can be created that contains all parts of one segmentation that are not part of another segmentation; in this case, one segmentation is subtracted from the other.
The ‘Segment Editor’ extension can be used to implement the requirement R13.
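On binary labelmaps, the logical operators reduce to element-wise Boolean operations. The following toy NumPy masks sketch combining, intersecting and subtracting two segments; the variable names are illustrative and this is not the Segment Editor API.

```python
import numpy as np

# Two overlapping binary labelmaps standing in for two segments.
a = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 0]], dtype=bool)
b = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]], dtype=bool)

union        = a | b   # combine/add two segmentations
intersection = a & b   # voxels contained in both segmentations
a_minus_b    = a & ~b  # parts of a that are not part of b (subtraction)

print(int(union.sum()), int(intersection.sum()), int(a_minus_b.sum()))  # 6 2 2
```

The same three operations cover the use cases described above: merging segments, isolating their overlap and removing one segment's voxels from another.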

5.6.3 Sequences

The ‘Sequences’ extension adds support for volume sequences. This includes real-time MRI data and the currently more widely used ‘Cine’ MRI that stores artificial videos of moving anatomic structures.
When an image sequence is imported using the ‘Sequences’ extension, the extension allows controlling which item in the image sequence is displayed in 3D Slicer. The selected item can also be automatically updated by 3D Slicer after a given time duration. With low enough time durations, this yields the visualization of volume sequences as a video. The user does not actually configure the time duration but its inverse, which is called ‘frames per second’. The ‘Sequences’ extension can be used to implement R11.
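The relation between the configured frame rate and the per-item display duration is a simple inversion, which can be sketched as follows. This is plain Python for illustration, not the ‘Sequences’ extension's API, and the function names are invented.

```python
def frame_duration_ms(fps):
    """Display duration of one sequence item in milliseconds,
    i.e. the inverse of the configured frames-per-second value."""
    return 1000.0 / fps

def playback_schedule(n_frames, fps):
    """Timestamps (ms) at which each item of a sequence is shown."""
    return [i * frame_duration_ms(fps) for i in range(n_frames)]

print(frame_duration_ms(25.0))     # 40.0 ms per item at 25 fps
print(playback_schedule(4, 25.0))  # [0.0, 40.0, 80.0, 120.0]
```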

This chapter introduced 3D Slicer as a possible software framework for the implementation of the real-time MRI application. The framework promises the rapid development of custom extensions that implement the requirements listed in Table 4.1. The previously presented extensions also already implement some requirements. In the next chapter, a solution concept is presented that is aimed at the implementation of the requirements by using 3D Slicer.


6 Solution Concept

The previous chapter introduced tools and frameworks that could be used to realize a real-time MRI application. Among these tools, 3D Slicer seems to be the most promising option since the tool is freely available, open source, extendable and focused primarily on medical image analysis.
In this chapter, a solution for the realization of R10 will be proposed that is based on similar approaches in video editing software and the commercial medical imaging software cvi42. As a reminder: R10 was defined as ‘the software shall visualize multiple images of an MRI dataset for an overview’. To support the proposed solution, the architecture of 3D Slicer and the Medical Reality Modeling Language (MRML) are explained.

6.1 Overview of a Real-time MRI Dataset

This section proposes a solution for the realization of R10. A conventional MRI dataset is typically visualized as a stack of images where each image is two-dimensional. Adding the one dimension of the stack to the two dimensions of each image yields three dimensions for a conventional MRI dataset. Real-time MRI adds time as a fourth dimension to the data. Instead of being thought of as composed of four equivalent dimensions, a real-time MRI dataset can also be thought of as a 3D+t dataset where ‘t’ stands for time. One might also think of a real-time MRI dataset as a stack of videos.
To present an overview of a real-time MRI dataset, it is useful to present both the space dimensions and the time dimension simultaneously. Since each image already covers two dimensions, two more dimensions have to be visualized. A natural way of doing this would be a table- or matrix-like representation. One dimension of the table would represent the remaining space dimension of the dataset and the other dimension of the table would represent the time dimension.
A similar approach is seen in video editing software. Videos are essentially 2D+t data that are commonly visualized in a linear structure in video editing software (see Figure 6.1 for some examples). Notice here that video editing software also presents an overview of the data. In contrast, in video players the images are displayed one



after another instead of all or multiple at the same time.
One can also observe that the time domain is usually visualized in a horizontal manner, which seems to be the natural way of visualizing time [52, pp. 1-9]. According to [52], the horizontal visualization of time data can be found on cave paintings dated back to 500 B.C. More scientific approaches to visualizing time are first found in the 18th century, introduced by Playfair, who visualized the balance of trade between England, Denmark and Norway. The time dimension in these visualizations is also set in the horizontal direction.

Figure 6.1: Screenshots of video editing software. From left to right, top to bottom, thetools are OpenShot [53] (image adapted from [54]), Adobe Premiere Pro [55], MAGIX [56]and DaVinci Resolve 17 [57].

Another approach from the field of medical imaging software is seen in cvi42, which was presented earlier. cvi42 displays 4D image volume sequences in a tabular representation (see Figure 6.2). The feature is called 'thumbnail grid' and displays a box for each image in the image volume sequence. Each box can contain icons that indicate detected or assigned features of the respective image. For example, colored circles indicate that a segmentation is available for the image (refer to Figure 6.2 for more details).

The thumbnail grid is useful when a user quickly wants to see which segmentations are available for a dataset and for which images they are available. A user can also select a box in the thumbnail grid to select the image represented by that box and then work on that image, e.g. by adding a manual segmentation.


Figure 6.2: The thumbnail grid of cvi42. A purple circle indicates a segmentation of theleft ventricle, a yellow circle indicates a segmentation of the right ventricle and a greencircle indicates a segmentation of the heart’s outer wall (epicardium). The colored lettersat the top (D, D, S and S) indicate that the software detected the diastolic and systolicphases of the cardiac cycle in the left and right ventricles, respectively. Taken from [37].

For the research project (see Section 4.3), the implementation of R10 is especially useful as the resulting features assist greatly in clinical contexts. A framework selected for this task must therefore support the implementation of R10 in particular. The framework must also allow the implementation of the other requirements listed in Table 4.1.

6.2 Architecture of 3D Slicer

3D Slicer is composed of a core software library and extensions that provide additional features to the software. In this section, first the Model View Controller (MVC) design pattern and its implementation in 3D Slicer are explained. After that, the concept of extensions in 3D Slicer is described.

6.2.1 Model View Controller

The core of 3D Slicer is implemented following the MVC design pattern [43]. This pattern states that the data in an application, the logic that works on the data and the presentation of the data should be separated [58]. As seen in Figure 6.3, the 'GUI' component is responsible for the presentation of the data, the 'Logic' component is responsible for the modification of the data and the 'MRML' component represents the model part of the MVC pattern.

Figure 6.3: Architecture of 3D Slicer. Taken from [43].

These components cannot communicate with each other freely but are restricted. For example, the 'Logic' can never directly update the 'GUI'; instead, it requests the model ('MRML') to update some data, which in turn emits an event that the 'GUI' reacts to by updating the view accordingly. The advantage of the MVC pattern is that different concerns are separated. If the 'Logic' has to be updated, for example because of a new business requirement, the 'GUI' is unaffected by that change. Of course, if the requirement adds new features to the 'Logic', these features will not necessarily be immediately visible in the 'GUI', but at least the 'GUI' will continue to function correctly with the previous set of features.
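The restricted communication described above can be illustrated with a framework-free Python sketch. This is not 3D Slicer's actual implementation; all class and method names here are hypothetical stand-ins for the pattern:

```python
class ModelNode:
    """Minimal stand-in for an MRML-style model node with events."""
    def __init__(self):
        self._observers = []
        self._data = None

    def add_observer(self, callback):
        self._observers.append(callback)

    def set_data(self, data):
        self._data = data
        for callback in self._observers:   # emit a 'ModifiedEvent'
            callback(self)


class Logic:
    """Modifies the model; never touches the GUI directly."""
    def __init__(self, node):
        self.node = node

    def process(self, value):
        self.node.set_data(value * 2)      # business logic writes to the model


class Gui:
    """Observes the model and updates its view on change."""
    def __init__(self, node):
        self.shown = None
        node.add_observer(self.on_modified)

    def on_modified(self, node):
        self.shown = node._data            # the view reacts to the model's event


node = ModelNode()
gui = Gui(node)
Logic(node).process(21)
print(gui.shown)  # → 42
```

Note that `Logic` holds no reference to `Gui`: the view is updated solely through the event emitted by the model, mirroring the restriction described above.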

6.2.2 3D Slicer Extensions

3D Slicer can be extended with custom-developed extensions. These extensions can be implemented either in C++ or in Python. Each extension contains one or more modules that provide new features. To create a new extension, one can use the built-in 'Extension Wizard' extension [59]. The new extension contains generated example code and the required build files, although the latter are only needed if the extension is developed in C++. The wizard can also be used to generate new modules for an existing extension.


3D Slicer differentiates between three types of modules:

• scripted,
• loadable and
• command line interface (CLI).

The differentiation between these module types is only relevant to the developer; users will see no difference between them. In the following, these module types are elaborated.

Scripted modules in 3D Slicer are always written in Python and are best suited for 'fast prototyping and custom workflow development' [60]. This is because scripted modules are interpreted at runtime by a Python interpreter and are not compiled into the 3D Slicer binary, so that long-running build processes can be avoided. Scripted modules have no limitations in which APIs of 3D Slicer they can use: all APIs, including those of VTK, Qt, MRML, ITK and 3D Slicer itself, are available as wrapped functions and classes in Python.

Loadable modules must be implemented in C++ and are included in the compilation process of a 3D Slicer binary. Loadable modules also have native access to all APIs of 3D Slicer and the used libraries. For performance-critical use cases, loadable modules are preferred since they are compiled instead of interpreted.

A CLI module is a standalone binary or Python script that is optionally accompanied by an interface description file in XML format. CLI modules have limited capabilities since they are, essentially, command line tools that can be executed. Arguments can be passed to the CLI module through a generated GUI in the 3D Slicer application. CLI modules are most commonly used for one-shot tasks that do not require the user to constantly interact with the module.
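As an illustration, the XML interface description accompanying a CLI module might look roughly as follows. The element names follow the Slicer Execution Model conventions; the concrete module, parameters and values are invented for this sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<executable>
  <category>Examples</category>
  <title>Threshold Example</title>
  <description>Hypothetical one-shot CLI module that thresholds a volume.</description>
  <parameters>
    <label>IO</label>
    <image>
      <name>inputVolume</name>
      <label>Input Volume</label>
      <channel>input</channel>
      <index>0</index>
      <description>Volume to process.</description>
    </image>
    <image>
      <name>outputVolume</name>
      <label>Output Volume</label>
      <channel>output</channel>
      <index>1</index>
      <description>Thresholded result.</description>
    </image>
    <double>
      <name>threshold</name>
      <longflag>threshold</longflag>
      <label>Threshold</label>
      <default>100.0</default>
    </double>
  </parameters>
</executable>
```

From such a description, 3D Slicer generates the GUI through which the user passes the arguments to the CLI module.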

6.3 Medical Reality Modelling Language

3D Slicer incorporates MRML [61] for modelling the data used in the application. The name MRML refers to both a file type and a software library: the files that store MRML data typically have a .mrml extension and are special XML files; the software library is responsible for reading from and writing to .mrml files as well as for interpreting the data contained in them. Although the source code of the MRML software library is part of the 3D Slicer repository, it is completely decoupled from 3D Slicer itself and could be used in applications other than 3D Slicer.

In addition to modelling the data used in a 3D Slicer instance, MRML provides advanced features like undo/redo and events. With undo/redo, actions that the user takes can be reverted or, after they have been reverted, reapplied. Events allow a node to be notified when another node emits an event. The most commonly emitted event is the ModifiedEvent, which notifies listeners that the internal data or state of a node has changed.

The top-level component of the MRML model is the MRML scene, which is a collection of MRML nodes. Each type of MRML node is responsible for storing a specific type of data. The documentation of 3D Slicer lists seven types of nodes [61]:

• data nodes,
• display nodes,
• storage nodes,
• view nodes,
• plot nodes,
• subject hierarchy nodes and
• sequence nodes.

Almost all of these node types are subclassed to allow more specific behaviour fordifferent use cases.In the following, the data nodes, sequence nodes, view nodes and display nodes areexplained, since these node types are most relevant for the real-time MRI application.

6.3.1 Data Nodes

Data nodes store raw or processed data which, most of the time, is of a medical nature but can be of any kind. The types of data range from simple text or table data to more complex data structures like segmentations, 3D image volumes and 4D image volume sequences. To cover the requirements of the real-time MRI application, data nodes are used to store the measured images of one timestep, yielding a volume. Therefore, the data are stored in an object of type vtkMRMLScalarVolumeNode which is the sole derivative of vtkMRMLVolumeNode.

6.3.2 Sequence Nodes

Sequence nodes are responsible for storing a sequence of data nodes. There are no restrictions on what kind of data nodes can be stored in a sequence node, but it makes sense to store related data nodes. In the case of the real-time MRI application this would be, for example, a sequence of vtkMRMLVolumeNodes. Together, these nodes represent a 3D image volume that was measured at multiple points in time.


The sequence nodes are implemented in the vtkMRMLSequenceNode class, which stores all items of the sequence and exposes one of them through a so-called proxy node. To control which node of the sequence is exposed through the proxy node, a vtkMRMLSequenceBrowserNode is used. With this node it is also possible to change the node exposed by the proxy node repeatedly after a given duration. If this duration is short enough, the sequence cycles through the internally stored data nodes at a speed that allows the sequence to be played like a video.
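The proxy-node mechanism can be sketched in plain Python. This is a simplified, hypothetical illustration, not the actual vtkMRMLSequenceNode/vtkMRMLSequenceBrowserNode implementation:

```python
class SequenceNode:
    """Stores an ordered collection of items (stand-ins for data nodes)."""
    def __init__(self, items):
        self.items = items


class SequenceBrowserNode:
    """Exposes exactly one item of the sequence through a proxy slot."""
    def __init__(self, sequence):
        self.sequence = sequence
        self.index = 0

    @property
    def proxy(self):
        # The proxy always reflects the currently selected item.
        return self.sequence.items[self.index]

    def step(self):
        # Advancing repeatedly (e.g. on a timer) plays the sequence like a video.
        self.index = (self.index + 1) % len(self.sequence.items)


browser = SequenceBrowserNode(SequenceNode(["t0", "t1", "t2"]))
print(browser.proxy)  # → t0
browser.step()
print(browser.proxy)  # → t1
```

Calling `step` on a short timer interval corresponds to the video-like playback described above.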

6.3.3 View Nodes

View nodes specify how a view onto the data should be configured. This configuration includes the rendering quality, the style of animations, the field of view and the angle of the camera. An important view node type is the vtkMRMLSliceNode, which is responsible for the visualization of a slice of a 3D image volume. When a user starts 3D Slicer, three slice views are shown by default, each displaying a view onto a plane parallel to one of the three anatomical planes shown in Figure 6.4.

Figure 6.4: The three anatomical planes used in medical imaging (left) and how they arevisualized in 3D Slicer (right): The sagittal plane divides the body’s left (L) and right (R)parts, the coronal plane divides the front (anterior, A) and back (posterior, P) parts andthe transversal plane divides the top (superior, S) and bottom (inferior, I) parts. The rightimage shows a 3D view of a head MRI scan as well as one slice parallel to each anatomicalplane. The left image is adapted from [62], the right image is a self-taken screenshot.The colors of the slice views in the right image were adapted to fit to the colors of theanatomical planes in the left image.


6.3.4 Display Nodes

Display nodes store the properties that define how data nodes should be visualized. Such properties include the color, the opacity and the visibility, which indicates whether the data node should be visualized at all. Since display nodes and data nodes are decoupled, a data node can have multiple display nodes associated with it. Thus, the data contained in the data node can be displayed in different ways without having to exist twice.

The abstract class that implements display nodes, vtkMRMLDisplayNode, has several child classes that derive from it. These child classes specialize how specific data nodes should be displayed. For example, a vtkMRMLVolumeNode data node is visualized via a vtkMRMLVolumeDisplayNode with features specifically designed for the visualization of volumes; a vtkMRMLSegmentationNode data node is visualized using a vtkMRMLSegmentationDisplayNode; and so forth. For the real-time MRI application, display nodes are used for the visualization of the 3D image volumes.
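The decoupling of data and display nodes can be sketched as follows. The classes are hypothetical simplifications; the real vtkMRMLDisplayNode API is considerably richer:

```python
class DataNode:
    """Holds the data exactly once, regardless of how often it is displayed."""
    def __init__(self, values):
        self.values = values
        self.display_nodes = []


class DisplayNode:
    """Holds presentation properties only -- no copy of the data."""
    def __init__(self, data_node, color, opacity, visible=True):
        self.color, self.opacity, self.visible = color, opacity, visible
        data_node.display_nodes.append(self)


def render(data_node):
    # Each visible display node yields an independent view of the same data.
    return [(d.color, d.opacity, data_node.values)
            for d in data_node.display_nodes if d.visible]


volume = DataNode([1, 2, 3])
DisplayNode(volume, color="gray", opacity=1.0)
DisplayNode(volume, color="red", opacity=0.5)
print(len(render(volume)))  # → 2
```

Both rendered views reference the same `values` list, so the data never has to exist twice.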


7 Proof of Concept

This chapter explains the implementation of a proof of concept extension that covers requirement R10. The chapter concludes with an explanation of different approaches for the deployment of 3D Slicer extensions.

In this chapter, deductions from the source code of 3D Slicer were made from the revision with hash 7a593c83780166ff9f43f002302e431c9deac06d, which at the time of writing was tagged v4.11.20210226.

The proof of concept extension is available at https://github.com/DLR-SC/slicer-timeline [63]. The results in this chapter are based on the revision with hash 41997af8d9e0db8b715a19f3f329e6de07b45155, which is tagged v1.0.0.

7.1 Proof of Concept Extension for 3D Slicer

As a part of this work, a proof of concept extension for 3D Slicer was built that focuses on the realization of R10 ('the software shall visualize multiple images of an MRI dataset for an overview'). For fast prototyping and because of prior knowledge of the Python programming language, a scripted module was selected for the realization of the requirement.

7.1.1 Usage

Figure 7.1 shows a screenshot of the extension running in 3D Slicer. Refer to this figure to better understand the following paragraphs.

To use the proof of concept extension, the user first has to select the extension from the module list in the 3D Slicer GUI. The user is then presented with an interface where they can make two selections before initializing the loading of the timeline.

The first selection, which is mandatory, is the selection of a slice view. By default, three slice views are available in a 3D Slicer MRML scene, one for each anatomical plane (see Figure 6.4).

The second selection is the selection of a sequence. This selection is optional but highly recommended for 3D+t datasets. If the user loaded a 3D+t dataset into 3D Slicer, a sequence should have been created automatically by 3D Slicer. To confirm that a sequence is available, the user can investigate the state of the playback controls in the top right of the GUI: if these controls are enabled, a sequence is available.

After both selections are made, the user can click the 'Load Timeline' button, which initializes the loading of the timeline.


Figure 7.1: The timeline provided by the proof of concept extension. The image showsa screenshot of 3D Slicer with the proof of concept extension loaded and selected on theleft. Sample data provided by 3D Slicer was loaded into the scene and one slice view ofthis data is shown. The circles indicate the timeline 1 , the currently selected image 2 ,the module selection list 3 , the GUI of the extension 4 , the playback controls 5 , thetimestep controller 6 , the slice offset controller 7 and the slice view 8 .

The loaded timeline contains a tabular display of images. Each image in the timeline corresponds to two values: an offset of the slice plane from some origin and one timestep in the sequence. The slice plane offsets increase from top to bottom in the timeline while the timesteps increase from left to right.

There is always one image in the timeline that has a green border. This border indicates that the respective image is the one with the currently selected slice offset and timestep. The user's interaction with either the slice offset controller above the slice view or the timestep controller next to the playback controls is reflected in the timeline by the green border being transferred to the newly selected image.

The user can also interact with the timeline using the mouse. When the user hovers the mouse pointer over an image in the timeline, that image receives a red border. The user can also select images in the timeline, which in turn displays the respective slice in the slice view. The selection of images in the timeline also selects the respective timestep in the sequence. This can be confirmed by investigating the slice offset controller and the timestep controller.

7.1.2 Implementation Details

This subsection elaborates on the details of the implementation of the proof of concept extension in three parts. The first part describes the classes contained in the extension. The second part explains the logic that is responsible for loading the images into the timeline. The third part describes how the user interaction with the timeline is implemented.

Classes

The proof of concept extension is made up of six Python classes. Four of these classes are mandatory for each scripted module in 3D Slicer. The first of them contains metadata about the module and a setup method that is called by the 3D Slicer runtime when the module is initialized. Another class controls the widget of the module, and one class contains the business logic; this class may contain computationally heavy algorithms of the module that do not depend on the GUI when executed. The fourth class contains test cases for the module.

In the current state, two more classes are contained in the module: TimelineWidget and TimelineEntryWidget. The TimelineWidget class controls the timeline that is provided by the module. Each TimelineWidget contains multiple TimelineEntryWidgets, where each TimelineEntryWidget holds one image of the timeline.

Logic

The timeline is loaded by first storing the current state of the 3D Slicer scene. This is done because the state will be restored after the timeline has finished loading.

After that, the required number of rows and columns for the timeline is determined. The number of columns is the number of timesteps in the sequence and the number of rows is the number of slice offsets. These numbers can be retrieved from the two selections that the user made prior to loading the timeline. In 3D Slicer, the selections are represented by two nodes: a vtkMRMLSliceNode and a vtkMRMLSequenceBrowserNode.

For each cell in the table with the previously determined number of rows and columns, a TimelineEntryWidget is created. Each TimelineEntryWidget is then instructed to load the respective image for the given slice offset and timestep. The images are acquired by changing the slice offset that is stored in the vtkMRMLSliceNode and the timestep that is stored in the vtkMRMLSequenceBrowserNode. After that, the corresponding slice view is instructed to render the image. The rendered image is then copied into the TimelineEntryWidget. This process is completed by adding the TimelineEntryWidget to the TimelineWidget in the correct row and column.

The loading of the timeline is finished after all widgets and their images have been loaded. As a final step, the previously stored state of the 3D Slicer scene is restored.
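The loading procedure described above can be summarized in a framework-free sketch. The `render_slice` stub stands in for 3D Slicer's slice rendering pipeline; all names here are hypothetical:

```python
def render_slice(offset, timestep):
    """Stub for the slice view's rendering; returns a fake image."""
    return f"image(offset={offset}, t={timestep})"


def load_timeline(slice_offsets, timesteps):
    # 1. Save the current scene state so it can be restored afterwards.
    saved_state = ("current_offset", "current_timestep")
    timeline = []
    for offset in slice_offsets:        # rows: slice offsets
        row = []
        for t in timesteps:             # columns: timesteps
            # 2. Reconfigure the (stubbed) slice node and sequence browser,
            # 3. render the slice and 4. copy the image into the entry widget.
            row.append(render_slice(offset, t))
        timeline.append(row)
    # 5. Restore the previously saved scene state.
    return timeline, saved_state


timeline, _ = load_timeline([0.0, 2.5], [0, 1, 2])
print(len(timeline), len(timeline[0]))  # → 2 3
```

The copy step in the real extension exists because the rendering pipeline only ever holds one image at a time, a point the performance discussion in Chapter 10 returns to.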

User Interaction

The widget that allows the selection of the vtkMRMLSliceNode is a qMRMLNodeComboBox, which is provided by the MRML library. This widget can be set up to only allow the selection of nodes in the MRML scene that are of a certain type. The other selection is also made through a qMRMLNodeComboBox that only allows the selection of vtkMRMLSequenceBrowserNodes.

The user can interact with the timeline using the mouse. When the mouse cursor is hovered over a TimelineEntryWidget, that widget's border is colored red. This is implemented via the enterEvent method that is declared in the QWidget class. When the mouse cursor leaves the TimelineEntryWidget, this can be captured by implementing the leaveEvent method, which is also declared in the QWidget class. In this event handler, the border of the TimelineEntryWidget is removed.

When a user clicks on a TimelineEntryWidget with the left mouse button, this event can be captured by implementing the mousePressEvent method. In the case of the TimelineEntryWidget, this event sets the border color of the widget to green. This color is not cleared when the mouse cursor leaves the widget, to indicate that the currently selected image has not changed.

Clicking on a TimelineEntryWidget has one more effect. Each image in the timeline corresponds to two values: an offset of the slice plane from some origin and one point in the time sequence. The mousePressEvent utilizes this fact by emitting an event that notifies listeners of these two new values. The extension registers two listeners with this event: the vtkMRMLSliceNode and the vtkMRMLSequenceBrowserNode that were selected by the user. When the vtkMRMLSliceNode receives the event, it updates the current slice offset; the vtkMRMLSequenceBrowserNode correspondingly updates the current timestep of the sequence. These two nodes in turn emit events that notify the respective slice view widgets, which are then updated to visualize the slice selected in the timeline.

This selection process also works in reverse: the extension registers the TimelineEntryWidgets as listeners to the ModifiedEvents of the vtkMRMLSliceNode and vtkMRMLSequenceBrowserNode. When one of these
nodes emits the event, the selected TimelineEntryWidget in the timeline is updated, which is indicated by the border color of the image.

7.1.3 Source Code

The extension was generated with the Extension Wizard that was introduced earlier.Figure 7.2 shows the generated files and the directory structure of the extension.

Figure 7.2: Directory structure of the proof of concept extension.

The directory contains build files, resource files and code files. The resource files contain icons that are, for example, displayed in the module selection list of 3D Slicer. The .ui file is used to declaratively specify the GUI of the extension.

The main file of the module is the .py file that contains the code of the module. This file contains 728 lines of code, including comments and empty lines between function, class and method definitions. Of these lines, about 75 % belong to the developed proof of concept extension and about 25 % belong to the example code generated by the Extension Wizard. The .ui file that specifies the module's GUI was generated using the graphical editor 'Qt Designer' [64], which is the preferred option for designing GUI components for 3D Slicer extensions.

The Python code follows the PEP 8 Style Guide for Python Code [65] and is split into sufficiently many functions for increased readability. The readability would be increased even more if the code were split into multiple files, but this would slow down the development process because 3D Slicer only reloads the main Python file of each module when an extension is reloaded. Changes to other files belonging to the module would not be detected and would thus break the current state of the extension in 3D Slicer, making a restart of 3D Slicer necessary.


7.2 Deployment of a 3D Slicer Extension

Developers have several options for the deployment or distribution of 3D Slicer extensions (see Figure 7.3). The most important factor that limits the selection of an option is whether the extension should be publicly available or not. If licence limitations, company guidelines and other restrictions permit a public distribution of the extension, the extension should be made available in the 3D Slicer Extensions Catalog. This allows users to directly download and install the extension from within 3D Slicer using the built-in Extensions Manager.

To deploy a 3D Slicer extension to the Extensions Catalog, the extension's source code should first be made publicly available, for example on GitHub. Another requirement is to fill out a .s4ext file that contains metadata of the extension like a name, developer contact information and a helpful description. A request to merge this file into the 3D Slicer Extensions Index repository [66] should then be made by creating a pull request in that repository on GitHub. After the pull request is accepted and the respective branch is merged into the master branch, the extension will be listed in the 3D Slicer Extensions Catalog.

To deploy a non-publicly available 3D Slicer extension, two options are possible. The first option is to compile a custom binary of 3D Slicer that includes the extension. If the extension contains scripted modules or CLI modules written in Python, these cannot be compiled into the binary, of course, but have to be shipped separately. The custom binary must then be configured so that it can find the modules that were not built as a part of the binary.

The second option is to package the extension and let users install it manually. Depending on the operating system, the extension is packaged either in a .tar.gz file on Linux and macOS or in a .zip file on Windows. The package then contains the needed shared libraries in the case of C++ extensions or the Python scripts in the case of Python extensions in a well-known folder structure. The Extensions Manager can finally be used to manually install the extension.

For now, the timeline will only be deployed to the stakeholders that test the extension and give feedback. This deployment is best done by distributing a packaged version of the extension to the stakeholders because this process is quick and easy to implement. Later, a public release might be considered, but this depends on the licence restrictions of the developed extension.
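For illustration, a .s4ext metadata file for the timeline extension might look roughly as follows. The field names follow the conventions seen in the 3D Slicer Extensions Index; the concrete values (revision, contributor, category) are hypothetical placeholders:

```
scm git
scmurl https://github.com/DLR-SC/slicer-timeline
scmrevision main
build_subdirectory .
depends NA
category Cardiac
contributors Jane Doe (Example Org)
homepage https://github.com/DLR-SC/slicer-timeline
description Timeline overview of 3D+t image volume sequences.
status
enabled 1
```

Such a file would be proposed via a pull request against the Extensions Index repository, as described above.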


Figure 7.3: Deployment diagram for a 3D Slicer extension.


8 Results and Discussion

This chapter presents and discusses the results of this work.

8.1 Implementation of Requirement R10

Requirement R10 was defined as 'the software shall visualize multiple images of an MRI dataset for an overview'. This requirement was fulfilled with the implementation of a 3D Slicer extension that provides a timeline overview of an MRI dataset (refer to the previous chapter, Chapter 7, for more details).

8.1.1 Features

The extension provides two GUI components: one for the configuration of the timeline and the timeline itself.

The configuration component can be used to select the slice view in 3D Slicer that the timeline should be generated for, as well as to select the sequence that contains the time dimension data of a 3D+t image volume sequence.

The timeline shows an interactive visualization of the images in the 3D+t image volume sequence. The possible interactions include scrolling through the tabular visualization to see all available images. Single images in the timeline can be selected, which also changes the corresponding slice view and sequence so that they represent the selected slice and position in the sequence.

Figure 8.1 shows an example of another feature of the timeline. Segmentations that are created using the built-in 'Segment Editor' extension are displayed in the timeline. This is because the proof of concept extension is developed in conformity with MRML, yielding good interoperability with the other extensions of 3D Slicer. The feature is useful in clinical contexts where medical staff can quickly determine which images in a real-time MRI dataset are already segmented and which are not.


Figure 8.1: An example segmentation as it is displayed in 3D Slicer and in the timeline. The segmentation is indicated by the green, yellow and red areas, which are also visible in the timeline.

8.1.2 Loading Times of the Timeline

The timeline is not yet optimized for fast loading times. To test this, two example datasets were loaded into the timeline: 'CTP Cardio Volume Sequence' and 'CT Cardio Volume Sequence'. These datasets can be obtained directly through 3D Slicer using the 'Sample Data' module.

The 'CTP Cardio Volume Sequence' dataset contains a cardiac CT dataset that is divided into 61 slices. Each slice is further divided into recordings from 26 timesteps, yielding a total of 1586 images in that dataset. Each image has a resolution of 102 by 102 pixels.

The 'CT Cardio Volume Sequence' dataset is also a cardiac CT dataset, with 72 slices and 10 timesteps for a total of 720 images. Each image has a resolution of 128 by 104 pixels.

Although this work assumes image data to originate from MRI, CT data can also be used to test the behaviour of the timeline, because both methods of measurement produce 3D+t data with similar image resolutions.

For the tests, 'loading' the timeline is defined as the process that starts when the user clicks the 'Load Timeline' button and ends when all images are loaded in the timeline. This process explicitly excludes the time that the user takes to load a dataset and apply the required configuration prior to loading. Measuring this time yields the results presented in Figure 8.2.
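The measurement itself can be done in the manner sketched below, using Python's standard time module; `load_timeline` here is a stand-in for the work triggered by the button, not the extension's actual function:

```python
import time


def load_timeline():
    """Stand-in for the actual loading work triggered by 'Load Timeline'."""
    return sum(i * i for i in range(100_000))


start = time.perf_counter()                           # started on the button click
load_timeline()
elapsed_ms = (time.perf_counter() - start) * 1000.0   # ended when all images are loaded
print(f"{elapsed_ms:.2f} ms")
```

`time.perf_counter` is preferred over `time.time` here because it is a monotonic clock with high resolution, suited to measuring short durations.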


The loading times increase linearly with each loading attempt. For the CTP dataset, the loading time increases from 16933.87 ms in the first to 33935.74 ms in the seventh attempt. For the CT dataset, it increases from 11751.78 ms in the first to 15185.48 ms in the seventh attempt.

This suggests a bug that was not obvious before. When 3D Slicer is restarted between loading attempts, the measured time stays as low as in the first data points. If the MRML scene is closed after each loading attempt, the measured time is lower than for the previous attempt but substantially higher than after restarting 3D Slicer.

[Figure 8.2 consists of two line plots, 'Loading times for the CTP dataset' (y-axis up to 30000 ms) and 'Loading times for the CT dataset' (y-axis up to 14000 ms), each plotting the measured time against attempts 1-7.]

Figure 8.2: Time measurements of the timeline loading. The x-axes show the number ofthe measuring attempt, the y-axes show the measured time in milliseconds.


A possible reason for the bug is incorrect memory management. Especially the differences in memory management between C++ and Python could be the origin of the issue. The bug seems fixable when investigated thoroughly; the loading time of the timeline should then be as low as in the first data points.

Even when the bug is fixed, the loading time appears to be too high. Although requirement R10 does not specify this, it can be assumed that clinical and research uses of the extension demand faster loading times.

Once the timeline is loaded, all images are available in random access memory (RAM), providing very low access times that are not noticeable by the user. This enables the user to interact fluidly with the timeline.
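One way such linear growth can arise is sketched below: if references to objects from previous attempts are never released, each new load also touches all stale objects. This is a hypothesized mechanism for illustration only, not the confirmed cause of the observed behaviour:

```python
stale_widgets = []   # module-level cache that is never cleared between loads


def load_timeline(n_images):
    """Stand-in loader: creates widgets but keeps the old ones alive."""
    stale_widgets.extend(object() for _ in range(n_images))
    # Per-load bookkeeping iterates over everything still referenced,
    # so the amount of work grows with every attempt.
    return len(stale_widgets)


work = [load_timeline(100) for _ in range(7)]
print(work)  # → [100, 200, 300, 400, 500, 600, 700]
```

The per-attempt work grows linearly, matching the measured pattern; restarting the application (clearing the cache) would reset it to the first data point.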

8.2 3D Slicer as a Platform

3D Slicer was selected as the software platform for the development of a real-time MRI application. The software's source code is publicly available, which simplifies the development process if the documentation should fail to explain a needed detail.

3D Slicer provides features that cover most but not all requirements listed in Table 4.1 (see Table 8.1 for an overview of the covered requirements). The requirements R1, R3, R4, R6, R9, R11 and R13 are covered by 3D Slicer's core and the extensions that are part of the default build of 3D Slicer. Using 3D Slicer as a software development platform, the requirements R2, R5, R7, R8, R10 and R12 can be realized by developing one or more custom 3D Slicer extensions.

Table 8.1: Coverage of the software requirements by 3D Slicer and its extensions.

Requirement                                  Realization with 3D Slicer
R1 (MRI image metadata)                      Built-in DICOM extension
R2 (MRI dataset preview)                     Custom extension
R3 (Import DICOM)                            Built-in DICOM extension
R4 (Export DICOM)                            Built-in DICOM extension
R5 (Export segmentation to cvi42)            Custom extension
R6 (Export NIfTI)                            Built-in Segmentations extension
R7 (Synchronize MRI)                         Custom extension
R8 (Automatic segmentation)                  Custom extension
R9 (Display MRI slices)                      3D Slicer's core
R10 (MRI dataset overview)                   Custom extension
R11 (Playback MRI as video)                  Built-in Sequences extension
R12 (Calculate physiological parameters)     Custom extension
R13 (Edit the segmentation)                  Built-in Segment Editor extension
R14 (Integrate with syngo.via OpenApps)      3D Slicer as a platform


Lastly, requirement R14 can also be realized using 3D Slicer as the software development platform. The syngo.via OpenApps platform's only relevant requirement for software deployed on the platform is that the software is provided as an executable. Using 3D Slicer, a custom build could be supplied that contains the additionally developed custom extensions.

The licence of 3D Slicer explicitly 'contains no restrictions on legal uses of the software' [67]. It has to be kept in mind, though, that 3D Slicer is not approved for clinical use.


9 Time Planning and Work Packages

The work on this thesis was planned beforehand using project planning strategies.This resulted in a work breakdown structure (see Figure 9.1) and a precedencediagram (see Figure 9.2). The work packages have been fulfilled as depicted inTable 9.1.

Table 9.1: Effort spent for and degree of fulfillment of the work packages.

Work package            Effort (person days)    Degree of fulfillment
Requirements            1                       –
State of the art        5                       100%
Libraries/frameworks    5                       100%
Programming languages   1                       100%
Programming             15                      100%
Tests                   1                       50%
Documentation           1                       80%
Writing environment     2                       100%
Writing                 40                      100%
Registration            1                       100%
Proofreading            7                       100%
Printing                2                       100%
Submission              1                       100%

Some notes on the work packages:

Requirements: The requirements were determined in an iterative process, so a definitive degree of fulfillment cannot be stated for this work package.

Tests: Only manual tests were conducted.

Documentation: Important parts of the source code of the proof of concept extension are commented. For now, no dedicated documentation for the extension was created.


Figure 9.1: Work breakdown structure.

Figure 9.2: Precedence diagram.


10 Outlook

This work showed how 3D Slicer can be used as a software development platform in the domain of real-time MRI. The main advantage of 3D Slicer with regard to software development is the support for custom extensions that integrate into the existing architecture of 3D Slicer. Another advantage of 3D Slicer over other platforms is its developer-friendly ecosystem. Custom extensions, and especially scripted modules, can be prototyped and developed rapidly using the Python programming language in unison with the developer tools provided by 3D Slicer. The extensions can furthermore leverage the capabilities of the other available extensions and of software systems that are widely used in scientific programming, such as VTK and ITK.

This final chapter first proposes possible solutions for improving the performance of the developed proof of concept extension. After that, the next steps in the overarching research project are elaborated, as well as how the proof of concept extension is auxiliary in this process.

10.1 Performance Improvements

In the current implementation of the proof of concept extension, the images of each slice and timestep are copied before they are displayed in the timeline. This is because the images are produced by 3D Slicer's visualization pipeline for slices. This pipeline provides a single sink that holds one image at a time. To generate all images for all slices and timesteps, the pipeline has to be reconfigured and executed for each image. The resulting images therefore must be copied to be able to display all of them simultaneously in the timeline.

It might be possible to intercept the visualization pipeline with a custom filter that provides a sink holding all images of all slices and timesteps. This could result in faster loading of the images and an even better integration into the 3D Slicer event system. Changes to the pipeline or its sources would automatically be passed to the custom filter attached to the pipeline. The events that the custom filter receives could then be processed in a way that automatically updates the timeline.
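The copy-per-image behaviour can be illustrated with a small, self-contained sketch. This is plain Python with hypothetical class and function names, not the actual VTK-based implementation: because the pipeline exposes a single sink that is overwritten on every execution, each output must be copied before the pipeline is reconfigured for the next image.

```python
import copy

class SliceVisualizationPipeline:
    """Toy stand-in for the slice visualization pipeline (hypothetical):
    it exposes a single sink that holds only the most recent image."""

    def __init__(self):
        self.sink = None  # holds exactly one image at a time

    def configure_and_execute(self, slice_index, timestep):
        # In reality this would run VTK filters; here we fake an "image".
        self.sink = {"slice": slice_index, "timestep": timestep,
                     "pixels": [slice_index * 10 + timestep] * 4}

def collect_timeline_images(pipeline, n_slices, n_timesteps):
    """Run the pipeline once per (slice, timestep) and copy each result,
    because the next execution overwrites the shared sink."""
    images = []
    for s in range(n_slices):
        for t in range(n_timesteps):
            pipeline.configure_and_execute(s, t)
            images.append(copy.deepcopy(pipeline.sink))  # copy is mandatory
    return images

pipeline = SliceVisualizationPipeline()
timeline = collect_timeline_images(pipeline, n_slices=2, n_timesteps=3)
print(len(timeline))  # 6 images, one per slice/timestep combination
```

A custom filter with a multi-image sink, as proposed above, would make the `copy.deepcopy` step unnecessary.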


The images could also be loaded asynchronously. This would not decrease the actual loading time of the images, though. Rather, the user would perceive the loading times as shorter, since they can already interact with the timeline before all images are loaded.

Another way of optimizing the timeline in terms of performance is to not load all images at once. Currently, the algorithm steps through each slice and, for each slice, through each timestep to generate the images. Since not all images but only a subset of about 20–30 images are displayed to the user at once, the timeline could be optimized to only render these images.
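The windowed-rendering idea could be sketched as follows (a minimal illustration with hypothetical names; a real timeline would render widgets rather than return index pairs): only the (slice, timestep) pairs inside the currently visible window are generated, and scrolling merely shifts the window.

```python
def visible_image_indices(n_slices, n_timesteps, first_visible, window_size):
    """Return only the (slice, timestep) pairs inside the visible window,
    instead of eagerly generating all n_slices * n_timesteps images."""
    all_indices = [(s, t) for s in range(n_slices) for t in range(n_timesteps)]
    return all_indices[first_visible:first_visible + window_size]

# A hypothetical dataset of 14 slices x 120 timesteps holds 1680 images,
# but with a window of 25 only 25 images need to be rendered at a time.
window = visible_image_indices(14, 120, first_visible=0, window_size=25)
print(len(window))  # 25
```

When the user scrolls, only the images that newly enter the window would have to be generated; previously rendered ones could be cached or discarded.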

10.2 Next Steps in the Overarching Research Project

As a result of this work, 3D Slicer will be used to implement some of the remaining requirements listed in Table 4.1. This is done by developing new extensions for 3D Slicer or by extending the proof of concept extension. New extensions can, of course, make use of the features of the timeline.

The proof of concept extension is also helpful for the development of new 3D Slicer extensions. For example, it can serve as a template or guideline for the development process.

At the time of writing, requirement R7 is being implemented at DLR. This requirement states that 'the software shall synchronize MRI image sequences with additional measurements'. As a reminder, real-time MRI of moving anatomic structures like the human heart introduces a problem: the slice plane of the resulting images, in reference to the patient's body, moves in each timestep. This is because the body is moving, e.g. due to breathing. Thus, combining all slices of one timestep into a 3D model would result in a distorted representation of the heart. To address this issue, external measurements like electrocardiograms can be used.

This is where the proof of concept extension can be helpful: the results of synchronizing the MRI data with external measurements can be verified using the timeline. Since the timeline displays an overview of all images in the MRI dataset, the developer can quickly determine whether the synchronization was successful. If it was not, the developer might still be able to deduce from the timeline how the synchronization algorithm could be improved.

The successful synchronization of real-time MRI image data with external measurements has another important side effect. Since cardiological diagnostics currently do not take breathing into account when considering measurements of the cardiac cycle, new insights into the physiology of the heart could be gained.


Bibliography

[1] W. C. Röntgen, Ueber eine neue Art von Strahlen, German, 1895.

[2] J. D. Roberts, Nuclear Magnetic Resonance: Applications to Organic Chemistry. McGraw-Hill Book Company, Inc., 1959.

[3] P. C. Lauterbur, 'Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance', Nature, vol. 242, no. 5394, pp. 190–191, 1973, issn: 1476-4687. doi: 10.1038/242190a0.

[4] J. Gordon Betts, K. A. Young, J. A. Wise et al., Anatomy and Physiology. OpenStax, 2013. [Online]. Available: https://openstax.org/books/anatomy-and-physiology/pages/1-introduction (visited on 24 January 2022).

[5] J. Frahm, A. Haase and D. Matthaei, 'Rapid NMR imaging of dynamic processes using the FLASH technique', Magnetic Resonance in Medicine, vol. 3, no. 2, pp. 321–327, April 1986, issn: 0740-3194. doi: 10.1002/mrm.1910030217.

[6] R. R. Edelman, 'The History of MR Imaging as Seen through the Pages of Radiology', Radiology, vol. 273, no. 2S, pp. S181–S200, 2014. doi: 10.1148/radiol.14140706.

[7] M. Uecker, S. Zhang, D. Voit, A. Karaus, K.-D. Merboldt and J. Frahm, 'Real-time MRI at a resolution of 20 ms', NMR in Biomedicine, vol. 23, no. 8, pp. 986–994, October 2010, issn: 0952-3480. doi: 10.1002/nbm.1585.

[8] L. E. Humes, T. A. Busey, J. C. Craig and D. Kewley-Port, 'The effects of age on sensory thresholds and temporal gap detection in hearing, vision, and touch', Attention, Perception & Psychophysics, vol. 71, no. 4, pp. 860–871, May 2009, issn: 1943-3921. doi: 10.3758/APP.71.4.860.

[9] M. Uecker, S. Zhang and J. Frahm, 'Nonlinear inverse reconstruction for real-time MRI of the human heart using undersampled radial FLASH', Magnetic Resonance in Medicine, vol. 63, no. 6, pp. 1456–1462, 2010. doi: 10.1002/mrm.22453.


[10] S. Oeltze, 'Visual Exploration and Analysis of Perfusion Data', Ph.D. dissertation, Fakultät für Informatik der Otto-von-Guericke-Universität Magdeburg, 2010.

[11] H. Knipe and F. Deng. 'Cardiac imaging planes'. (2016), [Online]. Available: https://doi.org/10.53347/rID-43538 (visited on 22 February 2022).

[12] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen and K. H. Maier-Hein, 'nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation', Nature Methods, vol. 18, no. 2, pp. 203–211, 2021, issn: 1548-7105. doi: 10.1038/s41592-020-01008-z.

[13] W. Koslow, KI-basierte Auswertung von Echtzeit Kardio-Magnetresonanztomographie (MRT), personal communication, 2021.

[14] Python Software Foundation. 'python.org'. (2022), [Online]. Available: https://www.python.org/ (visited on 1 March 2022).

[15] Python Software Foundation. 'PyPI · The Python Package Index'. (2022), [Online]. Available: https://pypi.org/ (visited on 1 March 2022).

[16] E. Ampomah, E. Mensah and A. Gilbert, 'Qualitative Assessment of Compiled, Interpreted and Hybrid Programming Languages', Communications on Applied Electronics, vol. 7, pp. 8–13, October 2017. doi: 10.5120/cae2017652685.

[17] Kitware, Inc. 'VTK - The Visualization Toolkit'. (2022), [Online]. Available: https://vtk.org/ (visited on 20 January 2022).

[18] W. Schroeder, K. Martin, B. Lorensen and Kitware, Inc., The Visualization Toolkit: An Object-oriented Approach to 3D Graphics. Kitware, Inc., 2006, isbn: 9781930934191. [Online]. Available: https://books.google.de/books?id=rx4vPwAACAAJ (visited on 23 January 2022).

[19] Kitware, Inc., The VTK User's Guide, 11th ed. Kitware, Inc., 2010, isbn: 9781930934238. [Online]. Available: https://www.kitware.com/products/books/VTKUsersGuide.pdf (visited on 2 March 2022).

[20] I. Bitter, R. Van Uitert, I. Wolf, L. Ibáñez and J.-M. Kuhnigk, 'Comparison of four freely available frameworks for image processing and visualization that use ITK', IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 3, pp. 483–493, 2007, issn: 1077-2626. doi: 10.1109/TVCG.2007.1001.

[21] C. Hansen and C. Johnson, Eds., The Visualization Handbook. Elsevier Science, 2011, isbn: 9780080481647. [Online]. Available: https://books.google.de/books?id=mA8ih1AieaYC (visited on 23 January 2022).


[22] VTK developers. 'vtk · PyPI'. (2022), [Online]. Available: https://pypi.org/project/vtk/ (visited on 2 March 2022).

[23] The Qt Company. 'Qt | Cross-platform software development for embedded & desktop'. (2021), [Online]. Available: https://www.qt.io/ (visited on 1 March 2022).

[24] Qt for Python Team. 'PyQt5 · PyPI'. (2022), [Online]. Available: https://pypi.org/project/PyQt5/ (visited on 2 March 2022).

[25] Qt for Python Team. 'PySide2 · PyPI'. (2022), [Online]. Available: https://pypi.org/project/PySide2/ (visited on 2 March 2022).

[26] German Aerospace Center. 'DLR - Institute of Aerospace Medicine - Home'. (2022), [Online]. Available: https://www.dlr.de/me/en (visited on 22 January 2022).

[27] P. Bourque and R. E. Fairley, Eds., Guide to the Software Engineering Body of Knowledge, V3.0, IEEE Computer Society, 2014. [Online]. Available: https://www.swebok.org (visited on 30 January 2022).

[28] Siemens Healthcare GmbH. 'syngo.via OpenApps'. (2022), [Online]. Available: https://www.siemens-healthineers.com/de/medical-imaging-it/advanced-visualization-solutions/syngovia-openapps (visited on 29 January 2022).

[29] German Aerospace Center. 'DLR - DLR Portal'. (2022), [Online]. Available: https://www.dlr.de/EN/organisation-dlr/dlr/dlr.html (visited on 22 January 2022).

[30] German Aerospace Center. 'DLR - Institute for Software Technology - Home'. (2022), [Online]. Available: https://www.dlr.de/sc/en (visited on 22 January 2022).

[31] M. McCormick, X. Liu, L. Ibanez, J. Jomier and C. Marion, 'ITK: enabling reproducible research and open science', Frontiers in Neuroinformatics, vol. 8, 2014, issn: 1662-5196. doi: 10.3389/fninf.2014.00013.

[32] L. G. Brown, 'A Survey of Image Registration Techniques', ACM Computing Surveys, vol. 24, no. 4, pp. 325–376, December 1992, issn: 0360-0300. doi: 10.1145/146370.146374.

[33] The MathWorks, Inc. 'MATLAB - MathWorks - MATLAB & Simulink'. (2022), [Online]. Available: https://www.mathworks.com/products/matlab.html (visited on 7 February 2022).


[34] The MathWorks, Inc. 'Image Processing Toolbox - MATLAB'. (2022), [Online]. Available: https://www.mathworks.com/products/image.html (visited on 7 February 2022).

[35] Kitware, Inc. 'ParaView'. (2022), [Online]. Available: https://www.paraview.org (visited on 4 February 2022).

[36] Circle Cardiovascular Imaging, Inc. 'Cardiac MRI and CT Software - Circle Cardiovascular Imaging'. (2022), [Online]. Available: https://www.circlecvi.com/ (visited on 7 February 2022).

[37] Circle Cardiovascular Imaging, Inc., cvi42 Version 5.13 User Manual. Circle Cardiovascular Imaging, Inc., 2021. [Online]. Available: https://www.circlecvi.com/docs/product-support/manuals/cvi42_user_manual_v5.13.pdf (visited on 8 February 2022).

[38] The British Standards Institution, EC Certificate - Full Quality Assurance System, In respect of: Design and manufacture of multiplatform Cardiovascular Magnetic Resonance (MR) and Computed Tomography (CT) Imaging software applications in DICOM standard format. 2019. [Online]. Available: https://www.circlecvi.com/site/assets/files/certificates/bsi-ce-certificate-for-cvi42-ce-539277.pdf (visited on 8 February 2022).

[39] Siemens Healthcare GmbH. 'Siemens Healthineers Digital Marketplace'. cvi42. (2022), [Online]. Available: https://marketplace.teamplay.siemens.com/app/detail/OpenApps-Circle-CVI42 (visited on 8 February 2022).

[40] I. Wolf, M. Vetter, I. Wegner et al., 'The medical imaging interaction toolkit', Medical Image Analysis, vol. 9, no. 6, pp. 594–604, December 2005, issn: 1361-8415. doi: 10.1016/j.media.2005.04.005.

[41] The Common Toolkit Developers. 'CTK - The Common Toolkit'. (2022), [Online]. Available: https://commontk.org/index.php/Main_Page (visited on 8 February 2022).

[42] Eclipse Foundation. 'OSGi Working Group | The Eclipse Foundation'. (2022), [Online]. Available: https://www.osgi.org/ (visited on 9 February 2022).

[43] A. Fedorov, R. Beichel, J. Kalpathy-Cramer et al., '3D Slicer as an image computing platform for the Quantitative Imaging Network', Magnetic Resonance Imaging, vol. 30, no. 9, pp. 1323–1341, November 2012, issn: 0730-725X. doi: 10.1016/j.mri.2012.05.001.


[44] BWH and the 3D Slicer contributors. 'Commercial Use | 3D Slicer'. (2022), [Online]. Available: https://www.slicer.org/commercial-use.html (visited on 9 February 2022).

[45] BWH and the 3D Slicer contributors. '3D Slicer image computing platform'. (2022), [Online]. Available: https://www.slicer.org/ (visited on 19 January 2022).

[46] Kitware, Inc. 'VTK: vtkDICOMImageReader Class Reference'. (2022), [Online]. Available: https://vtk.org/doc/nightly/html/classvtkDICOMImageReader.html (visited on 6 February 2022).

[47] M. Malaterre and The GDCM Contributors. 'GDCM Wiki'. (2022), [Online]. Available: http://gdcm.sourceforge.net/wiki/index.php/Main_Page (visited on 8 February 2022).

[48] The OFFIS computer science institute. 'dicom.offis.de - DICOM Software made by OFFIS - DCMTK - DICOM Toolkit'. (2022), [Online]. Available: https://dicom.offis.de/dcmtk.php.en (visited on 6 February 2022).

[49] A. Lasso and M. Brudfors, SlicerDebuggingTools, 2021. [Online]. Available: https://github.com/SlicerRt/SlicerDebuggingTools (visited on 17 February 2022).

[50] C. Pinter, A. Lasso and G. Fichtinger, 'Polymorph segmentation representation for medical image computing', Computer Methods and Programs in Biomedicine, vol. 171, pp. 19–26, 2019, issn: 0169-2607. doi: 10.1016/j.cmpb.2019.02.011.

[51] Slicer Community. 'Image Segmentation - 3D Slicer documentation'. (2022), [Online]. Available: https://slicer.readthedocs.io/en/v4.11/user_guide/image_segmentation.html (visited on 20 February 2022).

[52] G. Wills, Visualizing Time: Designing Graphical Representations for Statistical Data (Statistics and Computing), 1st ed. Springer, New York, NY, 2012, isbn: 9780387779065. doi: 10.1007/978-0-387-77907-2. [Online]. Available: https://link.springer.com/book/10.1007/978-0-387-77907-2 (visited on 18 February 2022).

[53] OpenShot Studios, LLC. 'OpenShot Video Editor'. (2022), [Online]. Available: https://www.openshot.org/ (visited on 17 February 2022).

[54] USATechDude. 'Main Windows of OpenShot Video Editor v2.6.1'. (2021), [Online]. Available: https://commons.wikimedia.org/wiki/File:OpenShot_Video_Editor_v2.6.1.png (visited on 17 February 2022).


[55] Adobe Systems Software Ireland Limited. 'Adobe Premiere Pro'. (2022), [Online]. Available: https://www.adobe.com/de/products/premiere.html (visited on 17 February 2022).

[56] MAGIX Software GmbH. 'MAGIX Video Pro X 365'. (2022), [Online]. Available: https://www.magix.com/de/videos-bearbeiten/video-pro-x/ (visited on 17 February 2022).

[57] Blackmagic Design Pty. Ltd. 'DaVinci Resolve 17'. (2022), [Online]. Available: https://www.blackmagicdesign.com/products/davinciresolve/ (visited on 17 February 2022).

[58] E. Gamma, Entwurfsmuster: Elemente wiederverwendbarer objektorientierter Software (Professionelle Softwareentwicklung), German. Addison-Wesley, 2004, isbn: 9783827321992. [Online]. Available: https://books.google.de/books?id=-GXxUV0L6XsC (visited on 14 February 2022).

[59] M. Woehlke, 3D Slicer extension: Extension Wizard, 2014.

[60] Slicer Community. 'Modules | 3D Slicer Wiki'. version 4.10. (2019), [Online]. Available: https://www.slicer.org/wiki/Documentation/4.10/Developers/Modules (visited on 10 February 2022).

[61] Slicer Community. 'MRML Overview - 3D Slicer documentation'. version v4.11. (2022), [Online]. Available: https://slicer.readthedocs.io/en/v4.11/developer_guide/mrml_overview.html (visited on 10 February 2022).

[62] Slicer Community. 'Coordinate systems'. (2021), [Online]. Available: https://www.slicer.org/wiki/Coordinate_systems (visited on 14 February 2022).

[63] J. L. Weber, DeepArc 3D Slicer Timeline extension, version v1.0.0, March 2022. doi: 10.5281/zenodo.6376223. [Online]. Available: https://github.com/DLR-SC/slicer-timeline (visited on 22 March 2022).

[64] The Qt Company, Ltd. 'Qt Designer Manual'. (2022), [Online]. Available: https://doc.qt.io/qt-5/qtdesigner-manual.html (visited on 15 February 2022).

[65] G. van Rossum, B. Warsaw and N. Coghlan. 'PEP 8 – Style Guide for Python Code'. (2001), [Online]. Available: https://www.python.org/dev/peps/pep-0008/ (visited on 15 February 2022).

[66] Slicer Community. '3D Slicer Extensions Index'. (2022), [Online]. Available: https://github.com/Slicer/ExtensionsIndex (visited on 12 February 2022).


[67] Slicer Community. 'About 3D Slicer - 3D Slicer documentation'. (2020), [Online]. Available: https://slicer.readthedocs.io/en/v4.11/user_guide/about.html (visited on 21 February 2022).
