Realistic methods for simulating Radio Propagation


Faculteit Toegepaste Ingenieurswetenschappen
Elektronica-ICT

Realistic methods for simulating Radio Propagation

Proefschrift voorgelegd tot het behalen van de graad van Doctor in de Toegepaste Ingenieurswetenschappen aan de Universiteit Antwerpen te verdedigen door

Ben Bellekens

version: June 6, 2018

Prof. dr. ing. Maarten Weyn
Prof. dr. Rudi Penne

Antwerpen, 2018

Jury

Chairman
Prof. dr. Peter Hellinckx

Supervisors
Prof. dr. ing. Maarten Weyn
Prof. dr. Rudi Penne

Members
Prof. dr. Paul De Meulenaere
Prof. dr. ir. Wout Joseph
Prof. dr. Widyawan

Contact

Ben Bellekens
IDLab, Faculty of Applied Engineering, University of Antwerp
Groenenborgerlaan 171, 2020 Antwerpen, Belgium
M: [email protected]
T: +32 3265 1682

Copyright © 2018 Ben Bellekens
All rights reserved.

ISBN 978-90-572-8588-2
Wettelijk depot D/2018/12.293/15

Often when you think you’re at the end of something, you’re at the beginning of something else.

Fred Rogers

Preface

Finishing this thesis and obtaining a Ph.D. degree is by far the greatest satisfaction that I have ever felt. Doing academic research is generally seen as a difficult task that is carried out by one person, and it takes a lot of courage, motivation, and perseverance to fulfill all goals. Nevertheless, doing doctoral research involves more than this. In my experience, when I look at the bigger picture, a Ph.D. includes the idea of obtaining all competences of a business chain.

These four years would not have been possible without the continuous support of many people. First, I would like to express my gratitude to my supervisors Maarten Weyn and Rudi Penne for the opportunity of doing a Ph.D., the many technical talks, and the critical insights that guided me in my research. By continually rephrasing a question in a critical fashion, they gave me a better understanding of the problem. Moreover, by combining my own interests and competences, I had the pleasure to participate in different industrial projects where we needed to work as a team in order to deliver a proof of concept. I am very excited and thankful to be part of this team and would like to thank my colleagues of both IDLab and CoSys-Lab for the different and endless talks about any kind of subject. More personally, I would like to thank Dragan, Rafael, Glenn, Noori, Stijn, and Michiel for supporting me and being part of our team. Moreover, I want to thank Michelle for proofreading my thesis.

Finally, I want to thank my parents Albert and Chris, my sister Elly and her husband Gert, and my friends from Jamaswapi and Aspen for encouraging me during this period. Furthermore, I want to express my special thanks to Charlotte, Annouck, Bart, Magalie, Arne, and Kris for the countless talks and endless support during this period.

To all of you, Thank You!

June 2018, Antwerp
Ben Bellekens


Abstract

In order to localize wireless devices in buildings, different wireless communication systems and setups are used nowadays. Furthermore, to ensure good connectivity and coverage in an indoor environment, the received signal strength of a transmitted signal is simulated at different locations. Current radio propagation simulators use either 2D or 3D maps of the environment to predict the signal strength of a transmitting device. These simulators are not able to predict the signal strength at a specific location in a realistic way. Due to the walls, ceiling, floor, and objects that are located in an environment, different wave phenomena occur and cause multipath, which is hard to incorporate in a simulation.

To predict the strength of an electromagnetic signal, both empirical and deterministic propagation loss models are being used. On the one hand, empirical propagation loss models are capable of simulating the line-of-sight (LoS) signal strength. On the other hand, deterministic propagation loss models are able to simulate, on top of the LoS signal strength, the non-LoS signal strength or multipath. However, these deterministic propagation loss models cannot be used for accurate signal strength simulations without knowledge of the real environment. At this moment, there are no automated systems that are able to model the real geometry of an environment, including all objects, in a form that can be used to apply a deterministic propagation loss model. Consequently, to investigate the influence that objects have on the signal strength between two devices, all objects need to be identified together with the environment. One of the problems in creating a 2D or 3D map of an environment is reducing the influence of sensor noise. At this moment, the literature is not able to deal with this sensor noise, which makes it very complex to create an environment model that can be used to apply a deterministic propagation model.

The research that is described in this thesis covers an automated system that is able to create a model of the real geometry of an environment, which is used by a deterministic propagation loss model to predict the signal strength of a wireless communication system. This research is divided into three parts. The first part creates an environment model by incorporating the different unknown variables and uncertainties. In this part, different methods were investigated that are capable of aligning two consecutive point clouds. In order to investigate these methods, a survey study was done in which a dataset of different point clouds was captured and used to compare the results of all methods. Furthermore, to create an environment model that can be used for simulating the signal strength, MapFuse was developed. This algorithm is capable of merging an initial CAD model of an environment with the result of a 3D-SLAM algorithm. The second part of this thesis holds the implementation of a ray-launching propagation loss model that simulates the signal strength of a wireless communication system. This ray-launching propagation loss model uses an environment model that was created by a moving robot. Subsequently, the model enables the computation of multipath, which is caused by reflections, transmissions, and constructive and destructive wave effects. The third part of the thesis holds an automated validation approach to identify the accuracy and precision of the ray-launching propagation loss model. This approach makes it possible to measure the signal strength of a communication system while the environment model is being created. Since the environment model and the trajectory of the robot are relative to the first location of the robot, all receiver locations are known. Because of this, a large-scale validation can be made with respect to each receiver and transmitter location and the resolution of the environment model. Finally, the added value of the propagation loss model for localization algorithms such as Angle of Arrival localization, Radio Tomographic Imaging, and signal strength localization is investigated.


Samenvatting

In order to localize wireless devices in buildings, various wireless communication technologies and setups are used nowadays. To guarantee optimal reception throughout the entire environment, the signal strength of the transmitted signals in the building is typically simulated. Current simulation methods use a two- and three-dimensional (2D) floor plan of the building to simulate the field strength indoors. These methods do not predict the received signal strength at a given location in a sufficiently realistic way, because it is difficult to take fixed and moving objects into account, as well as the electrical attenuation caused by walls, floors, ceilings and objects.

To compute the attenuation of an electromagnetic signal, deterministic and empirical propagation models are used. On the one hand, empirical propagation models are only able to compute the line-of-sight (LoS) attenuation; on the other hand, deterministic propagation models can compute the non-LoS attenuation on top of the LoS attenuation. However, these deterministic models cannot be used in practice without sufficient knowledge of the disturbances caused by the objects in the environment. To date, there are no automated systems that map an environment completely in order to compute the attenuation of a wireless signal based on that map. To investigate the influence of objects in an RF simulation, the different objects of an environment have to be identified in a realistic 3D or 2D model. Current algorithms that map objects and the space based on the traveled path make various errors due to the use of point clouds. Because these algorithms do not take sensor noise into account, the problem of creating an accurate map is very complex.

During this research, I searched for a way to build an automated system that can compute the signal strength of a wireless communication system for a real environment, taking into account all objects that are present in that environment. This research is divided into three parts. The first part builds the environment model by taking the different parameters and uncertainties into account. Within this part, the methods that are able to compute a geometric transformation between two overlapping point clouds were investigated. To do so, an extensive study was carried out in which a dataset of point clouds was used to compare the results of each algorithm. Subsequently, MapFuse was developed in this part. This algorithm is able to extend an initial CAD model with the map computed by a 3D-SLAM algorithm. In the second part of the doctoral research, a ray-launching propagation model was developed that can compute the signal strength based on an environment model, taking into account the additional attenuation caused by reflections and transmissions. By using a ray-launching algorithm, the propagation model is able to take the constructive and destructive wave effects into account. In the third part of this research, the developed propagation model was validated by means of an automated robot system. This system can perform wireless measurements and map the locations of these measurements relative to the map and the traveled path. The model was validated with respect to the signal strength, the computation time and the resolution of the environment. Furthermore, it was investigated what added value this research can offer to present-day localization algorithms such as Angle of Arrival (AoA), Radio Tomographic Imaging (RTI), and signal strength localization.


Contents

Preface

Abstract

Samenvatting

Contents

List of Figures

List of Tables

List of Publications
    Patents
    Journal papers
    Conference papers

1 Introduction
    1.1 Motivations and Research Objectives
    1.2 Contributions
    1.3 Outline

2 State-of-the-art
    2.1 SLAM
        2.1.1 Mathematical Tools
            2.1.1.1 Homogeneous Transformations
            2.1.1.2 Least-Squares Minimization
            2.1.1.3 Loop Closing
        2.1.2 Registration Algorithms
            2.1.2.1 Principal Component Analysis
            2.1.2.2 Singular Value Decomposition
            2.1.2.3 Iterative Closest Point
        2.1.3 GMapping
        2.1.4 Large-Scale Direct SLAM
        2.1.5 RGB-D SLAM
    2.2 Spatial Data Structures
        2.2.1 Quadtree
        2.2.2 Octree
        2.2.3 Binary Addressing
        2.2.4 Octomap
    2.3 Wave Propagation Modeling
        2.3.1 Deterministic Propagation Loss Model
        2.3.2 Empirical Propagation Loss Model
    2.4 Conclusion

3 Realistic Environment Modeling
    3.1 Introduction
    3.2 Application Domains
        3.2.1 Robotics
        3.2.2 Healthcare
    3.3 Benchmark Survey Results
        3.3.1 Dataset
        3.3.2 Robustness
        3.3.3 Precision
    3.4 MapFuse System
        3.4.1 Dataset
        3.4.2 SLAM
    3.5 MapFuse Results
        3.5.1 LSD SLAM
        3.5.2 RGB-D SLAM
        3.5.3 Optimization Results
    3.6 Discussion
    3.7 Conclusion

4 Realistic Indoor Ray-launching Propagation Loss Model
    4.1 Introduction
    4.2 Methods
        4.2.1 Line Segment Extraction
        4.2.2 Device Configuration
        4.2.3 Ray Calibration
        4.2.4 Electrical Field Computation
        4.2.5 Application Methods
        4.2.6 Complexity Analysis
    4.3 Materials
        4.3.1 Validation Model
        4.3.2 Robot
        4.3.3 Test Environments
    4.4 Results and Discussion
        4.4.1 Office Environment 1
            4.4.1.1 Resolution
            4.4.1.2 Reflections
            4.4.1.3 Validation
            4.4.1.4 Performance
        4.4.2 Office Environment 2
            4.4.2.1 Resolution
            4.4.2.2 Reflections
            4.4.2.3 Validation
            4.4.2.4 Performance
    4.5 Conclusion

5 Indoor RF-propagation Applications
    5.1 Signal based Localization
    5.2 Device-free Localization & Radio Tomographic Imaging
        5.2.1 The RTI-algorithm
        5.2.2 The Weighting Matrix
    5.3 Angle of Arrival Localization
    5.4 Conclusion

6 Outdoor IEEE 802.11ah Range Characterization using Validated Propagation Models
    6.1 Introduction
    6.2 IEEE 802.11ah Range Characterization
    6.3 Measurement Methodology
    6.4 Path Loss Models
    6.5 Evaluation and Results
        6.5.1 Path Loss Model Comparison
        6.5.2 MAC-layer Performance
    6.6 Conclusion

7 Conclusion
    7.1 Contributions
    7.2 Future Work

Bibliography

List of Figures

1.1 Visual illustration of the PhD structure and the indication of the individual contributions.

2.1 2D illustration of a trajectory that contains a set of poses x and motion vectors u.
2.2 The graphical model of the full-SLAM principle where X, U, and Z illustrates the robot poses, the relative robot motions, and the sensor measurements respectively [Thrun et al., 2005b]
2.3 The graphical model of the online-SLAM principle where X, U, and Z illustrates the robot poses, the relative robot motions, and the sensor measurements respectively [Thrun et al., 2005b]
2.4 ICP Least square approach.
2.5 PCA alignment from source to target.
2.6 ICP overview scheme.
2.7 ICP alignment based on a point to point approach.
2.8 ICP alignment based on a point to surface approach.
2.9 The blue curve illustrates the Huber Loss function and the green curve represents the L1-optimization curve.
2.10 This figure illustrates a simplistic schematic of the LSD SLAM algorithm.
2.11 This figure illustrates a simplistic schematic of the RGB-D SLAM algorithm.
2.12 This figure illustrates a Quadtree hierarchical data structure where the gray cells indicate the route toward the two gray indicated occupied leaves.
2.13 Example of modeled environment in a Quadtree data structure. The Red cell indicates the occupied cells.
2.14 A visual representation of the Octree data structure [Hornung et al., 2013]. The black leaf node represents occupied space, whereas grey nodes indicate free space. Unknown space is marked by transparent nodes.
2.15 Local binary address of a Quadtree Node.
2.16 Binary addressing example, gray leafs represents the occupancy of objects.
2.17 This illustrates the space wave mechanism, where d is distance between both transmitter Tx and receiver Rx
2.18 Difference between empirical and ray-based deterministic propagation loss model where dotted lines are representing the environment model and the full lines the individual rays.

3.1 The mobile Pioneer-3dx robot with a mounted Microsoft Kinect Camera, Laser scanner and Sonar sensor.
3.2 Occupancy grid map from SLAM approach and the smoothed traveled path
3.3 The benchmark robustness scheme includes a set of two 3D point clouds. Each set contains a source point cloud Si, a target point cloud ti, and a transformation. Every point cloud is indicated as an individual marker on the time line t.
3.4 This figure shows the comparison between the number of registration iterations and the time logarithmic in red and the comparison between the number of iterations and the fitness score in green for the average of ICP point-to-point (ICP), SVD applied before ICP (SVD ICP), ICP point-to-surface (ICP pts), ICP non-linear (ICP nl) and Generalized ICP (GICP)
3.5 The horizontal axis represents the number of registration iterations and the vertical axis represents the sum of the average and the variance for ICP point-to-point (ICP), SVD applied before ICP (SVD ICP), ICP point-to-surface (ICP pts), ICP non-linear (ICP nl) and Generalized ICP (GICP)
3.6 The benchmark precision scheme
3.7 The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the x direction
3.8 The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the y direction
3.9 The x-axis represents the different methods and the y-axis represents the variance of the precision test in the z direction
3.10 The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the yaw direction
3.11 The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the pitch direction
3.12 The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the roll direction
3.13 With MapFuse, a dataset that is recorded in a simulated or real environment is used as input for a SLAM algorithm. In the map optimisation component, the resulting SLAM point cloud is merged with an initial model which was modeled based on exact dimensions of the environment. The final MapFuse result is a complete volumetric model of the environment.
3.14 An initial guess point cloud will be used as to complete the unfinished SLAM point cloud.
3.15 With Gazebo, we are able to simulate quadcopter flight and sensor measurements in order to gather an ideal dataset. This dataset was used to evaluate which SLAM algorithm was most suitable for our approach.
3.16 In both simulation and reality, we conducted tests with a common webcamera (3.16a), a wide field-of-view webcamera (3.16b) and a Microsoft Kinect (3.16c).
3.17 Basic schematic of the optimization block of our system.
3.18 In figure 3.18a, the initial guess (IG) is iteratively merged with the complete SLAM point cloud (SL). A balance between map completeness and detail is regulated by the amount of IG or SL point clouds we merge. Figure 3.18b illustrates another option, where a single IG is merged with partial online SLAM clouds (SLn). The online merging process is finished when SLAM has completely processed the dataset. The difference between both merging methods is discussed in detail in section 3.5.3
3.19 LSD SLAM result
3.20 For our research, we mounted a Kinect camera to an Erle-Copter
3.21 We assessed the RGB-D SLAM algorithm in several environments. First, we tested indoor environments as shown in figures 3.21a, 3.21b and 3.21c. Second, we applied the algorithm to map an industrial train cart (figures 3.21d and 3.21e).
3.22 RGB-D SLAM X-axis precision
3.23 RGB-D SLAM Y-axis precision
3.24 RGB-D SLAM Z-axis precision
3.25 An OctoMap created from our RGB-D SLAM result of figure 3.21c.
3.26 Iterative merging of our initial guess model (IGM) with the complete SLAM point cloud (SL).
3.27 Online merging
3.28 Optimisation results for our own datasets. For figure 3.28a, online merging was applied. In figure 3.28c, we conducted iterative merging of 2 point clouds.

4.1 overview ray launching propagation loss model
4.2 region growing based line extraction.
4.3 clustering + line Segment extraction
4.4 Reflection
4.5 Reflection binary tree data structure + ray order illustration
4.6 Example of a ray calibration.
4.7 example single ray propagation
4.8 Result of complexity analysis where the x-axis represents the Quadtree depth, the right y-axis represents the average number of cells that were traversed by the rays, the left y-axis represents the computation time in seconds
4.9 Result of complexity analysis where the x-axis represents the Quadtree depth, the right y-axis represents the average number of line segments in the occupied cells that were traversed by the rays, the left y-axis represents the computation time in seconds
4.10 overview of the validation approach
4.11 Picture of the Pioneer 3-DX robot that was used for validating the propagation model.
4.12 Office environment that was used to validate the propagation loss model.
4.13 Hardware that is used in the CPM environment for transmitter and receiver
4.14 iTower Gent
4.15 Hardware that is used in the iTower Gent environment for transmitter and receiver
4.16 RMSE validation with phases shifts included of office Environment 1
4.17 Local constructive and destructive phenomena
4.18 RSS Heatmap of transmitter that is located at position 2
4.19 RMSE validation of the first office environment where eight neighbours are included.
4.20 Reflection benchmark for office environment 1
4.21 Environment Model with transmitter and receiver locations
4.22 Validation of the different transmitters
4.23 The performance of a simulation where 3200 rays are launched with different resolution levels.
4.24 RMSE validation of the second office environment where no phase shifts are included
4.25 RMSE validation of the second office environment where eight neighbors are included
4.26 Difference between a real environment model after SLAM and a environment based on the outer boundaries of the real environment.
4.27 RMSE validation between a real environment model after SLAM and a environment based on the outer boundaries of the real environment.
4.28 Reflection benchmark for office environment 2
4.29 Environment Model where blue dots are indicating the transmitter locations and the red stars are indicating the transmitter locations. Next, the green cells represent objects and the red cells represent a wall
4.30 Validation of each individual transmitter
4.31 Heatmap of transmitter number two
4.32 The performance of a simulation where 2700 rays are launched with different resolution levels.

5.1 Results of three simulations where the reflection recursion level was configured to 0, 1, and 5.
5.2 Schematic overview of a test environment which consists of 2 rooms connected by a hallway. The red asterisks indicate the locations of the nodes
5.3 RTI image and overview of the deployment environment.
5.4 Visual overview of the difference between the LoS weighting matrix and the weighting matrix where the reflections are included
5.5 Overview of the tomographic environment system together with the RTI location estimation where the influence of multipath is incorporated
5.6 Simplistic environment model where the red diamonds indicates the receiver locations and the black squares indicates the transmitter locations.
5.7 Preliminary results of AoA localization based on propagation loss model simulations where reflection recursion level of 0, 1, and 5 was configured

6.1 Transmitter (cross) and receiver locations (dots) of both LoS and non-LoS scenarios applied in a macro and pico deployment.
6.2 Correlation between the measurements and path loss models in terms of RSS and as a function of distance
6.3 Packet loss and throughput as a function of distance for the actual radio hardware as well as ideal case

7.1 Visual illustration of the PhD structure and the indication of the individual contributions.

List of Tables

4.1 Validation Results of office environment 1
4.2 Overall Results of office environment 1
4.3 Validation Results of office environment 2
4.4 Overall Results of office environment 2

6.1 Normalized RMSE comparison of the path loss models for all scenarios
6.2 Physical layer parameters used for simulation


Acronyms

AoA Angle of Arrival

AoD Angle of Departure

DoF Degrees of Freedom

FDTD Finite Difference Time Domain

FEM Finite Elements Method

GICP Generalized ICP

Gmapping Grid-Mapping

GO Geometrical Optics

ICP Iterative Closest Point

IoT Internet of Things

LB Link Budget

LIDAR Laser Imaging Detection And Ranging

LMA Levenberg-Marquardt

LSD SLAM Large-Scale Direct SLAM

MAE Mean Absolute Error

ME Mean Error


MIMO Multiple-Input-Multiple-Output

MoM Method of Moments

NARF Normal Aligned Radial Feature

PCA Principal Component Analysis

PCL Point Cloud Library

PL path loss

RANSAC RANdom Sample Consensus

RGB-D SLAM RGB-Depth SLAM

RMSE Root Mean Square Error

ROS Robot Operating System

RSS Received Signal Strength

RTI Radio Tomographic Imaging

SIFT Scale-invariant Feature Transform

SIR Sampling Importance Resampling

SLAM Simultaneous Localization and Mapping

SURF Speeded-Up Robust Features

SVD Singular Value Decomposition

TEM Transversal Electromagnetic

UTD Uniform Theory of Diffraction


List of Publications

Patents

1. B. Velleman, M. Weyn, R. Berkvens, and B. Bellekens. Energy-saving positioning and communication, 2016

Journal papers

1. B. Bellekens, R. Penne, and M. Weyn. Realistic Indoor Radio Propagation for Sub-GHz Communication. MDPI Sensors, 2018

2. M. Aernouts, B. Bellekens, and M. Weyn. MapFuse: Complete And Realistic 3D Modelling. Hindawi Journal of Robotics, 2017

3. B. Bellekens, V. Spruyt, R. Berkvens, R. Penne, and M. Weyn. A Benchmark Survey of Rigid 3D Point Cloud Registration Algorithms. International Journal on Advances in Intelligent Systems, 8(1):118–127, 2015. ISSN 1942-2679

Conference papers

1. R. Berkvens, F. Smolders, B. Bellekens, M. Aernouts, and M. Weyn. Comparing 433 and 868 MHz Active RFID for Indoor Localization Using Multi-Wall Model. Number June, pages 26–28, 2018. ISBN 9781386669846

2. B. Bellekens, L. Tian, P. Boer, M. Weyn, and J. Famaey. Outdoor IEEE 802.11ah Range Characterization Using Validated Propagation Models. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference, pages 1–6. IEEE, dec 2017. ISBN 978-1-5090-5019-2. doi: 10.1109/GLOCOM.2017.8254515. URL http://ieeexplore.ieee.org/document/8254515/

3. R. Berkvens, B. Bellekens, and M. Weyn. Signal strength indoor localization using a single DASH7 message. In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), number September, pages 1–7. IEEE, sep 2017. ISBN 978-1-5090-6299-7. doi: 10.1109/IPIN.2017.8115875


4. E. Tanghe, P. Laly, D. P. Gaillot, N. Podevijn, S. Denis, N. BniLam, B. Bellekens, R. Berkvens, M. Weyn, M. Lienard, L. Martens and W. Joseph. Dense Multipath Component Polarization and Wall Attenuation at 1.35 GHz in an Office Environment. In EuCap2018, 2017

5. B. Bellekens, R. Penne, and M. Weyn. Validation of an indoor ray launching RF propagation model. In 2016 IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC), pages 74–77. IEEE, sep 2016. ISBN 978-1-5090-0470-6. doi: 10.1109/APWC.2016.7738122

6. S. Denis, R. Berkvens, G. Ergeerts, B. Bellekens, and M. Weyn. Combining multiple sub-1 GHz frequencies in radio tomographic imaging. In Indoor Positioning and Indoor Navigation (IPIN), 2016 International Conference on, pages 1–8. IEEE, 2016

7. S. Vanneste, B. Bellekens, and M. Weyn. Obstacle Avoidance Using an Octomap. In MORSE 2014, 2014

8. B. Bellekens, V. Spruyt, and M. Weyn. A Survey of Rigid 3D Pointcloud Registration Algorithms. In AMBIENT 2014, The Fourth International Conference on Ambient Computing, Applications, Services and Technologies, pages 8–13, 2014. ISBN 9781612083568


Chapter 1

Introduction

In the present day, our society strongly depends on the usage and development of telecommunication. Because of wireless connectivity, it is possible to chat, call, mail, search the Internet or even to localize ourselves at any time and any place. Furthermore, due to Internet of Things (IoT) applications, many services such as water and electricity consumption monitoring are automated. In addition to this, wireless connectivity nowadays takes place at different levels. In our daily life, this includes a seamless switch between a cellular network and a Wi-Fi network, while at an industrial level it is possible to apply different wireless technologies together so that connectivity can be assured at any time. Furthermore, due to the decreasing service costs of using the infrastructure of a service provider, the demand for continuously improving telecommunication systems has increased. As a result of this increasing demand, the research for better connections in terms of throughput, scalability, and coverage has intensified.

In order to get a better picture of the scope of this thesis, I go back in time to World War One. In that period, telecommunication was used to send short Morse messages between the battlefield and the command post by applying the concept of radio propagation. At that time, an analogue signal was transposed on a carrier signal that was transmitted by an antenna. On the battlefield, a receiver of the enemy, which was equipped with an antenna, was able to receive the Morse message via a reflection off an airplane that was flying over the battlefield, and thus the message could be extracted from the carrier signal. As a result of this communication system, the distance between both antennas, the operating frequency, and the terrain were found to be very important in order to receive the message properly. Therefore, research was conducted in order to find a frequency, an antenna height, and a type of terrain that were able to carry a signal over long distances. Subsequently, different aspects of the fundamental radio wave phenomena, such as reflections, refractions, and diffractions, were used to increase or decrease the distance between two antennas [Worts, 1915].

Presently, many radio networks are widely deployed in indoor environments, where these radio wave phenomena cause many losses, which result in limited coverage and a signal quality that can change based on the location of the receiver. For many years, research has been conducted to gain fundamental knowledge about these losses in indoor environments by applying deterministic radio propagation loss models for different kinds of applications. On the one hand, site surveys for Wi-Fi and signal strength simulations for Internet of Things (IoT) applications are among the most common use cases, while on the other hand antenna design optimizations are done by applying a deterministic radio propagation loss model. Such a deterministic radio propagation loss model takes a geometric model of a specific environment as input in order to simulate the signal strength at different locations. Furthermore, a deterministic radio propagation model takes the different wave phenomena such as reflections and refraction into account. Since this model addresses the geometry of an environment in a general way, the result is limited to the level of detail that the environment model describes. This thesis explains and implements a system that is able to produce a detailed map of the environment in 2D and 3D by using a moving robot. Furthermore, this thesis implements a deterministic propagation model that is capable of using this detailed map of the environment in both 2D and 3D. The propagation model that is implemented in this thesis is based on the principle of ray launching. The tool launches a set of rays that are modeled according to an antenna radiation pattern into the environment model that was captured by a moving robot or drone. Since it is important to address the material parameters of walls, floor, ceiling, windows, or objects, a segmentation algorithm is implemented for 2D environments so that it is possible to assign a material parameter in a supervised way to each segmented object. The main goal of this thesis is to indicate the relation between the level of detail and the number of rays that are launched when applying a deterministic radio propagation model on an environment model that holds the actual geometry. Next, this thesis expands the research with an automated validation solution that enables a validation of the propagation model in which many individual radio links are included and evaluated. This validation solution is based on two processes:

1. Capturing and generating a geometric map of the environment while maintaining the location of the robot by applying a SLAM algorithm.

2. Placing a radio receiver on the robot while it is generating the map relative to the first robot location, so that each radio measurement can be related to a geometrical transformation.

The combination of both processes makes it possible to maintain all geometrical transformations between the places where a radio measurement was taken and the first robot location. This thesis describes the implementation and validation of a ray launching propagation loss model that uses a real indoor environment model, so that the state-of-the-art research on localization and telecommunication can be optimized with realistic signal strength values.
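To make this coupling concrete, the following minimal Python sketch pairs radio measurements with the most recent SLAM pose and expresses the measurement location in the frame of the first robot location. The Pose layout, the timestamps and the helper names are hypothetical illustrations, not the actual implementation developed in this thesis.

import bisect
from dataclasses import dataclass

import numpy as np

@dataclass
class Pose:
    # Robot pose relative to the first pose x0 (2D: x, y in metres, theta in radians).
    t: float      # timestamp in seconds
    x: float
    y: float
    theta: float

def pose_to_transform(p):
    # Homogeneous 3x3 transform mapping points from the robot frame at pose p to the x0 frame.
    c, s = np.cos(p.theta), np.sin(p.theta)
    return np.array([[c, -s, p.x], [s, c, p.y], [0.0, 0.0, 1.0]])

def attach_rss_to_poses(poses, rss_samples):
    # Pair every (timestamp, rss_dbm) sample with the most recent pose, so that each radio
    # measurement is related to a geometrical transformation with respect to x0.
    times = [p.t for p in poses]
    paired = []
    for t, rss in rss_samples:
        i = max(bisect.bisect_right(times, t) - 1, 0)       # last pose not later than t
        location = pose_to_transform(poses[i]) @ np.array([0.0, 0.0, 1.0])
        paired.append((location[:2], rss))
    return paired

# Invented example: two poses along a corridor and two RSS readings taken while driving.
trajectory = [Pose(0.0, 0.0, 0.0, 0.0), Pose(1.0, 0.5, 0.0, np.pi / 2)]
print(attach_rss_to_poses(trajectory, [(0.2, -61.0), (1.1, -74.5)]))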

1.1 Motivations and Research Objectives

For many decades, research has shown that ray launching propagation loss modeling techniques are promising for simulating the signal strength of wireless communication systems [Yun and Iskander, 2015]. As a result of this research, different limitations that cause inaccurate results have been indicated and are further examined in this thesis. The main limitation that is addressed is the lack of realistic 2D and 3D geometric models that contain objects, walls, door openings, etc. when a ray launching propagation loss model is applied. This thesis focuses on four research goals that make it possible to cope with a geometric model of the real environment, and which address the following global research question:

Which methods make it possible to apply an indoor radio propagation model that uses a 2D or 3D realistic environment model, and how can both models be combined in an efficient way?

In order to solve this global research question, the following research questions are defined:

• How to model a 2D and 3D geometric model of the real environment where all objects, walls, and floors are segmented and classified?

• What is the most efficient method to implement a ray launching propagation loss model?

• How to integrate the geometric model of the real environment with a ray launching propagation loss model?

• How to validate a ray launching propagation loss model where a large-scale wireless sensor network can be used together with the geometric model?

1.2 Contributions

The contributions that are made in this thesis are:

A Study of robust geometric techniques that capture and segment an environment in 2D and 3D so that it can be used for radio propagation simulations.

B Implement a ray launching propagation loss model that can enhance localization algorithms such as Angle of Arrival, Radio Tomographic Imaging, and RSS localization.

C Design an automated validation system that combines geometrical environment elements with radio transceiver measurements by applying a Simultaneous Localization and Mapping (SLAM) algorithm with a moving robot.

D Evaluate the integration of applying a realistic environment model in a radio propagation loss model by means of the automated validation system.

E Coverage characterization of the IEEE 802.11ah standard using validated propagation models.

These contributions are listed in the following publications:

• Bellekens B., Penne R., & Weyn M., Realistic Indoor Radio Propagation for Sub-GHz Communication, MDPI Sensors, 2018, in press.

• Bellekens B., Penne R., & Weyn M., Validation of an indoor ray launching RF propagation model. In Proceedings of the 2016 6th IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications, IEEE APWC 2016 (pp. 74–77). IEEE. http://doi.org/10.1109/APWC.2016.7738122

• Aernouts M., Bellekens B., & Weyn M., MapFuse: Complete And Realistic 3D Modelling. Hindawi Journal of Robotics, 2018, 1–13. http://doi.org/10.1155/2018/4942034

• Bellekens B., Spruyt V., Berkvens R., Penne R., & Weyn M., A Benchmark Survey of Rigid 3D Point Cloud Registration Algorithms. International Journal on Advances in Intelligent Systems, 8(1), 118–127.

• Bellekens B., Spruyt V., & Weyn M., A Survey of Rigid 3D Pointcloud Registration Algorithms. In AMBIENT 2014, The Fourth International Conference on Ambient Computing, Applications, Services and Technologies, 2014 (pp. 8–13).

• Bellekens B., Tian L., Boer P., Weyn M., & Famaey J., Outdoor IEEE 802.11ah Range Characterization Using Validated Propagation Models. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference (pp. 1–6). IEEE. http://doi.org/10.1109/GLOCOM.2017.8254515

1.3 Outline

Figure 1.1 shows the outline of this thesis; each overlap in the structure illustrates one of the contributions that were indicated in the previous section.

Figure 1.1: Visual illustration of the PhD structure and the indication of the individual contributions.

The outline of this thesis starts in Chapter 2 with a description of the literature, which is divided into three parts:

1. The literature about Simultaneous Localization and Mapping (SLAM) is described according to the algorithms that were used during the research.

2. The research about spatial data structures such as a Quadtree and an Octree, the binary addressing technique that is used to address each tree node, and the added value of applying an Octomap for 3D radio propagation modeling.

3. The literature on wave propagation models and how the power density of an electromagnetic wave decreases when the distance increases.

Chapter 3 explains and discusses the research about the behavior of iterative closest point in real-world scenarios. Furthermore, a generic solution is implemented that makes it possible to merge the result of a SLAM algorithm with a general model of the environment so that it can be used by the implemented propagation model. In Chapter 4, the implementation and validation of the ray launching propagation loss model is described. Next, the validation of the propagation model is done by evaluating and analyzing the results of two indoor office environments. Chapter 5 includes the details and different ways of applying this propagation model in other research topics such as tag-less localization, Angle of Arrival, and Received Signal Strength (RSS) localization. Finally, Chapter 6 explains the outdoor range characterization of the IEEE 802.11ah standard based on the validation of seven widely used outdoor propagation loss models.


Chapter 2

State-of-the-art

The state-of-the-art that is given in this chapter covers different research domains related to the publications that were listed in Chapter 1. The research is classified in three main parts. First, I will discuss the state-of-the-art of SLAM, which is used to create a map of the environment given the sensor measurements that were obtained from a robot. Hereby, common mathematical tools and the literature about rigid registration algorithms that are widely used in SLAM applications will be explained. Moreover, to enable the creation of a map in 2D and 3D, three SLAM solutions are explained. Second, two commonly used spatial data structures are presented that make it possible to represent an environment at a given resolution, space or volume. To extend such a spatial data structure to a more realistic representation of the environment, a sensor model can be added. One example of such a spatial data structure that is extended with a sensor model is OctoMap, which is described in more detail. Third, the literature related to the research about electromagnetic wave propagation modeling is presented in two parts: deterministic propagation models, which cover different relevant techniques for indoor environments and their complexity, and empirical propagation models, which are widely used to model the propagation loss outdoors.

2.1 SLAM

For many years, researchers have come up with solutions to the SLAM problem. This problem occurs when a robot is placed in an unknown environment. Without a map or information about its own location, the robot has to be able to build a map of the environment while determining its own position at the same time [Durrant-Whyte and Bailey, 2006, Dissanayake, 2000, Eliazar and Parr, 2003, Gutmann and Konolige, 2012]. Applying such an algorithm is by far one of the most important steps towards autonomous driving cars and realistic radio propagation simulation.

As a first step in describing the mathematical basics of a SLAM algorithm, all robotreadings like wheel encoder readings, Laser Imaging Detection And Ranging (LIDAR)

7

2. State-of-the-art

readings or range camera readings need to be recorded together with a timestamp t. First,every robot motion can be described by the following sequence ut = u1, u2, u3, ..., ut,where (ut − ut−1) is the relative motion between two robot poses. Second, all sensormeasurements like a LIDAR scan or a 3D point cloud captured from a range camera areexpressed as a vector zt = z0, z1, z2, ..., zt, where zt is a captured LIDAR scan or a 3Dpoint cloud of the environment related to the orientation of the robot. As a result ofthis, each sensor measurement is projected with respect to the current pose of the robot.Because of the goal of a SLAM algorithm: the poses that describe the traveled trajectoryare unknown and have to be computed based on the robot readings. The result of theSLAM algorithm is on the one hand, a trajectory that consists of a set of poses x that canbe denoted by a vector that holds a coordinate X,Y, Z and a bearing θ. Furthermore,the trajectory can be expressed as xt = x0, x1, x2, ..., xt, where xt is the latest poseand x0 is the initial pose and thus the reference. In order to illustrate this, Figure 2.1shows a 2D representation of a trajectory xt where each pose is represented by its ownaxis aligned coordinate system that is used to represent every sensor measurement suchas a LIDAR scan. On the other hand, the final result of a SLAM algorithm makes itpossible to represent a sensor measurement that is captured at pose xt relative to posex0 so that a global environment map m can be created.

Figure 2.1: 2D illustration of a trajectory that contains a set of poses x and motion vectors u.
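As a small numerical sketch of this idea, the snippet below chains invented 2D relative motions u into poses expressed in the frame of the initial pose x0, and then projects a point observed in the robot frame at the last pose into the global map frame. It only illustrates the geometry described above; the motion values are made up.

import numpy as np

def se2(dx, dy, dtheta):
    # Homogeneous 2D rigid transform: rotation dtheta and translation (dx, dy).
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx], [s, c, dy], [0.0, 0.0, 1.0]])

# Hypothetical relative motions u_1..u_3 (dx, dy, dtheta), e.g. derived from wheel odometry.
motions = [(1.0, 0.0, 0.0), (1.0, 0.0, np.pi / 2), (0.5, 0.0, 0.0)]

# Chain the motions so that every pose x_t is expressed in the frame of the initial pose x_0.
poses = [np.eye(3)]
for u in motions:
    poses.append(poses[-1] @ se2(*u))

# A point observed 2 m straight ahead of the robot at the last pose, projected into the
# global map frame (the frame of x_0).
local_point = np.array([2.0, 0.0, 1.0])
global_point = poses[-1] @ local_point
print(global_point[:2])        # the same physical point, now in map coordinates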

Since none of the range sensors is noise-free, because of accumulated errors that are introduced by analogue-to-digital converters, the environment cannot be captured in a perfect way. This makes it impossible to state that every sensor measurement is correct. For this reason, ut and zt are defined as probability distributions.

In order to cope with these different probability distributions to compute the posterior distribution, two models are introduced. First, a motion model describes the probability of a robot pose with respect to the previous pose, given the motion measurement of two wheel encoders. Second, a sensor model defines the probability distribution of a sensor measurement given the location xt and the environment map that is represented by m [Thrun et al., 2005a]. Two types of motion models are described in the literature in order to estimate a new pose based on the velocity and movements of the robot [Thrun et al., 2005a]. First, an odometry-based model can be used when the robot is equipped with wheel encoders. These wheel encoders deliver a translation vector and two rotations that are used with traditional geometry to estimate a new pose. Secondly, when no wheel encoders are available, a velocity model can be used to estimate a new pose. Such a velocity model takes a translational and rotational velocity into account in order to estimate a new pose by traditional geometry. Both motion models can be modeled as a multivariate Gaussian distribution centered at the new pose that is estimated by the motion model, as described by the following equation:

p(xt|xt−1, ut) = N (xt, Rt) (2.1)

where Rt is a 3 × 3 covariance matrix. This covariance represents the noise related to the three dimensions. The sensor model can be expressed as the following multivariate probability distribution:

p(zt|xt,m) = N (h(pt,m), Qt) (2.2)

where h(pt, m) is the sensor measurement at location pt according to the current belief of the environment map m, and Qt is the noise covariance matrix of the sensor measurement.
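As a minimal numerical sketch of equations (2.1) and (2.2), the snippet below evaluates a Gaussian motion model and a Gaussian sensor model for a single pose and a single range beam. All noise figures and measurement values are invented for illustration.

import numpy as np
from scipy.stats import multivariate_normal

# Previous pose (x, y, theta) and an odometry reading that predicts a 1 m forward move.
x_prev = np.array([2.0, 1.0, 0.0])
x_pred = x_prev + np.array([1.0, 0.0, 0.0])                 # mean of the motion model
R_t = np.diag([0.05**2, 0.05**2, np.deg2rad(2.0)**2])       # 3 x 3 motion noise covariance

# p(x_t | x_{t-1}, u_t): likelihood of a candidate pose under the Gaussian motion model.
x_candidate = np.array([3.02, 0.97, 0.01])
p_motion = multivariate_normal.pdf(x_candidate, mean=x_pred, cov=R_t)

# p(z_t | x_t, m): likelihood of one range measurement given the expected value h(x_t, m),
# which would normally be obtained by ray-casting into the current map belief.
expected_range = 4.0
Q_t = np.array([[0.10**2]])                                 # sensor noise covariance
p_sensor = multivariate_normal.pdf([3.95], mean=[expected_range], cov=Q_t)

print(p_motion, p_sensor)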

When implementing such a SLAM algorithm, the pose of a robot can be determined in a probabilistic fashion by combining sensor measurements with odometry information. In order to illustrate the SLAM problem, two concepts are described that are relevant to understanding the implementation [Thrun et al., 2005a]. Firstly, the full SLAM approach estimates the posterior distribution based on all trajectory and sensor recordings, which can be seen in the following posterior formula:

P (x1:t,m|z1:t,u1:t) (2.3)

where x1:t is the sequence of all poses, indexed from timestamp 1 until t, and m is the environment map. z1:t and u1:t are respectively the sensor measurements and motion readings. To illustrate the meaning of this posterior, Figure 2.2 shows the different measurements related to time t graphically. In this figure, the gray sections indicate the parameters that are being calculated. Secondly, the online SLAM approach only takes the latest reading into account in order to estimate the next pose. This can be seen in the following posterior formula:

P (xt,m|z1:t,u1:t) (2.4)

in which xt represents the current pose at time t with respect to the environment m. As an illustration of online SLAM, Figure 2.3 shows the different blocks related to time t. The main difference between the two concepts is the fact that the online SLAM approach only calculates the latest position, while the full SLAM approach redefines the full trajectory.

Since the motion model and the sensor model are modeled according to a multivariate Gaussian distribution, a SLAM algorithm cannot be solved in a deterministic fashion. This results in the usage of Bayes filters such as Kalman and particle filters. Generally, three main SLAM paradigms are distinguished.


Figure 2.2: The graphical model of the full-SLAM principle where X, U, and Z illustrate the robot poses, the relative robot motions, and the sensor measurements respectively [Thrun et al., 2005b]

Figure 2.3: The graphical model of the online-SLAM principle where X, U, and Z illustrate the robot poses, the relative robot motions, and the sensor measurements respectively [Thrun et al., 2005b]

First, landmark-based SLAM solutions rely on the implementation of an Extended Kalman Filter. Second, graph-based SLAM solutions solve the full SLAM posterior distribution by non-linear optimization techniques. Third, solutions that implement the online SLAM concept use particle filters in order to solve the posterior based on a non-parametric statistical approach.
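To make the particle filter paradigm concrete, the sketch below performs one sampling importance resampling (SIR) step in a deliberately simplified one-dimensional setting: the motion model is sampled, the particles are weighted by a sensor likelihood, and the set is resampled. The noise figures, the wall position and the measurement are invented, and this is not the GMapping implementation discussed later.

import numpy as np

rng = np.random.default_rng(0)
N = 500

# Particles approximate the pose posterior; each row is one hypothesis (x, y, theta).
particles = np.zeros((N, 3))

def predict(particles, u, motion_std=(0.05, 0.05, 0.02)):
    # Sample the motion model: apply the odometry increment u plus Gaussian noise.
    return particles + u + rng.normal(0.0, motion_std, size=particles.shape)

def update(particles, z, expected_fn, sigma=0.1):
    # Weight the particles by the sensor model and resample (sampling importance resampling).
    weights = np.exp(-0.5 * ((z - expected_fn(particles)) / sigma) ** 2)
    weights /= weights.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=weights)]

# One hypothetical step: drive 1 m forward, then measure a range of 3.1 m to a wall at x = 4 m.
particles = predict(particles, u=np.array([1.0, 0.0, 0.0]))
particles = update(particles, z=3.1, expected_fn=lambda p: 4.0 - p[:, 0])
print(particles[:, 0].mean())   # posterior x estimate, between the odometry and the measurement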

Furthermore, in this section I will first introduce some definitions that are commonly used in SLAM solutions. Secondly, I will describe the literature about rigid registration algorithms. Thirdly, three different SLAM solutions that fit the particle filter paradigm are explained. These three solutions are further used in the next chapter to create a realistic environment model for radio propagation simulations. The first is a SLAM solution which presents an approach that drastically decreases the number of particle samples that are used for each individual map by applying an adaptive technique. This solution is explained in Subsection 2.1.3. Second, two visual-SLAM algorithms are explained, which can be further subdivided into two solutions:

1. Large-Scale Direct SLAM (LSD SLAM) uses a monocular camera with a wide angle in order to find depth information [Engel et al., 2014]. This algorithm is further explained in Subsection 2.1.4.

2. RGB-Depth SLAM (RGB-D SLAM), which generally uses a depth-sense camera in order to find the relative transformation between two consecutive point clouds by applying a closed-form registration algorithm [Endres et al., 2012]. This solution is also referred to as feature-based SLAM and is described in Subsection 2.1.5.

2.1.1 Mathematical Tools

In this subsection, the least-squares optimization problem will be introduced and the concept of homogeneous transformations will be discussed, since these form the basis of the 3D registration algorithms described in Subsection 2.1.2. Furthermore, loop closing is introduced as the part of a SLAM algorithm that restores misalignments in the map when a robot returns to a place it visited before.

2.1.1.1 Homogeneous Transformations

A homogeneous transformation in three dimensions is specified by a 4 × 4 projective transformation matrix [Kay, 2005]. This matrix is used to project each point in Cartesian space with respect to a specific viewpoint. Since we use (moving) orthonormal reference frames, we can restrict our considerations to rigid transformations. In the following, let v1 = (x1, y1, z1, 1)ᵀ be standard homogeneous coordinates of a point in an orthonormal base defined by viewpoint one, and let v2 = (x2, y2, z2, 1)ᵀ be standard homogeneous coordinates of the same point in an orthonormal base defined by viewpoint two. Then it is possible to express v2 relative to the base of viewpoint one as T v1 = v2, where T is a Euclidean transformation matrix defined by (2.5).

T = \begin{pmatrix} r_{1,1} & r_{1,2} & r_{1,3} & t_1 \\ r_{2,1} & r_{2,2} & r_{2,3} & t_2 \\ r_{3,1} & r_{3,2} & r_{3,3} & t_3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (2.5)

The transformation matrix shown by (2.5) consists of a 3× 3 rotation matrix (2.6),

R = \begin{pmatrix} r_{1,1} & r_{1,2} & r_{1,3} \\ r_{2,1} & r_{2,2} & r_{2,3} \\ r_{3,1} & r_{3,2} & r_{3,3} \end{pmatrix} \qquad (2.6)

and the column vector t = (t1, t2, t3)ᵀ representing a translation. Because the nine entries of the rotation matrix can be generated by three parameters (e.g., the Euler angles), we conclude that a rigid transformation has six Degrees of Freedom (DoF).
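To make the structure of (2.5) concrete, the following Python sketch (using NumPy; the rotation angle and translation are arbitrary example values, not taken from the text) assembles a homogeneous transformation from a rotation and a translation and applies it to a point in homogeneous coordinates.

import numpy as np

def homogeneous_transform(R, t):
    """Assemble a 4x4 rigid transformation from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: rotate 90 degrees about the z-axis and translate by (1, 2, 3).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T = homogeneous_transform(Rz, np.array([1.0, 2.0, 3.0]))

v1 = np.array([1.0, 0.0, 0.0, 1.0])   # point in homogeneous coordinates
v2 = T @ v1                            # the same point expressed in the other reference frame
print(v2[:3])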


2.1.1.2 Least-Squares Minimization

A rigid transformation is defined by only 6 DoF, whereas the observations it has to describe, i.e., the measured point coordinates, are numerous and noisy. Therefore, the number of parameters of the cost function for this problem is much smaller than the number of equations, resulting in an overdetermined problem that generally does not have an exact solution. A well-known technique to obtain an acceptable solution in such a case is to minimize the square of the residual error. This approach is called least-squares optimization and is often used for fitting and regression problems.

Whereas a linear least-squares problem can be solved analytically, this is often not the case for non-linear least-squares optimization problems. In that case, an iterative approach can be used that explores the search space of all possible solutions in the direction of the gradient vector of the cost function. This is illustrated by Figure 2.4, where the cost function f(d) of the ICP registration algorithm is minimized iteratively. The cost function in this case represents the sum of the squared Euclidean distances, defined by the rotation and the translation, between all corresponding points of two point cloud viewpoints.

Figure 2.4: ICP least-squares approach: the cost f(d) is minimized iteratively over d.
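As a toy illustration of such an iterative search, the sketch below minimizes a one-dimensional stand-in cost by plain gradient descent with a numerical derivative; the cost function, step size, and iteration count are invented for illustration and are not the actual ICP cost.

import numpy as np

def f(d):
    # Toy non-linear cost with a single pronounced minimum (stand-in for the ICP cost).
    return (d - 2.0) ** 2 + 0.1 * np.sin(5.0 * d)

def gradient(d, eps=1e-6):
    # Central-difference approximation of the derivative of the cost.
    return (f(d + eps) - f(d - eps)) / (2.0 * eps)

d = 0.0          # initial guess
step = 0.05      # step size along the negative gradient
for _ in range(200):
    d -= step * gradient(d)
print(d, f(d))   # d ends up close to the minimiser of f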

2.1.1.3 Loop Closing

One of the problems that occurs when applying a SLAM algorithm is detecting and optimizing the location and map when a previously visited location is detected again. This process is often referred to as loop closing. The main idea behind loop closing is divided in two parts: one, detecting the loop closure by finding correspondences; two, distributing the accumulated error between the latest location and the previously known location over all other transformations.


In order to detect such a loop, place recognition has to be applied. Williams et al. [2009] proposed three different data associations for detecting a loop closure: map-to-map, image-to-image, and image-to-map.

1. a map-to-map approach is based on finding correspondences in the different submaps of all measurements. These correspondences are usually based on common features and can be computed with the original geometric compatibility branch and bound (GCBB) algorithm.

2. an image-to-image approach is based on visual features such as Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), Normal Aligned Radial Features (NARF), etc. that are extracted from individual camera frames. A loop closure is detected when a set of analyzed feature points corresponds to a set that was computed previously.

3. an image-to-map approach is used as a localization technique that tries to find a robot pose which matches the point features detected in the image with the features in the map.

According to the comparison in [Williams et al., 2009], the image-to-map data association was the most robust and stable approach for performing an optimal loop closure.

2.1.2 Registration Algorithms

In the next sections, I will discuss five widely used rigid registration algorithms. Each of these methods tries to estimate the optimal rigid transformation that maps a source point cloud onto a target point cloud. Generally, there exist two categories of registration algorithms. First, rigid registration algorithms allow the alignment of two rigid 3D point clouds by a rigid transformation. This means that the shape of the environment or objects is not changing over time. Second, non-rigid registration algorithms allow the alignment of two 3D point clouds by non-rigid transformations, which allow a higher number of DoF in order to cope with non-linear or partial stretching or shrinking of the object. Both rigid and non-rigid registration algorithms can be further categorized into pairwise registration algorithms and multi-view registration methods. Pairwise registration algorithms calculate a rigid transformation between two subsequent point clouds, while the multi-view registration process takes multiple point clouds into account to correct for the accumulated drift that is introduced by pairwise registration methods. Both Principal Component Analysis (PCA) alignment and Singular Value Decomposition (SVD) are pairwise registration methods based on the covariance matrices and the cross-correlation matrix of the point clouds, while the ICP algorithm and its variants are based on iteratively minimizing a cost function that is based on an estimate of point correspondences between the point clouds. The selected correspondences determine how well the final transformation fits the source point cloud to the target point cloud.

2.1.2.1 Principal Component Analysis

PCA is often used in classification and compression techniques to project data on a new orthonormal basis in the direction of the largest variance [Draper et al., 2002]. The direction of the largest variance corresponds to the largest eigenvector of the covariance


matrix of the data, whereas the magnitude of this variance is defined by the corresponding eigenvalue.

Therefore, if the covariance matrix of two point clouds differs from the identity matrix, a rough registration can be obtained by simply aligning the eigenvectors of their covariance matrices. This alignment is obtained as follows.

First, the two point clouds are centered such that the origins of their original bases coincide. Point cloud centering simply corresponds to subtracting the centroid coordinates from each of the point coordinates. The centroid of the point cloud corresponds to the average coordinate and is thus obtained by dividing the sum of all point coordinates by the number of points in the point cloud.

Since registration based on PCA simply aligns the directions in which the point clouds vary the most, the second step consists of calculating the covariance matrix of each point cloud. The covariance matrix is a symmetric 3 × 3 matrix, the diagonal values of which represent the variances while the off-diagonal values represent the covariances.

Third, the eigenvectors of both covariance matrices are calculated. The largest eigenvector is a vector in the direction of the largest variance of the 3D point cloud and, therefore, it represents the point cloud's orientation. In the following, let A be the covariance matrix, let v be an eigenvector of this matrix, and let λ be the corresponding eigenvalue. The eigenvalue decomposition problem is then defined as:

Av = λv (2.7)

and further reduced to: (A − λI)v = 0. (2.8)

It is clear that (2.8) only has a non-zero solution if A − λI is singular and, consequently, if its determinant equals zero:

det(A− λI) = 0 (2.9)

The eigenvalues can simply be obtained by solving (2.9), whereas the corresponding eigenvectors are obtained by substituting the eigenvalues into (2.7).

Once the eigenvectors are known for each point cloud, registration is achieved by aligning these vectors. In the following, let matrix T_{yt} represent the transformation that would align the largest eigenvector t of the target point cloud with the y-axis, and let matrix T_{sy} represent the transformation that would align the largest eigenvector s of the source point cloud with the y-axis. Then the final transformation matrix T_{st} that aligns the source point cloud with the target point cloud can be obtained easily, as illustrated by Figure 2.5.

Finally, the centroid of the target data is added to each of the transformed coordinates to translate the aligned point cloud, such that its center corresponds to the center of the target point cloud.
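The steps above can be sketched in a few lines of NumPy. Note that this is an illustrative implementation that aligns the eigenvector bases of the two clouds directly, rather than via the intermediate y-axis construction of Figure 2.5, and it assumes reasonably well-behaved, outlier-free clouds.

import numpy as np

def pca_align(source, target):
    """Roughly align source (N x 3) to target (M x 3) by matching principal axes."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)      # centroids
    src_c, tgt_c = source - cs, target - ct                # centered clouds

    # Eigenvectors of the covariance matrices, columns sorted by descending eigenvalue.
    def principal_axes(pts):
        vals, vecs = np.linalg.eigh(np.cov(pts.T))
        return vecs[:, np.argsort(vals)[::-1]]

    Es, Et = principal_axes(src_c), principal_axes(tgt_c)
    R = Et @ Es.T                                           # rotate source axes onto target axes
    if np.linalg.det(R) < 0:                                # avoid a reflection
        Et[:, -1] *= -1
        R = Et @ Es.T
    return (src_c @ R.T) + ct                               # rotated cloud, re-centered on the target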

2.1.2.2 Singular Value Decomposition

PCA-based registration simply aligns the directions of the largest variance of each point cloud and, therefore, it does not minimize the Euclidean distance between corresponding points of the datasets. Consequently, this approach is very sensitive to outliers and only works well if each point cloud is approximately normally distributed.


Figure 2.5: PCA alignment from source to target.

However, if point correspondences between the two point clouds are available, a more robust approach is to directly minimize the sum of the Euclidean distances between these points. This corresponds to a linear least-squares problem that can be solved robustly using the SVD method [Marden and Guivant, 2012].

Based on the point correspondences, the cross-correlation matrix M between the two centered point clouds can be calculated, after which its singular value decomposition is obtained as follows:

M = USV ᵀ (2.10)

Where U and V are orthonormal matrices whose columns describe the singular vectors associated with both point clouds and S is a diagonal matrix whose diagonal elements are the singular values of matrix M. The optimal solution to the least-squares problem is then defined by rotation matrix R as:

Rst = UV ᵀ (2.11)

and the translation from the target point cloud to the source point cloud is defined by:

t = c_s − R_{st} c_t \qquad (2.12)
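Assuming the point correspondences are known (row i of the source corresponds to row i of the target), a minimal sketch of this closed-form solution is given below. The sign correction on the singular vectors, which guards against reflections, is an implementation detail not spelled out in the text, and the direction of the recovered transformation (source to target) may differ from the notation of (2.11) and (2.12).

import numpy as np

def svd_registration(source, target):
    """Least-squares rigid alignment of corresponding points (both N x 3 arrays)."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    M = (target - ct).T @ (source - cs)        # cross-correlation matrix of the centered clouds
    U, S, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                             # optimal rotation (reflection-safe)
    t = ct - R @ cs                            # optimal translation
    return R, t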

2.1.2.3 Iterative Closest Point

Whereas the SVD algorithm directly solves the least-squares problem, thereby assuming perfect data, Besl and McKay [Besl and McKay, 1992] introduced a method that iteratively disregards outliers in order to improve upon the previous estimate of the rotation and translation parameters. Their method is called 'ICP' and is illustrated conceptually in Figure 2.6.

The input of the ICP algorithm consists of a source point cloud and a target point cloud. Point correspondences between these point clouds are defined based on a nearest neighbor approach or a more elaborate scheme using geometrical features or color information. SVD, as explained in the previous section, is used to obtain an initial estimate of


Figure 2.6: ICP overview scheme (source and target point clouds, correspondence selection, SVD, transformation, iterated until the output is reached).

the affine transformation matrix that aligns both point clouds. After transformation, this whole process is repeated by removing outliers and redefining the point correspondences.

Two widely used ICP variants are the ICP point-to-point and the ICP point-to-surface algorithms. These approaches only differ in their definition of point correspondences and are described in more detail in the next sections.

ICP point-to-point

The ICP point-to-point algorithm was originally described in [Rusu, 2010] and simply obtains point correspondences by searching for the nearest neighbour target point qi of a point pj in the source point cloud. The nearest neighbour matching is defined in terms of the Euclidean distance metric:

i = \arg\min_i \lVert q_i − p_j \rVert_2, \qquad (2.13)

where i ∈ [0, 1, ..., N], and N represents the number of points in the target point cloud. Similar to the SVD approach discussed in Section 2.1.2.2, the rotation R and translation \vec{t} parameters are estimated by minimizing the squared distance between these corresponding pairs:

R, \vec{t} = \arg\min_{R,\vec{t}} \sum_{i=1}^{N} \lVert (R\vec{p}_j + \vec{t}) − \vec{q}_i \rVert^2 \qquad (2.14)

ICP then iteratively solves (2.13) and (2.14) to improve upon the estimates of the previous iterations. This is illustrated by Figure 2.7, where surface s is aligned to surface t after n ICP iterations.
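A bare-bones version of this loop is sketched below; it reuses the svd_registration helper from the previous sketch, uses a brute-force nearest-neighbour search where a practical implementation would use a k-d tree, and fixes the number of iterations instead of testing for convergence.

import numpy as np

def icp_point_to_point(source, target, iterations=30):
    """Iteratively align source (N x 3) to target (M x 3)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for every source point (brute force).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matches = target[d2.argmin(axis=1)]
        # 2. Closed-form rigid increment for the current correspondences (see SVD sketch).
        R, t = svd_registration(src, matches)
        # 3. Apply the increment and accumulate the total transformation.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total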

ICP point-to-surface

Due to the simplistic definition of point correspondences, the ICP point-to-point algorithm is rather sensitive to outliers. Instead of directly finding the nearest neighbor to a source point pj in the target point cloud, one could take the local neighborhood of a correspondence candidate qi into account to reduce the algorithm's sensitivity to noise.

The ICP point-to-surface algorithm [Low, 2004] assumes that the local neighborhood of a point in a point cloud is co-planar. This local surface can then be defined by its normal vector n, which is obtained as the smallest eigenvector of the covariance matrix of the points that surround correspondence candidate qi.

Instead of directly minimizing the Euclidean distance between corresponding points, we can then minimize the scalar projection of this distance onto the planar surface defined


Figure 2.7: ICP alignment based on a point-to-point approach.

by the normal vector n:

R, t = \arg\min_{R,t} \left( \sum_{i=1}^{N} \big( ((R p_j + t) − q_i) \cdot n_i \big)^2 \right) \qquad (2.15)

This is illustrated more clearly in Figure 2.8.

Figure 2.8: ICP alignment based on a point-to-surface approach.
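The sketch below illustrates how the point-to-plane residual of (2.15) could be evaluated: normals are estimated as the smallest eigenvector of each target point's neighbourhood covariance, and the residuals of nearest-neighbour correspondences are projected onto those normals. It is illustrative code, not the implementation used in this work.

import numpy as np

def estimate_normals(cloud, k=8):
    """Normal of each point = smallest eigenvector of its k-neighbourhood covariance."""
    normals = np.zeros_like(cloud)
    d2 = ((cloud[:, None, :] - cloud[None, :, :]) ** 2).sum(axis=2)
    for i, row in enumerate(d2):
        nbrs = cloud[np.argsort(row)[:k]]
        vals, vecs = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = vecs[:, 0]               # eigenvector of the smallest eigenvalue
    return normals

def point_to_plane_error(R, t, source, target, normals):
    """Sum of squared projections of the residuals onto the target surface normals."""
    transformed = source @ R.T + t
    d2 = ((transformed[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)                   # nearest-neighbour correspondences
    residuals = transformed - target[idx]
    return float((np.einsum('ij,ij->i', residuals, normals[idx]) ** 2).sum())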


ICP non-linear

Both the point-to-point and point-to-surface ICP approaches define a differentiable, convex, squared cost function, resulting in a simple linear least-squares optimization problem, known as L2-optimization, that can be solved numerically using SVD. However, L2-optimization is known to be highly sensitive to outliers because the residuals are squared. An approach that solves this problem is known as L1-optimization, where the sum of the absolute values of the residuals is minimized instead of their squares. However, the L1 cost function is non-differentiable at the origin, which makes it difficult to obtain the optimal solution.

As a compromise between L1 and L2 optimization, the so-called Huber loss function can be used, as shown by (2.16).

e(n) = \begin{cases} n^2/2 & \text{if } |n| \le k \\ k|n| − k^2/2 & \text{if } |n| > k \end{cases} \qquad (2.16)

where k is an empirically defined threshold and n is the distance measure. The Huber loss function is quadratic for small values and thus behaves like an L2 cost function in these cases. For large values, however, the loss function becomes linear and, therefore, it behaves like an L1 cost function. Figure 2.9 illustrates the shape of the Huber loss function. Moreover, the Huber loss function is smooth and differentiable, allowing traditional numerical optimization methods to be used to efficiently traverse the search space.

Figure 2.9: The blue curve illustrates the Huber loss function and the green curve represents the L1-optimization curve.
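A direct transcription of (2.16) in Python follows; the threshold k is an arbitrary example value.

import numpy as np

def huber(n, k=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    n = np.abs(n)
    return np.where(n <= k, 0.5 * n ** 2, k * n - 0.5 * k ** 2)

residuals = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
print(huber(residuals))          # small residuals are squared, large ones grow only linearly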

The ICP non-linear algorithm uses the Huber loss function instead of a naive squared loss function to reduce the influence of outliers:

R, t = \arg\min_{R,t} \sum_{i=1}^{N} e^2(n) \qquad (2.17)


where n = \lVert (R p_j + t) − q_i \rVert \qquad (2.18)

To obtain the optimal estimates R, t in (2.17), the Levenberg-Marquardt algorithm (LMA) [Fantoni et al., 2012] is used. The LMA method is an iterative procedure similar to the well-known gradient descent and Gauss-Newton algorithms, which can quickly find a local minimum of a non-linear function.

Generalized ICP

A major disadvantage of the traditional point-to-point ICP algorithm is that it assumes that the source point cloud is taken from a known geometric surface instead of being obtained through noisy measurements. However, due to discretization errors it is usually impossible to obtain a perfect point-to-point matching, even after full convergence of the algorithm. The point-to-surface ICP algorithm relaxes this constraint by allowing point offsets along the surface, in order to cope with discretization differences. However, this approach still assumes that the source point cloud represents a discretized sample set of a known geometric surface model, since offsets along the surface are only allowed in the target point cloud.

To solve this, Segal et al. [2009] proposed the Generalized ICP (GICP) algorithm that performs plane-to-plane matching. They introduced a probabilistic interpretation of the minimization process such that structural information from both the source point cloud and the target point cloud can be incorporated easily in the optimization algorithm. Moreover, they showed that the traditional point-to-point and point-to-surface ICP algorithms are merely special cases of the Generalized ICP framework.

Instead of assuming that the source point cloud is obtained from a known geometric surface, Segal et al. [2009] assume that both the source point cloud A = {a_i} and the target point cloud B = {b_i} consist of random samples from underlying unknown point clouds \hat{A} = {\hat{a}_i} and \hat{B} = {\hat{b}_i}. For the underlying and unknown point clouds \hat{A} and \hat{B}, perfect correspondences exist, whereas this is not the case for the observed point clouds A and B, since each point a_i and b_i is assumed to be sampled from a normal distribution such that a_i ∼ N(\hat{a}_i, C_i^A) and b_i ∼ N(\hat{b}_i, C_i^B). The covariance matrices C_i^A and C_i^B are unknown. If both point clouds would consist of deterministic samples from known geometric models, then both covariance matrices would be zero, such that \hat{A} = A and \hat{B} = B.

In the following, let T be the affine transformation matrix that defines the mapping from \hat{A} to \hat{B} such that \hat{b}_i = T \hat{a}_i. If T were known, we could apply this transformation to the observed source point cloud A and define the error to be minimized as d_i^{(T)} = b_i − T a_i. Because both a_i and b_i are assumed to be drawn from independent normal distributions, d_i^{(T)}, which is a linear combination of a_i and b_i, is also drawn from a normal distribution:

d_i^{(T)} ∼ N(\hat{b}_i − T \hat{a}_i, \; C_i^B + T C_i^A T^{\top}) \qquad (2.19)
        = N(0, \; C_i^B + T C_i^A T^{\top}) \qquad (2.20)

The optimal transformation matrix T is then the transformation that minimizes the


negative log-likelihood of the observed errors di:

T = \arg\min_T \sum_i −\log p(d_i^{(T)}) = \arg\min_T \sum_i {d_i^{(T)}}^{\top} (C_i^B + T C_i^A T^{\top})^{−1} d_i^{(T)} \qquad (2.21)

Segal et al. [2009] showed that both point-to-point and point-to-plane ICP are specific cases of (2.21), only differing in their choice of covariance matrices C_i^A and C_i^B. If the source point cloud is assumed to be obtained from a known geometric surface, C_i^A = 0. Furthermore, if points in the target point cloud are allowed three degrees of freedom, then C_i^B = I. In this case, (2.21) reduces to:

T = \arg\min_T \sum_i {d_i^{(T)}}^{\top} d_i^{(T)} = \arg\min_T \sum_i \lVert d_i^{(T)} \rVert^2, \qquad (2.22)

which indeed is exactly the optimization problem that is solved by the traditional point-to-point ICP algorithm. Similarly, C_i^A and C_i^B can be chosen such that obtaining the maximum likelihood estimator corresponds to minimizing the point-to-plane or the plane-to-plane distances between both point clouds.

2.1.3 GMapping

Grid-Mapping (GMapping) is a full-SLAM solution that creates a map relative to each pose based on a Rao-Blackwellized particle filter. This particle filter solves the joint posterior shown in equation (2.3) through the following factorization:

p(x_{1:t}, m \mid z_{1:t}, u_{1:t−1}) = p(m \mid x_{1:t}, z_{1:t}) \, p(x_{1:t} \mid z_{1:t}, u_{1:t−1}) \qquad (2.23)

This makes it possible to estimate the location given the robot measurements first, and the map afterwards. By factorizing this joint posterior, the algorithm becomes more efficient, because the map posterior can be computed once the poses and robot measurements are known. In order to compute the location posterior p(x_{1:t} | z_{1:t}, u_{1:t−1}), a Sampling Importance Resampling (SIR) filter is applied. This particle filter represents each particle as a potential trajectory that is sampled according to a proposal distribution π. In most cases, the motion model is used as proposal distribution to sample all the particles. As a result of the particle sampling step, each sample is weighted by an individual importance weight w_t^{(i)} that can be computed from the sensor model. Furthermore, a re-sampling step is necessary in the process of updating the individual weights, so that the estimation of the location becomes more accurate. To overcome the combination of an inaccurate motion model with a very accurate sensor model, the latest measurements are used in the proposal distribution. Grisetti et al. [2007] improved the efficiency of such a particle filter by reducing the number of samples in the re-sampling step. When samples are only drawn around the peak values of the proposal distribution, the location estimation will be more accurate, which results in an accurate grid map.
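The sampling, weighting, and resampling steps can be sketched generically as follows. This is a plain SIR skeleton with placeholder motion and sensor models, not the Rao-Blackwellized GMapping implementation, and the resampling threshold is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, control, measurement, motion_model, sensor_likelihood):
    """One sampling-importance-resampling update for a set of pose particles."""
    # 1. Sample each particle from the motion model (the proposal distribution).
    particles = np.array([motion_model(p, control) for p in particles])
    # 2. Weight each particle by the likelihood of the measurement (sensor model).
    weights = weights * np.array([sensor_likelihood(measurement, p) for p in particles])
    weights /= weights.sum()
    # 3. Resample when the effective sample size becomes too small.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights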


2.1.4 Large-Scale Direct SLAM

LSD SLAM is a Visual-SLAM algorithm that uses a direct method in order to obtain depth information from monocular images. LSD-SLAM supports monocular cameras as well as stereo cameras [Engel et al., 2014, 2015]. One of the advantages of LSD-SLAM is the usage of the full camera image as input data in order to compute the correspondences between two consecutive images. Because the full image is used and thus all pixels are included, feature-less rigid registration based on se(3) can be applied [Engel et al., 2014, Silveira et al., 2008, Umeyama, 1991]. Additionally, loop closures are solved with an image-to-map data association, as explained in Subsection 2.1.1.3.

Figure 2.10 illustrates the LSD-SLAM work-flow in a basic schematic. First, the tracking component estimates the rigid body pose with se(3). Secondly, the current key-frame will be refined or a new key-frame will be created from the most recent tracked image, depending on the estimated transformation between two images. Finally, the depth-map optimization component inserts key-frames into the global map when they are replaced by a new reference frame.

Figure 2.10: A simplified schematic of the LSD SLAM algorithm.

2.1.5 RGB-D SLAM

In contrast to LSD SLAM, RGB-D SLAM is a feature-based Visual-SLAM algorithm. Feature-based Visual-SLAM collects feature observations from the camera image, and then compares these features to those of the previous camera image. Numerous feature detectors can be implemented for this purpose, e.g. Oriented FAST and Rotated BRIEF (ORB), Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) [Rublee et al., 2011, Panchal et al., 2013]. The used RGB-D SLAM method consists of a front-end and a back-end system in order to create a 3D map relative to the traveled path.

The front-end system computes the visual odometry based on a pairwise relation between two consecutive RGB-D images in two steps. First, the front-end system extracts visual features such as SIFT, SURF, or ORB from each RGB image and matches these features in the next images. This results in a list of feature descriptor pairs that match the two images. Next, due to inconsistent and noisy RGB-D images, a second process is necessary in order to filter local mismatches. This second step applies RANdom SAmple Consensus (RANSAC), which filters the local mismatches based on their local correspondence distance [Endres et al., 2012]. This results in a robust SE(3) rigid transformation estimation [Umeyama, 1991].

The back-end system handles the global pose estimation by maintaining a graph using the g2o framework. First, poses are added as nodes to the pose graph. When a pose matches one of the previous poses, it will be connected to the existing


pose graph at the matching node. Otherwise, the new node is connected to the previous node in the pose graph. Second, the global pose graph optimization makes it possible to detect a loop closure when a previous location is visited. When such a loop closure is detected, the accumulated drift will be distributed over all previous graph nodes. This approach makes it possible to create a realistic trajectory of the traveled path. Figure 2.11 provides a simplified overview of the RGB-D SLAM work-flow.

Figure 2.11: A simplified schematic of the RGB-D SLAM algorithm.

2.2 Spatial Data Structures

Applying numerical algorithms to geographical data, for example to solve obstacle avoidance in robotics or to evaluate a deterministic radio propagation loss model, requires the usage of an efficient data structure. This prevents long computation times and makes it possible to scale to multiple dimensions and thus improve the performance of such a computation. One of the most naive and commonly used data structures to represent geographical data is a regular grid map. Such a grid map can be compared to an image that consists of pixels, which can be addressed by either an individual address or a coordinate. This makes it possible to index each cell and to increase the performance of a numerical algorithm. Starting from such a grid map, the performance of a numerical simulation can be further improved by reducing the number of cells that exist in the data structure. Such a data structure is built upon hierarchical levels by applying recursive decomposition. Since there are different ways of decomposing geographical information, such as dividing the space into four or eight equal parts or, alternatively, dividing the space in two based on local features of the geographical data, an environment can be described with a lot of detail or without any detail. In the following subsections the implementation of a quadtree and an octree is explained. As an advantage of using such a hierarchical data structure, an environment can be quantified with a specific spatial resolution by limiting the level of recursion. Additionally, every decomposed node contains geographical information such as a point, line, or plane that addresses a region in space. As a result of the recursive tree decomposition, every decomposed node can be addressed by applying binary indexing or its specific geographic coordinate. In order to search for node information at a specific location, tree search algorithms such as depth-first or breadth-first search can be applied. This tree structure reduces the number of cells so that only the space around the geographical data is modeled. Furthermore, to employ a hierarchical data


structure for a ray-launching radio propagation model, a tree traversal algorithm has to be implemented, which is further explained in chapter 4.

2.2.1 Quadtree

A Quadtree is a hierarchical data structure that can contain 2D spatial information such as a point or a line segment [Zachmann and Langetepe, 2003, Samet, 1984]. The recursive decomposition of a quadtree divides every node into four equal leaves, as can be seen in Figure 2.12.

Figure 2.12: This figure illustrates a Quadtree hierarchical data structure where the gray cells indicate the route toward the two gray occupied leaves.

In this figure, every leaf is indexed according to North-East (NE), North-West (NW), South-West (SW), and South-East (SE). This makes it possible to implement an efficient search algorithm on top of the data structure. Moon et al. [1996] described a survey of different search patterns or space-filling techniques, of which the Hilbert curve achieves the best results. Such a technique tries to find the most efficient pattern for traversing and indexing a spatial data structure. Furthermore, they provided algebraic closed-form solutions to predict the number of clusters for a given query region.

Based on the type of the quadtree, different nuances can be made in the implementation regarding the minimum number of cells or the maximum level of detail. Therefore, different kinds of applications are possible. First, a quadtree that contains points is applied in navigation applications, for example to search for shortest paths. Secondly, a region-based quadtree is used to improve the performance of both lossless and lossy compression techniques in image processing. In this dissertation, two quadtree types are implemented. First, a quadtree that contains points is used to construct line segments, which are then inserted into the second quadtree type, which contains line segments. The main difference between both types is the insertion process of the object. For example, a quadtree that contains points has to check whether a point is inside a square or not. On the other hand, a quadtree that contains line segments has to evaluate whether a line segment is inside or outside a square, or whether the line segment overlaps the square. Figure 2.13 visualizes an example of a line quadtree where every red cell indicates the existence of a line.


Figure 2.13: Example of an environment modeled in a Quadtree data structure. The red cells indicate the occupied cells.
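A minimal point quadtree with recursive decomposition up to a fixed depth might look as follows; the class and parameter names are illustrative, and the line-segment variant discussed above would additionally need an overlap test per cell.

class QuadTree:
    """Point quadtree over the square region (x, y, size), subdivided up to max_depth."""

    def __init__(self, x, y, size, depth=0, max_depth=4):
        self.x, self.y, self.size = x, y, size
        self.depth, self.max_depth = depth, max_depth
        self.children = None          # NE, NW, SW, SE once subdivided
        self.points = []

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and self.y <= py < self.y + self.size):
            return False              # point lies outside this cell
        if self.depth == self.max_depth:
            self.points.append((px, py))
            return True
        if self.children is None:     # recursive decomposition into four equal leaves
            h = self.size / 2.0
            self.children = [QuadTree(cx, cy, h, self.depth + 1, self.max_depth)
                             for cx, cy in ((self.x + h, self.y + h), (self.x, self.y + h),
                                            (self.x, self.y), (self.x + h, self.y))]
        return any(child.insert(px, py) for child in self.children)

tree = QuadTree(0.0, 0.0, 16.0)
tree.insert(3.5, 12.25)               # descends to a leaf at the maximum depth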

2.2.2 Octree

An Octree is a hierarchical data structure like a Quadtree. The main difference between both data structures is that an Octree stores 3D geographical data [Szeliski, 1993]. Furthermore, the recursive decomposition of an Octree divides each node into eight different leaves. Because of the 3D representation, an Octree can be seen as a cube. This means that each recursion will divide a cube into eight sub-cubes, which can be seen in Figure 2.14.

Figure 2.14: A visual representation of the Octree data structure [Hornung et al., 2013]. The black leaf node represents occupied space, whereas grey nodes indicate free space. Unknown space is marked by transparent nodes.

As a result of holding 3D geographical data, an Octree is widely used in computer games to model an environment. On top of this tree-based data structure, obstacle


avoidance and real-time ray-tracing algorithms are developed. On the other hand, an Octree can also be seen as a data structure that makes it possible to describe a volume in space, which enables the possibility of applying a radio propagation simulation.

2.2.3 Binary Addressing

Due to the implementation of a spatial data structure like a Quadtree or Octree, every leaf can be addressed by a coordinate or a specific cell identifier. This leaf can contain line segments or individual points which represent the environment. Within the scope of this research, a binary address is used to identify each leaf [Frisken and Perry, 2002]. This makes it possible to have a consistent addressing scheme that is bound to the spatial region. In order to explain the addressing algorithm in more detail, a simplified Quadtree is visualized in Figure 2.16. Within this Quadtree each gray node on the lowest level (2) represents the occupancy of an object. Because of the spatial dimension limit of two when using a Quadtree, each node can contain four leaves. This means that each leaf can be represented with two bits. The total number of leaves that the Quadtree can hold can thus be derived from the total number of bits, which is computed according to the following formula:

bits_{total} = level_{max} ∗ 2 \qquad (2.24)

When applying this formula to the simplified example that is visualized in Figure 2.16, the total number of bits is 4, which corresponds to 2^4 = 16 possible leaves. In this example level_{max} is 2, because the tree has a maximum level depth of two. When inserting an object into the data structure, each node is divided into 4 leaves. This makes it possible to address each leaf with a local address (00, 01, 10, 11), which refers to the previously stated ordering of a Quadtree node, as illustrated in Figure 2.15. In order to compute the global binary address, the local address needs to be shifted relative to the next level. The number of bits that needs to be shifted can be calculated as follows:

Figure 2.15: Local binary address of a Quadtree Node.

bits_{shifted} = bits_{total} − (2 ∗ (level_{current} + 1)) \qquad (2.25)

Finally, the local binary address is shifted by bits_{shifted} and combined into the address:

address = (address_{local} << bits_{shifted}) + address_{local} \qquad (2.26)

The result of this addressing technique is illustrated in Figure 2.16:


Figure 2.16: Binary addressing example; gray leaves represent the occupancy of objects (level-1 addresses 00, 01, 10, 11; level-2 addresses 0000–0011 and 1000–1011).
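One way to reproduce the addresses of Figure 2.16 is to shift each two-bit local code by the amount given in (2.25) and accumulate the results from the root down to a leaf. The helper below is a sketch of that interpretation, not code from this work.

def global_address(local_codes, max_level):
    """Concatenate 2-bit local quadrant codes (root first) into one binary address.

    local_codes: e.g. [0b10, 0b01] for the path 'third quadrant, then second quadrant'.
    max_level:   maximum depth of the quadtree (level_max in (2.24)).
    """
    bits_total = 2 * max_level                       # equation (2.24)
    address = 0
    for level, code in enumerate(local_codes):
        bits_shifted = bits_total - 2 * (level + 1)  # equation (2.25)
        address += code << bits_shifted
    return address

# Two level-2 leaves of Figure 2.16 under the parent quadrant '10'.
print(format(global_address([0b10, 0b00], 2), '04b'))   # -> 1000
print(format(global_address([0b10, 0b01], 2), '04b'))   # -> 1001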

2.2.4 Octomap

With the Octree or Quadtree data structure, 3D and 2D geographical data can be stored hierarchically. Furthermore, the goal of my research is to model a realistic environment, which results in the usage of a SLAM algorithm so that a real environment can be captured. As we have seen in Section 2.1, the full-SLAM posterior is defined in equation (2.3). This equation describes the probability of being at location x with map m given the sensor measurements z and odometry u. In order to cope with a realistic map, each node in the hierarchical map needs to hold a probability of whether this node is occupied by a measured 3D point or not, given the location of the sensor. This is also referred to as an inverse sensor model. With OctoMap, an indoor or outdoor environment can be created with such an inverse sensor model. Additionally, an OctoMap stores information about occupied space, free space and unknown space, which can further be used for autonomous exploration of an environment.

OctoMap is a mapping framework based on an Octree hierarchical data structure and probabilistic occupancy estimation [Hornung et al., 2013]. When it comes to mapping arbitrary 3D environments, OctoMap has two main advantages over other mapping approaches. First, an OctoMap is highly memory efficient, as it consists of octants which can be divided into eight leaf nodes. Subsequently, these leaf nodes can be seen as new octants, which in their turn can be divided into leaf nodes again. The desired resolution of the 3D model is determined by the depth of the Octree. For example, large adjacent volumes can be represented by a single leaf node to save memory. Second, Figure 2.14 also shows that an octant node n can have different states: free, occupied, or unknown. This state can be derived from the calculated probability by thresholding the probability of a given node, according to equation (2.27).

P(n \mid z_{1:t}) = \left[ 1 + \frac{1 − P(n \mid z_t)}{P(n \mid z_t)} \cdot \frac{1 − P(n \mid z_{1:t−1})}{P(n \mid z_{1:t−1})} \cdot \frac{P(n)}{1 − P(n)} \right]^{−1} \qquad (2.27)

Where the probability of whether a node is occupied or free is determined by the current sensor measurement z_t, the previous estimate P(n | z_{1:t−1}), and a prior probability P(n), which is initialized to 0.5 and further updated when more sensor


measurements are captured. In order to rewrite the equation so that it becomes computationally more efficient, the log-odds notation is applied, which can be seen in the following equation:

L(n) = \log \left[ \frac{P(n)}{1 − P(n)} \right] \qquad (2.28)

Thus, equation (2.27) can be rewritten as

L(n|z1:t) = L(n|z1:t−1) + L(n|zt) (2.29)

With log-odds, probabilities of 0% to 100% are mapped to −∞ and +∞. A main advantage of this notation is that small differences at the outer edges of the range have the strongest influence on the value; e.g. 50.0% and 50.1% are mapped to 0 and 0.0017, while 99.98% and 99.99% are mapped to 3.7 and 4.0. As equation (2.29) uses additions instead of multiplications, the probability of a leaf node can be updated faster than with equation (2.27). Faulty measurements due to noise or reflections are canceled out by the update formula.
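The conversion of (2.28) and the additive update of (2.29) can be sketched as follows; the inverse sensor model values are arbitrary examples, and the clamping thresholds used by the real OctoMap implementation are omitted.

import math

def log_odds(p):
    """Convert a probability to log-odds, equation (2.28)."""
    return math.log(p / (1.0 - p))

def probability(l):
    """Convert log-odds back to a probability (inverse of (2.28))."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Start from the prior P(n) = 0.5 and fuse three hits and one miss of the sensor model.
L = log_odds(0.5)
for p_hit in (0.7, 0.7, 0.7, 0.4):          # illustrative inverse sensor model values
    L += log_odds(p_hit)                    # equation (2.29): a simple addition per measurement
print(probability(L))                       # occupancy estimate after the four updates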

Furthermore, the map can be extended at any time when the robot explores new unknown areas. As the OctoMap holds information about unmapped space, the robot knows which areas it has to avoid for safety reasons, or which areas are yet to be explored.

2.3 Wave Propagation Modeling

To build an understanding of how radio waves propagate through space, I will discuss in this section the mathematical basics that are related to my research. This means that I will focus on the fundamentals of space waves, which operate at frequencies higher than 30 MHz. As can be seen in Figure 2.17, a space wave is assumed to travel in a line-of-sight direction between transmitter Tx and receiver Rx and can be described by large-scale fading [Richards, 2008, Saunders and Zavala, 2007, Balanis, 1982]. Additionally, a space wave undergoes different phenomena, such as reflection, refraction, or diffraction, when it intersects with a surface; these effects are called multipath fading.

Figure 2.17: Illustration of the space wave mechanism, where d is the distance between transmitter Tx and receiver Rx.

Under the assumption that we have an isotropic point source at Tx that radiates in all directions in free space, the effective isotropic radiated power (EIRP),


defined as PtxGtx, is transmitted; the resulting power density p follows the inverse square law:

p = \frac{P_{tx} G_{tx}}{4 \pi d^2} \qquad (2.30)

Since the transmitter radiates the power in a spherical fashion, the received power can be stated as the portion of the full surface that intersects with the receiver at a distance d from the transmitter Tx. This means that the power at the receiver side Prx can be described as:

Prx = pArx (2.31)

where Arx denotes the surface that intersects at the receiver side, called the effective aperture, which is given by:

A_{rx} = \frac{\lambda^2 G_{rx}}{4 \pi} \qquad (2.32)

Here, Grx describes the gain of the antenna, which expresses how much more power the receiver antenna can capture compared to an isotropic antenna. Furthermore, λ denotes the wavelength of the wave at the frequency it operates at. This results in the following equation for the power at the receiver side,

P_{rx} = \frac{P_{tx} G_{tx} A_{rx}}{4 \pi d^2} \qquad (2.33)

Substituting (2.32) for Arx in equation (2.33) results in equation (2.34), which is widely known as the Friis transmission equation.

P_{rx} = \frac{P_{tx} G_{tx} G_{rx} \lambda^2}{(4 \pi d)^2} \qquad (2.34)
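As a quick numeric illustration of (2.34), with arbitrarily chosen example values (2.4 GHz, unity antenna gains, 10 m):

import math

def friis_received_power(p_tx, g_tx, g_rx, freq_hz, distance_m):
    """Received power (W) according to the Friis transmission equation (2.34)."""
    wavelength = 3.0e8 / freq_hz
    return p_tx * g_tx * g_rx * wavelength ** 2 / (4.0 * math.pi * distance_m) ** 2

p_rx = friis_received_power(p_tx=0.1, g_tx=1.0, g_rx=1.0, freq_hz=2.4e9, distance_m=10.0)
print(10.0 * math.log10(p_rx / 1e-3))   # about -40 dBm for these example values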

As long as the needs of the application are covered by the computation of a received power, the Friis equation is satisfactory. However, when the received power alone does not satisfy the requirements, for example when optimizing localization algorithms such as Angle of Arrival (AoA) and Angle of Departure (AoD), a more fundamental knowledge of the underlying electric field is required because the phases have to be included. For this reason, the space wave can also be seen as a combination of an electric field E and a magnetic field H that are orthogonal to each other. Such a wave is also referred to as Transversal Electromagnetic (TEM). Since both field vectors are perpendicular to the direction of propagation, a new quantity can be defined from the product of the magnitudes of both field vectors, which is called the Poynting vector S and can be related to the power density p that was given in Equation (2.30) as:

p = |S| = |E||H| (2.35)

In the case of free space, the electrical field becomes:

|E| = η|H| = 120π|H| (2.36)

where η is the intrinsic impedance of free space. As a result of equations (2.30), (2.35), and (2.36), the reference electric field strength is:

|E|^2 = \frac{\eta P_{tx} G_{tx}}{4 \pi d^2} \qquad (2.37)


which becomes

|E| = \frac{\sqrt{30 P_{tx} G_{tx}}}{d} \qquad (2.38)

Because the electric field strength is not a power, a conversion is required in order to interpret it as a received power. This conversion is given by

P_{rx} = \frac{|E|^2 \lambda^2 G_{rx}}{4 \pi \eta} \qquad (2.39)

As illustrated in Figure 2.17, the total received power at receiver Rx can be defined as the sum of the direct ray and all diffuse, diffracted, and reflected rays. Because of these reflected rays and variations of the signal caused by noise and a cluttered environment, the signal at the receiver differs from the transmitted one. This is called multipath fading, and it occurs in both the frequency and time domains. Due to multipath fading, a signal undergoes an attenuation loss and a phase shift. When two such signals come together at the receiver location, constructive or destructive signal behavior occurs. Constructive behavior means that the sum of both field strengths results in a stronger signal. On the other hand, destructive behavior results in a weaker signal due to opposite phases. An important remark is that this behavior can only be evaluated when field strengths are being calculated. A propagation loss model applies these principles in order to predict the received signal strength or the distance of a wireless link between two devices in a given environment model. Radio wave propagation loss models can be subdivided into two types of algorithms, which are illustrated in Figure 2.18 [Iskander and Yun, 2002]. First, deterministic algo-

(a) Empirical propagation model (b) Deterministic propagation model

Figure 2.18: Difference between an empirical and a ray-based deterministic propagation loss model, where the dotted lines represent the environment model and the full lines the individual rays.

rithms need a full environment model that includes all objects with their specific electrical material parameters, so that the influence of the wave propagation phenomena is included in the resulting signal strength computation [Imai, 2017, Nagy, 2007]. Secondly, statistical or empirical algorithms do not take the different phenomena into account, which makes them less complex and fast to compute, but less suitable for validating indoor localization algorithms because of the large impact of multipath.


2.3.1 Deterministic Propagation Loss Model

These kinds of radio propagation loss models solve the original Maxwell's equations according to an integral or differential formulation. In the literature, two types of models are proposed, which are also called site-specific models:

1. Ray-based algorithms make it possible to trace a set of rays through the modeled environment until a specific location is reached. Every ray that is traced or launched from a transmitter location is able to reflect, refract, or diffract in the environment by using the Geometric Optics (GO) theory and the Uniform Theory of Diffraction (UTD). Ray-based methods make every path from transmitter to receiver visible, which gives researchers a better understanding of the underlying phenomena within the frequency spectrum. On the one hand, the literature describes image-based solutions, which trace all ray paths from a receiver location towards a transmitter location [Sato et al., 2005, Imai, 2017, Tam and Tran, 1995]. On the other hand, a ray-launching solution makes it possible to launch many different rays according to a specific antenna radiation pattern. Since both solutions provide a good accuracy, they are widely applied for both outdoor and indoor coverage simulations [Klepal, 2003].

2. Finite Difference Time Domain (FDTD), Method of Moments (MoM), and Finite Element Method (FEM) models are solutions that solve the Maxwell's equations in the time and frequency domains [Nagy, 2010, Klepal, 2003]. All of these deterministic propagation loss models use a grid-based environment model, which can define a 2D or a 3D environment, to compute the electric field at every cell over time according to the differential or integral method. The different wave propagation phenomena make such an algorithm very complex on the one hand, while on the other hand these algorithms obtain the optimal accuracy and precision for the given environment model relative to the resolution of the obtained environment [Yun and Iskander, 2015].

In the following, ray-based methods are discussed in more detail to provide a fundamental background on existing solutions. Ray-based methods can be divided into two categories: ray-tracing and ray-launching methods [Yun and Iskander, 2015, Subrt and Pechac, 2011]. Ray-tracing methods determine all rays between the receiver and the transmitter by applying the image-based technique. This technique is able to find all reflections, starting from the source (receiver) until the transmitter, in a recursive way by mirroring the source point in the reflecting plane. Second, the mirrored source point can then be connected with the transmitter location in order to compute the reflection intersection point. This principle can further be extended in order to find multiple reflections so that all paths can be found. Since this process is very time consuming, this technique is mostly used for simulating point-to-point connections. Alternatively, ray-launching methods calculate all paths starting from the transmitters. This technique launches n rays in different directions, which can be modeled according to a specific antenna radiation pattern. As an example of such a ray-tracing model, the WINNER model models different channel parameters according to a geometry-based stochastic channel modeling technique [Meinil et al., 2008]. This model uses a generic channel model implementation to compute the channel parameters for indoor, outdoor, and hybrid scenarios. Subsequently, the WINNER model


parametrizes a communication system in different scenarios so that many large-scale and small-scale parameters can be found. The geometrical model of this WINNER model is based on provided blueprints. Ray-launching propagation loss models are more efficient than ray-tracing methods, which makes them suitable for coverage predictions. Three examples are explained below. First, Klepal [2003] designed a semi-deterministic propagation loss model that is called the Motif model. This model defines an environment as a regular grid with m × n cells. In addition to this, each cell that is part of an object or environment segment is called a Motif, which is represented by probabilistic motif parameters. These motif parameters include the probability of absorption of a ray and the probabilistic radiation pattern when a ray intersects the Motif cell. Furthermore, the Motif model implements a Monte Carlo principle that makes it possible to split a ray into many sub-rays when it intersects a Motif element. These sub-rays are launched according to the overall probabilistic radiation pattern. Second, Lai et al. [2011] implemented the indoor Intelligent Ray Launching Algorithm (IRLA) model. This model is based on a discrete ray-launching principle that launches n rays into the environment model, which is represented as a regular grid, according to a specific antenna radiation pattern. Next, three main components are applied to simulate the received signal strength: the Horizontal Reflection Diffraction (HRD), the Line-of-Sight (LoS), and the Vertical Diffraction (VRD). The indoor IRLA model applies a calibration process to optimize the material parameters according to measurements, which enhances the accuracy of the validation. Note that neither of these ray-based propagation loss models includes phase differences, which means that constructive and destructive wave propagation is not modeled. Third, Liu et al. [2012] designed the COST2100 model, which is able to find different stochastic channel parameters based on geometrical properties like the visibility region. This region is described as an ellipsoid region between the base station and the mobile station and is further divided into three cluster types: a local cluster, single clusters, and twin clusters, in order to describe the delay spread as a function of elevation and azimuth angle. Both large-scale and small-scale fading parameters are described in the model in terms of Multiple-Input Multiple-Output (MIMO) channel properties such as AoA, AoD, and single- and multi-path components.

2.3.2 Empirical Propagation Loss Model

As illustrated in Figure 2.18a, empirical propagation loss models do not take any reflection or diffraction phenomena into account, although some of the empirical models that are used for outdoor range characterization include the influence of the ground wave reflection. Furthermore, empirical models are also called statistical models. This means that they are based on measurements that were recorded in a specific environment and a specific frequency range. In general, the total received power in a communication link between a transmitter and a receiver, as illustrated in Figure 2.17, is described by the Link Budget (LB) and follows the basic equation (2.40)

Prx = Ptx +Gtx +Grx − PL+X (2.40)

where Prx and Ptx are the receiver and transmitter power expressed in dBm. Furthermore, Grx and Gtx are the antenna gains expressed in dB and PL denotes the path loss, e.g. of the direct ray, expressed in dB. Additionally, a factor X is added to the link budget, which indicates a log-normal variation that is Gaussian distributed. Within this section, a list of path loss models that can be applied for simulating outdoor


propagation loss in urban environments is explained, based on COST Action 231 [1999] and ITU [2015].

Free Space path loss

The most naive and basic model expresses the free-space path loss PL of a wave propagating in free space, which grows with the square of the travelled distance:

PL = 20 \log_{10} \left( \frac{4 \pi d}{\lambda} \right) \qquad (2.41)

where d is the distance between the transmitter and the receiver and λ is the wavelength corresponding to the frequency that the system is using.

Two-ray path loss

This model includes on the one hand the LoS signal and on the other hand a non-LoS signal. This non-LoS signal enables the inclusion of the ground reflection. In order to compute this path loss model, the heights of the transmitter antenna htx and the receiver antenna hrx have to be known to calculate the path loss PL as a function of the distance d. In its simplest form, this path loss model is described by the following equation.

PL = 20 \log_{10} \left( \frac{d^2}{h_{tx} h_{rx}} \right) \qquad (2.42)

Here, htx and hrx denote the antenna heights of the transmitter antenna and the receiver antenna respectively. Both heights can also be found in Figure 2.17.

COST-231 Hata

The COST-231 Hata model is an outdoor path loss model that is used in urban and suburban environments. It has some restrictions that limit the heights and the frequency range of the used devices: the height of transmitting devices must lie between 30 m and 200 m and that of receiving devices between 1 m and 10 m. The frequency range of both devices should be below 1 GHz [COST Action 231, 1999]. Furthermore, depending on the environment a correction factor α(hrx) has to be incorporated.

\alpha(h_{rx}) = (1.1 \log_{10}(f) − 0.7) h_{rx} − (1.56 \log_{10}(f) − 0.8) \qquad (2.43)

In this formula, f denotes the frequency that is used, expressed in MHz. This correction factor is combined with the following equation to compute the path loss PL.

PL = 69.55 + 26.16 \log_{10}(f) − 13.82 \log_{10}(h_{tx}) − \alpha(h_{rx}) + (44.9 − 6.55 \log_{10}(h_{tx})) \log_{10}(d) \qquad (2.44)

COST-231 Walfisch-Ikegami

This model adds the average rooftop heights of nearby buildings and the antenna heights to compute the loss. It is mostly used in urban environments [COST Action 231, 1999]. Additionally, the model distinguishes two cases: a LoS and a non-LoS scenario. The LoS scenario is defined as follows:

PL = \begin{cases} 42.64 + 26 \log_{10}(d) + 20 \log_{10}(f) & d > 20\,\text{m} \\ 20 \log_{10}\!\left(\frac{4 \pi d}{\lambda}\right) & d \le 20\,\text{m} \end{cases} \qquad (2.45)

32

2.3. Wave Propagation Modeling

Where λ defines the wavelength corresponding to the frequency f. As a side note to this equation, for distances up to 20 m the path loss PL reduces to the free space path loss model. Furthermore, the distance d describes the Euclidean distance between both devices. The non-LoS scenario is defined as:

PL = \begin{cases} PL_0 + PL_{rts} + PL_{msd} & PL_{rts} + PL_{msd} > 0 \\ PL_0 & PL_{rts} + PL_{msd} \le 0 \end{cases} \qquad (2.46)

Where PL_0 is described by the free space path loss model, PL_{rts} is the diffraction and scatter loss between the rooftop of a building and the street where the mobile station is located, and PL_{msd} is the multiple screen diffraction loss. Further information about the computation of these extra losses can be found in [COST Action 231, 1999].

3GPP

The 3GPP SCM standard describes multiple path loss models that can be used in different deployment scenarios. Within the context of this section two outdoor models are explained, which cover macro and pico cell deployments. The macro cell deployment path loss model assumes an antenna height of 15 m above rooftop level and can be computed according to the following formula:

PL = 8 + 36.7 log10(d) (2.47)

The pico cell deployment path loss model proposed by the 3GPP standard assumes an antenna height at rooftop level and uses the following equation:

PL = 23.3 + 36.7 \log_{10}(d) + 21 \log_{10} \left( \frac{f}{900} \right) \qquad (2.48)

Log normal

This model includes a Gaussian fading component that models large-scale and fast fading for urban environments. Such fading can appear due to multiple reflections on objects or buildings.

PL = PL_{d_0} + 10 \eta \log_{10} \left( \frac{d}{d_0} \right) + X_g \qquad (2.49)

Where η describes the path loss exponent, which is related to the environment, and X_g denotes a zero-mean Gaussian random variable that can be used to describe small-scale fading effects or multipath.
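A sketch of sampling (2.49); the reference path loss, path loss exponent, and shadowing standard deviation are arbitrary example values.

import math
import random

def log_normal_path_loss(d, d0=1.0, pl_d0=40.0, eta=3.0, sigma=4.0):
    """Path loss in dB at distance d, equation (2.49), with Gaussian shadowing X_g."""
    x_g = random.gauss(0.0, sigma)                 # zero-mean Gaussian component
    return pl_d0 + 10.0 * eta * math.log10(d / d0) + x_g

random.seed(1)
print([round(log_normal_path_loss(20.0), 1) for _ in range(3)])   # three shadowed samples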

ITU-R street canyon

This model is characterized by two slopes, defined by two individual models, and a break point. This break point depends on the used wavelength and the antenna heights and is defined as:

R_{bp} = \frac{4 h_{tx} h_{rx}}{\lambda} \qquad (2.50)

The first slope is defined by the free space path loss model for distances smaller than the break point R_{bp}.

L_{bp} = \left| 20 \log_{10} \left( \frac{\lambda^2}{8 \pi h_{tx} h_{rx}} \right) \right| \qquad (2.51)


Beyond the break point, the LoS path loss model is used with a different path loss exponent, which represents the worst-case path loss [ITU, 2015].

PL = L_{bp} + 6 + \begin{cases} 20 \log_{10}\!\left(\frac{d}{R_{bp}}\right) & d \le R_{bp} \\ 40 \log_{10}\!\left(\frac{d}{R_{bp}}\right) & d > R_{bp} \end{cases} \qquad (2.52)

The first propagation model that is selected is the most naive and basic model, which computes the line-of-sight (LoS) path loss. The second model, the two-ray path loss model, includes on the one hand the LoS signal and on the other hand the non-LoS signal. This non-LoS signal enables the inclusion of the ground reflection and is based on the calculation of the Fresnel reflection coefficients at the reflection intersection with the soil. In order to use this propagation model, the heights of the transmitter and the receiver have to be known in order to calculate the reflection intersection. The third model, the COST-231 Okumura-Hata model, is considered an outdoor propagation model that is used in urban and suburban environments. This model has some restrictions that limit the heights of the devices and the frequency range up to 1 GHz. Next, the COST-231 Walfisch-Ikegami model adds the average rooftop heights of the nearby buildings and the different antenna heights to compute the path loss. This model is mostly used in urban environments. The fifth model describes the macro cell deployment model and the pico cell deployment model of the 3GPP standard. Next, the log-normal model has been applied in urban environments due to its Gaussian fading component that implements the fast fading. This fading appears from several reflections on objects or buildings. Finally, the ITU-R street canyon propagation model is included in the propagation loss model list. This model describes, in addition to the slow fading, the fast fading as well by defining a break point distance. Due to this break point, a second model that predicts the signal loss can be added, so the worst-case propagation loss is calculated with this model [Sarkar et al., 2003].

2.4 Conclusion

To conclude this chapter, three topics related to the research goal of applying a deterministic radio propagation loss model to an environment model of a real environment have been explained and discussed:

1. The concept of SLAM is introduced, which solves robot localization and mapping in a simultaneous fashion. Since the result strongly depends on the quality and the number of sensor measurements, a realistic environment can be modeled, represented in either 2D or 3D. In Section 2.1, a general explanation of the problem is given and three specific SLAM solutions are further explained.

2. The state of the art of spatial data structures such as a Quadtree and an Octree is explained and discussed. Such a data structure enables an algorithm to traverse the tree in a fast and recursive way in order to find a specific node, which can indicate a location. In addition to the survey of the Quadtree and the Octree, a binary addressing technique is explained, which is used to address each node.

3. The literature of wave propagation is introduced by describing the difference between a power density and an electrical field with respect to specific applications. Furthermore, the distinction is made between deterministic and empirical propagation loss models, which are further subdivided into subclasses. First, deterministic propagation loss models are classified in two groups: one, ray-based methods that launch many rays in different angles, which reflect, refract, and diffract in the environment; two, FDTD and MoM models that solve Maxwell's equations in either the frequency or the time domain. Secondly, a group of widely used empirical propagation loss models that can be applied in urban and suburban environments is explained and discussed.

Based on the literature overview, different research questions can be found in order to optimize and combine each topic. This results in the following research possibilities:

• Integration of a realistic environment model to perform radio propagation simulations.

• A segmentation and classification solution to indicate individual walls, furniture, and other objects.

• Implementation of an efficient and parallel ray launching propagation model.

• Implementation of a generic 2D and 3D propagation loss model.

• An automated solution to validate a radio propagation model.

• A generic model that enables the simulation and synthesis of localization applications.


Chapter 3

Realistic Environment modeling

This chapter describes our contributions regarding the creation of a realistic environment. To this end, the survey study about 3D registration algorithms presented in chapter 2 is analyzed and synthesized [Bellekens et al., 2014]. The study included in this chapter enables the validation of the accuracy and precision of 3D registration algorithms on a wide range of realistic point clouds that were captured with a robot. Moreover, such a registration algorithm makes it possible to apply a SLAM system. This technique handles the creation of a map that can be improved by incorporating a previous location estimation in an iterative way, without any knowledge of the environment. This kind of algorithm minimizes on the one hand the geometry transformations due to misalignments, while on the other hand it uses a probabilistic way of combining the wheel odometry with sensor measurements. A drawback of the SLAM algorithm is that ceiling, floor, and walls are not fully captured and thus the environment is not complete. In order to solve this, I introduced MapFuse together with a student, which makes it possible to combine an initial CAD model of the room with the output of a SLAM algorithm [Aernouts et al., 2017].

3.1 Introduction

With the advent of inexpensive depth sensing devices, robotics, computer vision and ambient application technology research has shifted from 2D imaging and LIDAR scanning towards real-time reconstruction of an environment based on 3D point cloud data. On the one hand, there are structured light based sensors such as the Microsoft Kinect v1 and Asus Xtion sensor that generate a structured point cloud, sampled on a regular grid. On the other hand, there are many time-of-flight based sensors such as the Softkinetic Depthsense camera that yield an unstructured point cloud. These point clouds can either be used directly to detect and recognize objects in the environment where ambient technology has been used, or can be integrated over time to completely reconstruct a 3D map of the camera's surroundings [Rusu, 2010, Newcombe et al., 2011, Kerl et al., 2013]. However, in the latter case, point clouds obtained at different time instances need to be aligned, a process that is often referred to as registration. Registration algorithms are able to estimate the motion of a robot by calculating the transformation that optimally maps two point clouds, each of which is subject to camera noise.

As stated previously, registration algorithms can be classified coarsely into rigid and non-rigid approaches. Rigid approaches assume a fixed rigid environment such that a homogeneous transformation can be modeled using only 6 Degrees of Freedom (DoF). On the other hand, non-rigid methods are able to cope with articulated objects or soft bodies that change shape over time. Additionally, registration algorithms can be classified into coarse and fine approaches. Coarse registration approaches compute an initial geometric alignment whereas fine registration approaches compute a transformation that can register two point clouds precisely. A combination of coarse and fine registration algorithms is often used in applications to reduce the number of iterations.

Registration algorithms are used in different fields and applications, such as 3D object scanning, 3D mapping, 3D localization and ego-motion estimation or human body detection. Most of these state-of-the-art applications employ either a simple SVD [Marden and Guivant, 2012] or PCA based registration, or use a more advanced iterative scheme based on the Iterative Closest Point (ICP) algorithm [Besl and McKay, 1992]. Recently, many variants on the original ICP approach have been proposed, the most important of which are non-linear ICP [Fantoni et al., 2012] and generalized ICP [Segal et al., 2009]. These are explained and discussed in this chapter.
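As a reference for the closed-form methods mentioned above, the following sketch shows an SVD-based (Kabsch) rigid alignment for two point sets with known one-to-one correspondences, using NumPy. It is a simplified illustration, not the PCL implementation that is benchmarked later in this chapter.

```python
import numpy as np

def svd_rigid_align(source, target):
    """Closed-form rigid alignment (Kabsch/SVD) for point sets with known
    one-to-one correspondences. Returns a 4x4 homogeneous transform that
    maps `source` onto `target`. Both inputs are (N, 3) arrays."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Example: recover a known rotation about z and a translation
rng = np.random.default_rng(0)
src = rng.random((100, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
print(svd_rigid_align(src, dst).round(3))
```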

To my knowledge, a survey study of each of the above methods applied in a real world scenario, where environment data is acquired with a 3D sensor, is not available in the literature. Salvi et al. [2007] presented a survey article, which gives an overall view of coarse and fine registration methods that are able to register range based images [Salvi et al., 2007]. However, their performance comparison was based on synthetic data and real data that was recorded by a laser scanner.

The choice of an algorithm generally depends on several important characteristics such as accuracy, computational complexity, and convergence rate, each of which depends on the application of interest. Moreover, the characteristics of most registration algorithms heavily depend on the data used, and thus on the environment itself. As a result, it is difficult to compare these algorithms data independently. Therefore, in this chapter we first discuss the mathematical foundations that are common to the most widely used 3D registration algorithms, and secondly we compare their robustness and precision in a real world situation.

As a result of this study, and in view of using these registration algorithms for creating a realistic environment model that can be used for realistic radio propagation modeling, more research is done towards 3D-mapping applications [Järveläinen et al., 2016]. Within the research of Järveläinen et al. [2016] a 3D point cloud is captured in order to use it for radio propagation loss modeling. However, this point cloud data was captured by a laser without a moving robot, which restricts the capture to only one room. Because of noisy depth image measurements and accumulated registration errors, the result of a 3D SLAM does not cover the entire environment where ceilings, floor, walls, and objects are included.

As a second contribution, this chapter includes the MapFuse system to create a complete model of an environment of which we have foreknowledge. In order to improve the completeness of our result, we used a probabilistic, volumetric mapping approach called OctoMap [Hornung et al., 2013]. In contrast to using point clouds, OctoMap allows us to render a model which contains information about occupied spaces, free spaces, and unknown spaces. Also, we benefit from the probabilistic nature of an OctoMap, as it allows us to update an initial guess model of the environment with real sensor measurements.

Current research about completing an environment model based on the surface of a captured point cloud has rather been limited [Breckon and Fisher, 2005, Turner and Zakhor, 2015]. Breckon and Fisher [2005] presented a method to complete partially observed objects by deriving plausible data from known portions of the object. However, this method is time-consuming and involves complex calculations. Furthermore, the completions are not an accurate reconstruction of reality, as they are only meant to be visually acceptable for the viewer [Breckon and Fisher, 2005]. In [Turner and Zakhor, 2015], laser range data is used to create a 2D floor plan, which can be extruded to a 2.5D model. By aligning this simplified model with a complex octree of the environment, the researchers were able to build a final model which includes previously hidden surfaces. However, automatic generation of a 2.5D model is challenging when a flawed dataset is used as input. Also, this approach assumes floors and ceilings to be horizontal and to have fixed heights. Moreover, model merging is not done in a probabilistic fashion.

The main contributions of MapFuse regarding the state-of-the-art are two different approaches for merging an initial environment model with real measurements. First, the initial model can be merged iteratively with the final SLAM result. With this technique, the accuracy can be regulated by changing the amount and sequence of both models. Second, an online merging process, which updates the initial model while SLAM is processing, can be applied. Both methods use OctoMap to probabilistically build a complete volumetric model. In order to create and evaluate MapFuse, we have compared three different camera types in a simulated environment: a basic monocular camera, a wide field-of-view camera and a depth-sense camera. With these cameras, datasets were recorded to be used as input for a visual SLAM algorithm such as Large-Scale Direct SLAM (LSD-SLAM) or feature-based RGB-D SLAM. Afterwards, the simulated SLAM results were validated in four real environments.

This chapter is outlined as follows: Section 3.2 briefly discusses several important application domains of 3D registration algorithms. Section 2.1.2 explains the most important rigid registration algorithms, which are PCA, SVD, ICP point-to-point, ICP point-to-surface, ICP non-linear and Generalized ICP. Section 3.3 provides a discussion of the precision and the robustness of each of these methods in a real world setting. Furthermore, section 3.4 describes the three main blocks of which the MapFuse system consists. In section 3.5, the results of our approach are discussed. Next, section 3.6 provides a discussion of this chapter. Finally, section 3.7 concludes this chapter.

3.2 Application Domains

This section describes commonly used application domains of both rigid and non-rigid registration methodologies, such as robotics and healthcare. In these application domains, the common goal is to determine the position or pose of an object with respect to a given viewpoint. Whereas rigid transformations are defined by 6 Degrees of Freedom (DoF), non-rigid transformations allow a higher number of DoF in order to cope with non-linear or partial stretching or shrinking of the object [Rueckert et al., 1999]. The following subsections give an overview of the robotic applications and healthcare applications where 3D rigid registration methods are being applied.


3.2.1 Robotics

Since the introduction of inexpensive depth sensors such as the Microsoft Kinect camera, great progress has been made in the robotic domain towards Simultaneous Localization And Mapping (SLAM) [Endres et al., 2012, Aulinas et al., 2008, Berger et al., 2013]. The reconstructed 3D occupancy grid map is represented by a set of point clouds, which are aligned by means of registration and can be used for techniques such as obstacle avoidance, map exploration and autonomous vehicle control [Kerl et al., 2013, Sprickerhof and Nuchter, 2009, Huang and Bachrach, 2011]. Furthermore, depth information is often combined with a traditional RGB camera [Newcombe et al., 2011, Ruhnke et al., 2013] in order to greatly facilitate real-world problems such as object detection in cluttered scenes, object tracking and object recognition [Savarese, 2007]. The main goal in robotic applications is to develop a robust, precise and accurate algorithm that can execute almost in real time. In order to reach this goal, much research nowadays focuses on graphical processing unit (GPU) and multicore processing, which enables the execution of many computation tasks during one time slot on multiple processing cores [Shams et al., 2010, Lee et al., 2013].

3.2.2 Healthcare

Typical applications of non-rigid registration algorithms can be found in healthcare, where a soft-body model often needs to be aligned accurately with a set of 3D measurements. Applications include cancer-tissue detection, hole detection, artefact recognition, etc. [Rueckert et al., 1999, Crum, 2004]. Similarly, non-rigid transformations are used to obtain a multi-modal representation of a scene, by combining magnetic resonance imaging (MRI), computer tomography (CT), and positron emission tomography (PET) volumes into a single 3D model [Rueckert et al., 1999].

3.3 Benchmark Survey Results

In this section, we illustrate the performance of the different registration methods that are based on an iterative approach. To do so, we tested the precision and the robustness of the different methods. The robustness factor of an algorithm explains how well an algorithm performs over a period of time on different input parameters, whereas the precision factor clarifies how well an algorithm performs on the same input parameters. The results for precision and robustness are based on a set of 3D point clouds that are included in a dataset. All results are generated using the Robot Operating System (ROS) and the Point Cloud Library (PCL) [Rusu and Cousins, 2011]. Furthermore, the execution times of the different methods are measured on an Asus Zenbook UX32VD, with a Core i7-3517U in combination with 10 GB of RAM memory.

3.3.1 Dataset

The dataset that we used to benchmark the performance was built with a Pioneer-3dx robot and consists of laser scans, odometry data and 3D point clouds. The Pioneer-3dx robot is a commonly used robot for academic and research purposes. See Figure 3.1 for the robot used to build this dataset. To ensure that all sensor measurements have a timestamp and a transformation with respect to the center of the robot, we have used ROS.

Figure 3.1: The mobile Pioneer-3dx robot with a mounted Microsoft Kinect camera, laser scanner and sonar sensor.

On the one hand, the Robot Operating System (ROS) is used as a tool to record all sensor measurements including the timestamps, while on the other hand, ROS is used as a platform to schedule the different 3D point clouds based on their timestamps. To reduce the size of the dataset, we decreased the number of point clouds per second. Figure 3.2 visualizes the dataset by means of an occupancy grid map and a traveled path. In this figure, each pixel has a probability that indicates whether it exists in reality or not. The white pixels have probability zero and are thus empty. Furthermore, the black pixels are labeled as occupied and have a probability that is larger than 90 percent. The gray pixels have a probability of 50 percent since these areas are yet undiscovered.
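A small sketch of how such a grid can be interpreted is given below; the thresholds mirror the colour coding described above (white near 0, black above 0.9, gray around 0.5) but are illustrative and not taken from the SLAM implementation.

```python
import numpy as np

def classify_occupancy(prob_grid, occupied_thresh=0.9, free_thresh=0.1):
    """Label each cell of a probabilistic occupancy grid as free, occupied or
    unknown, mirroring the colour coding of Figure 3.2 (thresholds are
    illustrative assumptions)."""
    labels = np.full(prob_grid.shape, "unknown", dtype=object)
    labels[prob_grid <= free_thresh] = "free"           # white pixels
    labels[prob_grid >= occupied_thresh] = "occupied"   # black pixels
    return labels

print(classify_occupancy(np.array([[0.0, 0.5], [0.95, 0.3]])))
```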

Figure 3.2: Occupancy grid map from SLAM approach and the smoothed traveled path


The occupancy grid map is the result of a Rao-Blackwellized particle filter SLAM algorithm with a Bayesian probability distribution [Grisetti et al., 2007]. The implementation that we used utilizes the laser range scanner and odometry data to generate an occupancy grid map. However, the location updates performed by the algorithm are not used to recalculate the traveled path, resulting in a periodically erratic trajectory. To obtain a smooth trajectory, we used the occupancy grid map calculated by the SLAM algorithm to perform adaptive Monte Carlo localization [Fox et al., 1999]. Because I knew the initial position of the robot, the algorithm did not have to perform global localization, but simply had to track the robot during the complete run. This ensures that location corrections are applied incrementally, resulting in a smooth trajectory. Thus, after the SLAM method has calculated an occupancy grid map, the trajectory was calculated by an adaptive Monte Carlo localization approach.

3.3.2 Robustness

To measure the robustness of the rigid 3D point cloud registration algorithms, I applied them at various times on different corresponding point clouds and recorded their error and computation time. Next, to analyze which algorithm performs best in a real world scenario, the robustness of the different rigid registration algorithms is analyzed by averaging the results over all data points. The scenario was focused on mapping an indoor environment to generate a 3D model in which all spatial objects are visible and correctly aligned. Thanks to the timestamps and playback mechanisms in ROS, an algorithm is able to iterate over every point cloud that is recorded. Figure 3.3 shows a one dimensional axis with vertical markers. Each of these markers represents a 3D point cloud, which was taken at a certain time with respect to the start pose or the beginning of the dataset.

Figure 3.3: The benchmark robustness scheme includes a set of two 3D point clouds. Each set contains a source point cloud S_i, a target point cloud t_i, and a transformation. Every point cloud is indicated as an individual marker on the time line t.

For each set of two point clouds, the fitness score, i.e. the averaged and normalized error after registration and alignment between the two point clouds, and the computation time of each algorithm are computed to measure the robustness of the different algorithms. Here, the error is the average distance between all corresponding points of both point clouds. In this case, there were 165 sets of point cloud pairs, or 330 single point clouds.
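The fitness score described above can be approximated with a nearest-neighbour query, as in the sketch below. It follows the definition used in this chapter (average point-to-point distance after alignment) and uses SciPy's k-d tree; it is a simplified illustration, not the exact PCL getFitnessScore implementation used in the benchmark.

```python
import numpy as np
from scipy.spatial import cKDTree

def fitness_score(source, target):
    """Average distance between each point of the aligned source cloud and its
    nearest neighbour in the target cloud (both (N, 3) arrays)."""
    tree = cKDTree(target)
    dists, _ = tree.query(source)       # nearest-neighbour distances
    return dists.mean()

# Example with two random clouds, the second slightly offset
rng = np.random.default_rng(1)
a = rng.random((500, 3))
print(fitness_score(a, a + 0.01))       # small offset -> small score
```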


Figure 3.4 compares the number of iterations to the averaged fitness score after geometric alignment for each iterative registration process. The result of this correlation can be seen on the green curves, which all converge towards a minimum at 40 iterations. Within this dataset, the average of the ICP point-to-point algorithm reaches the lowest minimum in comparison to the other ICP variants. As already stated in the introduction, an ICP approach is often used after a coarse registration, which can lead to a lower minimum. Figure 3.4 shows that the lowest error value at 40 iterations occurs for SVD ICP. This means that a coarse SVD registration has been applied onto the point cloud pair, after which an ICP point-to-point is applied. Secondly, the figure shows the computation time for each algorithm at a specific number of iterations. GICP has the worst computation time while ICP point-to-point has the fastest computation time. The reason why ICP point-to-surface is slower than ICP point-to-point is mainly the surface normal vector computation. This normal vector computation time could be decreased by lowering the number of nearest neighbor points that are included in the surface estimate. However, this changes the behavior, so it will gradually perform more like an ICP point-to-point approach.

Figure 3.4: Comparison between the number of registration iterations and the computation time (logarithmic, in red) and between the number of iterations and the fitness score (in green), averaged for ICP point-to-point (ICP), SVD applied before ICP (SVD ICP), ICP point-to-surface (ICP pts), ICP non-linear (ICP nl) and Generalized ICP (GICP).

The previous paragraph stated the robustness as the average fitness score after alignment, while this paragraph defines the robustness as the sum of the average and the distance of one variance. Thus, the robustness factor is not only the average of each registration method, measured on a set of different 3D point clouds, but it also depends on the variance of the averaged fitness score, or how far the fitness score changes over time. These results are visualized in Figure 3.5. In this graph, the number of iterations is shown on the x-axis and the sum of the average with the distance of one standard deviation on the y-axis. The robustness of the ICP point-to-surface method is very good due to its constant behavior during the entire dataset. This behavior is normal because the number of new surfaces will not decrease over time whilst two point clouds are being registered. In contrast to the previous method, the robustness of the other ICP approaches ranges from worst in the beginning to better at the end, due to the many changes in correspondences while registering two point clouds. When applying a coarse registration before an ICP approach, the robustness at convergence is much better than when using all other stated methods.

Figure 3.5: The horizontal axis represents the number of registration iterations and the vertical axis represents the sum of the average and the variance for ICP point-to-point (ICP), SVD applied before ICP (SVD ICP), ICP point-to-surface (ICP pts), ICP non-linear (ICP nl) and Generalized ICP (GICP).

3.3.3 Precision

The robustness illustrated the behavior of the results of the different registration algorithms over a certain period of time. In order to analyze the precision of the different registration algorithms, the rotation and translation part of the transformation matrix after alignment are discussed separately. The precision of the different algorithms illustrates how well they perform on the same two point clouds but with different correspondences. Figure 3.6 shows the flow that is used to compute the precision of the stated registration algorithms. Depending on the number of precision iterations, more or fewer subsamples are computed. Each subsample of the source point cloud is registered with the target point cloud, which results in a series of alignment transformations that are compliant with the lowest fitness score at 40 iterations. Furthermore, the list of transformations is divided into a list of rotation matrices and a list of translation matrices. In order to compare the different rotations independently, the 3 × 3 rotation matrices had to be converted into Euler angles. This means that each rotation around the x, y, and z axes can be represented by yaw, pitch and roll. Each subsample of the source point cloud has fewer points than the initial source point cloud and is created from a random subset of the correspondences of the source point cloud.
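For reference, the conversion from a 3 × 3 rotation matrix to yaw, pitch and roll can be sketched as follows; this assumes the common Z-Y-X Euler convention and ignores the gimbal-lock case, so it is an illustration rather than the exact conversion used in the benchmark code.

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (roll, pitch, yaw) in radians,
    assuming the Z-Y-X convention; the gimbal-lock case is ignored."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return roll, pitch, yaw

# Example: a pure 45-degree yaw rotation
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R_yaw = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
print(np.rad2deg(rotation_to_euler(R_yaw)))   # approximately (0, 0, 45)
```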

Figure 3.6: The benchmark precision scheme

To illustrate the precision of the translation part, we computed the average translation in the x, y, and z direction. Since the points in the source point cloud are randomly selected on each precision iteration, different correspondences between the source and target point cloud are observed, leading to a slightly different transformation. The standard deviation of this translation, calculated for each x, y and z element in the transformation matrix, gives us the precision and is shown in the following three figures, 3.7, 3.8, and 3.9.

The standard deviation of the x-translation can be observed in Figure 3.7. As can be seen, the variance of PCA is zero. This is because of the different steps Principal Component Analysis (PCA) undergoes to achieve an affine transformation: the variance of the centroid position of the source point cloud will not change a lot if a few points are missing. ICP point-to-surface has a lower variance in x-translation than ICP point-to-point due to the surface normal estimation. The advantage of the surface estimation makes the ICP point-to-surface approach more precise due to low changes of surfaces. In comparison to the results of the robustness, the variance of applying an SVD approach before an ICP point-to-point method is worse than without a coarse registration approach. Solving the problem by a non-linear cost function, such as a Huber loss function, results in the worst precision. These benchmark results are only applicable to indoor environmental data that is retrieved with a Microsoft Kinect camera.

When observing the variance of the translation in the y direction, a remarkable result for the non-linear approach can be seen in Figure 3.8. These results are much worse than in the x direction. This could be the result of setting the number of ICP iterations too low. For the non-linear approach it is important to choose this number of iterations correctly because of the different minimization cost function. In order to ensure a fair comparison between the different algorithms, we set the number of ICP iterations fixed to 40. As can be seen in Figure 3.5, each algorithm has reached a global minimum at 40 iterations. The other approaches have a similar result for the y direction as for the x direction.


Figure 3.7: The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the x direction.


Figure 3.8: The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the y direction.


The precision benchmark for the z direction gives better results than for the x and y direction. We expected that the z direction, which, unlike the x and y directions, represents the depth measurement, would give a worse result due to noisy point clouds. The variance in the z direction also confirms that the precision of PCA is zero in all directions. This is because PCA translates the centered source point cloud against the centroid of the target point cloud and, secondly, because PCA does not optimize the result. Thus, we can conclude that ICP point-to-surface has the best precision for the translation part.

Figure 3.9: The x-axis represents the different methods and the y-axis represents the variance of the precision test in the z direction.

The following figures show the results of the precision for the rotational part of the transformation. The 3 × 3 rotation matrix has been converted to Euler angles, in which each rotation is represented independently by yaw, pitch and roll. First, Figure 3.10 gives more insight into the variance of the different registration methods for the yaw rotation. The figure shows a remarkable difference for the PCA approach. This is because PCA observes the whole point cloud through the correlation between the different points by using the covariance matrix, while the ICP and SVD approaches look for point correspondences. The variance in the yaw direction is large due to the different subsamples, which create point clouds where the density can change a lot in the direction of the smallest eigenvalue. This means that the probability of changing the direction of the largest eigenvector is large and thus the yaw rotation has a lower precision than the correspondence based approaches.

Figure 3.11 illustrates the precision of the transformation matrices after aligning with the different registration methods in the pitch direction. The PCA method performs more precisely in the pitch direction than in the yaw direction. Secondly, the ICP point-to-surface approach gives the best results due to the normal vector extension, which is a good parameter that does not change a lot in the different subsamples of the source point cloud.


Figure 3.10: The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the yaw direction.

Additionally, the variance of the method where the ICP approach is applied after an SVD is worse than that of the ICP point-to-point and the ICP point-to-surface methods.

Figure 3.11: The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the pitch direction.

The variances of the roll rotations are visualized in Figure 3.12. The algorithm that performs best is the ICP point-to-surface approach.


Additionally, GICP performs better than the ICP point-to-point method, although the difference between these two algorithms is negligible.

Figure 3.12: The horizontal axis represents the different methods and the vertical axis represents the variance of the precision test in the roll direction.

The different visualizations show that the ICP point-to-surface method is the most precise registration method with respect to rotation, followed by GICP, which has the best precision in the yaw direction and the third best in pitch. Because the yaw direction is more valuable than the pitch, GICP is the second most precise algorithm based on the rotational part of the transformation. The reason why yaw is more valuable than pitch is specific to this case, where we want the most precise algorithm for a mobile robot SLAM application in which the yaw rotation can change a lot in comparison to the pitch rotation. The ICP point-to-point algorithm is the third most precise algorithm, again based on the rotational part of the transformation.


3.4 MapFuse System

In order to build a complete model, we implemented a system that consists of three steps. Figure 3.13 displays a basic schematic that represents the MapFuse work-flow. Firstly, visual information from the camera is recorded into a dataset, and an initial guess model (IGM) is modeled. Secondly, the dataset that was gathered in the first step is used as input for a SLAM algorithm. Finally, OctoMap is used to merge the SLAM point cloud with the initial guess.


Figure 3.13: With MapFuse, a dataset that is recorded in a simulated or real environment is used as input for a SLAM algorithm. In the map optimisation component, the resulting SLAM point cloud is merged with an initial model which was modeled based on exact dimensions of the environment. The final MapFuse result is a complete volumetric model of the environment.

3.4.1 Dataset

Since we have a priori knowledge of the environment, an IGM can be created. In order to do so, we resort to OpenSCAD. This 3D modeling software allows us to build a model that matches the exact dimensions of the real environment. The IGM can be created in two ways: first, by measuring the dimensions, a very simplistic model can be built; second, a 2D environment map can be used to create a 3D environment which is more realistic than the very simple model. The most important goal of such a model is to provide an incomplete SLAM map with complementary information about the environment. The amount of detail that has to be included in the initial guess mostly depends on the quality of the dataset. Since RGB-D SLAM uses visual feature descriptors, the dataset needs a lot of visual features.

Figure 3.14 displays two different initial guess models. In figure 3.14a, a bounding box of an indoor environment with doors and windows is shown. For visualization purposes, ceilings were not included in this model. In figure 3.14b, we created a simplified model of an industrial train cart.

An additional benefit is that the model can be imported in a simulator, which brings numerous advantages. Above all, simulation is time-saving and reduces the risk of crashing a robot or drone. It has allowed us to experiment with multiple camera types and algorithms in order to design an optimal work flow. Therefore, our system was assessed by employing Gazebo, a simulation software package. This software allows us to spawn a drone or robot as well as our initial guess model.



Figure 3.14: An initial guess point cloud will be used to complete the unfinished SLAM point cloud.

However, our model needs to be extended with color and objects, as Visual-SLAM algorithms require visual features to build a map of the environment. In figure 3.15, an example of the simulated environment is shown.

Figure 3.15: With Gazebo, we are able to simulate quadcopter flight and sensor measurements in order to gather an ideal dataset. This dataset was used to evaluate which SLAM algorithm was most suitable for our approach.

As our drone employs a ROS-based operating system, trajectories can be scripted and tested in Gazebo before real world tests are conducted. Furthermore, in the simulation, the drone will always follow the same trajectory. Hence, a better comparison between different camera types can be made. Figure 3.16 shows which cameras we have evaluated in our experiments.

However, scripting a trajectory requires some form of ground truth such as GPS. Since we are operating the quadcopter indoors, we cannot rely on GPS communication. Alternatively, an accurate indoor ground truth pose estimation system would have to be implemented in order to use these trajectory scripts in reality, which is an expensive and time-consuming process [Sturm et al., 2012]. Therefore, our real world implementation of the system controls the quadcopter via a remote controller.


(a) Logitech C615 (b) Genius WideCam (c) Microsoft Kinect

Figure 3.16: In both simulation and reality, we conducted tests with a common webcamera (3.16a), a wide field-of-view webcamera (3.16b) and a Microsoft Kinect (3.16c).

After setting up the simulator with an environment, a flying quadcopter and a camera, different datasets can be recorded via ROS. Such a recording consists of different ROS topics, which hold the data. A ROS topic can hold camera images, odometry, or information about the relationship between all coordinate frames. With the latter, it is possible to deduce the camera pose relative to the quadcopter. Consecutively, we can deduce the initial pose of the quadcopter relative to the map. A Visual-SLAM algorithm combines all this information with the visual odometry of the camera, with the purpose of obtaining a more accurate trajectory estimate and thus a better belief of what the environment looks like.

Datasets that were recorded in Gazebo were used as input for several Visual-SLAM algorithms in order to determine which camera and which algorithm is most suitable for our method. In order to validate our simulation results, we implemented the same process to gather datasets in real environments.

3.4.2 SLAM

The second step in Figure 3.13 adopts the dataset as input for a ROS implementation of a visual SLAM algorithm. By playing back the datasets, camera images will be published to ROS topics required by the SLAM algorithm. The playback speed of the dataset can be slowed down, so that the applied SLAM algorithm has more time to detect and process visual features. We have experimented with LSD SLAM as well as RGB-D SLAM in order to analyze which of these algorithms is most suitable for our method. As discussed in Section 3.5, parameters for both algorithms were changed empirically until we found an optimal result.

The final step in our system combines the initial guess point cloud of figure 3.14 and the SLAM point cloud into a single OctoMap. In order to obtain an accurate OctoMap, these point clouds have to be aligned as well as possible. Point cloud alignment is achieved by empirically transforming the initial guess coordinate frame to the SLAM coordinate frame. After the transformation is regulated correctly, both clouds are sent into an OctoMap server node. This way, the initial guess model will be updated with real measurements from a camera. Because SLAM is not able to map all elements in the environment, e.g. ceilings or walls that are blocked by furniture, our initial guess model provides the OctoMap server with information about these missing elements and updates the occupancy probability accordingly. A basic schematic of our map optimization component is shown in figure 3.17.



Figure 3.17: Basic schematic of the optimization block of our system.

Two options can be considered to merge point clouds. On the one hand, the final SLAM result can be merged iteratively with the initial guess. Occupancy estimations can be altered by changing the initial OctoMap occupancy probability or by inserting both point clouds multiple times. When using this method, a balance between map completeness and detail has to be found. On the other hand, the merging process can be performed while SLAM is building a point cloud. After sending the initial guess point cloud to the OctoMap server a single time, node probabilities will be updated iteratively as the SLAM algorithm refines its point cloud based on current and previous measurements. Because multiple measurements are taken into account, the occupancy probability will be more conclusive. This method proves to be the best, as we will see in the results section. Figure 3.18 demonstrates the difference between both merging methods.
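The reason why online merging yields more conclusive probabilities can be illustrated with the per-observation log-odds update that OctoMap-style occupancy maps use. The sketch below is a generic single-voxel illustration with assumed sensor-model values; it is not the OctoMap API itself.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def update_occupancy(p_prior, hits, misses, p_hit=0.7, p_miss=0.4):
    """OctoMap-style log-odds update of a single voxel: every observation that
    hits the voxel increases its occupancy belief, every ray passing through it
    decreases the belief. The sensor-model values p_hit and p_miss are
    illustrative assumptions."""
    l = logit(p_prior) + hits * logit(p_hit) + misses * logit(p_miss)
    return 1.0 / (1.0 + np.exp(-l))

# A voxel from the initial guess model (prior 0.5) that is confirmed by
# several SLAM scans and contradicted by one:
print(update_occupancy(0.5, hits=4, misses=1))
```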

(a) Iterative merging (b) Online merging

Figure 3.18: In figure 3.18a, the initial guess (IG) is iteratively merged with the complete SLAM point cloud (SL). A balance between map completeness and detail is regulated by the amount of IG or SL point clouds we merge. Figure 3.18b illustrates another option, where a single IG is merged with partial online SLAM clouds (SLn). The online merging process is finished when SLAM has completely processed the dataset. The difference between both merging methods is discussed in detail in section 3.5.3.

3.5 MapFuse Results

MapFuse was evaluated using four different datasets, three of which we have recorded ourselves. Dataset 4 is publicly available via the RGB-D benchmark dataset [Sturm et al., 2012]. Dataset 1 was recorded with a wide field-of-view camera; all other datasets were recorded with a Microsoft Kinect. For all datasets, we have built an initial guess model with OpenSCAD.


• Dataset 1: Room V329 at the University of Antwerp. This meeting room contains many empty tables and closets.

• Dataset 2: Room V315 at the University of Antwerp (6.67m x 7.02m x 3.77m). The adjacent room V317 was also included in this dataset (4.12m x 3.42m x 3.77m). Both rooms contain desks and closets with a high amount of clutter.

• Dataset 3: An industrial tank car located in a small hangar at the port of Antwerp (9.2m x 2.45m x 3.75m).

• Dataset 4: The 'freiburg1 room' dataset provided by Sturm et al. [2012]. This dataset is recorded in a small office environment.

These datasets were used as input for the two Visual-SLAM algorithms that were discussed in chapter 2: LSD SLAM and RGB-D SLAM.

Finally, the MapFuse optimization step of section 3.5.3 was evaluated by applying iterative merging and online merging on the initial model and the SLAM output.

All tests were performed on a Dell Inspiron 15 5548 laptop, equipped with an Intel i7 5500U 2.4 GHz, 8 GB RAM, and an AMD Radeon R7 M265 graphics card. Ubuntu 14.04.5 LTS was used as operating system.

3.5.1 LSD SLAM

Our first tests were conducted with LSD SLAM. We connected a wide field-of-view web camera (120°) to a laptop and down-sampled the image to a 640x480 resolution in order to evaluate the algorithm. With this setup, we recorded dataset 1 by walking around the room in a sideways motion. This was necessary to ensure sufficient camera translation, which is required for LSD SLAM. In order to optimize the map with loop closures, the same trajectory was repeated multiple times.

When running LSD SLAM, a few important parameters have to be taken into account. First, a pixel noise threshold is set to handle faulty sensor measurements. Second, the amount of key-frames to be saved is defined. This amount is based on the image overlap and the distance between two consecutive key-frames. A large number of key-frames will result in an accurate trajectory, but also induces more noise in the map.

Figure 3.19b illustrates the point cloud and trajectory estimate of LSD SLAM. Empirical comparison with the real environment of figure 3.19a leads us to conclude that LSD SLAM produces an accurate trajectory estimate. However, the point cloud contains a high amount of noise. As our approach requires a dense and detailed SLAM point cloud with little noise, we will not pursue LSD SLAM in our research any further.

3.5.2 RGB-D SLAM

For our tests with RGB-D SLAM, we mounted a Kinect camera to an Erle-Copter as shown in Figure 3.20 [Erle Robotics]. The Kinect was slightly tilted downwards to capture as many visual features as possible. The camera was connected to a laptop which ran the camera driver. Contrary to LSD SLAM, RGB-D SLAM can handle camera translation as well as camera rotation. We found that the best trajectory for this algorithm is to rotate the camera 360 degrees at the center of the room, and then apply coastal navigation. This process should be repeated for every new room that is entered.


(a) The meeting room that was used to record dataset 1.

(b) Resulting point cloud

Figure 3.19: LSD SLAM result

Figure 3.20: For our research, we mounted a Kinect camera to an Erle-Copter


RGB-D SLAM allowed us to configure numerous parameters. First, a feature extractor had to be chosen. Our tests indicated that the SIFTGPU extractor, combined with FLANN feature matching, produces a satisfying result. Second, we filtered the depth image by setting a minimum and maximum processing depth. As a result, noisy measurements outside the valid range were diminished. The optimal values for these parameters depend on the environment that was recorded; e.g. for figure 3.21, we have set these parameters to 0.5 meters and 7 meters respectively. Last, the computed point cloud was down-sampled, as we noticed that RGB-D SLAM failed to process new visual features when the CPU is overloaded. Down-sampling the point cloud with a factor n significantly decreases CPU usage, while maintaining an acceptable point cloud density. Normally, the Kinect outputs a 640x480 array (307200 entries). By down-sampling this array, RGB-D SLAM keeps every nth entry in the Kinect array. E.g. if n equals 4, only 76800 entries (25%) are kept to be processed by RGB-D SLAM.
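The down-sampling step can be illustrated with a few lines of NumPy; the function name is hypothetical and only mimics the "keep every n-th entry" behaviour described above.

```python
import numpy as np

def downsample_depth(depth_image, n=4):
    """Keep every n-th entry of a flattened depth frame; with n = 4 a 640x480
    frame (307200 entries) is reduced to 76800 entries (25%)."""
    flat = np.asarray(depth_image).ravel()
    return flat[::n]

frame = np.zeros((480, 640))
print(downsample_depth(frame, n=4).size)   # 76800
```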

After configuring the RGB-D SLAM parameters, we managed to build the point clouds shown in Figure 3.21. When we compare these results with the LSD SLAM result in Figure 3.19, it becomes clear that RGB-D SLAM builds point clouds with higher density and less noise. This is mainly due to the fact that RGB-D SLAM inserts all visual information into the point cloud, whereas LSD SLAM creates a point cloud which merely consists of pixels that were used for depth map estimation.


However, the RGB-D SLAM point clouds still contain gaps due to registration errors and limited observations; e.g. the ceilings of dataset 2 are not visible in Figure 3.21b and the industrial train car of dataset 3 is incomplete in Figure 3.21e.

Assessing the accuracy of a trajectory requires the implementation of a ground truth estimation system. As mentioned in section 3.4.1, implementing such a system does not lie within the scope of our research, so we adopt the accuracy measurements of Endres et al. [2012]. In their research, the authors state that the RGB-D SLAM trajectory estimate has an average Root Mean Square Error (RMSE) of 9.7 cm and 3.95 degrees if SIFTGPU is used as feature extractor. This number was obtained by testing the SLAM algorithm with datasets which include ground truth information [Sturm et al., 2012].

We conducted our own tests in order to determine the XYZ precision of the trajectory estimate. RGB-D SLAM was launched several times, each time with the same parameters. One reference trajectory estimate and ten test trajectory estimates were extracted from these tests. For every trajectory, we plotted the error relative to the reference trajectory. Since it is nearly impossible to capture ground truth data with the current setup, we used the first trajectory as reference. In order to evaluate precision against ground truth data, a motion capture system that can estimate 6-DoF poses would have to be installed. Boxplots for the x-, y- and z-axis error can be found in Figures 3.22, 3.23 and 3.24.
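The per-axis precision evaluation described above boils down to subtracting each test trajectory from the reference trajectory, as in the sketch below; it assumes the trajectories are already associated by timestamp, which is omitted here.

```python
import numpy as np

def axis_errors(reference, trajectory):
    """Per-axis error of a test trajectory against the reference trajectory,
    assuming both are (N, 3) arrays of XYZ positions at matching timestamps."""
    err = np.asarray(trajectory) - np.asarray(reference)
    return err[:, 0], err[:, 1], err[:, 2]     # x, y and z error series

# Example with two toy trajectories of 100 poses
rng = np.random.default_rng(2)
ref = rng.random((100, 3))
test = ref + rng.normal(0.0, 0.05, ref.shape)
ex, ey, ez = axis_errors(ref, test)
print(ex.std(), ey.std(), ez.std())
```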

In Figure 3.22a, we can observe that the test trajectories correspond well to our reference trajectory, as well as to each other. Errors relative to the reference remain very limited for all trajectories. Nonetheless, we also detect outliers with a difference of up to 80 cm relative to the reference trajectory. Figure 3.22b illustrates when these outliers occur in time. This plot shows high precision until a certain point where the trajectories start to spread out. At this point, RGB-D SLAM was not able to process visual features. Thus, the trajectory estimate could not be calculated correctly until visual features were tracked again. Compared to the y- and z-axis, the x-axis trajectory accumulated more errors due to the fact that the camera mainly traveled along the x-axis.

Similarly to the x-axis trajectory, figure 3.23 demonstrates a high precision on the y-axis. As this axis contains fewer translations than the x-axis, outliers are less distinct.

Finally, the z-axis boxplot in figure 3.24 exhibits precision results that are comparable with the x and y precision plots.

In general, we can conclude that the RGB-D SLAM algorithm results in precise trajectory estimates, as long as visual features are continuously detected while mapping an environment. In order to ensure continuous feature tracking, the dataset can be played out at a lower speed. By doing so, RGB-D SLAM will have more time to process new images, which is beneficial for feature extraction. Additionally, when such a MapFuse result will be used for radio propagation simulation, it is important to improve this RGB-D SLAM precision by adding more objects in order to improve the visual feature detection.

3.5.3 Optimization Results

Although RGB-D SLAM has provided us with an accurate and dense point cloud, Figure 3.21 has shown us that the map only contains information about the environmental elements that were visible in the dataset images. E.g. ceilings were not recorded, so they will not be included in the resulting map.


(a) Room V315 at the University of Antwerp, where we recorded dataset 2.

(b) RGB-D SLAM result for dataset 2.

(c) RGB-D SLAM result for dataset 4

(d) Dataset 3 was recorded with the purpose of modeling an industrial tank car at the port of Antwerp.

(e) RGB-D SLAM result for dataset 3.

Figure 3.21: We assessed the RGB-D SLAM algorithm in several environments. First, we tested indoor environments as shown in figures 3.21a, 3.21b and 3.21c. Second, we applied the algorithm to map an industrial train cart (figures 3.21d and 3.21e).



(a) This boxplot demonstrates the x-axis precision error of all trajectories relative to a test trajectory.


(b) This plot shows the x-axis precision error over time of all trajectories relative to a test trajectory.

Figure 3.22: RGB-D SLAM X-axis precision



Figure 3.23: RGB-D SLAM Y-axis precision


Figure 3.24: RGB-D SLAM Z-axis precision

When we use this point cloud to render an OctoMap, only a partial volumetric model of the environment is obtained, as seen in figure 3.25.

Our approach resolves this issue by combining the SLAM point cloud with an initial guess. Merging these clouds can be achieved in two ways: iterative and online. The first method builds an OctoMap by inserting the initial guess model and the final SLAM result in a new OctoMap. The occupancy probability is steered by iteratively inserting the point clouds multiple times.


Figure 3.25: An OctoMap created from our RGB-D SLAM result of figure 3.21c.

However, this form of map completion also updates valid measurements with free space, causing the map to lose some of its detail. Figure 3.26 demonstrates this problem. In Figure 3.26a, both point clouds were inserted once. The initial guess has successfully filled in gaps that were present in the SLAM result of Figure 3.25, although it has also caused doors and windows to disappear. By adding another instance of the SLAM result (Figure 3.26c), doors and windows started to reappear along with unwanted gaps in the floor. Another factor that has to be taken into account is the order in which point clouds are being merged. As can be seen in Figures 3.26e and 3.26g, inverting the merging sequence has a significant effect on the occupancy probability calculation. In short, balancing the amount of point clouds and uncovering an appropriate merging sequence is a troublesome task.

A second method to merge point clouds was mentioned in section 3.4.2. With this method, point clouds are already being merged while SLAM is running. Instead of using a single SLAM point cloud, RGB-D SLAM constantly pushes its current online point cloud. The main advantage of this method is that OctoMap can now render a volumetric model based on previous and current observations, which leads to a more conclusive probability calculation. Contrary to iterative merging, online merging allows us to obtain an adequate balance between map completeness and detail. This is demonstrated in figure 3.27a: undesirable gaps were completed by the initial guess model, without completely closing up doors and windows.

Both optimization methods were evaluated using our own datasets as well. First, we investigated the results of Figure 3.28a. An initial guess model was created with OpenSCAD and converted to a point cloud (Figure 3.14a). Also, we recorded a dataset in our indoor environment (Figure 3.21a) to generate online point clouds via RGB-D SLAM. Through an OctoMap server, these online point clouds were constantly merged with our initial guess until the entire dataset was played.

Second, Figure 3.28c shows the result of iterative merging. In order to build this model, RGB-D SLAM was used to create a point cloud of an industrial environment, as can be seen in Figures 3.21d and 3.21e. Before merging the SLAM point cloud with our initial guess, we removed all unnecessary data, as we only wished to obtain a model of the train cart. Next, this point cloud was aligned and iteratively merged with the initial guess point cloud of Figure 3.14b. In this case, both the initial guess and the SLAM result were merged a single time.



Figure 3.26: Iterative merging of our initial guess model (IGM) with the complete SLAM point cloud (SL).



Figure 3.27: Online merging

(a) Online merging of an initial guess model (figure 3.14a) with live RGB-D SLAM output.

(c) Iterative merging of figure 3.14b with figure 3.21e. The model consists of one initial guess and one SLAM point cloud.

Figure 3.28: Optimisation results for our own datasets. For figure 3.28a, online merging was applied. In figure 3.28c, we conducted iterative merging of 2 point clouds.



3.6 Discussion

This section provides a discussion of this chapter in order to give a better understanding of how to interpret the results. First, the benchmark of the seven rigid registration methods evaluated the average distance between the set of correspondences of two consecutive point clouds, captured with a moving robot in a regular office environment, with respect to the separate translation and rotation parameters. Although the evaluation of the robustness can be seen as an evaluation of the precision, the term robustness is chosen because of the random set of point clouds. The robustness in terms of accuracy is not evaluated since no ground truth data was available. Given the idea of evaluating the seven rigid registration algorithms in a real world scenario where a driving robot is used to capture the data, it is not possible to have ground truth data that is noise-free. Furthermore, as found in the evaluation, applying a closed form alignment before an ICP algorithm increases the performance of a registration algorithm. This idea is applied in RGB-D SLAM so that an optimal alignment is possible. Since the principle of coastal navigation is applied in order to get the best environment map, the environment model only contains data of the outer walls, objects, and a part of the floor. The result of this model cannot be used to apply a radio propagation simulation because of registration errors due to accumulated odometry drift, holes in the walls, a missing ceiling, etc. The MapFuse system reduces these problems by creating an initial environment model that acts as a bias to reduce the registration errors and to fill up the holes. Furthermore, this initial model contains the actual environment in terms of walls, floor, ceiling, doors, and windows and holds the exact dimensions. By inserting this model in the initial OctoMap, the result after inserting each sensor measurement, transformed relative to the first location, can be used to apply a simulation of a ray-launching propagation loss model. Before we can use such a 3D model for radio propagation, different uncertainties have to be taken into account, such as the relative precision of the RGB-D SLAM before and after the merging process. Because an initial guess model improves the result of RGB-D SLAM, the relative changes between maps can be extrapolated onto the trajectory. This enables us to indicate the worst case precision. Furthermore, according to Popleteev [2017] this worst case precision has an impact on the prediction of the signal strength for a specific location when this precision is higher than 0.4 × λ; at 2.4 GHz, for instance, λ ≈ 12.5 cm, so 0.4 × λ corresponds to roughly 5 cm.

3.7 Conclusion

This chapter provides in the first place an overview of six rigid 3D registration methods commonly used in robotics and computer vision. We discussed the mathematical foundations that are common to each of these algorithms and showed that each of them represents a different approach to solve a common least-squares optimization problem.

Next, we compared the methods with a critical view on their performance on a dataset that was created with a Pioneer-3DX robot and a Microsoft Kinect camera. To illustrate the performance, we quantified the robustness and the precision of the different registration methods. As a result for the robustness, we can conclude for this dataset that applying an ICP point-to-point method after an SVD method gives the minimum error, based on 165 different point cloud pairs. On the other hand, the ICP point-to-surface is the most precise algorithm based on the rotational and translational part of the transformation, as shown by the results of the precision benchmark.


In the second place, we presented an efficient, robust method for the completion and optimization of 3D models using MapFuse. Based on a simulator as well as in reality, we have evaluated combinations of proven open-source technologies in order to attain a realistic map optimization technique. Apart from these technologies, our method does not require additional complex calculations for map optimization.

Several aspects affect the quality of our final result. Firstly, the accuracy of the initial guess model has to be considered. For known environments, the accuracy is assumed to be 100%, as exact measurements can be collected. In other situations, the user has to speculate about dimensions based on visual observations or the result of a SLAM algorithm. Also, the amount of detail that is included in the initial guess - e.g. windows, doors, furniture, etc. - will affect the occupancy probability for those elements within the model. In general, outlines of the filtered SLAM environment are sufficient to serve as initial guess, as detail will be provided by SLAM.

Finally, we have to choose a method for bringing both point clouds together. Iterative merging fuses complete point clouds by aligning them and sending them to an OctoMap server. A balance between map completeness and detail is set by regulating the number of point clouds that is being forwarded, as well as by implementing an appropriate merging sequence. However, obtaining this balance has proven to be a difficult exercise. A main advantage of the iterative merging method is that the SLAM point cloud can be edited before using it in the merging process. Online merging starts by sending a single initial guess to an OctoMap server and continues with running the RGB-D SLAM algorithm. After an initial map alignment, online SLAM point clouds are continuously merged with the initial guess, leading to an improved balance between map completeness and detail. For both merging methods, OctoMap parameters such as the initial probability and the resolution can be altered in order to influence the final result. Also, MapFuse can cope with dynamic environments by setting an occupancy probability threshold which cancels out moving objects.

As initially intended, MapFuse is suitable to create 3D models of various environments for the purpose of validating wireless propagation models. Due to the realistic nature of our approach, such validations could improve control systems which work with complex 3D objects. Additionally, the accuracy and precision of radio propagation validation will be affected by the resolution of the map, as we will see in chapter 4.


Chapter 4

Realistic Indoor Ray-launching Propagation Loss Model

This chapter presents a novel ray-launching propagation model that is categorized as a deterministic model. This propagation loss model uses an environment model that was generated by measuring a real environment, using sensors attached to a moving robot. More precisely, this environment model is built from the combination of a laser scanner or a depth-sense camera and the relative motions of the robot. In order to combine both data streams, a simultaneous localization and mapping (SLAM) algorithm is applied. After this, the individual points of the obtained occupancy map are inserted in a spatial data structure so that they can be used by the propagation loss model for computing the received signal strength between two devices. The ray-launching propagation loss model is based on four steps that are processed in a sequential way [Bellekens et al., 2016]. In addition to the implementation of the propagation loss model, this chapter covers the contributions regarding the validation approach that evaluates the accuracy and the precision of this model. Besides the accuracy and precision, the model is evaluated towards the optimal simulation parameters [Bellekens et al., 2016]. As a result of the validation approach, the model can be applied in different applications, such as indoor localization or indoor coverage algorithms, to optimize and validate their accuracy.

This chapter is structured as follows: In Section 4.2, the implementation of the different methods of the ray-launching propagation loss model is described. Section 4.3 explains the automatic validation system, which is used in Section 4.4 to evaluate both environments towards a robust accuracy and precision. Furthermore, a discussion concerning the results is also given in Section 4.4. Finally, the conclusions are presented in Section 4.5.

4.1 Introduction

Nowadays, the need for a good and stable wireless connectivity at any indoor location has grown to a necessity in our daily lives due to the growing demands of mobile phones and IoT applications. Because of this, the need for methods that are able to indicate the places where only a limited connection or no reception is possible has grown. Such an algorithm or propagation loss model simulates the Received Signal Strength (RSS), or more fundamentally the electrical field, according to a model that represents the environment and the principles of electromagnetic wave propagation. Generally, radio propagation models are subdivided into two types [Iskander and Yun, 2002, Almers et al., 2007, Forooshani et al., 2013]. First, deterministic algorithms require a full environment model that includes all objects with their specific electrical material parameters, so that the influence of the wave propagation phenomena can be included in the resulting signal strength computation [Azpilicueta et al., 2014, Subrt and Pechac, 2011, Yun and Iskander, 2015, Bellekens et al., 2016, Järveläinen et al., 2016]. Secondly, statistical algorithms do not take the different phenomena into account, which makes them less complex and fast to compute, but also less suitable for validating an accurate and precise indoor localization algorithm because of the large impact of multipath [Letourneux et al., 2013, Iskander and Yun, 2002, Ayadi et al., 2015, Tam and Tran, 1995].

The opportunity of mapping a real indoor environment relative to the current location and vice versa leads to the following advantage: using a robotic algorithm in combination with a propagation model makes it possible to estimate all locations relative to the first location. This makes it possible to compute all locations where signal strength measurements were taken and results in an automated validation solution that empowers the usability of radio propagation algorithms [Bellekens et al., 2016]. Furthermore, applying a model of the real environment in which all details such as holes, pillars, chairs, and furniture are included results in a better idea of how the different signals are reflecting. In practice this is a difficult task; as shown in the previous chapter 3, different solutions are proposed in order to model a realistic environment while assuming that the environment is static. Such an automated solution can be used in different application domains such as indoor/outdoor localization systems, telecommunication systems, and wireless network systems in order to optimize the results based on realistic signal strength simulations [Plets et al., 2012, Forooshani et al., 2013, Torres, 1997].

To validate and quantify our ray-launching propagation loss model so that the optimal simulation parameters can be found, two different 2D indoor office environments are used. Both environments were equipped with several transmitters that operated at 433.0 MHz and were able to broadcast packets according to the sub-GHz mid-range DASH7 standard [Weyn et al., 2015]. The robot was in turn equipped with a receiver that was able to receive a DASH7 packet. By using the automated validation approach, a large number of individual links can be analyzed in order to get a robust evaluation of the propagation model in 2D. As a result, the accuracy, which is defined in terms of the RMSE, the Mean Absolute Error (MAE), and the Mean Error (ME), will be evaluated with respect to the resolution of the environment, the number of rays, and the maximum level of reflections that is allowed. The aim of this research is to indicate the optimal parameters for which the accuracy and the precision are minimal with respect to the validated environments.

4.2 Methods

The proposed ray-launching propagation model is built upon four methods. Each method defines a specific task and requires the correct input in order to produce the appropriate output. An overview of the individual tasks is illustrated in Figure 4.1, which fits in the global perspective where the result of a SLAM algorithm is used by the propagation model. Subsequently, the result of this propagation model can be used for localization or coverage applications.

Since we are proposing a ray-launching propagation loss model that simulates the received signal strength and, more fundamentally, the electrical field based on a 2D map that was modeled with a SLAM algorithm, different assumptions need to be made. First, the environment model is 2D, which means that no reflections from ceiling and floor are included. Secondly, the thickness of the walls is not considered. Thirdly, diffractions are not included and, finally, only vertical polarization is assumed.

Figure 4.1: overview of the ray-launching propagation loss model (SLAM, line segment extraction, device configuration, ray calibration, electrical field computation, and applications)

In this section, each method is described together with the mathematical background that is necessary for it. Furthermore, the algorithmic implementation of each method is explained so that all contributions can be indicated; these are summarized in the following overview of the different methods:

1. Line-segment extraction: this method, explained in Subsection 4.2.1, contributes to the extraction of line segments and the segmentation of walls and objects by applying a region growing connected component labeling algorithm on top of the Quadtree that contains the points.

2. Device configuration: as described in Subsection 4.2.2, each device holds a simulation configuration that contains the necessary parameters.

3. Ray calibration: during this process, which is explained in Subsection 4.2.3, every ray is launched according to a transmitter device configuration and reflects and refracts in the environment model that is defined by a Quadtree or an Octree. Because each environment model is modeled with a maximum resolution level, every ray is launched according to that level. In order to increase the performance of this process, all rays are processed in a parallel fashion. The result of this ray calibration method is, for every ray, a list of all visited cells. This list contains the total distance from transmitter to receiver together with the Fresnel reflection coefficients. Besides the fact that every ray is calibrated towards the maximum number of reflections and refractions, all binary addresses of the visited cells are stored in the object. This makes it possible to re-use this calibration data in order to simulate the signal strength for a different frequency or to apply a simulation where only a subset of the rays is used, which makes it scalable and efficient to apply a benchmark.

4. Electrical field computation: due to the previous calibration process, all parameters needed to compute the electrical field at every visited cell for each ray are known. Subsection 4.2.4 describes how the electrical field is computed for each visited cell from these parameters. As a result, a coverage map can be created by computing the sum of all electrical fields at every cell that was visited.

Finally, this propagation model delivers a generic abstraction layer that makes it possible for applications to start, configure, and query the outcome of the simulation. Such a query can be the computation of the total electrical field, the received signal strength, the impulse response, or the individual electrical field of each ray at a specific location in the environment model. For visualization purposes, a generic environment model is used in this section so that the different parts can be visualized in a clear way. Furthermore, an analysis is made to address the computational complexity of the ray-launching propagation loss model.

4.2.1 Line Segment Extraction

To extract line segments from individual points, a region growing connected component labeling algorithm is applied in Algorithm 4.2. Such an algorithm is able to cluster points based on local feature descriptors. In order to do this, a K-nearest neighbor (KNN) algorithm is required to find a nearest neighbor according to a specific location that is defined by a given seed location in an environment model Q.

To cluster the environment so that all occupied nodes that are part of the same wall can be distinguished, a classification is necessary. This includes the computation of a feature descriptor such as the principal components. Next, a KNN algorithm gets the nearest neighbor of the given seed location, which enables the comparison of the feature descriptors based on both Quadtree nodes.

This results in the implementation of Algorithm 4.1, which performs the classification based on the eigenvectors that belong to the largest eigenvalues of the covariance matrices of both Quadtree nodes. Both eigenvectors are compared to each other by applying a threshold ε on the angle between each eigenvector and the x-axis. Additionally, a region growing solution is implemented that changes the seed location according to the nearest neighbor. This process maintains a list of all visited nodes and stops when all occupied nodes are visited. Given the constraint of a regular office or room environment, the angle between walls is around 90°, which means that the eigenvectors of two walls are orthogonal to each other. Furthermore, because none of the corners are perfectly aligned in reality, two walls can be distinguished when the angle of one eigenvector relative to the x-axis does not fit in the ε-range around the angle of the corresponding eigenvector of the other node.

The implementation of the region growing connected component labeling algorithm uses a Quadtree that contains points, which is explained in chapter 2, and a seed location as input to initialize the region growing process. The output of the algorithm is a Quadtree that contains lines and a list of clusters that indicates which line segments belong to each other and thus represent a wall or the same object.


Algorithm 4.1 compare feature(v1, v2, ε)

1: inclusion ← False
2: θ1a, θ2a ← compute_eigenvector_xaxis_angle(v1)
3: θ1b, θ2b ← compute_eigenvector_xaxis_angle(v2)
4: min_θ1b, max_θ1b ← θ1b − (π/180) · ε, θ1b + (π/180) · ε
5: min_θ2b, max_θ2b ← θ2b − (π/180) · ε, θ2b + (π/180) · ε
6: if min_θ1b ≤ θ1a ≤ max_θ1b or min_θ2b ≤ θ2a ≤ max_θ2b then
7:     inclusion ← True
8: end if
9: return inclusion

10: function compute_eigenvector_xaxis_angle(v)
11:     θ1 ← atan2(v[1, 0], v[0, 0])
12:     θ2 ← atan2(v[1, 1], v[0, 1])
13:     return θ1, θ2
14: end function
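
To make this comparison concrete, the following sketch expresses the angle test of Algorithm 4.1 in Python with NumPy; it assumes that v1 and v2 are 2×2 arrays whose columns are the eigenvectors of the covariance matrices of the two nodes, and the names are illustrative rather than taken from the actual implementation.

import numpy as np

def eigenvector_xaxis_angles(v):
    # v is a 2x2 matrix whose columns are eigenvectors of a covariance matrix;
    # return the angle of each column with respect to the x-axis.
    theta1 = np.arctan2(v[1, 0], v[0, 0])
    theta2 = np.arctan2(v[1, 1], v[0, 1])
    return theta1, theta2

def compare_feature(v1, v2, eps_deg):
    # Two Quadtree nodes are considered part of the same wall when the angle of
    # one principal direction of the first node lies within +/- eps_deg degrees
    # of the corresponding direction of the second node.
    t1a, t2a = eigenvector_xaxis_angles(v1)
    t1b, t2b = eigenvector_xaxis_angles(v2)
    eps = np.radians(eps_deg)
    in_first = (t1b - eps) <= t1a <= (t1b + eps)
    in_second = (t2b - eps) <= t2a <= (t2b + eps)
    return in_first or in_second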

Figure 4.2 illustrates all the different steps that are necessary to process Algorithm 4.2, which is further elaborated in the following paragraph.

Figure 4.2: region growing based line extraction.

The initial Quadtree that consists of points is processed according to two main sub-algorithms. First, the region growing process maintains a list of location addresses that were visited and a list of the classified clusters. An initial seed location is manually defined. Next, the clustering process classifies the points of the closest neighbor in three steps. The implementation of this process can be found in lines 12 to 36 of Algorithm 4.2 and follows these steps:

1. The first step in the clustering process is the extraction of all environment points at location Pseed by recursively visiting the environment model. Furthermore, the principal components are computed for these points. In the case of the first seed location, the address of the nearest neighbor node is added to the list of seed locations by applying the KNN algorithm.

2. The clustering process defines whether both datasets are part of the same cluster or not. This process uses a predefined threshold ε. In the case of the simplified example, this threshold was configured to 90°.

3. The region growing process selects the next nearest neighbor node in order to cluster the new node. This process keeps searching for nodes that contain environment points until all nodes are visited. Finally, line segments have to be created based on the clusters that were found. To do this, two approaches are possible:

a) a RANdom SAmple Consensus (RANSAC) algorithm, which automatically excludes outliers, so that the optimal line segment can be found for the points of each cluster. One of the advantages of this approach is that an optimal line segment will be found for the given points. When the number of points is low, however, it is hard to get an optimal line segment since RANSAC needs to draw random samples of different point pairs. A minimal sketch of this approach is given after this list.

b) connecting the centroids of the sets of points of each cluster. The main disadvantage of this approach is that the number of clusters has to be large for it to produce an accurate segmentation.
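
As an illustration of option (a), the following minimal RANSAC sketch fits a single line to the points of one cluster; the points are assumed to be given as an N×2 NumPy array, and the iteration count and inlier tolerance are arbitrary example values, not the ones used in this work.

import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, rng=None):
    # points: (N, 2) array of cluster points; returns the two sample points of
    # the line hypothesis with the largest inlier set.
    rng = np.random.default_rng() if rng is None else rng
    best_pair, best_count = None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm          # unit normal of the candidate line
        dist = np.abs((points - p) @ n)             # point-to-line distances
        count = int(np.sum(dist < inlier_tol))
        if count > best_count:
            best_pair, best_count = (p, q), count
    return best_pair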

To conclude the line segment extraction algorithm, an example is visualized in Figure 4.2; the resulting classified clusters are shown in Figure 4.3a and the line segments in Figure 4.3b. In this example, the first seed location was configured at location (2, 3). In order to use this model to calibrate each ray, each cluster needs to have a permittivity. This value is assigned in a supervised fashion through iteration.

Figure 4.3: clustering and line segment extraction. (a) Overview of region growing based line extraction. (b) Line extraction.

4.2.2 Device Configuration

Because every simulation starts with a specific purpose and a certain expectation of what the result should be, a good configuration is necessary. Such a configuration contains different parameters about the environment model, transmitter devices, receiver devices, and antennas. This means that a priori knowledge about the propagation model is required in order to get a correct result given the input data. Furthermore, depending on the application that is specified, the conclusions will be different. For example, localization applications that use the AoA require the electrical field as a complex number since it includes phase information, whereas traditional RSS localization uses the Received Signal Strength, which is expressed in dBm.


Algorithm 4.2 Region Growing CCL (Q, Kneighbors, ε)

1: INITIALIZATION: visited, cluster, clusters, growing
2: seed = x, y
3: Pseed ← getVoxelPoints(seed)
4: A = cov(Pseed)
5: λ, v ← Av = λv
6: for i = 0 to visited do
7:     if seed ≠ visited[i] then
8:         visited, cluster ← seed[i], λ, v
9:     end if
10: end for
11: kpoints ← get_knn(Q, K = 1, seed)
12: for n = 0 to kpoints do
13:     for m = 0 to visited do
14:         if kpoints[n] == visited[m] then
15:             growing ← 1
16:             Pseed ← getVoxelPoints(kpoints)
17:             A = cov(Pseed)
18:             λ, v ← Av = λv
19:             visited ← kpoints[n], λ, v
20:             if length(cluster) > 0 then
21:                 cluster ← kpoints[n], λ, v
22:             end if
23:             if compare_feature(visited[m], visited[m − 1], ε) == True then
24:                 if kpoints[n] ≠ cluster[length(cluster) − 1] then
25:                     cluster ← kpoints[n], λ, v
26:                 end if
27:             else
28:                 cluster ← kpoints[n], λ, v
29:                 clusters ← cluster
30:                 INITIALIZE: cluster
31:             end if
32:         else
33:             growing ← 0
34:         end if
35:     end for
36: end for
37: seedprevious ← seed
38: if 1 in growing then
39:     seed ← visited[length(visited) − 1]
40: else
41:     if length(visited) < length(Q.occupied) then
42:         seed ← visited[length(visited) − 1]
43:         if seed == seedprevious then
44:             seed ← find_clusters(Q)
45:             clusters ← cluster
46:             INITIALIZE: cluster
47:         end if
48:     else
49:         return clusters ← cluster
50:     end if
51: end if
52: Region_Growing_CCL(Q, Kneighbors)


A general configuration contains the following parameters (a minimal sketch of such a configuration in code is given after the list):

• Transmitter location x, y, z

• Transmitter power (dBm)

• Transmitter antenna gain (dB)

• Radiation patterns Rx, Tx (monopole, dipole)

• Number of rays

• Maximum level of the reflection tree

• Receiver location x, y

• Receiver antenna gain (dB)

• Maximum level of the environment model

• Environment model
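
As a sketch of how such a configuration could be grouped in code (the field names and defaults below are illustrative and not those of the actual implementation):

from dataclasses import dataclass

@dataclass
class SimulationConfig:
    tx_location: tuple        # transmitter location (x, y, z)
    tx_power_dbm: float       # transmitter power (dBm)
    tx_gain_db: float         # transmitter antenna gain (dB)
    rx_location: tuple        # receiver location (x, y)
    rx_gain_db: float         # receiver antenna gain (dB)
    radiation_pattern: str = "monopole"   # Rx/Tx radiation pattern (monopole, dipole)
    n_rays: int = 1600                    # number of rays to launch
    max_reflections: int = 5              # maximum level of the reflection tree
    env_resolution_level: int = 8         # maximum level of the environment model
    environment: object = None            # Quadtree/Octree environment model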

4.2.3 Ray Calibration

The core of this propagation model is based on the principle of ray launching. These rays are launched in different directions specified by an antenna radiation pattern. After the rays are generated, each ray is inserted in the environment model at the lowest level of the environment model so that the space between the transmitter and the ray endpoint can be addressed and traced. Because each ray is able to reflect and to refract in the environment when it intersects with an environment segment, each ray needs to be traced using a depth-first traversal algorithm. This traversal algorithm facilitates the process of maintaining a list that holds the total distance between the visited cell and the transmitter location, the Fresnel reflection coefficients, and the binary address of each visited cell. Thus, each time a ray intersects with a line segment of the environment, Algorithm 4.3 computes the reflection and refraction parameters in lines 3 and 6. When Prefl is defined in line 1, a reflection is computed and a new ray is traced until depthcurrent is equal to the maximum reflection value, which is given by the configuration. Next, a refraction ray is computed and traced.

Moreover, the estimation of a reflection and a refraction is divided into two steps. First, the reflection angle is computed according to Fermat's principle, which states that the angle of reflection is equal to the angle of incidence θ1. The reflection vector r can be computed with the following equation (4.1):

r = d− 2(d · n)n (4.1)

where d · n is the dot product of the normalized incident vector d and the normalized normal vector n of the environment line segment. The normal vector is computed as (−(y1 − y2), (x1 − x2)). The second step uses Snell's law to compute the transmission angle that appears when a ray refracts.
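
A minimal sketch of this reflection step, assuming 2D NumPy vectors and a wall segment given by its two endpoints (the function names are illustrative):

import numpy as np

def segment_normal(p1, p2):
    # Normal of a wall segment (x1, y1)-(x2, y2), following (-(y1 - y2), (x1 - x2)).
    n = np.array([-(p1[1] - p2[1]), p1[0] - p2[0]], dtype=float)
    return n / np.linalg.norm(n)

def reflect(d, n):
    # Reflection of the normalized incident direction d around the unit normal n,
    # r = d - 2 (d . n) n  (Equation 4.1).
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    return d - 2.0 * np.dot(d, n) * n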


Algorithm 4.3 compute 2D geometric(Q, P1, P2, depthcurrent, visited)

1: Prefl, Line ← compute_ray_intersection(P1, P2, Q.points)
2: if Prefl then
3:     Prefl_end, θrefl, ΓFresnel ← compute_reflection(Prefl, Line)
4:     visited ← ΓFresnel
5:     trace(Q, Prefl, Prefl_end, depthcurrent + 1, visited)
6:     Prefr_end, θrefr, TFresnel ← compute_refraction(Prefl, Line)
7:     visited ← TFresnel
8:     trace(Q, Prefr, Prefr_end, depthcurrent + 1, visited)
9:     rayend = True
10: end if

Snell's law defines the ratio between the incident angle and the transmission angle according to the refractive indices of the media involved. This correlation is defined by the following formula:

sin(θ1) / sin(θ2) = η2 / η1    (4.2)

where θ1 and θ2 are the incident and transmission angles, respectively. Furthermore, η1 and η2 are the refractive indices of the specific materials, as illustrated in Figure 4.4. In most common applications η1 is the refractive index of air, while η2 can be that of any material such as wood, concrete, plasterboard, metal, or stone. This refractive index can be further explained according to the following equation:

η = √(εr µr)    (4.3)

where ε denotes the permittivity of the material, which is described as ε = εr ε0, where εr is the relative permittivity and ε0 the absolute permittivity. This absolute permittivity, or the permittivity of free space, is measured in vacuum and is approximately 8.854 × 10−12 F/m. On the other hand, the permeability µ can be described as µ = µr µ0, where µr is the relative permeability of a material and expresses the influence of the material on the wave resulting from its magnetic properties. Because most materials become non-magnetic when the frequency is higher than 1 MHz, the relative permeability is 1 and thus Equation (4.3) becomes η = √εr [Richards, 2008]. For this reason, the material parameter of each material in the environment model includes only the permittivity, since the propagation model simulates the electrical fields at frequencies higher than 1 MHz.

Furthermore, to describe the amount of power that will be lost due to reflection or transmission, both incident and transmission angles are used to compute the Fresnel coefficients. These Fresnel coefficients describe the correlation between the reflectivity, the angle of incidence, and the polarization. The computation of the reflection Γ and transmission T for a vertical polarization is given by the following Fresnel equations:

Γ⊥ = (η1 cos(θ1) − η2 cos(θ2)) / (η1 cos(θ1) + η2 cos(θ2))    (4.4)

T⊥ = 2 η1 cos(θ1) / (η1 cos(θ1) + η2 cos(θ2))    (4.5)

where θ1 and θ2 correspond to the incident and transmission angles.
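
The following sketch combines Snell's law with these Fresnel coefficients for vertical (perpendicular) polarization, following Equations (4.2), (4.4), and (4.5); the function names are illustrative and the angles are in radians.

import numpy as np

def transmission_angle(theta1, eta1, eta2):
    # Snell's law: eta1 * sin(theta1) = eta2 * sin(theta2).
    s = eta1 * np.sin(theta1) / eta2
    return np.arcsin(np.clip(s, -1.0, 1.0))   # clip to stay in the valid domain

def fresnel_perpendicular(theta1, eta1, eta2):
    # Reflection and transmission coefficients for perpendicular polarization,
    # Equations (4.4) and (4.5).
    theta2 = transmission_angle(theta1, eta1, eta2)
    c1, c2 = np.cos(theta1), np.cos(theta2)
    gamma = (eta1 * c1 - eta2 * c2) / (eta1 * c1 + eta2 * c2)
    t = 2.0 * eta1 * c1 / (eta1 * c1 + eta2 * c2)
    return gamma, t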


Figure 4.4: Reflection (incident direction d, reflected and refracted directions, surface normal n, angles Φ1 and Φ2, media with refractive indices η1 and η2)

As previously stated, a depth-first traversal algorithm is implemented to trace each ray in the environment until a preconfigured level of depth is reached. This preconfigured level corresponds to the number of recursions that each ray can take. To illustrate this, Figure 4.5 shows a binary tree data structure that indicates all reflection and refraction interactions of the ray that is visualized in Figure 4.6, starting from location (5.5, 4.2). The traversal algorithm processes the illustrated ray, which is defined by two points P1, P2, in environment model Q as follows:

1. Lines 2 to 10 implement the depth-first search algorithm that recursively traverses the Quadtree data structure until location P1 fits within the range of the node and the number of leaves is zero.

2. Lines 12 to 23 cover the implementation that keeps track of the visited-cell list and ensures that the recursion level of the reflection is smaller than the preconfigured level. Because each ray is inserted in the environment model by splitting it into small line segments with a permittivity of 1, Algorithm 4.3 is able to compute an intersection point between an environment line segment and a ray segment with traditional mathematics. This intersection only occurs when the Quadtree node contains an environment object. Since each reflection and transmission is traced in a recursive fashion, Figure 4.5 illustrates the recursion order that each ray undergoes. At the first intersection Γ0, a reflection Γ1 and a transmission T1 occur. Secondly, because of the depth-first implementation, Γ2 is the next reflection intersection. Since Γ2 appears at level 2 and the preconfigured level is equal to 3, this ray is traced in the direction of the reflection angle and stops at the following intersection. Subsequently, the returning recursion traces Γ2 in the direction of the refraction. Thirdly, as a result of the recursive implementation, the intersection at refraction T2 is found and traced. Finally, the other reflections and refractions are found and traced in a similar way.

3. Lines 24 to 38 hold the implementation that detects whether the ray is part of the current Quadtree node. This process estimates where the ray segment, denoted by P1 and P2, crosses the axis-aligned bounding box defined by Pmin and Pmax. Each ray segment can be expressed as y = mx + b, where m denotes the slope and b the location where the segment crosses the y-axis, whereas an edge of an axis-aligned bounding box is always parallel to one of the coordinate axes.


Figure 4.5: Reflection binary tree data structure and ray order illustration (levels 1 to 3 with reflection nodes Γ and refraction nodes T)

This means that a horizontal edge can be expressed as a line of constant y (and a vertical edge as a line of constant x). In order to check whether the ray crosses this Quadtree node, Pmin and Pmax are used to test whether the line equations that correspond to the edges of the bounding box intersect the ray segment. When the ray crosses an edge, the algorithm advances P1 just beyond the intersection point so that it lies in the next Quadtree cell. This makes it possible for the next recursion to locate the next Quadtree node that is part of the ray. A sketch of such a bounding-box test is shown after this list.

4. Since Algorithm 4.4 traverses a ray in the Quadtree data structure according to a depth-first search algorithm to localize each node, lines 40 to 51 implement a tail recursion that keeps the ray traversal algorithm running until the end of the ray or the outer boundaries of the environment model are reached.
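
As a sketch of the bounding-box test mentioned in step 3, the function below performs a standard slab test for a 2D segment against an axis-aligned cell and returns the parameter t at which the segment leaves the cell; it only mirrors the role of the aabb_intersection call in Algorithm 4.4 and is not the actual implementation.

import numpy as np

def aabb_intersection(pmin, pmax, p1, p2):
    # Slab test for the segment p1 + t * (p2 - p1), t in [0, 1], against the
    # axis-aligned box [pmin, pmax].  Returns (hit, t_exit) where t_exit is the
    # parameter at which the segment leaves the box (p1 is assumed inside).
    d = np.asarray(p2, float) - np.asarray(p1, float)
    t_low, t_high = 0.0, 1.0
    for axis in range(2):
        if abs(d[axis]) < 1e-12:
            if p1[axis] < pmin[axis] or p1[axis] > pmax[axis]:
                return False, 0.0            # parallel to this slab and outside it
        else:
            t1 = (pmin[axis] - p1[axis]) / d[axis]
            t2 = (pmax[axis] - p1[axis]) / d[axis]
            t_low = max(t_low, min(t1, t2))
            t_high = min(t_high, max(t1, t2))
            if t_low > t_high:
                return False, 0.0            # segment misses the box
    return True, t_high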

Because of the independence of each ray, this process can be executed in parallel, which increases the performance of a single simulation, as can be seen in Section 4.4. The implementation of this solution is based on the generation of a single environment model which is used to calibrate each ray. Subsequently, a temporary copy of this environment model is used to trace every ray. When a ray is traced, the list of visited cells is stored and the temporary copy of the environment model is erased. Therefore, the memory that was used by the temporary model can be reused for other rays, which results in an efficient algorithm.
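
A minimal sketch of this parallel scheme, where trace_ray stands in for the per-ray traversal of Algorithm 4.4 (all names are illustrative):

from multiprocessing import Pool

def trace_ray(environment, ray):
    # Placeholder for the per-ray depth-first traversal of Algorithm 4.4;
    # returns the list of visited cells for this ray.
    visited = []
    # ... trace(environment, ray_start, ray_end, 0, visited) ...
    return visited

def calibrate_rays(environment, rays, n_workers=4):
    # Each ray is independent, so the calibration step can be distributed over
    # a pool of worker processes.
    with Pool(processes=n_workers) as pool:
        visited_lists = pool.starmap(trace_ray, [(environment, r) for r in rays])
    return visited_lists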

4.2.4 Electrical Field Computation

The objective of first launching the rays in the environment model is to deliver the possibility of computing the electrical field according to a specific simulation configuration. In addition to Subsection 4.2.2, a configuration can be extended so that it contains a range of frequencies or a different maximum number of reflections and refractions that a ray can undergo. As a result of the ray calibration process, a list is created for each ray that contains all the addresses of the visited environment cells together with the computed reflection coefficients.


Algorithm 4.4 trace(Q, P1, P2, depthcurrent, visited)

1: INITIALIZE: rayend = False
2: Pmax = (Q.x + Q.width), (Q.y + Q.height)
3: Pmin = Q.x, Q.y
4: if Pmin ≤ P1 ≤ Pmax then
5:     if length(Q.node) ≠ 0 then
6:         for i = 0 to length(Q.node) do
7:             if rayend == False then
8:                 trace(Q[i], P1, P2, depthcurrent, visited)
9:             end if
10:        end for
11:    else
12:        if length(Q.points) > 0 then
13:            if depthcurrent ≤ depthmax then
14:                Pcenter = (Q.x + Q.width/2), (Q.y + Q.height/2)
15:                if Q.binary_index ∉ visited then
16:                    dtotal = √((Pcenter.x − Ptx.x)² + (Pcenter.y − Ptx.y)²)
17:                    visited ← Q.binary_index, dtotal, Ptx
18:                    compute_2D_geometric(Q, P1, P2, depthcurrent, visited)
19:                end if
20:            else
21:                rayend = False
22:            end if
23:        end if
24:        if rayend == False then
25:            hit, t ← aabb_intersection(Pmin, Pmax, P1, P2)
26:            if hit == True then
27:                t = t + 0.0001
28:                P1 = P1 + (P2 − P1) t
29:                hit, t ← aabb_intersection(Pmin, Pmax, P1, P2)
30:                if hit == True then
31:                    rayend = True
32:                else
33:                    rayend = False
34:                end if
35:            else
36:                rayend = True
37:            end if
38:        end if
39:    end if
40:    while Q.level == 1 and rayend == False do
41:        if length(Q.node) > 0 then
42:            for n = 0 to length(Q.node) do
43:                trace(Q, P1, P2, depthcurrent, visited)
44:            end for
45:        end if
46:    end while
47: else if P1.x < outer_boundaries.x or P1.x > (outer_boundaries.x + outer_boundaries.width) then
48:     rayend = True
49: else if P1.y < outer_boundaries.y or P1.y > (outer_boundaries.y + outer_boundaries.height) then
50:     rayend = True
51: end if


Figure 4.6: Example of a ray calibration.

Furthermore, this list is used to compute the electrical field at each visited location according to the following equation (4.6):

E_{x,y} = E0 · (e^{−jk·d_{x,y}} / d_{x,y}) · ∏_{g=1}^{n} Γg · ∏_{h=1}^{m} Th    (4.6)

where d_{x,y} defines the total distance between location x, y and the transmitter location. When one or more reflections occur, this total distance is computed as the sum of the distance between location x, y and the last reflection and the distances between all reflections and the transmitter location. Subsequently, this distance represents the total length of the ray. Furthermore, n and m define the number of reflections Γ and refractions T, respectively. According to Equation (4.6), a multiplication of the calibrated reflection and refraction coefficients is required in order to compute the electrical field at location x, y relative to the transmitter location. Next, E0 · e^{−jk·d_{x,y}} specifies the complex electrical field related to the transmitted power, the total distance, and the frequency. The parameter k in this formula denotes the wave number 2π/λ, and E0 is the reference electrical field computed according to the following equation (4.7):

E0 = √((η0 / (4π)) · Pt · Gt)    (4.7)

where η0 is the intrinsic impedance of free space, 120π. Next, Pt is the transmitted power and Gt is the gain of the transmitting antenna.
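
A sketch of Equations (4.6) and (4.7) in code, for one calibrated ray described by its total path length and its lists of reflection and transmission coefficients (the argument names are illustrative, powers and gains are in linear units):

import numpy as np

def reference_field(pt_watt, gt_linear, eta0=120.0 * np.pi):
    # Equation (4.7): reference electrical field from the transmitted power and
    # the transmit antenna gain.
    return np.sqrt(eta0 / (4.0 * np.pi) * pt_watt * gt_linear)

def ray_field(d_total, gammas, taus, freq_hz, e0):
    # Equation (4.6): complex field contribution of one ray after travelling a
    # total distance d_total and undergoing the given reflections/refractions.
    c = 299792458.0
    k = 2.0 * np.pi * freq_hz / c            # wave number 2*pi/lambda
    field = e0 * np.exp(-1j * k * d_total) / d_total
    return field * np.prod(gammas) * np.prod(taus)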


4.2.5 Application Methods

In order to use this propagation loss model in applications such as wireless coverage optimization or wireless localization, a total electrical field needs to be computed according to equation (4.8).

E_tot = Σ_{p=1}^{q} Ep    (4.8)

In this formula, q is the number of individual electrical fields Ep at location x, y. Because the electrical field at location x, y is computed for every ray, all electrical fields with the same reflection location can be neglected and need to be removed from the list. This list includes the electrical field Ep of every ray that has traversed the individual node during the calibration process, and these fields can be summed to obtain the total electrical field. As a result of applying the sum, constructive and destructive wave behaviors are included because of the presence of the phases. Moreover, when a received signal strength (RSS) is required, the total electrical field has to be converted into a power density relative to the intrinsic impedance of air. This conversion can be computed with the following equation (4.9).

Pr = (E_tot² · λ² · Gt · Gr) / (η0 · 4π)    (4.9)
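
Continuing the previous sketch, the coherent sum of Equation (4.8) and the conversion to a received power according to Equation (4.9) as reconstructed above, expressed in dBm, could look as follows (again with illustrative names and linear antenna gains):

import numpy as np

def received_power_dbm(ray_fields, freq_hz, gt_linear, gr_linear, eta0=120.0 * np.pi):
    # Equation (4.8): coherent sum of the per-ray complex fields at one cell.
    e_tot = np.abs(np.sum(ray_fields))
    # Equation (4.9): convert the total field into a received power (watts).
    lam = 299792458.0 / freq_hz
    p_r = (e_tot ** 2) * (lam ** 2) * gt_linear * gr_linear / (eta0 * 4.0 * np.pi)
    return 10.0 * np.log10(p_r * 1e3)        # watts to dBm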

Figure 4.7a shows the total electrical field, converted into a power density, at every location that was calibrated when a single ray is launched. Figure 4.7b shows the same result when 1000 rays are launched in the environment model.

Figure 4.7: example of single ray propagation. (a) One ray. (b) Example of 1000 rays.

For applications in indoor environments, the term multipath is often used to indicate the influence of the different propagation phenomena that a ray undergoes, more specifically the influence of reflections, refractions, and diffractions. Since each location contains a list of all electrical fields of every ray that traversed the specific node, every distance can be converted into a time t, which indicates the time a ray needs to travel from the transmitter to location x, y. This conversion is computed by dividing the distance by the speed of light as follows:

t = d / c    (4.10)

where d is the distance that was used to compute the electrical field and c is the speed of light. As a result of this approach, an approximation of the impulse response at location x, y is possible. Because of the deterministic nature of the ray-launching propagation model, a good approximation is only reached when the number of rays is very high.
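
As a sketch, the per-ray distances and powers stored at a cell can be turned into such an approximate impulse response as a sorted list of (delay, power) taps:

def impulse_response(distances, ray_powers, c=299792458.0):
    # Each ray contributes one tap: its propagation delay t = d / c
    # (Equation 4.10) and the power it carries at this cell.
    return sorted(zip((d / c for d in distances), ray_powers))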

4.2.6 Complexity Analysis

To investigate the computational complexity of the implemented ray-launching propagation loss model, an analysis is made in order to find the time and space complexity of the algorithm. When applying the propagation loss model according to the four steps that were discussed in Section 4.2, a real simulation is usually processed in two stages: First, the line segment extraction is done separately because we need to address each cluster of line segments with a certain material property. Second, the device configuration, ray calibration, and electrical field computation are processed in a sequential way. Moreover, the ray calibration method takes the longest computation time because of the ray traversal algorithm. Within this section, the analysis therefore focuses on the complexity of this ray traversal algorithm when a real environment is used as input. In order to analyze this, three different aspects are evaluated:

1. The computation time and the average number of Quadtree cells that were traversed when the ray traversal algorithm was applied for a specific set of rays and a specific Quadtree depth. As a result of increasing the Quadtree resolution depth, i.e. dividing each leaf into four sub-cells, the time complexity of the traversal algorithm is exponential. This can be seen in Figure 4.8, where the average number of cells doubles each time the Quadtree resolution is raised by one level. Furthermore, the computation time, which is projected on the left y-axis, doubles each time the resolution increases.

2. The computation time and the average number of line segments that were stored in an occupied Quadtree cell when the traversal algorithm was applied for a specific set of rays and a specific Quadtree depth. The result of this analysis can be seen in Figure 4.9, where the number of line segments decreases exponentially to 1 when the resolution increases by one level.

3. The correlation between the number of rays and the computation time when the resolution level increases by one. According to both complexity analyses, it is clear that the number of rays has no impact on the number of cells that the traversal algorithm has to traverse. Subsequently, due to the sequential procedure of this evaluation, the time complexity of a simulation with multiple rays can also be seen as exponential, as shown by the relative computation times of each curve.


Figure 4.8: Result of the complexity analysis, where the x-axis represents the Quadtree depth, the right y-axis the average number of cells that were traversed by the rays, and the left y-axis the computation time in seconds (curves for 5, 10, 25, 50, and 100 rays).

Figure 4.9: Result of the complexity analysis, where the x-axis represents the Quadtree depth, the right y-axis the average number of line segments in the occupied cells that were traversed by the rays, and the left y-axis the computation time in seconds (curves for 5, 10, 25, 50, and 100 rays).

In order to speed up the computation time, the ray calibration method uses multiprocessing techniques. Furthermore, because of the exponential complexity of this method, an analysis has to be made in order to evaluate the optimal number of rays with respect to the accuracy of the propagation loss model. In Section 4.4, different analyses are made in terms of the computation time when multiple cores are used, and of the accuracy of the propagation loss model with respect to the number of rays, the number of reflections, and the Quadtree resolution.

4.3 Materials

In order to prove the objective of simulating the signal strength in a realistic way, a validation approach is necessary. To validate our propagation loss model, different aspects are analyzed and evaluated against real measurements that were taken in two different indoor office environments. This section first explains the general validation model that is used to validate each simulation, so that the different results can be analyzed and evaluated in the same way. Secondly, the two test environments and the respective hardware that was used to take all the measurements are explained.

4.3.1 Validation model

This automated validation model is based on four layers as can be seen in Figure 4.10.

Figure 4.10: overview of the validation approach (SLAM, RF measurements, transformation tree, Rx configuration, propagation model, and validation)

The first layer specifies on one side the SLAM algorithm, which localizes the robot relative to a map that is created by sensing the environment with a laser that was mounted on the robot. On the other side, radio frequency measurements are taken at different locations with a DASH7 receiver that was mounted on the robot. As a result of the different losses due to passive components, a calibration measurement taken in the antenna's far field is necessary. To incorporate this calibration measurement, the transmitted power is calculated according to the Friis Transmission equation (2.34), which is then used in Equation (4.7). Second, a transformation tree is maintained during the measurement process. This transformation tree includes all geometric transformations between all transmitters that were placed in the environment and the first robot location. This means that every receiver location can be found in the trajectory estimation after a SLAM algorithm has been applied. Moreover, to take advantage of this transformation tree, the initial geometric transformations between the fixed transmitters and the first robot location have to be measured. Next, as a result of this transformation tree, the propagation model is able to simulate the total electrical field relative to each transmitter configuration and relative to the environment created by the SLAM algorithm. Such a transmitter configuration consists of the transmitted power, the antenna gain, the antenna radiation pattern, the number of rays, and the frequency that was used. As a second result of the transformation tree, each receiver location can be extracted because at every receiver location a specific time delay was applied. Lastly, a validation of a specific transmitter configuration can be made by computing the accuracy in terms of the RMSE, the MAE, and the ME between the measured signal strength and the predicted signal strength computed at the same location. To analyze both the correlation between the number of rays and the environment resolution and the level of the reflection tree, different transmitter configurations have to be validated. This makes it possible to evaluate the result. Because of the generic implementation, the list of rays that includes all the results can be used to validate a different simulation. For example, a simulation that holds the result of 6400 rays makes it possible to analyze the result of a simulation where 3200, 1600, 800, 400, 200, or 100 rays are launched. This decreases the computation time that is necessary to evaluate the individual validations.
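
A sketch of the calibration step mentioned above, assuming a calibration RSS measured at a known far-field distance and the Friis transmission equation written in logarithmic form (all parameter names are illustrative):

import math

def calibrated_tx_power_dbm(rss_cal_dbm, d_cal_m, freq_hz, gt_dbi, gr_dbi):
    # Friis transmission equation in dB, solved for the transmitted power:
    # Pr = Pt + Gt + Gr + 20*log10(lambda / (4*pi*d))  =>  Pt = Pr - Gt - Gr - path term.
    lam = 299792458.0 / freq_hz
    path_term = 20.0 * math.log10(lam / (4.0 * math.pi * d_cal_m))
    return rss_cal_dbm - gt_dbi - gr_dbi - path_term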

4.3.2 Robot

The robot that was used to map all test environments is the compact differential-drive Pioneer 3-DX. This robot is used worldwide for academic research on environment mapping, localization, autonomous driving, etc. It contains two 500-tick wheel encoders, which result in highly accurate odometry readings. Second, it is powered by three lead-acid batteries that each deliver 7.2 Ah. Next, an aluminum frame is mounted on top of the robot that carries 4 radio receivers, a stereo camera, and a depth-sense camera. In addition to this, a LIDAR is mounted in front of the robot together with a high-performance computer and an access point to monitor the robot remotely, as can be seen in Figure 4.11.

Figure 4.11: Picture of the Pioneer 3-DX robot that was used for validating the propagation model.

This computer operates the robot via ROS, which provides all kinds of hardware drivers, libraries to map an environment, and tools to control the robot on the one hand and to read all the sensor measurements on the other hand. Furthermore, ROS is a centralized time-based framework that can process different applications or nodes according to a specific geometric transformation that is related to the time at which the measurement was processed. This delivers an important advantage for validating the propagation loss model, because all radio measurements can be mapped onto the geometric transformations of the mapped environment, and thus each transmitter location in the environment relative to the initial robot location is known. This advantage reduces the installation time needed to validate the propagation model, because only the locations of the fixed nodes in the environment need to be measured relative to the first robot location.

4.3.3 Test Environments

As stated before, the propagation loss model is validated in two indoor office environments. The first environment is situated in the center of Antwerp. It consists of two adjacent rooms that are divided by a plasterboard wall with a door. Both rooms were empty during the measurements, and six transmitters were placed in the rooms at fixed locations, which are indicated in Figure 4.12b as blue crosses. The dimensions of the first room are 8.6 m × 6 m, whereas the dimensions of the second room are 12 m × 6 m. Next, the robot performed the measurements at 16 different locations that are illustrated in Figure 4.12b with red stars. Subsequently, 96 individual links can be validated. Simultaneously with the radio measurements, the robot processes the measurements with a SLAM algorithm so that an environment map is created. The result of this map, modeled as a Quadtree with a resolution up to level 8 (0.234 m × 0.078 m), and the trajectory of the robot can also be seen in Figure 4.12.

Figure 4.12: Office environment that was used to validate the propagation loss model. (a) SLAM result. (b) Environment model with transmitter and receiver locations.

To validate the propagation model, radio measurements were made at the different locations during one minute on one sub-1 GHz frequency, 434.56 MHz. During this period, all the received signal strength values that were sent every second by our sub-1 GHz low-power transmitter nodes were captured together with the laser and wheel odometry data. The embedded hardware that we used for this environment was the CC1101 radio chip of Texas Instruments in combination with the Giant Gecko development kit of SiLabs, as can be seen in Figure 4.13. In addition to the micro-controller and the radio chip, a monopole antenna is used to emit the signals on the medium. This monopole antenna has a peak gain of +3 dBi and is matched for 433 MHz. Furthermore, the low-power mid-range DASH7 open source stack was used to program the embedded hardware [Weyn et al., 2015].

The second environment that was used to validate the propagation loss model is located in the iGent Tower in the city of Gent. This environment is also divided into two rooms, one large conference room and a smaller meeting room.


Figure 4.13: Hardware that is used in the CPM environment for transmitter and receiver

Both rooms were equipped with the regular conference and meeting room equipment such as tables, chairs, and a projector. The dimensions of the large conference room are 9.4 by 6.8 meters and those of the small meeting room 4.6 by 6.8 meters. As a result of the realistic modeling solution, the Quadtree that is modeled at resolution level 8, which results in a minimum cell size of 3.6 by 2.6 centimeters, can be seen in Figure 4.14b. Together with the environment model, the trajectory, the locations of 10 transmitters, and 20 receiver locations are illustrated, which results in 200 individual links to be validated. In addition, the robot was equipped with 4 receivers, so these 200 individual links can be multiplied by 4, bringing the total number of links that can be used to validate the propagation model to 800.

Figure 4.14: iTower Gent. (a) SLAM result. (b) Environment model with transmitter and receiver locations.

The hardware that was used in this environment to make a validation of the propagation model possible is based on a System-on-Chip (SoC) design of SiLabs that integrates the Si4460 radio chip with a Leopard Gecko micro-controller, which is based on the ARM Cortex-M3 architecture. This SoC is the core component of the DASH7-USB design that can be seen in Figure 4.15. Furthermore, every link that is evaluated consists of measurements that were captured during one minute at a receiver location. Because of the wireless aspect, a monopole antenna that is matched for 433 MHz is used. This antenna has a maximum peak gain of +3 dBi.

Figure 4.15: Hardware that is used in the iTower Gent environment for transmitter and receiver

4.4 Results and Discussion

This section holds the individual results of both office environments and the discussion of these results. The two environments are evaluated in the same way according to the general validation model, which is explained and discussed in Section 4.3. In order to evaluate this propagation loss model, four statistical estimators are used to quantify its performance. First, the mean error is used to quantify whether the propagation model is performing better or worse than what was measured. The ME, E_ME, is computed according to the following equation:

E_ME = (1/n) Σ_{i=1}^{n} (P_measured − P_simulated)    (4.11)

where P_measured is the average signal strength that was measured at a receiver location during a period of one minute and P_simulated is the signal strength that was simulated at the same receiver location. The second estimator that is used to express the performance of the propagation loss model is the MAE, E_MAE. This estimator is mainly used to find the average error and is calculated with the following equation:

E_MAE = (1/n) Σ_{i=1}^{n} |P_measured − P_simulated|    (4.12)

The third estimator that is applied to illustrate the performance of the propagation loss model is the RMSE, E_RMSE. This estimator is widely used to indicate the performance of a system and is computed in Equation (4.13). Since it is sensitive to outliers, this value is not always the best option.

E_RMSE = √( (1/n) Σ_{i=1}^{n} (P_measured − P_simulated)² )    (4.13)
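
These estimators translate directly into a few lines of NumPy; in the sketch below, measured and simulated are arrays of signal strengths in dBm for the validated links, and the standard deviation of the errors is included as the precision:

import numpy as np

def validation_metrics(measured, simulated):
    # Equations (4.11)-(4.13): mean error, mean absolute error and RMSE, plus
    # the standard deviation of the errors used as the precision.
    err = np.asarray(measured) - np.asarray(simulated)
    return {
        "ME": float(np.mean(err)),
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "precision": float(np.std(err)),
    }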


Finally, the precision is used to quantify the standard deviation of the errors. These estimators will be used to evaluate each environment from four different perspectives:

1. By analyzing the correlation between the number of rays, the resolution of the environment, and a specific level of the reflection tree, the best global accuracy in terms of RMSE of all individual links is evaluated.

2. The influence of the reflection tree level, or changes in small-scale fading, is analyzed by simulating different sets of rays and relating the RMSE to different resolution levels of the environment.

3. The accuracy of all transmitters is evaluated by analyzing the differences between the simulated signal strength and the signal strength measured during one minute at the different receiver locations, in terms of the ME, the MAE, the RMSE, and the precision.

4. The performance of the propagation loss model is analyzed in relation to the number of processing units and the resolution of the environment. Furthermore, this analysis takes the optimal level of the reflection tree and the optimal number of rays as input.

4.4.1 Office Environment 1

The first office environment consists of 221 line segments and contains 6 transmitters that operate at 433 MHz. Furthermore, 16 different locations were used to receive DASH7 messages that were sent periodically. This results in 96 links that are used to evaluate the ray-launching propagation loss model. In the following sections, the results of the four perspectives used to evaluate the propagation model are presented.

4.4.1.1 Resolution

In order to analyze the correlation between the number of rays that are launched and the global accuracy in terms of RMSE, the resolution of the environment model is changed. Because of the spatial data structure modeling technique, an environment can be described with a specific cell size. This makes it possible to represent an environment with a lot of detail or without any detail, which influences the global accuracy when validating the propagation loss model. Figure 4.16a visualizes the global accuracy on the y-axis and the number of rays on the x-axis. This figure shows the result of a set of simulations that were computed for resolution levels ranging from 6 to 10 and a set of rays (200, 400, 800, 1600, 3200, 6400). As can be seen in the figure, the global accuracy of a simulation at resolution level 6, represented by the red line, first decreases to a minimum when the number of rays increases. When the number of rays increases further, the global accuracy gets worse. In contrast to this observation, the global accuracy of a simulation at resolution level 10, represented by the cyan line, converges to a minimum when the number of rays increases. In addition to this trend, each resolution level has a different minimum.

As a result of these observations, two principles are further analyzed in order to get a more fundamental understanding of these results. The first principle covers the inclusion of the neighborhood of the cell that is used for retrieving the simulated signal strength.


Figure 4.16: RMSE validation with phase shifts included for office environment 1 (x-axis: number of rays, y-axis: RMSE in dB, curves for resolution levels 6 to 10). (a) Zero neighbors are included. (b) Eight neighbors are included.

In order to analyze this principle, the RMSE is calculated based on the difference between the measured signal strength and the simulated signal strength at the specific cell. In case the neighborhood is included, the average signal strength of the specific cell and the 8 surrounding cells is used to compute the RMSE. Moreover, as a result of reflections, the phase of the electrical field shifts so that constructive and destructive phenomena occur. This can lead to local changes when a fine resolution is applied. An example of such local changes can be seen in Figure 4.17, where the highlighted surface that is part of the heatmap of Figure 4.18 contains cells whose signal strength differs a lot from that of their neighbors. The signal strength of the cell indicated by A, for example, is much lower than that of the neighboring cell indicated by B.

Figure 4.17: Local constructive and destructive phenomena (cells A and B)

The result of this evaluation for the first office environment can be seen in Figure 4.16, where Figure 4.16a illustrates the validation where no neighbors are included and Figure 4.16b the validation when the 8 surrounding neighbors are included. The main difference between both figures is that the RMSE when zero neighbors are included is only 3 to 5 dB lower than the result where eight neighbors are included.


Figure 4.18: RSS Heatmap of transmitter that is located at position 2

The second principle that is analyzed covers the idea of removing the phase shifts that are introduced by any reflection. This means that only the constructive behavior is applied. Thus the total electrical field, which is used in Equation (4.9), is described as the sum of the individual electrical fields that were computed in a cell by each ray that traversed that specific cell. When the absolute value of each individual electrical field is used to compute the total electrical field, no phase shifts are included. The same observation can be made as before: Figure 4.19b, where phase shifts are included, gives a result where the RMSE is 3 to 5 dB worse than Figure 4.19a, where no phase shifts are included, when the number of rays is limited to 1600.

Figure 4.19: RMSE validation of the first office environment where eight neighbours are included (x-axis: number of rays, y-axis: RMSE in dB). (a) No phase shifts are included. (b) Phase shifts are included.

4.4.1.2 Reflections

To evaluate the influence of reflections, a benchmark is applied that analyzes the global accuracy of a validation that is simulated with a specific resolution level. This results in four graphs, shown in Figure 4.20, where each graph represents the benchmark of a validation with different reflection recursion levels at a specific resolution level. These graphs show the number of rays that were used to validate each reflection recursion level on the x-axis and the RMSE expressed in dB on the y-axis. The meaning of the reflection recursion level was explained in Section 4.2.3 and is visualized in Figure 4.5. Thus, each reflection recursion level represents the lowest level of such a reflection tree. As a result of the correlation between the number of rays and the RMSE when different resolution levels are applied, the validation used to evaluate the influence of the reflections is based on the result where zero neighbors are included and the phase shifts are allowed.

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(a) Reflection benchmark at level 6

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(b) Reflection benchmark at level 7

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(c) Reflection benchmark at level 8

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(d) Reflection benchmark at level 9

Figure 4.20: Reflection benchmark for office environment 1

Figure 4.20 shows the results of such a reflection benchmark of this environment, which is modeled at resolution level 6 (34 cm x 12.5 cm), 7 (17 cm x 6.25 cm), 8 (8.6 cm x 3.12 cm), and 9 (4.3 cm x 1.56 cm). Since this benchmark validates every simulation according to the environment model, which is modeled according to Algorithm 4.2, a material parameter needs to be assigned in order to evaluate the influence of the reflection recursion level and the number of rays. Because all walls in this environment were made of the same material, one material parameter ε is assigned to all components; it was set to 4 since they were dry brick walls. Furthermore, nine different reflection recursion levels are evaluated. These graphs show that when no reflections are configured, the RMSE is large and does not converge at resolution levels 6 and 7. When a reflection recursion level of 1 to 9 is configured, the RMSE converges for resolution levels 8 and 9. On the other hand, when the resolution level is equal to 6 or 7, the RMSE first decreases towards an optimal number of rays; when the number of rays further increases, the RMSE gets worse. This behavior shows that the cell size at resolution levels 6 and 7 is too large when the number of rays is larger than 800 for resolution level 6 and 1600 for resolution level 7. As the evaluation shows, a good trade-off is found at a reflection recursion level of 5, since the RMSE does not improve when the reflection recursion level is larger.
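The influence of the material parameter ε on each reflection can be illustrated with the Fresnel reflection coefficient for a lossless dielectric wall. The sketch below assumes perpendicular (TE) polarization and an air-to-wall boundary; the exact formulation used in section 4.2 may differ.

```python
import numpy as np

def fresnel_reflection_te(eps_r, theta_i_deg):
    """Fresnel reflection coefficient (perpendicular/TE polarization) at an
    air-to-dielectric boundary with relative permittivity eps_r.

    theta_i_deg: angle of incidence measured from the surface normal, in degrees.
    """
    theta = np.radians(theta_i_deg)
    root = np.sqrt(eps_r - np.sin(theta) ** 2)
    return (np.cos(theta) - root) / (np.cos(theta) + root)

# Dry brick wall (eps_r = 4), as assumed for office environment 1
for angle in (0, 30, 60, 85):
    print(angle, abs(fresnel_reflection_te(4.0, angle)))
```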

4.4.1.3 Validation

As shown in Figure 4.12, the result of the SLAM approach can be used to extract theline segments according to Algorithm 4.2 in order to assign a material parameter to eachline segment. As a result of the line segment extraction process, all line segments areinserted in a Quadtree data structure as can be seen in following figure 4.21 where thetransmitters are indicated by the blue crosses and the receiver locations by red stars.Subsequently, this Quadtree is modeled with a resolution level of 8. This means thateach cell has a width of 8.6 cm and a height of 3.1 cm.
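The relation between the resolution level and the cell size can be sketched as follows, assuming (as the reported cell sizes suggest) that the bounding box of the Quadtree is halved in both dimensions at every level; the 22 m by 8 m extent used here is only an approximation of this environment.

```python
def cell_size(env_width_m, env_height_m, level):
    """Cell dimensions of a quadtree grid at a given resolution level,
    assuming the bounding box is split in half at every level."""
    return env_width_m / 2 ** level, env_height_m / 2 ** level

# A bounding box of roughly 22 m x 8 m gives cell sizes close to those reported above
for level in (6, 7, 8, 9):
    w, h = cell_size(22.0, 8.0, level)
    print(level, round(w * 100, 1), "cm x", round(h * 100, 1), "cm")
```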

[Map: length (m) versus width (m) of the environment, with the transmitter locations 1 to 6 and the receiver locations marked.]

Figure 4.21: Environment Model with transmitter and receiver locations

Next, to evaluate the propagation loss model, the correlation between the different parameters needs to be analyzed. As a result of this analysis, the best result for this environment is retrieved by applying a simulation at resolution level 9 with 3200 rays. Furthermore, the best recursion level of the reflection tree was found at level 5. Next, the transmitted power Ptx was configured by computing the transmit power according to the Friis Transmission Equation and a calibration measurement that was taken in the far field at 2λ. The gains Gtx and Grx were configured to −5.6 dBi. This value was measured in an anechoic chamber, where we could measure the real output power without interference of reflections. On the other hand, with a receiver we were able to receive a signal from the transmitter, which made it possible to calculate both antenna gains and extra losses due to passive components according to the Friis Transmission Equation. This simulation results in Figure 4.22, where the differences between the average received signal strength that was measured and simulated are illustrated for each transmitter.
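A minimal sketch of this calibration step is given below: given a calibration measurement taken in the far field at a known distance, the Friis transmission equation is solved for the combined antenna gains and passive losses. The frequency and the measured value used here are hypothetical placeholders.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis) in dB."""
    wavelength = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

def combined_gain_db(p_rx_dbm, p_tx_dbm, distance_m, freq_hz):
    """Solve the Friis transmission equation for Gtx + Grx (including passive
    losses) from a far-field calibration measurement at `distance_m`."""
    return p_rx_dbm - p_tx_dbm + fspl_db(distance_m, freq_hz)

# Hypothetical calibration at two wavelengths from a 434.56 MHz transmitter
freq = 434.56e6
d_cal = 2 * (3e8 / freq)
print(combined_gain_db(p_rx_dbm=-40.0, p_tx_dbm=0.0, distance_m=d_cal, freq_hz=freq))
```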

This result is evaluated by computing the global accuracy and precision of all transmitters. As stated before, the global accuracy is described by four parameters that use the error between the average signal strength that was measured and simulated. The accuracy of each simulation that was performed for every transmitter can be seen in Table 4.1.
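Assuming the four parameters are the mean absolute error, the mean error, the RMSE and the standard deviation of the error reported in the tables below, and taking the error as simulated minus measured, they could be computed as in the following sketch (the exact definitions in the thesis may differ slightly).

```python
import numpy as np

def validation_metrics(measured_dbm, simulated_dbm):
    """Global accuracy estimators: MAE, ME, RMSE and the standard deviation
    of the error, all in dB. Error is taken as simulated minus measured."""
    err = np.asarray(simulated_dbm, dtype=float) - np.asarray(measured_dbm, dtype=float)
    return {
        "MAE": float(np.mean(np.abs(err))),
        "ME": float(np.mean(err)),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "sigma": float(np.std(err)),
    }

# Hypothetical averaged RSS values for a handful of receiver locations
measured = [-72.1, -80.4, -65.3, -77.8]
simulated = [-70.0, -85.1, -66.2, -74.9]
print(validation_metrics(measured, simulated))
```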


[Plot: error (dB) per transmitter location 1 to 6.]

Figure 4.22: Validation of the different transmitters

Table 4.1: Validation Results of office environment 1

Transmitter   EMAE (dB)   EME (dB)   ERMSE (dB)   σ (dB)
1             8.17        3.99       8.20         4.12
2             9.21        1.67       9.28         7.60
3             5.38        -0.29      5.50         5.19
4             10.32       3.09       10.40        6.27
5             6.92        4.19       6.94         5.038
6             15.55       -15.55     15.56        7.13

Figure 4.22 describes the results of all receiver locations related to each transmitter. Another result of the propagation loss model can be seen in Figure 4.18, which visualizes the signal strengths in the environment of the transmitter that is located at position 5. The global accuracy of this office environment can be found in Table 4.2.

Table 4.2: Overall Results of office environment 1

EMAE (dB)   EME (dB)   ERMSE (dB)   σ (dB)
9.26        -0.48      9.32         5.89

4.4.1.4 Performance

This section illustrates the performance of the simulation that was used to validate this environment model on an Asus Zenbook UX32VD with a Core i7-3517U and 10 GB of RAM. Based on the simulation parameters of the simulation that gave the best results, the computation time was measured when applying the simulation with one, two, and four processes for the different resolution levels. The result can be seen in Figure 4.23, where the different resolution levels are shown on the x-axis and the measured computation time, expressed in seconds, on the y-axis.

[Plot: computation time (sec) versus resolution level 6 to 9 for 1, 2, and 4 workers.]

Figure 4.23: The performance of a simulation where 3200 rays are launched with different resolution levels.

As a result of this analysis, applying the simulation with 4 workers results in the lowest computation times. The computation of a simulation at level 8 with 4 workers takes 644 seconds, or 10 minutes and 43 seconds, which is 1.47 times faster than applying the simulation with 1 worker. Moreover, the computation time is not divided by two because each simulation is based on three steps that are processed in a sequential order, as discussed in section 4.2.6.
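A minimal sketch of how the per-ray work could be spread over a pool of worker processes is shown below; it illustrates the idea rather than the thesis implementation, and the `trace_ray` body is only a stand-in for the real ray calibration step.

```python
from multiprocessing import Pool

def trace_ray(ray_id):
    """Hypothetical placeholder for the ray calibration of a single ray."""
    return sum(i * i for i in range(50_000))  # stand-in for real work

def run_simulation(num_rays, workers):
    # Only the per-ray work is parallelized; the other steps of the
    # simulation remain sequential, which limits the overall speed-up.
    with Pool(processes=workers) as pool:
        return pool.map(trace_ray, range(num_rays))

if __name__ == "__main__":
    results = run_simulation(num_rays=3200, workers=4)
    print(len(results))
```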

4.4.2 Office Environment 2

The second office environment consists of 453 line segments and contains 10 transmitters that were able to send DASH7 messages every 2 seconds at 434.56 MHz. Furthermore, the robot stopped at 22 locations and stored all messages that were received during a period of 1 minute. This results in 220 individual links that are used to evaluate the ray launching propagation loss model. The result after applying GMapping in order to create a map and a transformation tree, which makes it possible to transform all receiver locations to the first robot location, can be seen in Figure 4.14.

4.4.2.1 Resolution

This section includes the results related to the correlation between the number of rays and the RMSE when validating the propagation model with different resolution levels of this environment model. Additionally, the two principles that were explained in paragraph 4.4.1.1 are also analyzed with this environment model. First, an evaluation is made of whether the average signal strength of the eight surrounding cells improves the results when a fine resolution level is applied.


Figure 4.24a shows the result of a benchmark where zero neighbors are included and the resolution levels range from 6 to 11. On the other hand, Figure 4.24b illustrates the result when eight neighbors are included. The main difference that can be observed from both graphs is that the RMSE at resolution level 6 when eight neighbors are included increases when the number of rays increases. This means that the cell size is too large when the resolution level is equal to 6, and the influence of including the 8 surrounding cells makes it even worse because the average signal strength is used. This results in an under-sampling problem. In addition to this result, the results of the validation converge when the resolution level is equal to 8 or higher.

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 11.]

(a) Zero neighbours are included

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 11.]

(b) Eight neighbours are included

Figure 4.24: RMSE validation of the second office environment where no phase shifts are included

Furthermore, an analysis is made of whether the inclusion of the phase shifts, which occur when a ray reflects, is relevant when eight neighbors are included. As can be seen in Figures 4.25a and 4.25b, the only difference appears at resolution level 6; for the other resolution levels there is no difference.

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 11.]

(a) no phase shifts are included.

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 11.]

(b) phase shifts are included.

Figure 4.25: RMSE validation of the second office environment where eight neighbors are included

In order to investigate the improvement of an environment model that was retrieved with a moving robot compared to an environment model that only contains straight lines with the real dimensions of the environment, the same evaluation is made. Because none of the commercial software platforms use an environment model that is made by a robot, it is impossible to make a direct comparison. Furthermore, current commercial software platforms do not allow configuring the resolution level of an environment model. This leads to an evaluation where we analyzed the propagation loss model with an environment model that only includes the outer boundaries of the environment. Figure 4.26 illustrates the differences between both environment models at resolution level 8.

(a) Environment model after SLAM. (b) Environment model of the outer boundaries

Figure 4.26: Difference between a real environment model after SLAM and an environment based on the outer boundaries of the real environment.

To indicate the improvement, the result of a benchmark is included where no neighbors were configured and the resolution levels range from 6 to 9. As can be seen in Figure 4.27, the environment that was captured with a robot has an average RMSE that is 3 to 4 dB better than the environment where only the outer boundaries are modeled.

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 9.]

(a) RMSE validation of resolution levels (6, 7, 8, 9) of the environment model after SLAM

[Plot: RMSE (dB) versus number of rays for resolution levels 6 to 9.]

(b) RMSE validation of resolution levels (6, 7, 8, 9) of the environment model of the outer boundaries

Figure 4.27: RMSE validation between a real environment model after SLAM and an environment based on the outer boundaries of the real environment.


4.4.2.2 Reflections

The evaluation of the reflection benchmark for this environment is also applied at four resolution levels, which are level 6 (29.7 cm × 20.3 cm), 7 (14.8 cm × 10.1 cm), 8 (7.4 cm × 5.08 cm), and 9 (3.7 cm × 2.5 cm), because every validation requires an environment model where all line segments are assigned a material parameter ε. For this environment, two values are assigned, which will be explained in the following Section 4.4.2.3. The results of this benchmark illustrate the global accuracy in terms of the RMSE related to the number of rays for a validation where nine reflection recursion levels were applied. As can be seen in Figure 4.28, for the results where the reflection recursion level was configured to zero, the RMSE decreases when the number of rays increases. Moreover, the results that were observed for the first environment in Section 4.4.1.2 can also be observed with this environment. This means that the global accuracy where the resolution level was 6 first decreases to a minimum where the number of rays was equal to 400, and at 1600 when the resolution level was configured to 7. On the other hand, when the number of rays further increases, the RMSE gets worse for both resolution levels. Besides this behavior, when the resolution levels are 8 and 9, the global accuracy converges to a minimum. As a conclusion for this environment, reflection recursion level 5 was found to be optimal for the validation of the propagation loss model.

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(a) Reflection benchmark at level 6

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(b) Reflection benchmark at level 7

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(c) Reflection benchmark at level 8

[Plot: RMSE (dB) versus number of rays for reflection recursion levels 0 to 9.]

(d) Reflection benchmark at level 9

Figure 4.28: Reflection benchmark for office environment 2


4.4.2.3 Validation

In order to validate the ray launching propagation loss model with this office environment, a Quadtree is built according to the region growing line extraction process, which is described in section 4.2.1. This algorithm allows the user to assign a permittivity ε to each segmented cluster that was found. As a result of this, two values are used in this environment, which can be seen in Figure 4.29. Because a 2D map and monopole antennas are applied to validate the propagation loss model, all cells that are indicated by a green color get a permittivity of 1, which is the same as air. These cells were in reality table legs and did not have any influence on the received signal strength. Furthermore, all cells that are indicated by a red color represent a wall and get a permittivity of 3.

Figure 4.29: Environment model where the blue dots indicate the transmitter locations and the red stars indicate the receiver locations. The green cells represent objects and the red cells represent a wall

To evaluate the validation process of this office environment, Figure 4.30 shows, for each individual transmitter location, the error between the averaged measured signal strength and the simulated signal strength. The configuration that was found optimal is based on 2700 rays, resolution level 8 (7.4 cm by 5.08 cm), and a reflection recursion level of 5. In addition to this configuration, phase shifts were included and zero surrounding cells were used.

The global accuracy of all transmitters that was analyzed for this environment can be seen in Table 4.3. According to these validation results, the simulation of the transmitter at location 1 performed best in terms of the different statistical estimators. Furthermore, a global result can be extracted by computing the average of the individual statistical estimators; it can be found in Table 4.4. As a result of this evaluation, the propagation loss model performs best compared to the real measurements when the transmitter is located at position 1. The global RMSE of 7.69 dB is acceptable given the different assumptions that we made. Since the RMSE is sensitive to outliers, the mean


[Plot: error (dB) per transmitter location.]

Figure 4.30: Validation of each individual transmitter

Table 4.3: Validation Results of office environment 2

Transmitter   EMAE (dB)   EME (dB)   ERMSE (dB)   σ (dB)
1             4.17        0.27       4.22         3.49
2             7.72        2.66       7.74         6.09
3             6.65        -1.60      6.69         5.82
4             9.76        3.23       9.79         6.83
5             5.91        0.58       6.02         4.76
6             4.99        0.49       5.09         4.47
7             10.79       -8.39      10.85        6.57
8             10.24       -5.66      10.28        4.96
9             8.63        -3.64      8.64         5.59
10            7.49        3.96       7.59         4.44

Table 4.4: Overall Results of office environment 2

MAE (dB)   ME (dB)   RMSE (dB)   σ (dB)
7.64       -0.81     7.69        5.30

error (ME) is often used to indicate how well a radio propagation model performs given a set of measurements. In an ideal scenario this value is zero. In our analysis, this value is −0.81 dB, which indicates that our model performs well. Figure 4.31 shows the result in the form of a heatmap, which illustrates the simulated signal strength on top of the map that was generated by a moving robot.


Figure 4.31: Heatmap of transmitter number two

4.4.2.4 Performance

The ability to apply the ray calibration process and the electrical field computation in parallel improves the computation times in such a way that using four workers is 1.58 times faster than using one worker. The main difference between these performance results and those of the first office environment is that the computation time of this environment is higher. This difference can be explained by the fact that the number of line segments in this environment is 453, compared to 221 in the first environment, which results in more calculations to find intersections between a ray and a line segment.

The computation time of the simulation that performs best according to the simulation parameters found in Section 4.4.2.3 is 1103 seconds, or 18 minutes and 22 seconds.

4.5 Conclusion

In this chapter, a realistic ray launching propagation loss model is discussed based on an implementation that contains four different parts. First, a line segment extraction algorithm is explained, which enables the classification and segmentation of a 2D map that is created by a moving robot. Secondly, a specific device configuration is discussed that makes it possible to configure a ray-launching propagation simulation. Thirdly, the core of the propagation model, which is called the ray calibration process, is discussed. The fourth part is the electrical field computation process, which uses the result of the ray calibration process as input to compute the individual electrical field values at each location that every ray visited. This makes it possible to compute a heatmap that can be used for applications such as the optimization of localization algorithms. In addition to the implementation of the propagation loss model, this chapter proposed a generic validation model that uses the result of a SLAM algorithm in combination with a set of radio measurements that were received from a transmitter at different locations. This


[Plot: computation time (sec) versus resolution level 6 to 9 for 1, 2, and 4 workers.]

Figure 4.32: The performance of a simulation where 2700 rays are launched with different resolution levels.

will be used to validate and evaluate the propagation loss model in two environments. The evaluation of both environments is discussed from four perspectives, which describe first the correlation between the number of rays and the resolution of the modeled environment. This evaluation is further analyzed by researching the influence of phase shifts and of including the surrounding signal strengths. As a conclusion of this evaluation, every resolution level has a global accuracy in terms of RMSE that is below 8 dB, which is obtained when the phase shifts are included and the neighborhood is not included. Additionally, optimal results were found for both environments when the resolution level is equal to 8 or higher. Besides the correlation between the number of rays and the global accuracy, an analysis is made to evaluate the reflection recursion level with regard to the number of rays and the resolution level. As a result of this evaluation, an optimal reflection recursion level of 5 is found, since the accuracy does not improve when a higher level is applied. When combining these conclusions, a validation can be applied where 96 links are used for the first environment, which results in an RMSE of 9.31 dB, an ME of −0.48 dB, an MAE of 9.26 dB, and a precision of 5.89 dB. The results for the second environment, which are based on 220 links, after applying the validation model are 7.69 dB for RMSE, −0.81 dB for ME, 7.64 dB for MAE, and a precision of 5.30 dB. Finally, the performance in terms of computation time on a high performance notebook is analyzed based on the simulation configuration that gave the best results. This results in a computation time of 10 minutes and 43 seconds for the first environment and 18 minutes and 22 seconds for the second environment. Both results were computed about 1.5 times faster with 4 workers compared to the simulation where 1 worker was used.


Chapter 5

Indoor RF-propagation Applications

One of the research goals is to investigate how applications can benefit from the ray-launching propagation loss model, which was implemented in the previous chapter. This chapter elaborates on current research on three different localization algorithms where the ray-launching propagation loss model of chapter 4 is applied to optimize and enhance the current results and implementations. First, when probabilistic localization based on the Received Signal Strength is applied, a probability likelihood map of a certain received signal strength is calculated in order to estimate the most likely location where a receiver is located [Berkvens et al., 2017]. Within this area, the ray-launching propagation loss model is able to simulate a more realistic likelihood since it incorporates the influence of reflections and refractions. Secondly, device free localization, and more specifically Radio Tomographic Imaging (RTI), is a technique where entities can be localized in an environment by measuring the attenuation caused by those entities on a set of radio links [Denis et al., 2017, 2016]. An important aspect of these entities is that they do not wear any device that is able to transmit or receive any signal. Since this kind of algorithm only works on the attenuation loss between two deployed devices that are in the vicinity of each other, the ray-launching propagation loss model can be used to research the influence of the attenuation loss of reflections and refractions in order to improve the algorithm in a fundamental way. Thirdly, AoA localization enables the localization of a transmitter based on the angle at which a signal is received by an antenna array [BniLam et al., 2017]. Due to the time difference that occurs when signals arrive at the individual antennas of the array, the phase difference between the different signals can provide an angle of arrival. This angle can then be used to estimate a location by probabilistic triangulation. Since this angle of arrival is sensitive to multipath or small scale fading, localizing individuals in indoor environments is very difficult. In order to get a more fundamental knowledge about how to cope with these multipath signals, the ray launching propagation loss model is able to simulate the electrical field when no reflections, one reflection, or multiple reflections occur.


5.1 Signal based Localization

In this section, current research about signal based localization is explained, and how such an algorithm can profit from a ray-launching propagation loss model. Traditional signal based localization algorithms use the RSS as an indicator to localize a mobile device, which is configured as a transmitter, based on the distance that is calculated by a radio propagation loss model [Bensky, 2016, Weyn, 2011, Farid et al., 2013]. Additionally, one or more receivers that are called gateways are placed at fixed locations in the environment. As a result of this idea and setup, it is assumed that the received signal strength is in direct correlation with the distance. When applying this assumption for localization in indoor environments, the accuracy will decrease because of the influence of reflections and refractions that occur in the environment at objects, walls, floor, ceiling, or a human body. In order to cope with this influence, a possible solution is to generate a likelihood distribution [Berkvens et al., 2017]. Such a likelihood distribution describes the probability of being at a location X given a certain signal strength Pm. Furthermore, this likelihood distribution is unique for each gateway, and thus a joint likelihood distribution can be estimated by multiplying the individual likelihood distributions in order to estimate the most probable location according to Bayes' rule. As a result of the assumption of a direct correlation between the distance and the signal strength, the likelihood distribution of a gateway is not realistic. In order to illustrate the influence of reflections and refractions clearly, the ray-launching propagation loss model is used to simulate the likelihood distribution of such a gateway for a certain signal strength Pm in terms of a power density based on a simple 2D indoor environment model. Subsequently, three simulations are applied where the number of rays is set to 1000, the resolution of the environment model is configured to 3.9 cm by 3.9 cm, and the reflection recursion level is configured to 0, 1, and 5. In addition to this, the likelihood distribution p(Pm | X) is based on a Gaussian distribution that models the difference between the simulated signal strength Ps and a certain signal strength Pm that indicates a receiver measurement, as can be seen in the following equation:

p(P_m \mid X) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(P_s - P_m)^2}{2\sigma^2} \right) \qquad (5.1)

where the mean is configured to the signal strength Pm as simulated at two meters from the gateway, and the standard deviation to 1 dB. As a result of the first simulation, where the reflection recursion level is configured to 0 as can be seen in Figure 5.1b, the probability distribution shown in Figure 5.1a is circular because all rays were launched omni-directionally. Figure 5.1d illustrates the simulation when the reflection recursion level is configured to 1. As a result of this simulation, the probability distribution is not circular due to the influence of one reflection, as can be seen in Figure 5.1c. Since the validation of the ray-launching propagation loss model that is discussed in the previous chapter shows that a recursion level of 5 performs best for both use cases, a more realistic result of the likelihood distribution can be seen in Figure 5.1e. When comparing this with the result of the first simulation, where no reflections were allowed, the number of locations where the probability of being at a location given a certain RSS-value is higher than zero is drastically increased due to the influence of multipath. When such a simulation is used with real measurements of different gateways in an indoor environment in order to apply probabilistic signal based localization, a more accurate signal propagation model will lead to a more accurate distance model, which in turn will lead to more accurate localization.
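A small sketch of how equation (5.1) could be evaluated over a simulated heatmap to obtain the likelihood map of one gateway is given below; the grid values and the measurement are illustrative. The joint likelihood over several gateways is then the cell-wise product of such maps.

```python
import numpy as np

def likelihood_map(simulated_rss_dbm, measured_rss_dbm, sigma_db=1.0):
    """Evaluate equation (5.1) for every cell of a simulated RSS heatmap."""
    ps = np.asarray(simulated_rss_dbm, dtype=float)
    return (1.0 / (np.sqrt(2 * np.pi) * sigma_db)) * np.exp(
        -((ps - measured_rss_dbm) ** 2) / (2 * sigma_db ** 2)
    )

# Illustrative 2x3 heatmap (dBm) and a measurement of -60 dBm
heatmap = [[-55.0, -60.0, -65.0],
           [-58.0, -61.0, -70.0]]
print(np.round(likelihood_map(heatmap, measured_rss_dbm=-60.0), 3))
```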

5.2 Device-free localization & Radio Tomographic Imaging

The second localization algorithm that can benefit from the ray launching propagation loss model is a device free localization technique. Such a device-free localization technique makes use of a tomographic sensor network to locate entities in an environment, which can be a human body or any other object that is located in an environment. RTI is a technique that enables device free localization and is able to localize an individual based on the relative attenuation loss of a radio link. In direct contrast to what is called tagged localization, device-free localization does not require the individual to wear an active or passive hardware device such as NFC or RFID. Instead, the influence of the physical presence of an entity on its environment is used to determine its location [Youssef et al., 2007]. Other examples of this type of localization include camera-based localization [Fleuret et al., 2008], passive infra-red [Kemper and Hauschildt, 2010] and passive radio mapping [Seifeldin et al., 2013]. Depending on the application, tag-less localization offers important advantages compared to tagged systems. First, tagged systems require continuously wearing a hardware device, which can be difficult or even outright impossible in some use cases. Secondly, some classic examples where device-free localization is applied are the localization and tracking of patients in elderly care institutions [Kaltiokallio et al., 2012b] and applications where emergency services need to quickly locate people in danger [Wilson and Patwari, 2011]. Additionally, some tag-less localization techniques like RTI and active infra-red can locate entities in an environment without being capable of identifying them. This can prove very advantageous in a privacy-related context.

An RTI tomographic sensor network consists of transceiver nodes which are placed in an environment. These nodes will repeatedly transmit messages to each other, thereby establishing a multitude of communication links, as can be seen in Figure 5.2.

An individual or obstacle that is present in the environment will influence the RSS-value of these links. Based on this influence, a location is estimated. Communication usually occurs on the 2.4 GHz band [Wilson and Patwari, 2010, Kaltiokallio et al., 2012a, Bocca et al., 2014], but successful experiments have been performed with sub-GHz RTI-systems (433 & 868 MHz) [Fink et al., 2015, Denis et al., 2016, 2017]. As a result of these experiments, the resulting image vector for 868 MHz when a human individual was present in the environment is shown in Figure 5.3.

5.2.1 The RTI-algorithm

In this paragraph, the basic workings of an RTI-algorithm will be explained as described in [Wilson and Patwari, 2010].

First, we represent the environment in which the sensor network is installed by a grid consisting of N equally sized pixels. Next, we define a vector y of size M which contains the RSS-data of each communication link in the network. In most systems, this data consists of the RSS-differences between a live measurement and a set of earlier calibration measurements taken when the environment was entity-free. We then create a weighting matrix W of size M × N. This matrix quantifies the exact relationship


(a) Likelihood where the reflection recursion level was configured to 0

(b) Simulation where the reflection recursion level was configured to 0

(c) Likelihood where the reflection recursion level was configured to 1

(d) Simulation where the reflection recursion level was configured to 1

(e) Likelihood where the reflection recursion level was configured to 5

(f) Simulation where the reflection recursion level was configured to 5

Figure 5.1: Results of three simulations where the reflection recursion level was configured to 0, 1, and 5.


Figure 5.2: Schematic overview of a test environment which consists of 2 rooms connected by a hallway. The red asterisks indicate the locations of the nodes

(a) Image where the red star indicates the true location and the blue star indicates the calculated location

(b) Schematic overview of the RTI-transceiver locations and the location where a person was located

Figure 5.3: RTI image and overview of the deployment environment.

between the links and the pixels. It contains for each link what part of the environment it provides information about. Finally, a vector x of size N is defined which contains the attenuation image. This attenuation image is a collection of dB-values, which define for each possible location the average attenuation that a link will experience when its main paths (as defined in the weight matrix) cross this location. Pixels with a high amount of attenuation are assumed to be more likely to contain locatable entities. The entire goal of an RTI-algorithm is to approximate this image vector, shown in Figure 5.3.
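The image vector is typically approximated with a regularized least-squares estimate. The sketch below uses Tikhonov regularization only to make the roles of y, W and x concrete; it is not necessarily the exact estimator of [Wilson and Patwari, 2010].

```python
import numpy as np

def rti_image(y, W, alpha=1.0):
    """Estimate the attenuation image x from link measurements y and the
    weighting matrix W using Tikhonov-regularized least squares:
        x = (W^T W + alpha * I)^-1 W^T y
    """
    W = np.asarray(W, dtype=float)
    y = np.asarray(y, dtype=float)
    n = W.shape[1]
    return np.linalg.solve(W.T @ W + alpha * np.eye(n), W.T @ y)

# Toy example: 3 links and 4 pixels
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
y = np.array([3.0, 6.0, 3.0])          # RSS attenuation per link (dB)
print(np.round(rti_image(y, W, alpha=0.1), 2))
```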

5.2.2 The Weighting Matrix

While many different RTI-variants exist in which the weighting matrix is calculated in a slightly different manner (e.g. the value is simply proportional to the inverse of the area of the ellipse [Alippi et al., 2016]), the basic principle remains the same. RTI considers only entities that are present in the line-of-sight of a link to be able to influence that link. Furthermore, this influence is only considered to be a decrease of the received RSS-value. Multipath effects in the environment are merely considered to be part of the noise. A slight exception to this can be found in fade-level based RTI, where each link has a unique value for the parameter λ based on its experimentally determined fade level [Kaltiokallio et al., 2014].

This simplification of reality is feasible as long as the line-of-sight is clearly the most important communication path between two nodes. However, this is not always the case in more complex environments, which causes a less accurate location estimation [Denis et al., 2017]. In order to create a weighting matrix which represents the main path(s) of a communication link in a far more realistic manner, the use of the ray launching propagation loss model is being researched.

For each link, the propagation model is used to calculate the most important paths between the nodes, as can be seen in the figure below.

Weights are allocated proportionally to each path based on the strength of the electric fields. Each path then equally distributes these weights among the pixels that it contains. These steps can be performed when the system is off-line and therefore have no impact on the required amount of calculations in a real-time system.
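A minimal sketch of this weight allocation for a single link is given below; the path representation (a field strength plus the pixel indices it crosses) is a hypothetical simplification of the simulator output.

```python
import numpy as np

def link_weights(paths, num_pixels):
    """Build one row of the weighting matrix for a single link.

    paths: list of (field_strength, pixel_indices) tuples produced by the
           ray-launching simulation for this link (hypothetical structure).
    """
    row = np.zeros(num_pixels)
    total_field = sum(strength for strength, _ in paths)
    for strength, pixels in paths:
        path_weight = strength / total_field            # proportional to field strength
        row[list(pixels)] += path_weight / len(pixels)  # spread equally over the pixels
    return row

# Two paths: a strong direct path and a weaker reflected path
paths = [(0.8, [0, 1, 2]), (0.2, [0, 3, 4, 5])]
print(np.round(link_weights(paths, num_pixels=6), 3))
```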

An example can be seen in Figures 5.4a and 5.4b, where two corresponding rows of a classic weighting matrix and the newly developed matrix can be compared visually.

(a) Visual overview of a single row of a weighting matrix using the old elliptical method.

(b) Visual overview of a single row of a weighting matrix based on the RF-propagation model.

Figure 5.4: Visual overview of the difference between the LoS weighting matrix and the weighting matrix where the reflections are included

Subsequently, Figure 5.5 illustrates the result when the weighting matrix is changed so that the influence of the important paths is included.

As can be seen, due to the multipath influence, the standard deviation of the attenuation losses becomes larger. It is important to note that merely changing the weighting matrix to be more representative of reality is likely not sufficient to improve the accuracy of the system. Many parameters (e.g. noise variance) will have to be updated. Furthermore, once multipath effects are being considered, it is no longer valid to assume that the effect of an entity being present will only lead to a simple decrease in RSS-value of the link. Research in this direction is currently being performed.

5.3 Angle of Arrival Localization

The third localization application that is explained and can benefit from the ray launching propagation loss model is the AoA based localization technique. This localization technique uses the phase difference between signals that have been received by an antenna


(a) Image where the red star indicates the true location and the blue star indicates the calculated location

(b) Schematic overview of the RTI-transceiver locations and the location where a person was located

Figure 5.5: Overview of the tomographic environment system together with the RTI location estimation where the influence of multipath is incorporated

array. In addition to this, AoA based localization can be divided into two stages: first, the angle of arrival estimation, and secondly the location estimation, which can be deterministic or probabilistic triangulation. Subsequently, the angle of arrival estimation uses the underlying electrical field strength, which is a complex number and is expressed in V/m. Because on one hand the phase difference between two or more signals is used, and on the other hand the phase difference is not dependent on the signal strength, it is very suitable for active device localization. This kind of localization assumes that the transmitter is actively sending a message. According to this assumption and the gaining popularity of Low Power Wide Area Networks (LPWAN) such as LoRa, Sigfox, or DASH7, the need for a good AoA-localization solution that is able to localize assets or individuals in indoor and outdoor environments grows. As a consequence of using the phase difference, this localization solution is very sensitive to multipath. In order to cope with this sensitivity, the ray launching propagation model makes it possible to apply different simulations where no reflections, one reflection, or multiple reflections occur. Furthermore, as a result of this sensitivity and the scope of this thesis, the location estimation is not included. More information about the location estimation can be found in [BniLam et al., 2017]. To illustrate the influence of multipath on angle of arrival localization, three simulations were applied, which are based on the second environment model that was used in the previous chapter to validate the ray launching propagation loss model. Two transmitter locations, indicated by numbers 2 and 3 in Figure 5.6, are used to indicate the influence of the reflection recursion level on the angle of arrival localization.
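To make the phase-difference principle concrete, the sketch below estimates the angle of arrival for a two-element array from the standard narrow-band relation Δφ = 2π d sin(θ)/λ; this is a textbook illustration, not the estimator of [BniLam et al., 2017].

```python
import numpy as np

def aoa_from_phase(delta_phi_rad, spacing_m, freq_hz):
    """Angle of arrival (degrees from broadside) of a plane wave received by
    two antennas separated by `spacing_m`, from their phase difference."""
    wavelength = 3e8 / freq_hz
    s = delta_phi_rad * wavelength / (2 * np.pi * spacing_m)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# Half-wavelength spacing at 434.56 MHz and a measured phase lag of 60 degrees
freq = 434.56e6
d = 0.5 * 3e8 / freq
print(round(aoa_from_phase(np.radians(60.0), d, freq), 1))  # about 19.5 degrees
```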

Figures 5.7a and 5.7b show the localization result of a simulation where zero reflections were configured. The localization error at locations 6 and 7 was 33 cm and 15 cm, respectively. When the same localization solution is applied to the result of two simulations where the reflection recursion level was configured to 1 and 5, similar probability maps can be observed for both locations in Figures 5.7c, 5.7a, 5.7d, and 5.7b. In addition to these results, the difference in estimation error between recursion levels 1 and 5 is equal to 0 cm for location 6 and 4 cm for location 7. According to these preliminary results, the influence of multipath is significant compared to a simulation where no reflections were simulated. Furthermore, increasing the reflection recursion level further does not have a large impact on the result of the AoA-localization.


Figure 5.6: Simplistic environment model where the red diamonds indicate the receiver locations and the black squares indicate the transmitter locations.

(a) Location 6, 0 reflections

(b) Location 7, 0 reflections

(c) Location 6, 1 reflection

(d) Location 7, 1 reflection

5.4 Conclusion

Three different localization applications that can benefit from applying a propagation loss model were explained in this chapter. First, signal based localization is briefly explained with a focus on the added value of the ray launching propagation loss model. Because of the influence of reflections, refractions and other phenomena, the likelihood distribution that is computed to indicate the probability of being at a location given a certain signal strength will be different. In order to illustrate this difference, three simulations were applied on a very basic environment model. As a result of these simulations, the


(a) Location 6, 5 reflections

(b) Location 7, 5 reflections

Figure 5.7: Preliminary results of AoA localization based on propagation loss model simulations where reflection recursion levels of 0, 1, and 5 were configured

influence on the likelihood distribution is significant when the reflection recursion level was configured to 5. This makes it possible to create a more accurate distance model when such a simulation is applied with real measurements, which may result in a better location estimation. Secondly, the principles of RTI are explained. This localization approach is able to localize individuals based on the attenuation loss that a person causes on a set of radio links. This set of radio links is defined by an RTI-network, which consists of different radio transceivers that are able to communicate with each other. After applying an RTI-measurement, a weighting matrix is created that defines the exact relationship between the links and the pixels of the grid that describes the environment. Since this weighting matrix describes only line-of-sight links and thus does not include any influence of reflections or refractions, the use of the ray launching propagation loss model makes it possible to investigate the influence of these phenomena. Preliminary results show a significant difference in attenuation loss, which results in a less accurate location estimation. Thirdly, the angle of arrival localization is considered suitable for localization in outdoor and indoor environments when the signal strength is low. The angle of arrival estimation is based on the phase difference that is calculated from two or more signals that are received by an antenna array. When applying this localization technique in indoor environments, the phase difference is highly influenced by reflections, refractions, and other phenomena. To get a more fundamental knowledge about these influences, the ray launching propagation model is able to simulate different reflection recursion levels. When comparing the probability maps after AoA-localization of a simulation where no reflection was simulated and two simulations where a reflection recursion level of 1 and 5 was simulated, a significant difference can be observed between the simulation where no reflection was simulated and the two other simulations. Furthermore, similar results can be observed between the simulations where the reflection recursion levels were configured to 1 and 5. By introducing the influence of reflections and refractions for indoor localization solutions such as signal based, radio tomographic imaging, and angle of arrival localization, a more realistic probability map and attenuation loss is retrieved, which will lead to a more realistic understanding and different solutions in order to cope with these radio phenomena.


Chapter 6

Outdoor IEEE 802.11.ah Range Characterization using Validated Propagation Models

This chapter contains the research of the conference paper named "Outdoor IEEE 802.11ah Range Characterization using Validated Propagation Models", which was published at IEEE Globecom 2017 [Bellekens et al., 2017]. In this paper, the range characterization of the IEEE 802.11.ah standard using seven widely applied radio propagation loss models is investigated based on measurements. Due to recent solutions in the field of the Internet of Things, such as smart cities and smart homes, research on wireless communication nowadays is more focused on reliable medium-range and long-range communication. For this research it is necessary to use a propagation model that models the transmission losses in a realistic way. This chapter targets an automatic classification of seven outdoor propagation models based on two measurement datasets. The classified propagation model will provide a realistic connectivity coverage of the recent medium-range IEEE 802.11.ah standard based on measurements. Furthermore, the behaviour of the MAC-layer is simulated and applied according to the two datasets.

6.1 Introduction

The new IEEE 802.11ah standard, marketed as Wi-Fi HaLow, is a low-power wireless communication PHY and MAC layer protocol that operates in the unlicensed sub-1 GHz frequency bands (i.e., 863–868 MHz in Europe and 902–928 MHz in North America). It was designed to provide communications among densely deployed energy-constrained stations at ranges up to 1 km, while maintaining a data rate of 150 kbps Khorov et al. [2015]. Moreover, its flexible data rate allows it to achieve up to 78 Mbps at shorter distances. This makes it especially suited for flexible Internet of Things (IoT) and Machine To Machine (M2M) communications. A major improvement of 802.11ah, compared to previous 802.11 standards, is its ability to scale to thousands of stations per access point (AP) by introducing the restricted access window (RAW) feature. RAW allows the AP to divide stations into groups, limiting simultaneous channel access to one group and therefore reducing collisions. Evaluating the scalability of new PHY and MAC amendments for 802.11ah on such a scale using real hardware is obviously infeasible. Simulation is consequently the preferred route. To this end, we previously developed an 802.11ah simulation module for the ns-3 event-based network simulator Tian et al. [2016], which is available as open source software 1. Realistic modeling of the underlying physical medium is of critical importance to obtain realistic results in terms of throughput and packet loss as a function of distance between transmitter and receiver.

The physical wireless medium is generally modeled using path loss (also referred to as propagation loss) models, which simulate the transmission loss between two antennas. The IEEE TGah working group, which standardizes 802.11ah, proposed empirical outdoor and indoor path loss models based on the 3GPP spatial channel model (SCM) and the TGn (MIMO) model respectively Hazmi et al. [2012]. The original models were devised for LTE and 802.11n respectively, operating at frequencies around 2 and 2.4 GHz. For use with 802.11ah, they have been transformed to the sub-1 GHz frequency bands, but have not been validated using realistic and extensive measurements under varying conditions. Moreover, existing simulation studies use radio transceiver parameters (e.g., noise figure or transmission power) based on conjecture and non-validated assumptions. This combination of non-validated path loss models and radio transceiver parameters leads to inaccurate simulation results.

In this chapter the aforementioned limitations are addressed by proposing a realistic wireless channel model for 802.11ah. It incorporates outdoor path loss models validated using real measurements, as well as radio transceiver parameters based on actual 802.11ah radio hardware. As a first contribution, four sub-1 GHz path loss data sets for outdoor urban environments have been collected: a near line-of-sight (LoS) macro deployment, a LoS pico scenario, a non-LoS pico deployment with the transmitter at a height of 12 m, and a non-LoS pico deployment with the transmitter at a height of 1.5 m that includes interference of different buildings. The macro scenario has the transmitter antenna placed above rooftop level, while in the pico scenario it is placed below rooftop level COST Action 231 [1999]. Based on these measurements, seven widely used outdoor path loss models are compared and evaluated. As a second contribution, the most accurate models, which fit our measurements best, are implemented in the open source 802.11ah ns-3 simulator and evaluated in combination with the PHY and MAC implementation, using realistic radio transceiver parameters obtained from the radio prototype recently presented by Ba et al. Ba et al. [2016]. This allows determining the maximum transmission range, throughput and packet loss for 802.11ah under realistic conditions. The improvements to the path loss model implementation are made freely available as part of the open source 802.11ah ns-3 simulation module.

The remainder of this chapter is structured as follows. Section 6.2 introduces the IEEE 802.11.ah standard in the area of path loss modelling and range characterization. Section 6.3 introduces the methodology used to gather the path loss data sets. Section 6.4 introduces the different outdoor path loss models used in the evaluation. Subsequently, Section 6.5 compares and validates the path loss models and presents MAC-layer simulation results of the most accurate ones. Finally, Section 6.6 provides conclusions.

1. https://github.com/MOSAIC-UA/802.11ah-ns3


6.2 IEEE 802.11.ah Range Characterization

Even though the IEEE 802.11ah standard has not been officially published, researchers have been investigating it for a few years, both in terms of PHY and MAC layer aspects. Several works provide a deep overview of the key mechanisms of the protocol Khorov et al. [2015], Park [2015], including advantages and challenges in the design of physical layer and MAC schemes. Several studies have been performed to assess the feasibility and performance of 802.11ah for a variety of scenarios. Due to a lack of commercially available hardware, these studies are based on mathematical models or simulation results. Adame et al. [2014] conducted a performance assessment of IEEE 802.11ah in four common machine-to-machine (M2M) scenarios, i.e. agriculture monitoring, smart metering, industrial automation, and animal monitoring, using theoretical models. Several recent works study physical layer aspects of 802.11ah and sub-1 GHz communications. The link budget, achievable data rate and optimal packet size of 802.11ah are studied by Hazmi et al. [Hazmi et al., 2012]. They evaluated the feasibility of using 802.11ah for IoT and M2M use cases, based on the two path loss models proposed by the IEEE TGah working group (i.e., the 3GPP spatial channel model (SCM) and the TGn (MIMO) model). More recently, Banos et al. [Banos-Gonzalez et al., 2016a,b] also evaluated the theoretical range of 802.11ah using the TGah proposed path loss models. Li and Wang [Li and Wang, 2014] present an indoor coverage performance and time delay comparison between IEEE 802.11g and 802.11ah for wireless sensor nodes in M2M communications. Aust and Prasad [Aust and Prasad, 2014] proposed a software defined radio (SDR) platform for 802.11ah experimentation, operating in the 900 MHz ISM-band, and used it to perform an over-the-air protocol performance assessment. Moreover, Aust, Prasad and Niemegeers [Aust et al., 2013] built a real-time MIMO-OFDM testing platform for evaluating narrow-band sub-1 GHz transmission characteristics. Casas and Papaparaskeva [Casas et al., 2015] introduced an architecture for a programmable IEEE 802.11ah Wi-Fi modem based on a Cadence-Tensilica DSP. Finally, Ba et al. [Ba et al., 2016] developed an 802.11ah fully-digital polar transmitter; this hardware prototype passes all the PHY requirements of the mandatory modes in IEEE 802.11ah with 4.4% error-vector-magnitude (EVM), while consuming only 7.1 mW with 0 dBm output power.

In summary, past research either focused on small-scale (i.e., up to 2 devices) evaluation using a simplified hardware prototype [Aust and Prasad, 2014, Aust et al., 2013, Casas et al., 2015], or performed simplified simulation or modelling for large-scale network evaluation. In this research, we aim to improve the accuracy of the latter, by thoroughly evaluating and optimizing the path loss models used for these simulations. Moreover, we propose a set of PHY and radio simulation parameters derived from actual 802.11ah hardware [Ba et al., 2016].

6.3 Measurement methodology

In order to evaluate the accuracy of the different path loss models in different outdoor use cases, four data sets were collected at the University of Antwerp:

1. Macro LoS deployment scenario (20852 measurements), with a transmitter that uses a transmission power of 13 dBm and a transmitter antenna height of 30 m.


2. Pico LoS deployment scenario (874 measurements) with the transmitter that uses a transmission power of 2.4 dBm and a transmitter antenna height of 1.5 m.

3. Pico non-LoS with transmitter at height 1.5 m deployment scenario (1168 measurements) with the transmitter that uses a transmission power of 0 dBm.

4. Pico non-LoS with transmitter at height 12 m deployment scenario (26366 measurements) with the transmitter that uses a transmission power of 13 dBm.

The receiver antenna height was 1.5 m for all scenarios. To properly compare these data sets, all received link budgets are normalized to a transmission power of 0 dBm, as further explained in Section 6.5. To fit each dataset with the different path loss models in a robust manner, many measurements were collected for each of them, as shown in brackets above. Each measurement was performed by receiving a packet with a payload of 2 bytes every 2 seconds at a center frequency of 868.1 MHz with a bandwidth of 150 kHz, using the Silicon Labs Sub-GHz EZR32 Leopard Gecko Wireless Starter Kit. Furthermore, the receiver receives the packet from the transmitter and outputs the reception time, received signal strength and GPS coordinates (obtained from the mounted GPS) to a log file. During the measurement campaigns, the transmitter was kept static, while the receiver moved between different geographical locations (cf. Figure 6.1). Both the pico and macro deployment LoS scenarios have occasional cars, pedestrians, bicycles, and trees as obstacles. Both non-LoS deployment scenarios have multiple buildings between the transmitter and receiver. Based on the four data sets, a best-fit path loss model can be determined together with a realistic fade margin for the different use cases. This model and fade margin are used as a basis for the MAC-layer simulations presented in Section 6.5.
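The normalization itself is a simple shift in the dB domain, as sketched below (assuming the received power scales one-to-one with the transmit power in dB); the RSS value is a hypothetical example.

```python
def normalize_to_0dbm(rss_dbm, tx_power_dbm):
    """Normalize a received signal strength to a 0 dBm transmit power."""
    return rss_dbm - tx_power_dbm

# Transmit powers of the four data sets (dBm)
tx_powers = {"macro_los": 13.0, "pico_los": 2.4,
             "pico_nlos_1m5": 0.0, "pico_nlos_12m": 13.0}
print(normalize_to_0dbm(-95.0, tx_powers["macro_los"]))  # -108.0 dBm equivalent
```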

6.4 Path loss models

As shown in chapter 2, the link budget between transmitter and receiver can be described using the following generic equation:

Prx = Ptx + Gtx + Grx − PL + X    (6.1)

where Prx and Ptx are the received and transmitted power expressed in dBm, Gtx and Grx are the transmitter and receiver gain, and PL is the path loss. The path loss PL depends on the environment, the used frequency, and the distance between both devices. PL can be simulated with a path loss model, which can empirically or deterministically compute the signal loss. Additionally, a factor X is added to the link budget, which indicates a log-normal variation that is Gaussian distributed. In this section, the seven widely used empirical outdoor path loss models that are explained in chapter 2 and that fit the scope of the IEEE 802.11ah deployment use cases are considered. These propagation loss models have been proven suitable given specific environments and constraints, and are evaluated and compared by Sarkar et al. [2003]. The remainder of this section briefly summarizes and discusses the considered models.

1. Free Space path loss: The most naive and basic model, which expresses the received power as inversely proportional to the squared distance for a wave that is propagating in free space.


Figure 6.1: Transmitter (cross) and receiver locations (dots) of both LoS and non-LoS scenarios applied in a macro and pico deployment. Panels: (a) Macro LoS scenario; (b) Pico LoS scenario; (c) Pico non-LoS with transmitter height at 12 m; (d) Pico non-LoS with transmitter height at 1.5 m.

2. Two-ray path loss: This model combines the LoS signal with a non-LoS, ground-reflected signal. The ground reflection is based on the calculation of the Fresnel reflection coefficients at the reflection point on the soil. In order to use this path loss model, the heights of the transmitter and receiver antennas have to be known to calculate the reflection point.

3. COST-231 Hata: An outdoor path loss model that is used in urban and suburban environments. It has some restrictions that limit the heights and the frequency range of the used devices: the height of transmitting devices is limited to 30 m to 200 m and that of receiving devices to 1 to 10 m, and the frequency range of both devices should be below 1 GHz [COST Action 231, 1999].

4. COST-231 Walfisch-Ikegami: This model adds the average rooftop heights of nearby buildings and the antenna heights to compute the path loss. It is mostly used in urban environments [COST Action 231, 1999].

5. AH Macro deployment: The first outdoor model proposed for use with 802.11ah by the IEEE TGah working group, based on the 3GPP SCM for LTE [Hazmi et al., 2012]. It assumes an antenna height of 15 m above rooftop level.

6. AH Pico deployment: The second outdoor model proposed for use with 802.11ah by the IEEE TGah working group, based on the 3GPP SCM for LTE [Hazmi et al., 2012]. It assumes an antenna height at rooftop level.

7. ITU-R street canyon: This model is characterized by two slopes defined by two individual models and a break point. This break point depends on the used wavelength and the different antenna heights. The first slope is defined by the Free Space path loss model for distances smaller than the break point. Beyond the break point, the LoS path loss model with a different path loss exponent is used, which represents the worst case path loss [ITU, 2015].

This results in a list of seven propagation models that are applied in this research. The first propagation model, free space path loss, is the most naive and basic model and computes the line-of-sight (LoS) path loss. The second model, the two-ray path loss model, combines the LoS signal with a non-LoS, ground-reflected signal; this reflection is based on the Fresnel reflection coefficients at the reflection point on the soil, so the different heights of the transmitter and the receiver have to be known. Thirdly, the COST-231 Okumura-Hata model is considered as an outdoor propagation model for urban and suburban environments, with restrictions on the device heights and a frequency range up to 1 GHz. Next, the COST-231 Walfisch-Ikegami model adds the average rooftop heights of nearby buildings and the different antenna heights to compute the path loss; it is mostly used in urban environments. Fifthly and sixthly, the macro and pico deployment models based on the 3GPP standard are included because of the IEEE 802.11ah specifications, according to which the most realistic coverage estimate is obtained with these models. In addition, a log-normal component, which is Gaussian distributed in dB, accounts for the fading that appears from reflections on objects or buildings. Finally, the ITU-R street canyon propagation model is included in the propagation loss model list. This model describes, in addition to the slow fading, the fast fading as well by defining a break point distance; beyond this break point, a second model that predicts the signal loss is used, so the worst case propagation loss is calculated with this model.
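To make the structure of the simplest of these models concrete, the sketch below implements the free space path loss and a generic two-slope model of the kind used by the ITU-R street canyon model. The break-point expression 4*h_tx*h_rx/lambda and the second-slope exponent of 4 are common textbook choices used here purely for illustration, not the exact ITU-R coefficients:

```python
import math

C = 3e8  # speed of light (m/s)

def free_space_path_loss_db(d_m, f_hz):
    """Free space path loss: PL = 20*log10(4*pi*d/lambda)."""
    lam = C / f_hz
    return 20 * math.log10(4 * math.pi * d_m / lam)

def two_slope_path_loss_db(d_m, f_hz, h_tx_m, h_rx_m, n2=4.0):
    """Generic two-slope model: free space loss up to a break point,
    a steeper slope (exponent n2) beyond it. Illustrative coefficients only."""
    lam = C / f_hz
    d_break = 4 * h_tx_m * h_rx_m / lam   # common break-point approximation
    if d_m <= d_break:
        return free_space_path_loss_db(d_m, f_hz)
    return free_space_path_loss_db(d_break, f_hz) + 10 * n2 * math.log10(d_m / d_break)

# Example at 868.1 MHz with 1.5 m antennas on both sides (placeholder heights).
print(two_slope_path_loss_db(500, 868.1e6, 1.5, 1.5))
```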

6.5 Evaluation and Results

The goal of this section is twofold. First, the seven path loss models described above are fit to the four data sets, in order to determine the most accurate one for each scenario.


Second, the 802.11ah ns-3 simulator [Tian et al., 2016] is used to calculate accurate throughput and packet loss values as a function of distance, using the best fitting path loss models, as well as realistic fade margins and radio parameters for a typical and an ideal IEEE 802.11ah use case.

6.5.1 Path loss model comparison

In order to determine the optimal path loss model for each of the four outdoor scenarios under study, the seven models are fit to all data sets and compared in terms of the normalized root-mean-square error (NRMSE). Figure 6.2 compares the seven models to the measurements of the different data sets in terms of the normalized received signal strength (RSS) as a function of distance. This normalized received signal strength is the actually received signal power minus the transmitted output power. Each path loss model is simulated with a transmission power of 0 dBm, a center frequency of 868.1 MHz, and an antenna gain of +3 dBi for both transmitter and receiver.
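The fit quality reported in Table 6.1 can be reproduced with a few lines of NumPy. The sketch below assumes the NRMSE is the RMSE of the model residuals normalized by the range of the measured RSS values, which is one common convention and an assumption here:

```python
import numpy as np

def nrmse(measured_rss_db, model_rss_db):
    """Root-mean-square error between measured and modelled RSS,
    normalized by the range of the measurements (assumed convention)."""
    measured = np.asarray(measured_rss_db, dtype=float)
    model = np.asarray(model_rss_db, dtype=float)
    rmse = np.sqrt(np.mean((measured - model) ** 2))
    return float(rmse / (measured.max() - measured.min()))

# Example with placeholder data: measured RSS at three distances vs. model predictions.
print(nrmse([-60.0, -75.0, -90.0], [-62.0, -73.0, -88.0]))
```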

Table 6.1: Normalized RMSE comparison of the path loss models for all scenarios

Model                      Pico LoS   Pico NLoS (tx at 1.5 m)   Pico NLoS (tx at 12 m)   Macro LoS
Free space                 0.26       0.006                     0.05                     0.29
AH macro                   0.14       0.004                     0.01                     0.01
AH pico                    0.04       2.15e-5                   0.001                    0.27
ITU-R street canyon        0.10       0.006                     0.05                     0.26
COST231-Hata               0.09       0.027                     0.044                    0.82
COST231-Walfisch-Ikegami   0.21       0.003                     0.029                    0.16
Two Ray                    0.20       0.003                     0.052                    0.28

As shown in Table 6.1, the AH-macro model proposed by the TGah working group fits best to the macro LoS data set, while the AH-pico model proposed by the TGah suits best for the pico LoS and non-LoS data sets. A noteworthy observation is that the data of the macro LoS scenario and of the pico non-LoS scenario with transmitter height at 12 m show a significant offset from the path loss models at small distances. This observation can be explained by the fact that the path loss models consider a perfect isotropic antenna, while our antenna has a monopole radiation pattern. The AH-macro deployment model, which is the optimal choice for the macro LoS scenario, can be characterized as follows:

PL = 8 + 36.7 log10(d) (6.2)

where d is the distance between the receiver and the transmitter. Next, the AH-pico model is the optimal model for the pico LoS and pico non-LoS scenarios and is defined with the formula:

PL = 23.3 + 36.7 log10(d) + cpico (6.3)


Figure 6.2: Correlation between the measurements and path loss models in terms of RSS and as a function of distance. Panels: (a) Macro LoS scenario; (b) Pico LoS scenario; (c) Pico non-LoS with transmitter height at 12 m; (d) Pico non-LoS with transmitter height at 1.5 m. Each panel plots the RSS (dBm) of the measurements and of the seven models (Free Space, AH macro, AH pico, ITU-R street canyon, COST231-Hata, COST231-Walfisch-Ikegami, Two Ray) against distance (m).

where cpico is the correction function for suburban environments and is defined as:

cpico = 21 log10(f / 900) (6.4)

Subsequently, the fade margin needs to be computed to enable realistic simulations and packet loss calculations. This fade margin is calculated as the root mean square error of the differences between the simulated and real reception power Prx. The resulting fade margin for each scenario is listed among the other simulation parameters in Table 6.2.
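Equations 6.2 to 6.4 and the fade margin computation can be combined into a short helper. The sketch below follows the equations of this section; the only assumption is that f in Equation 6.4 is expressed in MHz, which is consistent with the 868.1 MHz center frequency used here:

```python
import numpy as np

def ah_macro_path_loss_db(d_m):
    """AH macro deployment model (Eq. 6.2): PL = 8 + 36.7*log10(d)."""
    return 8.0 + 36.7 * np.log10(d_m)

def ah_pico_path_loss_db(d_m, f_mhz=868.1):
    """AH pico deployment model (Eqs. 6.3-6.4): PL = 23.3 + 36.7*log10(d) + c_pico,
    with the suburban correction c_pico = 21*log10(f/900), f in MHz (assumed unit)."""
    c_pico = 21.0 * np.log10(f_mhz / 900.0)
    return 23.3 + 36.7 * np.log10(d_m) + c_pico

def fade_margin_db(measured_prx_dbm, simulated_prx_dbm):
    """Fade margin as the RMSE of the differences between measured and simulated Prx."""
    diff = np.asarray(measured_prx_dbm) - np.asarray(simulated_prx_dbm)
    return float(np.sqrt(np.mean(diff ** 2)))

# Example: predicted received power at 100 m for a 0 dBm transmitter and 0 dBi antennas.
print(-ah_pico_path_loss_db(100.0))
```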

6.5.2 MAC-layer performance

This section characterizes the packet loss and throughput of IEEE 802.11ah for the four scenarios, based on the best fitting path loss models and realistic fade margins that were previously determined, as well as realistic radio transceiver parameters. Furthermore, this section was written as part of the research of my colleague, Le Tian.


Table 6.2: Physical layer parameters used for simulation

Common parameters                    Value
Frequency (MHz)                      868
Modulation scheme                    BPSK
Bandwidth (MHz)                      1
Data rate (kbps)                     300
Coding method                        BCC
Error rate model                     YansErrorRate
Packet size (bytes)                  256

Transceiver parameters               Prototype        Ideal
Transmission power (dBm)             0                14
Transmission antenna gain (dBi)      0                0
Reception antenna gain (dBi)         0                3
Noise figure (dB)                    6.8              3

Scenario parameters                  Macro LoS        Pico LoS
Path loss model                      AH Macro         AH Pico
Transmitter antenna height (m)       30               1.5
Receiver antenna height (m)          1.5              1.5
Fade margin (dB)                     0.99             3.60

Scenario parameters                  Pico non-LoS     Pico non-LoS
                                     (tx at 12 m)     (tx at 1.5 m)
Path loss model                      AH Pico          AH Pico
Transmitter antenna height (m)       12               1.5
Receiver antenna height (m)          1.5              1.5
Fade margin (dB)                     7.67             2.62

Two different radio transceiver configurations are used to analyze the 802.11ah MAC-layer performance: (i) prototype and (ii) ideal. The prototype configuration is based on the 802.11ah radio hardware prototype developed by Ba et al. [2016], and has a transmission power of 0 dBm, a gain of 0 dBi for both antennas, and a noise figure of 6.8 dB. The ideal configuration is based on the maximum recommended transmission power values as proposed by the CEPT Electronics Communications Committee (ECC) [CEPT ECC, 2016], and has a transmission power of 14 dBm, a 0 dBi transmit antenna gain, a 3 dBi receiver antenna gain, and a noise figure of 3.0 dB [Hazmi et al., 2012]. Table 6.2 gives a complete overview of the different PHY parameters that are used in the simulation to evaluate the packet loss and throughput. The evaluation is performed using the 802.11ah ns-3 simulation module [Tian et al., 2016], which includes both a MAC and PHY implementation of 802.11ah. Concerning the MAC layer analysis, the channel time is ensured to be fully utilized by allowing a single station to constantly send packets to the AP over a period of 60 seconds using a 1 MHz bandwidth (MCS0). The results are averaged over 10 simulation runs, and are depicted in Figure 6.3.
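The transmission ranges reported below follow from combining the fitted path loss model, the fade margin, and the transceiver parameters of Table 6.2. The sketch below simply inverts the AH pico model to estimate the distance at which the received power drops to the receiver sensitivity; the sensitivity value is a placeholder, not a parameter from the simulations, and packet loss is not modelled:

```python
import numpy as np

def max_range_m(ptx_dbm, gtx_dbi, grx_dbi, sensitivity_dbm, fade_margin_db,
                f_mhz=868.1):
    """Largest distance at which Prx = Ptx + Gtx + Grx - PL_AHpico(d) - fade margin
    still exceeds the receiver sensitivity (AH pico model, Eqs. 6.3-6.4)."""
    c_pico = 21.0 * np.log10(f_mhz / 900.0)
    # allowed path loss before the link budget drops below the sensitivity
    max_pl_db = ptx_dbm + gtx_dbi + grx_dbi - fade_margin_db - sensitivity_dbm
    # invert PL = 23.3 + 36.7*log10(d) + c_pico for d
    return float(10 ** ((max_pl_db - 23.3 - c_pico) / 36.7))

# Prototype-like configuration (0 dBm, 0 dBi antennas) with a placeholder
# sensitivity of -105 dBm and the pico LoS fade margin of 3.6 dB from Table 6.2.
print(max_range_m(0, 0, 0, -105, 3.6))
```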

Figure 6.3 shows the 802.11ah transmission range separately for the four data sets. Each graph visualizes the results for the prototype and ideal radio transceiver configurations. From Figure 6.3a, it can be derived that for the prototype hardware configuration, stations in the macro LoS scenario can transmit up to 445 and 480 m at 1.29% and 9.63% packet loss, respectively.


Figure 6.3: Packet loss and throughput as a function of distance for the actual radio hardware as well as the ideal case. Panels: (a) Macro LoS scenario; (b) Pico LoS scenario; (c) Pico non-LoS scenario with transmitter height at 12 m; (d) Pico non-LoS with transmitter height at 1.5 m. Each panel plots packet loss (%) and throughput (kbps) against distance (m) for the ideal and prototype configurations.

In contrast, the ideal configuration can achieve up to 1640 and 1780 m, respectively, at 1.21% and 10.7% packet loss. The maximum transmission range for the other three scenarios is much lower. The results of the pico LoS scenario are shown in Figure 6.3b and show that a transmission range of 120 and 150 m can be achieved with the prototype hardware configuration, with a packet loss of 1.24% and 9.89% respectively. Additionally, 440 and 550 m can be achieved at 1.18% and 9.5% packet loss with an ideal hardware configuration. For the pico non-LoS with transmitter at height 12 m scenario, as shown in Figure 6.3c, the prototype hardware configuration only achieves distances of 65 and 105 m at 1.14% and 9.41% packet loss respectively. The ideal configuration achieves up to 240 m and 400 m at 1.0% and 10.31% packet loss respectively. Finally, Figure 6.3d depicts the result of the pico non-LoS with transmitter at height 1.5 m scenario. It suggests that stations can transmit messages up to 136 and 162 m at 1.0% and 9.8% packet loss respectively with the prototype hardware configuration. This transmission range can be increased to 500 and 600 m at 1.02% and 10.19% packet loss by using an ideal hardware configuration. Both control frames and data frames are used in IEEE 802.11ah, while only data frames are counted as throughput. This results in a super-linear inverse relationship between throughput and packet loss. These results show that the macro LoS and pico LoS scenarios, using state-of-the-art low-power hardware, have a maximum range of 450 and 130 m, while maintaining a throughput of 150 kbps. On the other hand, for the pico non-LoS with transmitter at height 1.5 m and 12 m scenarios the maximum range is 80 and 150 m. These results need to be interpreted as a worst case situation that can be improved at the cost of a higher power consumption. An ideal hardware configuration could achieve 1700, 490, 300 and 550 m for the four scenarios respectively. The results also reveal that with the same hardware configuration, packet loss increases dramatically at a specific tipping point. This can be seen in the macro LoS scenario due to the small fade margin, while the packet loss increases more slowly in the other three scenarios.

6.6 Conclusion

This chapter provides a realistic characterization of outdoor IEEE 802.11ah MAC-layer throughput and packet loss as a function of distance, based on sub-1 GHz radio transmission measurements and realistic radio transceiver parameters. The measurements are used to classify seven popular path loss models for the near-LoS macro, near-LoS pico, and non-LoS pico with transmitter at height 1.5 m and 12 m scenarios. The results showed that the path loss models proposed by the IEEE TGah working group for 802.11ah indeed provide a good fit to the real data. However, while the standard promises a range of up to 1 km at 150 kbps, the results paint a less optimistic picture when using realistic low-power station hardware parameters. Simulation results indicate a maximum range of 450, 130, 80 and 150 m at 150 kbps for these four scenarios. For more ideal hardware with the maximum allowed transmission power, this range could be increased up to 1700, 490, 300 and 550 m respectively for the four different scenarios. Furthermore, the retrieved results are only optimal for this specific geographical location and are therefore context dependent.


Chapter 7

Conclusion

This chapter summarizes the conclusions of the research objectives of this thesis, which addresses the following main research question: Which methods are possible to apply an indoor radio propagation model that makes use of a 2D or 3D realistic environment model, and how can both models be combined in an efficient way? As a result of this main research goal, four research objectives were described in Chapter 1:

1. How to model a 2D and 3D geometric model of the real environment where all objects, walls, and floors are segmented and classified?

2. What is the most efficient method to implement a ray launching propagation loss model?

3. How to integrate the geometric model of the real environment with a ray launching propagation loss model?

4. How to validate a propagation loss model where a large scale wireless sensor network can be used together with the geometric model?

In order to answer these research questions, the state-of-the-art literature about Simultaneous Localization and Mapping, spatial data structures, and radio propagation was discussed in Chapter 2. Next, the benchmark of different Iterative Closest Point algorithms and the implementation of the MapFuse system were described in Chapter 3. This chapter answers research question 1 and resulted in contribution A. Furthermore, the objectives of research questions 2, 3, and 4 correspond to contributions B, C, and D, which are found in Chapter 4. Furthermore, Chapter 5 described three different applications that can benefit from the ray launching propagation loss model by integrating the influence of multipath. Finally, Chapter 6 explored the realistic coverage that can be achieved when using the IEEE 802.11ah standard. Subsequently, this chapter summarizes the contributions that were introduced in chapter 1. Furthermore, it provides a future work section related to applications and optimizations of the ray launching propagation loss model.


Figure 7.1: Visual illustration of the PhD structure and the indication of the individual contributions.

7.1 Contributions

The contributions related to the targeted research questions are described as follows and can be found in Figure 7.1:

A Study on robust geometric techniques that capture and segment an environment in 2D and 3D so that it can be used for a radio propagation simulation: To capture a real environment, the concept of SLAM is first introduced in chapter 2 and further applied in chapter 3 to create a realistic environment model. SLAM solves robot localization and environment mapping in a simultaneous fashion. Because it strongly depends on the quality and the number of sensor measurements, a realistic environment model can only be created when the correct hardware is used. Since the alignment of sensor measurements, such as two consecutive 3D point clouds, is one of the key algorithms of a SLAM system, Chapter 3 implemented a benchmark system to validate different alignment algorithms. As a result of the benchmark we can conclude, for the given dataset, that applying an ICP point-to-point method after an SVD method gives the minimum error. On the other hand, the ICP point-to-surface method is the most precise algorithm based on the rotational and translational parts of the transformation, after analyzing the results of the precision benchmark. In order to use the environment model for the implemented ray launching propagation model, a spatial data structure is applied. Such a data structure is based on a tree structure that enables an algorithm to traverse the tree in a fast and recursive way so that a specific node, which can indicate a location, can be found. With the SLAM system that was used for creating a realistic environment model, the result was not directly usable due to the absence of a floor, ceiling, and walls. In order to cope with this, chapter 3 presented an efficient, robust method for completing and optimizing a 3D model using MapFuse. A balance between map completeness and the level of detail was analyzed to create a realistic environment model, as well as an appropriate merging sequence. The most optimal result was found when the 3D point clouds of an online SLAM are continuously merged with the initial guess model. As initially intended, MapFuse is suitable to create 3D models of various environments.

B Implement a ray launching propagation loss model that can be used for indoor radio localization algorithms such as Angle of Arrival, Radio Tomographic Imaging, and RSS localization: In chapter 4, a realistic ray launching propagation loss model is discussed, based on an implementation that contains four different parts. First, a line segment extraction algorithm is explained, which enables the segmentation of a 2D map that is created by a moving robot. Secondly, a specific device configuration is discussed that makes it possible to configure a ray launching propagation simulation. Thirdly, the core of the propagation model, which is called the ray calibration process, is discussed. The fourth part is the electrical field computation process, which uses the result of the ray calibration process as input to compute the individual electrical field values at each location that every ray visited. This makes it possible to compute a heatmap that can be used for applications such as the optimization of localization algorithms (a deliberately simplified 2D ray launching sketch is given after this list). In addition to the heatmap, three different localization applications that can benefit from applying a propagation loss model were explained in chapter 5. By introducing the influence of reflections and refractions for indoor localization solutions such as signal based, radio tomographic imaging, and angle of arrival localization, a more realistic probability map and attenuation loss is retrieved.

C Design an automated validation system that combines geometrical environment elements with radio transceiver measurements by applying a SLAM algorithm with a robot: In addition to the implementation of the propagation loss model, chapter 4 proposed a generic validation model that uses the result of a SLAM algorithm in combination with a set of radio measurements that were received from a transmitter at different locations. This was used to validate and evaluate the propagation loss model.

D Evaluate the integration of applying a realistic environment model in a radio propagation loss model by means of the automated validation model:

The evaluation of the ray launching propagation loss model was discussed in chapter 4. As a result of this evaluation, optimal results were found when the resolution level of the geometric model is equal to 8 or higher, when the accuracy in terms of RMSE was analyzed based on the number of rays. Besides the correlation between the number of rays and the global accuracy, an analysis was made to evaluate the reflection recursion level with respect to the number of rays and the resolution level. As a result of this evaluation, an optimal reflection recursion level of 5 was found, since the accuracy does not improve when a higher level is applied. When combining these conclusions, a validation was applied where 96 links were used for the first environment, which results in an RMSE of 9.316 dB, an ME of −0.479 dB, an MAE of 9.26 dB, and a precision of 5.895 dB. The results for the second environment, which were based on 220 links, after applying the validation model are 7.69 dB for RMSE, −0.809 dB for ME, 7.637 dB for MAE, and a precision of 5.307 dB. Finally, the performance in terms of computation time on a high performance notebook was analyzed based on the simulation configuration that gave the best results. In general, the simulations were computed 1.5 times faster with 4 workers compared to the simulations where 1 worker was used.

E Coverage characterization of the IEEE 802.11ah standard using validated propagation models: Chapter 6 provides a realistic characterization of outdoor IEEE 802.11ah MAC-layer throughput and packet loss as a function of distance, based on sub-1 GHz radio transmission measurements and realistic radio transceiver parameters. The measurements are used to classify seven popular path loss models for near-LoS macro, near-LoS pico, and non-LoS pico scenarios. The results showed that the path loss models proposed by the IEEE TGah working group for 802.11ah indeed provide a good fit to the real data. However, while the standard promises a range of up to 1 km at 150 kbps, the results paint a less optimistic picture when using realistic low-power station hardware parameters. Simulation results indicate a maximum range of 450, 130, 80 and 150 m at 150 kbps for these four scenarios. For more ideal hardware with the maximum allowed transmission power, this range could be increased up to 1700, 490, 300 and 550 m respectively for the four different scenarios. Furthermore, the retrieved results are only optimal for this specific geographical location and are therefore context dependent.
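As referenced in contribution B above, the sketch below illustrates a deliberately simplified 2D ray launching step: rays are launched at uniform angles, reflected specularly on wall segments, and a ray contributes to the receiver when it passes within a capture radius. The fixed reflection factor gamma and the capture radius are illustrative simplifications of my own choosing; the actual model of chapter 4 uses Fresnel reflection coefficients and a calibrated device configuration.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def ray_segment_hit(origin, direction, p1, p2, eps=1e-6):
    """First intersection of the ray (origin, direction) with segment p1-p2.
    Returns (distance along ray, hit point, unit wall normal) or None."""
    e = p2 - p1
    denom = direction[0] * e[1] - direction[1] * e[0]          # 2D cross product
    if abs(denom) < eps:
        return None                                            # ray parallel to wall
    diff = p1 - origin
    t = (diff[0] * e[1] - diff[1] * e[0]) / denom              # distance along the ray
    s = (diff[0] * direction[1] - diff[1] * direction[0]) / denom  # position on segment
    if t <= eps or s < 0.0 or s > 1.0:
        return None
    normal = np.array([-e[1], e[0]]) / np.linalg.norm(e)
    return t, origin + t * direction, normal

def launch_rays(tx, rx, walls, n_rays=720, max_reflections=5,
                capture_radius=0.5, gamma=0.6, f_hz=868.1e6):
    """Return, per captured ray, the received power in dB relative to the transmitted
    power (0 dBi antennas), using free space loss over the travelled path length and
    a fixed reflection factor gamma per wall bounce."""
    lam = C / f_hz
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    captured = []
    for k in range(n_rays):
        ang = 2.0 * np.pi * k / n_rays
        o, d = tx.copy(), np.array([np.cos(ang), np.sin(ang)])
        path_len, gain = 0.0, 1.0
        for _ in range(max_reflections + 1):
            # closest wall hit along the current straight ray section
            hit = None
            for w in walls:
                h = ray_segment_hit(o, d, np.asarray(w[0], float), np.asarray(w[1], float))
                if h is not None and (hit is None or h[0] < hit[0]):
                    hit = h
            reach = hit[0] if hit is not None else np.inf
            # does this section pass close enough to the receiver before hitting a wall?
            t_rx = float(np.dot(rx - o, d))
            if 0.0 < t_rx < reach and np.linalg.norm(rx - o - t_rx * d) < capture_radius:
                total = path_len + t_rx
                fspl_db = 20.0 * np.log10(4.0 * np.pi * total / lam)
                captured.append(20.0 * np.log10(gain) - fspl_db)
                break
            if hit is None:
                break
            t, p, n = hit                   # specular reflection on the nearest wall
            path_len += t
            gain *= gamma
            d = d - 2.0 * np.dot(d, n) * n
            o = p
        # rays that leave the scene or exceed the reflection budget are dropped
    return captured

# Example: a 10 m x 6 m room (four walls), transmitter and receiver inside.
walls = [((0, 0), (10, 0)), ((10, 0), (10, 6)), ((10, 6), (0, 6)), ((0, 6), (0, 0))]
print(launch_rays(tx=(1, 1), rx=(8, 4), walls=walls)[:5])
```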

7.2 Future Work

As a final thought, I will summarize some ideas regarding future work by explaining the cross sections F, G, and H that are shown in Figure 7.1.

F. This section indicates the research about the evaluation of an indoor or outdoor radio propagation loss model without a realistic environment model, compared with state-of-the-art algorithms. Within this evaluation, different properties will be analyzed in terms of different channel parameters like the RMS delay spread at a specific location, material properties, AoA, and AoD.

G. This section illustrates the evaluation and the integration of the ray launching propagation loss model with localization algorithms as discussed in chapter 5. Aside from simulating the received signal strength at a specific location, the ray launching propagation loss model is able to simulate the multipath related to the propagation time at a specific location. Since it takes reflections and refractions into account, each ray that arrives at a specific location has travelled a different distance, which can be used to compute the time that a signal takes to travel between transmitter and receiver. Accordingly, the impulse response is characterized by the time versus the signal strength of each ray (a minimal sketch of deriving such an impulse response and its RMS delay spread from the ray path lengths is given after this list). Because of the discrete ray launching behavior, the impulse response is assumed not to be fully accurate. As a result of this assumption, a validation is required in order to analyze the influence of the number of rays on the expected impulse response. Such a validation can be achieved by comparing with the result of a channel sounder such as MIMOSA to analyze fundamental channel properties for localization and communication purposes [Laly et al., 2016]. In addition to these applications, the ray launching propagation loss model can have many more application domains and research goals, such as the following topics. First, applying a radio propagation simulation with a 3D environment model will lead to the implementation of a method that is able to segment 3D environment models in order to apply the ray launching propagation loss model based on a 3D environment model. This will result in an evaluation that is able to indicate the influence of reflections and refractions that occur at the ceiling and floor, compared to the evaluation of a 2D environment model. Subsequently, when applying a 3D ray launching propagation loss simulation, the influence of using different antenna patterns can be researched. Furthermore, in order to indicate and measure the influence of such an antenna pattern, it is necessary to validate each simulation with measurements in an anechoic chamber. This allows the evaluation of an individual ray.

H. In this section, the research where an environment model can be used in an application without applying any radio propagation loss model is investigated. An example of such an application can be found in robotics, where a complete environment model can deliver a better result for autonomous robot navigation.
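As referenced in cross section G, an impulse response and its RMS delay spread can be derived directly from the per-ray path lengths and powers produced by a ray launching simulation. The sketch below uses the standard power-weighted definition of the RMS delay spread and assumes the ray powers are given as linear values; the example numbers are placeholders:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def impulse_response(path_lengths_m, powers_linear):
    """Delays (s) and linear powers of the individual rays: tau_i = d_i / c."""
    delays = np.asarray(path_lengths_m, dtype=float) / C
    return delays, np.asarray(powers_linear, dtype=float)

def rms_delay_spread(path_lengths_m, powers_linear):
    """Power-weighted RMS delay spread of the simulated impulse response."""
    tau, p = impulse_response(path_lengths_m, powers_linear)
    mean_tau = np.sum(p * tau) / np.sum(p)
    return float(np.sqrt(np.sum(p * (tau - mean_tau) ** 2) / np.sum(p)))

# Example: a direct ray of 10 m and two reflected rays (placeholder powers).
print(rms_delay_spread([10.0, 14.0, 22.0], [1.0, 0.4, 0.1]))
```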


Bibliography

T. Adame, A. Bel, B. Bellalta, J. Barcelo, and M. Oliver. IEEE 802.11ah: the WiFiapproach for M2M communications. IEEE Wireless Communications, 21(6):144–152, 2014. doi: 10.1109/MWC.2014.7000982. 113

M. Aernouts, B. Bellekens, and M. Weyn. MapFuse : Complete And Realistic 3D Mod-elling. Hindawi Journal of Robotics, 2017. 37

C. Alippi, M. Bocca, G. Boracchi, N. Patwari, and M. Roveri. Rti goes wild: Radiotomographic imaging for outdoor people detection and localization. IEEE Transactionson Mobile Computing, 15(10):2585–2598, 2016. 105

P. Almers, E. Bonek, A. Burr, N. Czink, M. Debbah, V. Degli-Esposti, H. Hofstetter,P. Kyosti, D. Laurenson, G. Matz, A. Molisch, C. Oestges, and H. Ozcelik. Surveyof Channel and Radio Propagation Models for Wireless MIMO Systems. EURASIPJournal on Wireless Communications and Networking, 2007(1):019070, 2007. ISSN1687-1499. doi: 10.1155/2007/19070. 66

J. Aulinas, Y. Petillot, J. Salvi, and X. Llado. The SLAM problem: a survey. CCIA,pages 363—-371, 2008. 40

S. Aust and R. V. Prasad. Advances in Wireless M2M and IoT: Rapid SDR-prototyping of IEEE 802.11ah. In IEEE Local Computer Networks Conference, 2014.113

S. Aust, R. V. Prasad, and I. G. M. M. Niemegeers. Performance study of MIMO-OFDM platform in narrow-band sub-1 GHz wireless LANs. In 11th Interna-tional Symposium on Modeling & Optimization in Mobile, Ad Hoc & Wireless Networks(WiOpt), pages 89–94. IEEE, 2013. 113

M. Ayadi, N. Torjemen, and S. Tabbane. Two-Dimensional Deterministic PropagationModels Approach and Comparison With Calibrated Empirical Models. IEEE Trans-actions on Wireless Communications, 14(10):5714–5722, oct 2015. ISSN 1536-1276.doi: 10.1109/TWC.2015.2442572. 66


L. Azpilicueta, M. Rawat, K. Rawat, F. M. Ghannouchi, and F. Falcone. A RayLaunching-Neural Network Approach for Radio Wave Propagation Analysis in Com-plex Indoor Environments. IEEE Transactions on Antennas and Propagation, 62(5):2777–2786, may 2014. ISSN 0018-926X. doi: 10.1109/TAP.2014.2308518. 66

A. Ba, Y.-H. Liu, J. van den Heuvel, P. Mateman, B. Busze, J. Gloudemans, P. Vis, J. Dijkhuis, C. Bachmann, G. Dolmans, K. Philips, and H. de Groot. 26.3 A 1.3nJ/b IEEE 802.11ah fully digital polar transmitter for IoE applications. In 2016 IEEE International Solid-State Circuits Conference (ISSCC), pages 440–441. IEEE, jan 2016. ISBN 978-1-4673-9466-6. doi: 10.1109/ISSCC.2016.7418096. URL http://ieeexplore.ieee.org/document/7418096/. 112, 113, 119

C. A. Balanis. Antenna Theory and Design. Electronics and Power, 28(3):267, 1982. ISSN 00135127. doi: 10.1049/ep.1982.0113. URL http://digital-library.theiet.org/content/journals/10.1049/ep.1982.0113. 27

V. Banos-Gonzalez, M. Afaqui, E. Lopez-Aguilera, and E. Garcia-Villegas. IEEE 802.11ah: A Technology to Face the IoT Challenge. Sensors, 16(11):1960, nov 2016a. ISSN 1424-8220. doi: 10.3390/s16111960. URL http://www.mdpi.com/1424-8220/16/11/1960. 113

V. Banos-Gonzalez, M. S. Afaqui, E. Lopez-Aguilera, and E. Garcia-Villegas. Throughput and range characterization of IEEE 802.11ah. page 7, apr 2016b. 113

B. Bellekens, V. Spruyt, and M. Weyn. A Survey of Rigid 3D Pointcloud RegistrationAlgorithms. In AMBIENT 2014, The Fourth International Conference on AmbientComputing, Applications, Services and Technologies. 2014., pages 8–13, 2014. ISBN9781612083568. 37

B. Bellekens, V. Spruyt, R. Berkvens, R. Penne, and M. Weyn. A Benchmark Survey ofRigid 3D Point Cloud Registration Algorithms. International Journal on Advances inIntelligent Systems, 8(1):118–127, 2015. ISSN 1942-2679.

B. Bellekens, R. Penne, and M. Weyn. Validation of an indoor ray launching RF propa-gation model. In 2016 IEEE-APS Topical Conference on Antennas and Propagation inWireless Communications (APWC), pages 74–77. IEEE, sep 2016. ISBN 978-1-5090-0470-6. doi: 10.1109/APWC.2016.7738122. 65, 66

B. Bellekens, L. Tian, P. Boer, M. Weyn, and J. Famaey. Outdoor IEEE 802.11ah Range Characterization Using Validated Propagation Models. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference, pages 1–6. IEEE, dec 2017. ISBN 978-1-5090-5019-2. doi: 10.1109/GLOCOM.2017.8254515. URL http://ieeexplore.ieee.org/document/8254515/. 111

B. Bellekens, R. Penne, and M. Weyn. Realistic Indoor Radio Propagation for Sub-GHzCommunication. MDPI Sensors, 2018.

A. Bensky. Wireless positioning technologies and applications. Artech House, 2016. 102


K. Berger, S. Meister, R. Nair, and D. Kondermann. A State of the Art Report on KinectSensor Setups in Computer Vision. In Lecture Notes in Computer Science (includingsubseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),volume 8200 LNCS, pages 257–272. 2013. ISBN 9783642449635. 40

R. Berkvens, B. Bellekens, and M. Weyn. Signal strength indoor localization using asingle DASH7 message. In 2017 International Conference on Indoor Positioning andIndoor Navigation (IPIN), number September, pages 1–7. IEEE, sep 2017. ISBN 978-1-5090-6299-7. doi: 10.1109/IPIN.2017.8115875. 101, 102

R. Berkvens, F. Smolders, B. Bellekens, M. Aernouts, and M. Weyn. Comparing 433and 868 MHz Active RFID for Indoor Localization Using Multi-Wall Model. NumberJune, pages 26–28, 2018. ISBN 9781538669846.

P. J. Besl and N. D. McKay. Method for registration of 3-D shapes. IEEE transactionson pattern analysis and machine intelligence, 14(2):586–606, apr 1992. doi: 10.1117/12.57955. 15, 38

N. BniLam, G. Ergeerts, D. Subotic, J. Steckel, and M. Weyn. Adaptive prob-abilistic model using angle of arrival estimation for IoT indoor localization. In2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN),number September, pages 1–7. IEEE, sep 2017. ISBN 978-1-5090-6299-7. doi:10.1109/IPIN.2017.8115864. 101, 107

M. Bocca, O. Kaltiokallio, N. Patwari, and S. Venkatasubramanian. Multiple targettracking with rf sensor networks. IEEE Transactions on Mobile Computing, 13(8):1787–1800, 2014. 103

T. P. Breckon and R. B. Fisher. Non-parametric 3D surface completion. In Proceedingsof International Conference on 3-D Digital Imaging and Modeling, 3DIM, 2005. ISBN0769523277. doi: 10.1109/3DIM.2005.61. 39

R. A. Casas, V. Papaparaskeva, R. Kumar, P. Kaul, and S. Hijazi. An IEEE 802.11ahprogrammable modem. In IEEE 16th International Symposium on A World of Wire-less, Mobile and Multimedia Networks (WoWMoM), 2015. doi: 10.1109/WoWMoM.2015.7158203. 113

CEPT ECC. ERC Recommendation 70-03, 2016. 119

COST Action 231. Digital Mobile Radio Towards Future Generation Systems, 1999. 32,33, 112, 116

W. R. Crum. Non-rigid image registration: theory and practice. British Journal of Radi-ology, 77(suppl 2):S140–S153, dec 2004. ISSN 0007-1285. doi: 10.1259/bjr/25329214.40

S. Denis, R. Berkvens, G. Ergeerts, B. Bellekens, and M. Weyn. Combining multiplesub-1 ghz frequencies in radio tomographic imaging. In Indoor Positioning and IndoorNavigation (IPIN), 2016 International Conference on, pages 1–8. IEEE, 2016. 101,103


S. Denis, R. Berkvens, G. Ergeerts, and M. Weyn. Multi-frequency sub-1 ghz radiotomographic imaging in a complex indoor environment. In Indoor Positioning andIndoor Navigation (IPIN), 2017 International Conference on, pages 1–8. IEEE, 2017.101, 103, 106

G. Dissanayake. A computationally efficient solution to the simultaneous localisationand map building (SLAM) problem. In Robotics and Automation, volume 2, pages1009–1014. IEEE, 2000. ISBN 0-7803-5886-4. doi: 10.1109/ROBOT.2000.844732. 7

B. Draper, W. Yambor, and J. Beveridge. Analyzing pca-based face recognition algo-rithms: Eigenvector selection and distance measures. Empirical Evaluation Methodsin Computer Vision, Singapore, pages 1–14, 2002. 13

H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part I. IEEERobotics & Automation Magazine, 13(2):99–110, jun 2006. doi: 10.1109/MRA.2006.1638022. 7

E. Tanghe, P. Laly, D. P. Gaillot, N. Podevijn, S. Denis, N. BniLam, B. Bellekens,R. Berkvens, M. Weyn, M. Lienard, L. Martens and W. Joseph. Dense MultipathComponent Polarization and Wall Attenuation at 1.35 GHz in an Office Environment.In EuCap2018, 2017.

A. Eliazar and R. Parr. DP-SLAM: Fast, robust simultaneous localization and mappingwithout predetermined landmarks. IJCAI International Joint Conference on ArtificialIntelligence, pages 1135–1142, 2003. ISSN 10450823. doi: 10.1109/IROS.2009.5354248.7

F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, and W. Burgard. An evaluationof the RGB-D SLAM system. 2012 IEEE International Conference on Robotics andAutomation, may 2012. doi: 10.1109/ICRA.2012.6225199. 11, 21, 40, 56

J. Engel, T. Schops, and D. Cremers. LSD-SLAM: Large-Scale Direct Monocular SLAM.Computer Vision ECCV 2014, pages 834–849, 2014. 11, 21

J. Engel, J. Stuckler, and D. Cremers. Large-scale direct SLAM with stereo cameras.2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),sep 2015. doi: 10.1109/IROS.2015.7353631. 21

Erle Robotics. Erle-Copter — Erle Robotics. URL http://erlerobotics.com/blog/erle-copter/. 54

S. Fantoni, U. Castellani, and A. Fusiello. Accurate and automatic alignment of rangesurfaces. In Proceedings - 2nd Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling,Processing, Visualization and Transmission, 3DIMPVT 2012, pages 73–80. IEEE, oct2012. ISBN 9780769548739. doi: 10.1109/3DIMPVT.2012.63. 19, 38

Z. Farid, R. Nordin, and M. Ismail. Recent advances in wireless indoor localizationtechniques and system. Journal of Computer Networks and Communications, 2013,2013. ISSN 20907141. doi: 10.1155/2013/185138. 102

A. Fink, T. Ritt, and H. Beikirch. Redundant radio tomographic imaging for privacy-aware indoor user localization. In Indoor Positioning and Indoor Navigation (IPIN),2015 International Conference on, pages 1–7. IEEE, 2015. 103


F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua. Multicamera people tracking witha probabilistic occupancy map. IEEE transactions on pattern analysis and machineintelligence, 30(2):267–282, 2008. 103

A. E. Forooshani, S. Bashir, D. G. Michelson, and S. Noghanian. A survey of wire-less communications and propagation modeling in underground mines. IEEE Com-munications Surveys and Tutorials, 15(4):1524–1545, 2013. ISSN 1553877X. doi:10.1109/SURV.2013.031413.00130. 66

D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo Localization: EfficientPosition Estimation for Mobile Robots. Aaai-99, (Handschin 1970):343–349, 1999.doi: 10.1.1.2.342. 42

S. F. Frisken and R. N. Perry. Simple and Efficient Traversal Methods for Quadtrees andOctrees. Journal of Graphics Tools, 7(3):1–11, 2002. ISSN 1086-7651. doi: 10.1080/10867651.2002.10487560. 25

G. Grisetti, C. Stachniss, and W. Burgard. Improved Techniques for Grid Mapping WithRao-Blackwellized Particle Filters. IEEE Transactions on Robotics, 23(1):34–46, feb2007. ISSN 1552-3098. doi: 10.1109/TRO.2006.889486. 20, 42

J.-S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments.In Proceedings 1999 IEEE International Symposium on Computational Intelligence inRobotics and Automation. CIRA’99 (Cat. No.99EX375), number October, pages 318–325. IEEE, 2012. ISBN 0-7803-5806-6. doi: 10.1109/CIRA.1999.810068. 7

A. Hazmi, J. Rinne, and M. Valkama. Feasibility study of IEEE 802.11ah radio technology for IoT and M2M use cases. In 2012 IEEE Globecom Workshops, pages 1687–1692. IEEE, dec 2012. ISBN 978-1-4673-4941-3. doi: 10.1109/GLOCOMW.2012.6477839. 112, 113, 116, 119

A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard. OctoMap: anefficient probabilistic 3D mapping framework based on octrees. Autonomous Robots,34(3):189–206, feb 2013. doi: 10.1007/s10514-012-9321-0. x, 24, 26, 38

A. S. Huang and A. Bachrach. Visual odometry and mapping for autonomous flightusing an RGB-D camera. International Symposium on Robotics Research (ISRR),pages 1–16, 2011. 40

T. Imai. A survey of efficient ray-tracing techniques for mobile radio propagation analysis.IEICE Transactions on Communications, E100B(5):666–679, 2017. ISSN 17451345.doi: 10.1587/transcom.2016EBI0002. 29, 30

M. F. Iskander and Z. Yun. Propagation prediction models for wireless communicationsystems. IEEE Transactions on Microwave Theory and Techniques, 50(3):662–673,mar 2002. doi: 10.1109/22.989951. 29, 66

ITU. ITU-R Recommendation P.1411.8, 2015. 32, 34, 116

J. Jrvelinen, S. L. H. Nguyen, K. Haneda, R. Naderpour, and U. T. Virk. Evaluationof millimeter-wave line-of-sight probability with point cloud data. IEEE WirelessCommunications Letters, 5(3):228–231, June 2016. ISSN 2162-2337. doi: 10.1109/LWC.2016.2521656. 38, 66


O. Kaltiokallio, M. Bocca, and N. Patwari. Enhancing the accuracy of radio tomographicimaging using channel diversity. In Mobile Adhoc and Sensor Systems (MASS), 2012IEEE 9th International Conference on, pages 254–262. IEEE, 2012a. 103

O. Kaltiokallio, M. Bocca, and N. Patwari. Follow@ grandma: Long-term device-freelocalization for residential monitoring. In Local Computer Networks Workshops (LCNWorkshops), 2012 IEEE 37th Conference on, pages 991–998. IEEE, 2012b. 103

O. Kaltiokallio, M. Bocca, and N. Patwari. A fade level-based spatial model for radiotomographic imaging. IEEE Transactions on Mobile Computing, 13(6):1159–1172,2014. 106

J. Kay. Introduction to Homogeneous Transformations & Robot Kinematics. RowanUniversity Computer Science Department, (January):1–25, 2005. 11

J. Kemper and D. Hauschildt. Passive infrared localization with a probability hypoth-esis density filter. In Positioning Navigation and Communication (WPNC), 2010 7thWorkshop on, pages 68–76. IEEE, 2010. 103

C. Kerl, J. Sturm, and D. Cremers. Dense visual SLAM for RGB-D cameras. In 2013IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2100–2106. IEEE, nov 2013. ISBN 978-1-4673-6358-7. doi: 10.1109/IROS.2013.6696650. 37,40

E. Khorov, A. Lyakhov, A. Krotov, and A. Guschin. A survey on IEEE 802.11ah: An enabling networking technology for smart cities. Computer Communications, 58:53–69, mar 2015. ISSN 01403664. doi: 10.1016/j.comcom.2014.08.008. URL http://linkinghub.elsevier.com/retrieve/pii/S0140366414002989. 111, 113

M. Klepal. Novel approach to indoor electromagnetic wave propagation modelling. CzechTechnical University in Prague, (July), 2003. 30, 31

Z. Lai, N. Bessis, G. De La Roche, J. Zhang, G. Clapworthy, P. Kuonen, and D. Zhou.Intelligent ray launching algorithm for indoor scenarios. Radioengineering, 2011. 31

P. Laly, D. Gaillot, M. Lienard, P. Degauque, E. Tanghe, and W. Joseph. Radio WavePenetration into Buildings Polarization and Spatial Characteristics of the Rays Uni-versity of Lille. Journal of Electromagnetics, 1:1–4, 2016. ISSN 2534-8833. 126

D. Lee, H. Kim, and H. Myung. Image feature-based real-time RGB-D 3D SLAM withGPU acceleration. Journal of Institute of Control, Robotics and Systems, 19(Urai):457–461, 2013. ISSN 19765622. doi: 10.5302/J.ICROS.2013.13.8002. 40

F. Letourneux, S. Guivarch, and Y. Lostanlen. Propagation models for HeterogeneousNetworks. Antennas and Propagation (EuCAP), 2013 7th European Conference on,(January 2013):3993–3997, 2013. 66

M. Li and D. Wang. Internet of Vehicles Technologies and Services. In R. C.-H. Hsu and S. Wang, editors, Proceedings of the 1st International Conference on Internet of Vehicles, volume 8662 of Lecture Notes in Computer Science, pages 211–217, Cham, 2014. Springer International Publishing. ISBN 978-3-319-11166-7. doi: 10.1007/978-3-319-11167-4. URL http://link.springer.com/10.1007/978-3-319-11167-4. 113


L. Liu, C. Oestges, J. Poutanen, K. Haneda, P. Vainikainen, F. Quitin, F. Tufvesson, andP. Doncker. The COST 2100 MIMO channel model. IEEE Wireless Communications,19(6):92–99, dec 2012. ISSN 1536-1284. doi: 10.1109/MWC.2012.6393523. URLhttp://ieeexplore.ieee.org/document/6393523/. 31

K. Low. Linear least-squares optimization for point-to-plane icp surface registration.Technical Report February, 2004. 16

S. Marden and J. Guivant. Improving the Performance of ICP for Real-Time Applica-tions using an Approximate Nearest Neighbour Search. Proceedings of AustralasianConference on Robotics and Automation, pages 3–5, 2012. ISSN 14482053. 15, 38

J. Meinilä, P. Kyösti, T. Jämsä, and L. Hentilä. WINNER II Channel Models. In Radio Technologies and Concepts for IMT-Advanced, volume 1, pages 39–92. John Wiley & Sons, Ltd, Chichester, UK, 2008. ISBN 9780470747636. doi: 10.1002/9780470748077.ch3. URL http://projects.celtic-initiative.org/winner+/WINNER2-Deliverables/D1.1.2v1.2.pdf. 30

B. Moon, H. V. Jagadish, C. Faloutsos, and J. H. Saltz. Analysis of the Clustering ofHilbert Space-Filling Curve. 8958546(CS-TR-3611), 1996. 23

L. Nagy. FDTD and Ray Optical Methods for Indoor Wave Propagation Modeling. 2010. 30

L. Nagy (Budapest University of Technology and Economics). Deterministic indoor wave propagation modeling. 2007. 29

R. A. Newcombe, A. J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux,S. Hodges, D. Kim, and A. Fitzgibbon. KinectFusion: Real-time dense surfacemapping and tracking. In 2011 10th IEEE International Symposium on Mixed andAugmented Reality, pages 127–136. IEEE, oct 2011. ISBN 978-1-4577-2185-4. doi:10.1109/ISMAR.2011.6092378. 37, 40

P. Panchal, S. Panchal, and S. Shah. A comparison of SIFT and SURF. InternationalJournal of Innovative Research in Computer and Communication Engineering, 2013.ISSN 00496979. doi: 10.1007/s11270-006-2859-8. 21

M. Park. IEEE 802.11ah: sub-1-GHz license-exempt operation for the internet of things. IEEE Communications Magazine, 53(9):145–151, sep 2015. ISSN 0163-6804. doi: 10.1109/MCOM.2015.7263359. URL http://ieeexplore.ieee.org/document/7263359/. 113

D. Plets, W. Joseph, K. Vanhecke, E. Tanghe, and L. Martens. Coverage predictionand optimization algorithms for indoor environments. Eurasip Journal on WirelessCommunications and Networking, 2012(1):123, dec 2012. ISSN 16871472. doi: 10.1186/1687-1499-2012-123. 66

A. Popleteev. Wi-Fi butterfly effect in indoor localization: The impact of imprecise ground truth and small-scale fading. In 2017 14th Workshop on Positioning, Navigation and Communications (WPNC), pages 1–5. IEEE, oct 2017. ISBN 978-1-5386-3089-1. doi: 10.1109/WPNC.2017.8250049. URL http://ieeexplore.ieee.org/document/8250049/. 63

J. A. Richards. Radio Wave Propagation. Springer Berlin Heidelberg, Berlin, Heidelberg,2008. ISBN 978-3-540-77124-1. doi: 10.1007/978-3-540-77125-8. 27, 73

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: An efficient alternative toSIFT or SURF. In 2011 International Conference on Computer Vision, pages 2564–2571. IEEE, nov 2011. ISBN 978-1-4577-1102-2. doi: 10.1109/ICCV.2011.6126544.21

D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes. Nonrigidregistration using free-form deformations: application to breast MR images. IEEEtransactions on medical imaging, 18(8):712–21, aug 1999. ISSN 0278-0062. doi: 10.1109/42.796284. 39, 40

M. Ruhnke, L. Bo, D. Fox, and W. Burgard. Compact RGBD Surface Models Based onSparse Coding. AAAI, 2013. 40

R. B. Rusu. Semantic 3D Object Maps for Everyday Manipulation in Human LivingEnvironments. KI - Kunstliche Intelligenz, 24(4):345–348, aug 2010. ISSN 0933-1875.doi: 10.1007/s13218-010-0059-6. 16, 37

R. B. Rusu and S. Cousins. 3D is here: Point Cloud Library (PCL). In IEEE InternationalConference on Robotics and Automation (ICRA), Shanghai, China, may 2011. 40

J. Salvi, C. Matabosch, D. Fofi, and J. Forest. A review of recent range image registrationmethods with accuracy evaluation. Image and Vision Computing, 25:578–596, 2007.ISSN 02628856. doi: 10.1016/j.imavis.2006.05.012. 38

H. Samet. The Quadtree and Related Hierarchical Data Structures. ACM ComputingSurveys, 16(2):187–260, jun 1984. ISSN 03600300. doi: 10.1145/356924.356930. 23

T. K. Sarkar, Z. Ji, K. Kim, A. Medouri, and M. Salazar-Palma. A Survey of VariousPropagation Models for Mobile Communication. IEEE Antennas and PropagationMagazine, 45(3):51–82, jun 2003. ISSN 1045-9243. doi: 10.1109/MAP.2003.1232163.34, 114

R. Sato, H. Sato, and H. Shirai. A SBR algorithm for simple indoor propaga-tion estimation. 2005 IEEE/ACES International Conference on Wireless Commu-nications and Applied Computational Electromagnetics, 2005:812–815, 2005. doi:10.1109/WCACEM.2005.1469708. 30

S. R. Saunders and A. A. Zavala. Summary for Policymakers. In Intergovernmental Panel on Climate Change, editor, Climate Change 2013 - The Physical Science Basis, pages 1–30. Cambridge University Press, Cambridge, 2007. ISBN 9788578110796. doi: 10.1017/CBO9781107415324.004. URL https://www.cambridge.org/core/product/identifier/CBO9781107415324A009/type/book_part. 27

S. Savarese. 3D generic object categorization, localization and pose estimation. In 2007IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.ISBN 978-1-4244-1630-1. doi: 10.1109/ICCV.2007.4408987. 40


A. V. Segal, D. Haehnel, and S. Thrun. Generalized-ICP. In Proceedings of Robotics:Science and Systems, page 8, Seattle, 2009. 19, 20, 38

M. Seifeldin, A. Saeed, A. E. Kosba, A. El-Keyi, and M. Youssef. Nuzzer: A large-scaledevice-free passive localization system for wireless environments. IEEE Transactionson Mobile Computing, 12(7):1321–1334, 2013. 103

R. Shams, P. Sadeghi, R. Kennedy, and R. Hartley. A survey of medical image registrationon multicore and the GPU, 2010. ISSN 10535888. 40

G. Silveira, E. Malis, and P. Rives. An Efficient Direct Approach to Visual SLAM. IEEETRANSACTIONS ON ROBOTICS, 24(5), 2008. doi: 10.1109/TRO.2008.2004829. 21

J. Sprickerhof and A. Nuchter. An Explicit Loop Closing Technique for 6D SLAM.ECMR, pages 1–6, 2009. 40

J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for theevaluation of RGB-D SLAM systems. In IEEE International Conference on IntelligentRobots and Systems, 2012. ISBN 9781467317375. doi: 10.1109/IROS.2012.6385773.51, 53, 54, 56

L. Subrt and P. Pechac. Advanced 3D indoor propagation model: calibration and im-plementation. EURASIP Journal on Wireless Communications and Networking, 2011(1):180, 2011. ISSN 1687-1499. doi: 10.1186/1687-1499-2011-180. 30, 66

R. Szeliski. Rapid Octree Construction from Image Sequences. Computer Vision andImage Understanding, 58(1):23–32, jul 1993. ISSN 10773142. doi: 10.1006/cviu.1993.1030. 24

W. Tam and V. Tran. Propagation modelling for indoor wireless communication. Elec-tronics & Communications Engineering Journal, 7(5):221, 1995. ISSN 09540695. doi:10.1049/ecej:19950507. 30, 66

S. Thrun, W. Burgard, and D. Fox. Probabilistic robotics. 2005a. ISBN 0262201623 (alk.paper); 9780262201629 (alk. paper). doi: 10.1145/504729.504754. 9

S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. Intelligent robotics andautonomous agents. MIT Press, 2005b. ISBN 9780262201629. x, 10

L. Tian, S. Deronne, S. Latre, and J. Famaey. Implementation and Validation of anIEEE 802.11Ah Module for Ns-3. In Proceedings of the Workshop on Ns-3, WNS3’16, pages 49–56, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4216-2. doi:10.1145/2915371.2915372. 112, 117, 119

R. P. Torres. CINDOOR: Computer tool for planning and design of wireless systems inenclosed spaces. Microwave Engineering Europe, 41(4):11–22, 1997. ISSN 0960667X.doi: 10.1109/74.789733. 66

E. Turner and A. Zakhor. Automatic Indoor 3D Surface Reconstruction with Seg-mented Building and Object Elements. In 2015 International Conference on 3DVision. Institute of Electrical and Electronics Engineers (IEEE), oct 2015. doi:10.1109/3DV.2015.48. 39


S. Umeyama. Least-squares estimation of transformation parameters between two pointpatterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991. doi:10.1109/34.88573. 21

S. Vanneste, B. Bellekens, and M. Weyn. Obstacle Avoidance Using an Octomap. InMORSE 2014, 2014.

B. Velleman, M. Weyn, R. Berkvens, and B. Bellekens. Energy-saving positioning andcommunication, 2016.

M. Weyn. Opportunistic seamless localization. PhD, Universiteit Antwerpen, 2011. 102

M. Weyn, G. Ergeerts, R. Berkvens, B. Wojciechowski, and Y. Tabakov. DASH7 allianceprotocol 1.0: Low-power, mid-range sensor and actuator communication. In 2015IEEE Conference on Standards for Communications and Networking (CSCN), numberOctober, pages 54–59. IEEE, oct 2015. ISBN 978-1-4799-8927-0. doi: 10.1109/CSCN.2015.7390420. 66, 83

B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardos. A comparisonof loop closing techniques in monocular SLAM. Robotics and Autonomous Systems,57(12):1188–1197, dec 2009. ISSN 09218890. doi: 10.1016/j.robot.2009.06.010. 13

J. Wilson and N. Patwari. Radio tomographic imaging with wireless networks. IEEETransactions on Mobile Computing, 9(5):621–632, 2010. 103

J. Wilson and N. Patwari. See-through walls: Motion tracking using variance-based radiotomography networks. Mobile Computing, IEEE Transactions on, 10(5):612–621, 2011.103

G. F. Worts. Directing the war by wireless. Popular Mechanics, pages 647–50, 1915. 1

M. Youssef, M. Mah, and A. Agrawala. Challenges: device-free passive localization forwireless environments. In Proceedings of the 13th annual ACM international conferenceon Mobile computing and networking, pages 222–229. ACM, 2007. 103

Z. Yun and M. F. Iskander. Ray Tracing for Radio Propagation Modeling: Principles andApplications. IEEE Access, 3:1089–1100, 2015. doi: 10.1109/ACCESS.2015.2453991.2, 30, 66

G. Zachmann and E. Langetepe. Geometric Data Structures for Computer Graphics.Synthesis, 16:1–54, 2003. ISSN 02581248. 23
