PhD defences at the Faculty of Engineering Science 2015

It is our great pleasure to present the second edition of “PhD defences at the Faculty of Engineering Science”, giving you an overview of the innovative work and results of the 173 PhD students who successfully obtained their PhD degree from our Faculty of Engineering Science at KU Leuven in 2015.

This edition again demonstrates the vast range of research activities in all kinds of engineering science disciplines, from algorithms to hardware, from fundamental long-term research all the way up to practical implementations. We are sure that many of the results of this applied research will sooner or later find their way into our society. After all, to engineer is human.

We are moreover happy and grateful that much of this research work could be performed thanks to the many supporting companies, research institutes, and European and national research funding agencies such as the ERC, FWO and IWT.

We sincerely hope that this document gives you a good first impression of the top research activities in the departments associated with our Faculty of Engineering Science and that it will inspire further research activities and collaborations.

Of course, many thanks again to all the PhD researchers who contributed to this collection. The Faculty of Engineering Science congratulates them on their results and wishes them a successful professional career in which they can pursue further technological innovations for the benefit of our world and our society. In that way they will continue to contribute to our vision that engineering is for the betterment of human society.

Sincerely yours

Michiel Steyaert
Dean, Faculty of Engineering Science

Jan Degrève
Chair, Doctoral Committee, Faculty of Engineering Science

Name    PhD defence    Page

1. Abdin Yasmine    11/09/2015    119
2. Agrawal Prashant    30/04/2015    55
3. Agten Pieter    29/06/2015    94
4. Ananthanarayanan Durga    25/03/2015    33
5. Annemans Margo    30/10/2015    144
6. Aravand Mohammadali    25/02/2015    25
7. Baetens Ruben    15/09/2015    125
8. Baka Maria    07/05/2015    61
9. Bauwens Geert    23/11/2015    152
10. Bi Qilong    08/12/2015    160
11. Bilgin Begül    13/05/2015    63
12. Billen Pieter    17/02/2015    21
13. Bogaerts Bart    24/06/2015    87
14. Buyens Wim    24/08/2015    111
15. Callemeyn Piet    01/07/2015    101
16. Carbone Maria Josefina    25/06/2015    89
17. Cheikh Hassan Ismael    26/06/2015    91
18. Chiang Po-Kuan    01/04/2015    42
19. Chiumento Alessandro    09/10/2015    136
20. Claes Rutger    23/06/2015    86
21. Claesen Marc    14/12/2015    165
22. Clausi Donato    23/02/2015    23
23. Cnops Kjell    08/09/2015    117
24. Cruz Torres Mário Henrique    27/04/2015    48
25. Cuypers Gert    09/10/2015    135
26. Danthurebandara Maheshi    03/07/2015    107
27. De Clercq Hans    02/07/2015    106
28. De Coninck Roel    04/06/2015    79
29. De Santana Leandro    14/09/2015    122
30. De Smet Vincent    13/05/2015    65
31. Debonne Vincent    20/08/2015    108
32. Debrouwere Frederik    23/09/2015    127
33. Deckers Jan    26/02/2015    26
34. Decroix Koen    21/10/2015    138
35. Decrop Boudewijn    01/10/2015    132
36. Deurinck Mieke    30/09/2015    130
37. Di Lello Enrico    18/02/2015    22
38. Dupont Benjamin    27/01/2015    13
39. Ergun Hakan    29/01/2015    14
40. Esmaeil Zaghi Armin    16/01/2015    6
41. Farrokhzad Hasan    07/05/2015    59
42. Fernández Leandro    30/04/2015    54
43. Fernando Palamandadige    30/03/2015    37
44. François Brecht    13/01/2015    5
45. Gao Bo    17/12/2015    170
46. Geebelen Dries    17/12/2015    169
47. Geebelen Kurt    23/04/2015    45
48. Gencarelli Federica    05/05/2015    58
49. Georgieva Iveta    24/04/2015    47
50. Gijbels Andy    26/10/2015    142
51. Gillis Joris    18/03/2015    32
52. Goit Jay Prakash    18/03/2015    31
53. Gong Xing    30/11/2015    158
54. González de Miguel Carlos    12/05/2015    62
55. Gorissen Benjamin    06/11/2015    147
56. Guan Yuanyuan    30/11/2015    156
57. Guha Thakurta Priyanko    03/03/2015    29
58. Habibi Ranasadat    24/08/2015    110
59. Herrera Costanza    17/12/2015    172
60. Hilhorst Gijs    09/12/2015    161
61. Houbart Claudine    16/01/2015    7
62. Houtmeyers Sofie    06/10/2015    133
63. Huang Wei    16/12/2015    168
64. Jacqmaer Pieter    21/10/2015    139
65. Jain Atul    04/05/2015    57
66. Jaluvka David    02/02/2015    16
67. Jeuris Ben    24/06/2015    88
68. Jiang Sijia    26/11/2015    154
69. Jofore Bruke Daniel    13/05/2015    64
70. Jonckheere Stijn    17/06/2015    83
71. Kempen Karolien    31/03/2015    39
72. Kerkhofs Johan    28/09/2015    128
73. Khakzad Sorna    17/06/2015    82
74. Knopp Jan    27/05/2015    73
75. Koolen Ninah    17/12/2015    171
76. Kükner Selahaddin (Halil)    02/04/2015    43
77. Lauwers Joost    01/10/2015    131
78. Leemput Niels    13/11/2015    149
79. Li Yi    27/05/2015    76
80. Lin Jiuyang    22/10/2015    140
81. Maerien Jef    19/06/2015    84
82. Mall Raghvendra    30/06/2015    96
83. Margossian Harag    09/12/2015    162
84. Marques dos Santos Fábio Luis    02/12/2015    159
85. Martinovic Andelo    14/09/2015    120
86. Mathues Wouter    20/08/2015    109
87. Matic Vladimir    26/03/2015    34
88. Mattheys Tina    15/12/2015    166
89. Mehrkanoon Siamak    02/07/2015    105
90. Mercuri Marco    31/03/2015    40
91. Midheme Emmanuel    10/11/2015    148
92. Milosevic Milica    04/05/2015    56
93. Milutinovic Milica    11/12/2015    164
94. Mirhoseini Seyyed Mohammad Hossein    16/01/2015    10
95. Mirzaei Sayeh    14/09/2015    123
96. Moldovan Bogdan    27/03/2015    35
97. Motte Henk    05/02/2015    18
98. Munaga Nagavenkata Satyakiran    29/06/2015    95
99. Nagel Till    06/01/2015    1
100. Natsakis Anastasios    30/01/2015    15
101. Ons Bart    20/05/2015    67
102. Oramas Mogrovejo José    29/04/2015    52
103. Patrignani Marco    27/05/2015    74
104. Pineda Ordonez Luis Eduardo    27/03/2015    36
105. Pitropakis Ioannis    30/06/2015    97
106. Qin Ling    26/06/2015    92
107. Ramos Araujo Beato Filipe    27/05/2015    72
108. Reynders Glenn    14/09/2015    121
109. Rezaei Hosseinabadi Sareh    12/06/2015    85
110. Romanov Valentin    22/09/2015    126
111. Santos Odriozola Jose Luis    20/11/2015    151
112. Serhiienko Pavlo    10/02/2015    19
113. Shakhimardanov Azamat    27/11/2015    155
114. Shirazi Syed Ali Abbas    01/07/2015    100
115. Shterionov Dimitar    08/09/2015    116
116. Siguenza Guzman Lorena    27/08/2015    113
117. Smoljkic Gabrijel    19/11/2015    150
118. Sonnaert Maarten    03/09/2015    114
119. Strackx Maarten    01/06/2015    77
120. Suetens Thomas    11/06/2015    81
121. Susilo Cynthia Ratih    25/06/2015    90
122. Swolfs Yentl    08/01/2015    3
123. Szurley Joseph    30/06/2015    98
124. Tacq Jeroen    03/11/2015    145
125. Tan Ye    28/04/2015    51
126. Tosi Niccolo    30/03/2015    38
127. Trompoukis Christos    20/05/2015    68
128. Van Acker Steven    06/01/2015    2
129. Van Beeumen Roel    21/04/2015    44
130. Van Herrewege Anthony    16/01/2015    8
131. Van Loon Sylvie    02/03/2015    27
132. Vannieuwenhoven Nick    24/02/2015    24
133. Van Nieuwenhuyse Anneleen    26/05/2015    69
134. Van Nimmen Katrien    19/05/2015    66
135. Van Roy Juan    07/05/2015    60
136. Van Steenwinkel Iris    10/06/2015    80
137. Vancroonenburg Wim    09/01/2015    4
138. Vandael Stijn    04/03/2015    30
139. Vanhollebeke Frederik    21/01/2015    11
140. Vanoost Dries    11/12/2015    163
141. Vanthienen Dominick    23/01/2015    12
142. Varon Perez Jenny Carolina    30/04/2015    53
143. Verbruggen Bart    26/08/2015    112
144. Vercammen Dominique    26/06/2015    93
145. Verdult Roel    21/04/2015    /
146. Vervecken Lieven    14/10/2015    137
147. Verveckken Jan    28/04/2015    50
148. Volkaerts Wouter    15/12/2015    167
149. Vriami Despoina    05/11/2015    146
150. Vukov Milan    23/04/2015    46
151. Wang Bo    15/09/2015    124
152. Wang Xin    27/04/2015    49
153. Wang Xue    02/06/2015    78
154. Wang Yueqi    11/09/2015    118
155. Wang Yuyi    01/07/2015    103
156. Wauman Barbara    23/10/2015    141
157. Weckx Sam    01/07/2015    102
158. Widjaja Devy    01/04/2015    41
159. Wijnhoven Thomas    30/11/2015    157
160. Willemen Tim    04/02/2015    17
161. Wu Minxian    02/09/2015    115
162. Wuyts Kim    16/01/2015    9
163. Wyffels Jeroen    29/09/2015    129
164. Xhakoni Adi    10/07/2015    104
165. Yan Sen    27/05/2015    75
166. Ye Wenyuan    26/10/2015    143
167. Yilmaz Emre    26/05/2015    70
168. Zanon Mario    26/11/2015    153
169. Zapata Riveros Juliana Victoria    26/05/2015    71
170. Zhang Fei    30/06/2015    99
171. Zhang Leqi    01/10/2015    134
172. Zhao Dixian    03/03/2015    28
173. Zhao Guoying    12/02/2015    20


Till Nagel
Department Computer Science

PhD defence 06 January 2015

Supervisor Prof. dr. ir. Erik Duval

Co-supervisors Prof. dr. ir.-arch. Andrew Vande Moere

Prof. dr. Frank Heidmann (FH Potsdam)

E-mail [email protected]

Introduction / Objective
While there is a growing interest among citizens to make sense of their social community and urban environment, most existing geovisualization tools have been designed for experts. We introduce situation-specific visualization systems that were particularly designed for public exhibitions to balance powerful data exploration methods with inviting accessibility for laypeople. The research objective is to facilitate understanding geospatial patterns, relationships, and trends for wider audience groups by designing comprehensible and easy-to-use interactive visualization systems for time-varying geo-referenced data.

Research Methodology
Our research approach was guided by an explorative methodology. We designed and evaluated three case studies from different domains. For each, we followed principles from human-centered design and used a mixed-method approach of quantitative and qualitative studies. All case studies had in common that the knowledge inherent in the data was relevant to non-experts for their everyday life. However, each data set was different in its specifics, and exemplified different aspects of spatio-temporal data. These ranged from classic geo-spatial data such as information on buildings and places, to geo-referenced social network data, to mobility data based both on authoritative data sources as well as sensors and smartphones.

Major publications
T. Nagel, F. Heidmann, M. Condotta, E. Duval (2010). Venice Unfolding: a tangible user interface for exploring faceted data in a geographical context. In Proc. of NordiCHI '10. ACM, 743–746.
T. Nagel, E. Duval, A. Vande Moere (2012). Interactive Exploration of Geospatial Network Visualization. In Proc. of CHI EA. ACM, 557–572.
T. Nagel, M. Maitan, E. Duval, A. Vande Moere, J. Klerkx, K. Kloeckl, C. Ratti (2014). Touching Transport - a Case Study on Visualizing Metropolitan Public Transit on Interactive Tabletops. In Proc. of AVI '14. ACM, 281–288.

Unfolding Data: Software and Design Approaches to Support Casual Exploration of Tempo-spatial Data on Interactive Tabletops

Results & Conclusions
A significant contribution is the portfolio of case studies. We investigated concrete challenges in their domains, designed successful visualization systems, and provided innovative solutions to our research question by bringing together computer science with design. With our map library, we made a major contribution to the set of geovisualization construction tools. Overall, we motivated and further enabled a new design space for casual data exploration.


Steven Van Acker
Department Computer Science

PhD defence 06 January 2015

Supervisor Prof. dr. ir. Frank Piessens

Co-supervisor Dr. ir. Lieven Desmet

Funding IWT, iMinds, EU FP7 projects STREWS, WebSand, and NESSoS

E-mail [email protected]

Introduction / Objective
In today's web applications, no one disputes the important role of JavaScript as a client-side programming language. JavaScript can turn the Web into a lively, dynamic and interactive end-user experience. Unfortunately, JavaScript can also be used to steal sensitive information and abuse powerful functionality. Sloppy input validation can make a web application vulnerable, allowing malicious JavaScript code to leak into a web application's JavaScript execution environment, where it leads to unintended code execution. An otherwise secure web application may intentionally include JavaScript from a third-party script provider. This script provider may in turn serve untrusted or even malicious JavaScript, leading to the intended execution of untrusted code. In both the intended and unintended case, untrusted JavaScript ending up in the JavaScript execution environment of a trusted web application gains access to sensitive resources and powerful functionality. Web application security would be improved if this untrusted JavaScript could be isolated and its access restricted.

Research Methodology
In this work, we first investigate ways in which JavaScript code can leak into the browser, leading to unintended JavaScript execution. We find that, due to bad input validation, malicious JavaScript code can be injected into a JavaScript execution environment through both browser plugins and browser extensions.

Next, we review JavaScript sandboxing systems designed to isolate and restrict untrusted JavaScript code and divide them into three categories, discussing their advantages and disadvantages: JavaScript subsets and rewriting systems, JavaScript sandboxing through browser modifications and JavaScript sandboxing systems without browser modifications. We further research the last two categories, developing and evaluating a prototype of each.

Results & Conclusions
The goal of this work was two-fold:

- First, study ways through which JavaScript code can leak into the JavaScript execution environment of a browser and execute unintendedly, resulting in the work on FlashOver and Monkey-in-the-browser.
- Second, isolate and restrict untrusted JavaScript code in a JavaScript sandbox, whether or not it was intended to be executed, resulting in WebJail and JSand, two JavaScript sandboxing mechanisms.

Major publications
Van Acker, S., Nikiforakis, N., Desmet, L., Piessens, F., Joosen, W. (2014). Monkey-in-the-browser: Malware and vulnerabilities in augmented browsing script markets. ASIACCS. Kyoto, Japan, 2-4 June 2014.
Agten, P., Van Acker, S., Brondsema, Y., Phung, P., Desmet, L., Piessens, F. (2012). JSand: Complete client-side sandboxing of third-party JavaScript without browser modifications. Proceedings of the 28th Annual Computer Security Applications Conference (ACSAC 2012). Orlando, Florida, USA, 3-7 December 2012 (pp. 1-10).
Van Acker, S., Nikiforakis, N., Desmet, L., Joosen, W., Piessens, F. (2012). FlashOver: Automated discovery of cross-site scripting vulnerabilities in rich internet applications. ASIACCS. Seoul, 2-4 May 2012.
Van Acker, S., De Ryck, P., Desmet, L., Piessens, F., Joosen, W. (2011). WebJail: Least-privilege integration of third-party components in web mashups. Proceedings of the 27th Annual Computer Security Applications Conference (ACSAC 2011). Orlando, Florida, USA, 5-9 December 2011 (pp. 307-316).

Isolating and Restricting Client-Side JavaScript


Yentl Swolfs
Department Materials Engineering (MTM)

PhD defence 08 January 2015

Supervisor Prof. dr. ir. Ignaas Verpoest

Co-supervisor Dr. ir. Larissa Gorbatikh

Funding IWT & EU-FP7 HIVOCOMP

E-mail [email protected]

Introduction / Objective
The stiffness-toughness dilemma:
- Carbon fibre composites: high stiffness, low toughness.
- Self-reinforced polypropylene (PP reinforced with PP fibres): low stiffness, high toughness.
This thesis aims to solve this dilemma by hybridising carbon fibres with self-reinforced PP. This should lead to the development of a new material class that is both stiff and tough.

Research Methodology
Experimental aim: design hybrid self-reinforced composites with
- increased stiffness,
- a limited reduction in toughness or impact resistance.
Modelling aim: understand the delay in carbon fibre failure through
- quantitative predictions,
- an approach for maximising this delay.

Results & Conclusions
Experimental work:
- Low carbon fibre content: debonding and a high ultimate failure strain.
- High carbon fibre content: limited debonding and a low ultimate failure strain.
- High impact resistance was maintained.
Modelling work:
- Predicts carbon fibre breaks in a hybrid composite.
- Optimises hybrid composite design.
- A layered structure is optimal for a 50/50 fibre ratio.
Experimental validation:
- In-depth validation for non-hybrid composites.
- New thin-ply methodology for hybrid composites.
- Vital conclusions for advancing the state of the art.

[Figure: tensile stress-strain curves (stress in MPa versus strain in %) for hybrids with 0%, 3%, 7% and 11% carbon fibre content, indicating the debonded region; schematic stiffness-toughness map positioning carbon fibre composites, self-reinforced composites and hybrid self-reinforced composites]

Major publication
Y. Swolfs et al. (2014). Fibre hybridisation in polymer composites: a review. Composites Part A: Applied Science and Manufacturing, 67, 181-200.

Hybridisation of self-reinforced composites: Modelling and verifying a novel hybrid concept


Wim Vancroonenburg
Department Computer Science

PhD defence 09 January 2015

Supervisors Prof. dr. Patrick De Causmaecker, Prof. dr. Frits Spieksma, Prof. dr. ir. Greet Vanden Berghe

Funding IWT strategic basic research grant

E-mail [email protected]

Introduction / Objective
Over the past decades, globally rising expenditures on health care have forced governments to re-evaluate health care funding. To reduce public spending on health care, budgetary pressure on hospitals has increased significantly. At the same time, demand for hospital services has increased due to, for example, population ageing. Hospitals are expected to perform more with fewer resources. Hospital managers are thus constantly looking into new ways to increase efficiency, while maintaining a high level of care.
The present dissertation focuses on developing operational decision support models and algorithms for hospital admission planning and scheduling. The aim is to increase efficient usage of key hospital resources by supporting human planners at hospital admission offices with automated tools for their daily and weekly decision making.

Research Methodology
The main patient flow for admitted patients is depicted in Figure 1. Three processes concerned with admission planning and scheduling of patients are indicated for automated decision making:
A. determination of admission dates for elective surgical patients,
B. assignment of admitted patients to hospital rooms,
C. scheduling surgical cases in operating rooms.
Using techniques such as Mixed Integer Programming and Local Search, models and algorithms are developed to support decision making in these processes.
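The dissertation builds Mixed Integer Programming and Local Search models for these processes. Purely as an illustration of the flavour of process B, the following minimal local-search sketch (hypothetical toy data and penalty weights, not taken from the thesis) assigns patients to rooms while penalising overcrowding, mixed-gender rooms and unmet room-type preferences.

```python
# Hypothetical illustration (not the author's code): a minimal local-search
# sketch for patient-to-room assignment. Patients, rooms, capacities and
# penalty weights are invented toy data.
import random

patients = {  # name -> (gender, preferred room type)
    "p1": ("F", "single"), "p2": ("M", "double"),
    "p3": ("F", "double"), "p4": ("M", "double"),
}
rooms = {  # room -> (capacity, room type)
    "r1": (1, "single"), "r2": (2, "double"), "r3": (2, "double"),
}

def cost(assignment):
    """Penalty = overcrowding + mixed-gender rooms + unmet room-type preferences."""
    penalty = 0
    for room, (capacity, rtype) in rooms.items():
        occupants = [p for p, r in assignment.items() if r == room]
        penalty += 10 * max(0, len(occupants) - capacity)           # overcrowding
        genders = {patients[p][0] for p in occupants}
        penalty += 5 * (len(genders) > 1)                           # gender mix
        penalty += sum(patients[p][1] != rtype for p in occupants)  # preference
    return penalty

def local_search(iterations=1000, seed=0):
    """Start from a random plan, then repeatedly move one patient and keep improving moves."""
    rng = random.Random(seed)
    assignment = {p: rng.choice(list(rooms)) for p in patients}
    best = cost(assignment)
    for _ in range(iterations):
        p = rng.choice(list(patients))
        old = assignment[p]
        assignment[p] = rng.choice(list(rooms))
        new = cost(assignment)
        if new <= best:
            best = new            # accept improving (or equal) moves
        else:
            assignment[p] = old   # revert worsening moves
    return assignment, best

if __name__ == "__main__":
    plan, penalty = local_search()
    print(plan, "penalty:", penalty)
```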

Results & Conclusions
- A stochastic admission scheduling approach is developed that maximizes efficient usage of the operating theatre while minimizing the risk of bed shortages.
- Different admission strategies, stochastic and non-stochastic, are compared. It is shown that stochastic approaches may increase efficient usage of the operating theatre while reducing the risk of bed shortages; however, this comes at the expense of increased patient waiting time and less patient-friendly admission policies.
- Two room planning strategies are developed and compared. An anticipative approach is shown to be superior to a reactive one, even in a dynamic, uncertain setting.
- The complexity of patient-to-room assignment planning under a gender separation policy is determined to be NP-hard.
- An abstract resource model for operating theatre scheduling is developed, and a scalable heuristic algorithm is presented for solving it.

Figure 1: General patient admission flow. ED = Emergency Department, ICU = Intensive Care Unit, PACU = Post Anaesthetic Care Unit.

Major publications
Vancroonenburg, W., De Causmaecker, P., Vanden Berghe, G. (2013). A study of decision support models for online patient-to-room assignment planning. Ann Oper Res. doi: 10.1007/s10479-013-1478-1. Available online.
Vancroonenburg, W., Della Croce, F., Goossens, D., Spieksma, F. (2014). The Red-Blue Transportation Problem. Eur J Oper Res, 237 (3), 814-823.

Operational decision support models and algorithms for hospital admission planning and scheduling


Brecht François
Department Electrical Engineering (ESAT)

PhD defence 13 January 2015

Supervisor Prof. dr. ir. ing. Patrick Reynaert

Funding European FP7-project, M4S - Huawei

E-mail [email protected]

Introduction
The power amplifier is a key component in all wireless communication systems. In most of today's smartphones and other mobile devices, the RF Power Amplifier (PA) is predominantly designed in a more exotic technology. To reduce the cost and environmental footprint, it is desirable to completely integrate the RF PA and the entire transceiver into a single system-on-chip (SoC). In addition, the new wireless and mobile communication standards introduce new challenges for fully-integrated power amplifiers.

Research Methodology
A major challenge is the efficient generation of Watt-level output power despite the low breakdown voltage in nanometer-scale technologies. As the supply voltage drops with technology scaling, not only output power and efficiency but also the stringent linearity requirements become significantly harder to achieve. Moreover, due to the increased data rates, high linearity over a wide instantaneous bandwidth is needed in future mobile communication standards.

Four different power amplifiers have been designed for modern communication standards such as LTE, LTE-advanced and WLAN: two linear RF PAs, one linear RF PA with an integrated power detector and finally a reconfigurable digital RF PA. Each RF PA is designed to cope with several major challenges of fully-integrated RF PA design.

Results & Conclusions
Based on analyses and transistor-level simulations, each technique to improve the RF PA is optimized and finally validated by measurements. In addition, one of the proposed RF PA designs includes an RF power detector to improve the overall performance and communication robustness.

Major publications
B. François and P. Reynaert, “A Fully Integrated Watt-Level Linear 900 MHz CMOS RF Power Amplifier for LTE-Applications”, Microwave Theory and Techniques, IEEE Transactions on, vol. 60, no. 6, pp. 1878–1885, June 2012.
B. François and P. Reynaert, “Highly Linear Fully-Integrated Wideband RF PA for LTE-advanced in 180 nm SOI”, Microwave Theory and Techniques, IEEE Transactions on, accepted for publication on 5 December 2014.
B. François and P. Reynaert, “3.3 A transformer-coupled true-RMS power detector in 40nm CMOS”, in Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014 IEEE International, pp. 62–63, Feb. 2014.

Design Techniques for CMOS RF Power Amplifiers

[Figures: RF PA for LTE; on-chip power detector; RF PA for LTE-advanced; RF PA for LTE-advanced]


Armin Esmaeil Zaghi
Department Materials Engineering (MTM)

PhD defence 16 January 2015

Supervisor Prof. dr. ir. Jef Vleugels

Co-supervisor Prof. dr. ir. Jozef Poortmans

Funding Strategic Initiative Materials in Flanders

E-mail [email protected]

Introduction / Objective
The objective of this PhD was the development of a suspension-based fabrication technique for CuIn(S,Se)2 (CIS) and Cu(In,Ga)(S,Se)2 (CIGS) chalcogenide semiconductor light absorber layers for solar cell applications via printing of a nanopowder precursor suspension followed by annealing/selenization. To reach the main objectives of this research, a systematic study was conducted to provide an in-depth understanding of the materials science aspects of chalcogenide semiconductor processing.

Research Methodology
A new synthesis route was developed for high-purity chalcogenide alloy nanopowder precursors for printing CIS and CIGS absorber layers with adjustable composition, based on a sequence of dry mechanical alloying and wet ball milling in a sulfur-doped amine solution. The particle size distribution of the mechanically synthesized chalcogenide alloy nanopowders could be optimized by controlling the ball milling process parameters, such as milling speed and milling time. Suspension inks were prepared from the mechanically synthesized chalcogenide alloy nanopowders via solvent exchange and re-dispersion of the nanopowders in environmentally friendly solvents suitable for coating. The nanopowder suspension inks were coated on Mo-sputtered glass substrates via doctor blade coating, followed by a drying process, to form 1-2 μm thin, crack-free and organic-phase-free nanopowder precursor coatings.
In order to transform the nanopowder precursor coatings into 1-2 μm thin, large-crystalline CIS and CIGS semiconductor layers, annealing and heat treatments in controlled selenium vapor (selenization) were performed. A prototype infrared heating furnace for rapid thermal processing (RTP) in controlled selenium vapor was designed and built. The effect of the chalcogenide nanopowder precursor composition and the selenization conditions on the grain growth and densification of the CIS and CIGS semiconductor phases was investigated. Chalcogenide alloy nanopowder precursors with sub-stoichiometric selenium and sulfur contents (CuInSe0.5 and CuIn0.7Ga0.3S0.5) were found to be suitable precursors for the formation of large-grain CIS and CIGS semiconductor phases during selenization.

Results & Conclusions
CIS and CIGS semiconductor absorber layers were fabricated by inert gas annealing and selenization of printed CuInSe0.5 and CuIn0.7Ga0.3S0.5 nanopowder precursor coatings. The physical and optoelectronic properties of the CIS and CIGS semiconductor absorber layers were investigated by several characterization techniques. Thin film solar cell devices with the standard stack structure of Glass/Mo/CIGS/CdS/ZnO/AZO based on the printed CIS and CIGS semiconductor absorber layers showed efficiencies of 5.4 % and 6 %. Further optimization of the nanopowder precursor composition and selenization conditions is needed to enhance the quality and solar cell performance of the printed CIS and CIGS semiconductor absorber layers.

Major publication
A. E. Zaghi, M. Buffière, G. Brammertz, M. Batuk, N. Lenaers, B. Kniknie, J. Hadermann, M. Meuris, J. Poortmans, and J. Vleugels, “Mechanical synthesis of high purity Cu–In–Se alloy nanopowder as precursor for printed CISe thin film solar cells,” Advanced Powder Technology, vol. 25, no. 4, pp. 1254–1261, Mar. 2014.

Nanopowder based printed CIGS chalcogenide semiconductor absorber layers for thin film solar cell applications

Claudine Houbart
Department Architecture

PhD defence 16 January 2015

Supervisor Prof. dr. ir. Luc Verpoest

Co-supervisor Prof. dr. ir. Krista De Jonge

E-mail [email protected]

Objective
Mainly based upon the study of Raymond M. Lemaire's personal archive, handed over to KU Leuven after he became professor emeritus in 1991, this research aims at identifying Lemaire's role in the emergence of a conservative vision of urban renovation at the turn of the 1960s in Belgium and Europe. Being the first study carried out on this major figure of the international conservation scene of the second half of the twentieth century, it also lays the foundations of his early biography.

Research Methodology
The archive used for the research comprises written, graphic and photographic material: correspondence, reports, plans, pictures and drawings led to a plausible reconstruction of Lemaire's intentions and ideas. For the study of his projects in Brussels, which was a crucial step towards understanding his particular position towards the ancient city, a micro-historical approach was necessary in order to separate Lemaire's own contribution from those of the many actors and networks involved in the capital's planning during the same period.

Results & Conclusions
The research led to conclusions at different scales.

The study of Lemaire's early biography, including his role as "monuments man" during the Second World War, his participation in the post-war reconstruction and his relations with Italian scholars and architects, allowed a clarification of his role in the writing of the Venice Charter (1964) and a better understanding of his personal vision of architectural conservation.

Comparing Lemaire's projects in the fields of urban renovation and urbanism in Brussels with a careful study of the Great Beguinage of Leuven revealed how the latter had been considered by Lemaire as an ideal of the ancient city, which ought not only to be conserved, but also reproduced. This positions Lemaire's work within the emerging postmodern movement.

At a broader scale, linking Lemaire's field experience with his contemporary contributions to international doctrinal reflections threw a new light on the emergence and meaning of "integrated conservation", a notion still inspiring contemporary heritage policies.

Major publications
Cl. Houbart (2012), « Raymond M. Lemaire et les débuts de la rénovation urbaine à Bruxelles », Revue d'histoire urbaine / Urban History Review (Oct. 2012): 37-56.
Cl. Houbart (2014), « Deconsecrating a Doctrinal Monument: Raymond M. Lemaire and the Revisions of the Venice Charter », Change Over Time (fall 2014): 218-243.

[Photo: R.M. Lemaire and the Prince of Liège. KU Leuven, Universiteitsarchief]
[Photo: The Great Beguinage of Leuven, project, n.d. KU Leuven, Universiteitsarchief]

Raymond M. Lemaire (1921-1997) and the Conservation of the Ancient City: Historical and Critical Approach of his Belgian Projects in an International Perspective

Anthony Van Herrewege
Department Electrical Engineering (ESAT)

PhD defence 16 January 2015

Supervisor Prof. dr. ir. Ingrid Verbauwhede

E-mail [email protected]

Introduction / Objective
Embedded electronics, such as cellphones, are enjoying an ever greater presence in our daily lives. To protect data stored on and transmitted by these devices, cryptography is required. Two crucial building blocks of cryptography are a key generation module and a random number generator (RNG). Unfortunately, insecure designs are often used, which weakens the whole cryptographic design. Our goal is to develop secure, yet efficient, designs for these building blocks.

Research Methodology
We turn our attention to physically unclonable functions (PUFs), a relatively novel cryptographic primitive that functions as a fingerprint for electronic devices. The research goes in two directions, both with a strong focus on practicality. First of all, we design a highly secure, black-box PUF-based key generation module, named PUFKY. We attempt to reduce area by using a full-custom microprocessor for our design. The main drawback of the module is that it requires custom hardware. Thus, for the second part of our research, we look into extracting PUF behavior from commercial off-the-shelf (COTS) microcontrollers. Towards this end, the behavior of SRAM in four of the most popular families of microcontrollers is first measured at different operating temperatures. Various quality metrics are then calculated, after which we can assess the feasibility of using these microcontrollers for secure implementations of key generation and RNG blocks.
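To give a concrete feel for the quality metrics mentioned above, the sketch below (illustrative only, not the thesis code; the sample dumps are invented) computes bias, reliability (intra-device Hamming distance) and uniqueness (inter-device Hamming distance) over SRAM power-up dumps.

```python
# Hypothetical sketch (not the thesis code): typical PUF quality metrics --
# bias, reliability and uniqueness -- computed on made-up SRAM power-up dumps
# represented as bytes objects.
from itertools import combinations

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equally long dumps."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def bias(dump: bytes) -> float:
    """Fraction of bits that power up as 1 (ideally close to 0.5)."""
    ones = sum(bin(x).count("1") for x in dump)
    return ones / (8 * len(dump))

def reliability(dumps_same_device: list[bytes]) -> float:
    """Average fractional Hamming distance between repeated measurements
    of one device (ideally close to 0)."""
    bits = 8 * len(dumps_same_device[0])
    pairs = list(combinations(dumps_same_device, 2))
    return sum(hamming_distance(a, b) for a, b in pairs) / (len(pairs) * bits)

def uniqueness(reference_dumps: list[bytes]) -> float:
    """Average fractional Hamming distance between different devices
    (ideally close to 0.5)."""
    bits = 8 * len(reference_dumps[0])
    pairs = list(combinations(reference_dumps, 2))
    return sum(hamming_distance(a, b) for a, b in pairs) / (len(pairs) * bits)

if __name__ == "__main__":
    device_a = [bytes([0b10110010, 0b01100101]), bytes([0b10110011, 0b01100101])]
    device_b = [bytes([0b01011100, 0b10011010])]
    print("bias A:", bias(device_a[0]))
    print("reliability A:", reliability(device_a))
    print("uniqueness A vs B:", uniqueness([device_a[0], device_b[0]]))
```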

Results & Conclusions
The design of the microprocessor inside PUFKY requires only 68 slices, less than 1% of the area of a Virtex-6 FPGA. Our tiny design proves that PUF-based key generators are feasible for real-world applications.

The visual representations of the SRAM power-up data of two popular microcontrollers (Microchip PIC16F1825 and STMicro STM32F100R8) highlight the non-randomness in the PIC16F1825. It cannot be used to implement PUF-based designs, while the STM32F100R8 definitely can.

Major publication
R. Maes, A. Van Herrewege, and I. Verbauwhede, “PUFKY: A Fully Functional PUF-Based Cryptographic Key Generator”, in International Workshop on Cryptographic Hardware and Embedded Systems (CHES), E. Prouff and P. Schaumont, Eds., ser. Lecture Notes in Computer Science, vol. 7428, Leuven, Belgium: Springer, 2012, pp. 302–319.

Lightweight PUF-based Key and Random Number Generation


Kim Wuyts
Department Computer Science

PhD defence 16 January 2015

Supervisor Prof. dr. ir. Wouter Joosen

Co-supervisor Prof. dr. ir. Riccardo Scandariato

Funding iMinds

E-mail [email protected]

Introduction / Objective
With privacy becoming a key concern in modern society, it is important that privacy measures are strongly incorporated whenever digital data are involved. Unfortunately, privacy is often neglected when engineering software systems and only introduced as an afterthought. In recent years, a different attitude towards privacy has emerged, which is known as ‘Privacy by Design.’

Research Methodology
This thesis adheres to the Privacy by Design paradigm as it proposes and validates LINDDUN, a privacy threat modeling methodology that helps software engineers with limited privacy expertise to introduce privacy early on in the software development lifecycle.
- We presented LINDDUN, a privacy threat modeling methodology. LINDDUN is a systematic approach with a rich privacy knowledge base that forces the analyst to think about possible privacy issues in a software system.
- We executed a multi-faceted empirical evaluation of LINDDUN comprising three studies. In the first two studies, we used the empirical technique of descriptive studies, which were instrumental in understanding LINDDUN and eventually formulating research hypotheses to be further investigated by means of comparative experiments. In the third study, we investigated the reliability of LINDDUN (in terms of coverage of the threat space). In particular, we set out to answer five research questions, related to correctness, completeness, productivity, ease of use, and reliability.

Results & Conclusions
- Encouraging results of the descriptive studies: a correctness rate of 70%, positive feedback on the ease of use, and promising results of the reliability study.
- The studies identified some shortcomings, which were tackled by incorporating a number of changes that improve the overall performance of LINDDUN.
- LINDDUN is a solid privacy threat modeling methodology that aids analysts in the elicitation of privacy issues in software systems.

[Figure: The LINDDUN threat modeling methodology]

Major publications
Kim Wuyts, Riccardo Scandariato, Wouter Joosen, Empirical evaluation of a privacy-focused threat modeling methodology, The Journal of Systems and Software, volume 96, pages 122-138, 2014.
Mina Deng, Kim Wuyts, Riccardo Scandariato, Bart Preneel, Wouter Joosen, A privacy threat analysis framework: supporting the elicitation and fulfillment of privacy requirements, Requirements Engineering, volume 16, issue 1, pages 3-32, 2011.

Privacy Threats in Software Architectures


Seyyed Mohammad Hosein Mirhoseini

Department Electrical Engineering (ESAT)

PhD defence 16 January 2015

Supervisor Prof. dr. ir. Koen Van Reusel

Co-supervisor Prof. dr. ir. Johan Driesen

Email [email protected]

Introduction / Objective
This work investigates the electromagnetic interaction with liquid metals. AC magnetic fields can induce forces in a conducting material; in liquid metals this force appears specifically as a deformation of the surface. The surface deformation of the liquid metal is important in metallurgical applications. The main objective of this work was to study liquid metal deformation by means of the magnetic pressure produced by a high-current gapped inductor.

Research Methodology
The problem has been solved by analytical and experimental methods. The first analytical model of the system is based on the Young-Laplace equation, which expresses the pressure equilibrium over the surface of the liquid metal; the deformation is calculated by solving the resulting differential equation. In the second analytical approach, the deformation is obtained by calculating the minimum of the total energy contribution function of the system, including the deformed liquid metal and the induced field. An experimental setup has been implemented to validate the analytical approaches.
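For orientation, the pressure balance underlying the Young-Laplace approach can be written in a generic textbook form (a sketch, not the exact formulation used in the thesis): the magnetic pressure of the fringing field is balanced by the capillary pressure and the hydrostatic pressure of the deformed pool.

```latex
% Generic sketch of the surface pressure balance (not the thesis' exact form):
% magnetic pressure = capillary (Young-Laplace) pressure + hydrostatic pressure.
\[
  \frac{B^{2}}{2\mu_{0}} \;=\; \gamma\,\kappa(h) \;+\; \rho\, g\, h
\]
% B: local flux density of the fringing field, \mu_0: vacuum permeability,
% \gamma: surface tension, \kappa(h): curvature of the deformed surface h,
% \rho: liquid metal density, g: gravitational acceleration.
```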

Results & Conclusions
• Magnetic field calculations show a specific singular behavior of the magnetic field at the edge of the liquid metal pool (Fig. 1).
• The deformation calculated by the analytical approaches is proportional to the magnetic pressure level produced by the gapped inductor. The solution of the Young-Laplace equation is able to calculate the liquid metal deformation in 2D. The fixed-volume constraint causes a squeezing effect on the liquid metal pool with increasing magnetic pressure. Results of the minimum energy method are shown in Fig. 2.
• Experimental results show a squeezing effect of the magnetic pressure over the surface of the liquid metal pool. The deformation is maximum at the middle of the inductor poles (Fig. 3).

Figure 1: Magnetic field deformation in presence of the liquid metal pool

Major publication
S. Mirhoseini, K. Van Reusel and J. Driesen, "Investigation of Meltpool Deformation by Magnetic Pressure: Analytical and Experimental," Proceedings of the 7th International Modeling of Electromagnetic Processing (MEP), September 16-19, 2014, Hanover, Germany.

Mathematical and Experimental Approach to Magnetohydrodynamic Problems: Meltpool Control and Thermoacoustic-MHD Generator

Figure 2: Total energy of the system vs the deformation index. The minimum of the energy at each current level corresponds to the liquid deformation at that value of the magnetic pressure.
Figure 3: Liquid metal pool deformation by means of magnetic pressure. The induced magnetic force by the fringing field of the inductor squeezes the liquid metal pool.


11

Frederik Vanhollebeke
Department Mechanical Engineering

PhD defence 21 January 2015

Supervisor Prof. dr. ir. Wim Desmet

Co-supervisor Prof. dr. ir. Dirk Vandepitte

Funding IWT – Baekeland IWT090730

E-mail [email protected]

Introduction / Objective
This research develops a methodology and modelling approach to lower the mechanical noise of the drive train of a modern wind turbine, with a strong focus on the wind turbine gearbox. Although this mechanical noise is not the main noise source, it could, due to its tonal nature, result in non-conformity with local noise regulations. This becomes more stringent as wind turbines are installed closer to urbanised areas. This research is motivated by inefficiencies in the current trial-and-error approach to reduce or remove the audible tonalities from the wind turbine noise.

Research Methodology
To obtain an in-depth insight into the dynamic behaviour of a wind turbine gearbox, a thorough multi-level modelling and experimental validation strategy is followed. In each level, an individual part, a sub-assembly, the gearbox or two gearboxes on the end-of-line test rig are modelled, investigated and, if possible, experimentally validated. This approach allows identifying the components which contribute the most to the dynamic behaviour.

Results & Conclusions
Using this methodology and experimentally validated modelling approach, the dynamic behaviour of the wind turbine gearbox can be assessed. Two design optimisations clearly illustrate the potential of pro-actively using virtual simulation models to optimise the noise and vibration behaviour of the wind turbine gearbox during its design.

Major publication
F. Vanhollebeke, P. Peeters, J. Helsen, E. Di Lorenzo, S. Manzato, J. Peeters, D. Vandepitte, and W. Desmet. "Large scale validation of a flexible multibody wind turbine gearbox model". Accepted for: Journal of Computational and Nonlinear Dynamics - Special Issue on Wind Turbine Modeling and Simulation (2014)

Dynamic analysis of a wind turbine gearbox
Towards prediction of mechanical tonalities

12

Dominick Vanthienen
Department Mechanical Engineering

PhD defence 23 January 2015

Supervisor Prof. dr. ir. Joris De Schutter

Co-supervisor Prof. dr. ir. Herman Bruyninckx

Funding FWO project G040410

E-mail [email protected]

Introduction / Objective
Robots are becoming more autonomous and complex, integrating the knowledge of many areas of expertise. Moreover, they increasingly operate in an environment shared with humans. An example of this evolution are service robots, which help humans in their daily activities and interact with them physically and cognitively.
The first objective is to develop a systematic approach to deal with the outlined integration challenge. Moreover, this approach should result in more flexible, robust, reusable, and adaptable software.
The second, complementary objective is to develop a controller that allows a robot to interact physically with humans or its environment and which has sufficient performance for service robot tasks, such as pushing a button. Moreover, it should not require a force sensor, nor a precise dynamic model of the robot, environment, or contact point.

Research Methodology
Based on metamodeling and the 5C approach to separation of concerns, the Composition Pattern is defined, as shown to the right. It is used to structure and formalize constraint-based programming in a domain-specific language (DSL), and it is applied as an architectural pattern to refactor the iTaSC constraint-based programming software framework. Secondly, a novel force-sensorless force-torque control scheme for resolved-velocity robots with proportional gains is developed. It features a reference adaptation factor, which can be applied to impose a desired transient behavior on the applied forces and torques.

Results & Conclusions
The DSL enabled a non-expert to reprogram the constraint-based programming application shown to the right in a fast manner, since it (i) provided a template of the application, (ii) enabled model verification, and (iii) enabled automatic code instantiation to the refactored iTaSC framework. Experiments validate the applicability of the control scheme to service robot pushing and table wiping tasks. The controller is integrated into the comanipulation application shown to the right.

Force-sensorless human-robot comanipulation. A robot helps a human carrying a plate in a restaurant, while avoiding obstacles, maintaining visual contact with the operator, and avoiding unnatural poses. Photo by KU Leuven - Rob Stevens.

Major publication

Vanthienen, D., Klotzbücher, M., Bruyninckx, H. (2014). The 5C-based architectural Composition Pattern: lessons learned from re-developing the iTaSC framework for constraint-based robot programming. JOSER: Journal of Software Engineering for Robotics, 5 (1), 17-35.

Composition Pattern for Constraint-based Programming
with application to force-sensorless robot tasks

13

Benjamin Dupont

Department Electrical Engineering (ESAT)

PhD defence 27 January 2015

Supervisor Prof. dr. ir. Ronnie Belmans

E-mail [email protected]

Introduction / Objective
The need for flexibility within power system operation is growing as more intermittent renewables with limited controllability are integrated. While traditionally this need is met by supply-side resources, the demand side also has intrinsic flexibility available which could be tapped, often referred to as demand response (DR). Although policy makers and industry recognize the value of DR, its use and understanding remain limited. This thesis enhances the understanding of DR by addressing three knowledge gaps, ranging from designing dynamic pricing (DP) schemes to incentivize DR, through quantifying the residential load modifications these cause, to determining the final benefits this brings for households and power system operation and investment.

Research Methodology
The thesis is divided in three main parts:
• Part 1 analyses the fundamentals of DR and DP based on a theoretical framework covering the principles of tariff design.
• Part 2 quantifies DR resulting from different tariff schemes based on theoretical simulation and practical evidence from the Linear pilot project. Optimization methods and statistical analysis are used.
• Part 3 describes DR benefits on the power system level, using real options theory and unit commitment and economic dispatch models, as sketched below.
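As a toy illustration of the economic dispatch reasoning used in Part 3, the sketch below solves a two-generator dispatch as a linear program and compares the operating cost of an inflexible load profile with a DR-shifted profile of equal energy. Generator costs, capacities and the load profiles are invented, and the model is far simpler than the ones in the thesis.

import numpy as np
from scipy.optimize import linprog

def dispatch_cost(demand):
    T = len(demand)
    # decision variables: g_cheap[0..T-1] followed by g_peak[0..T-1]
    c = np.r_[np.full(T, 20.0), np.full(T, 80.0)]        # assumed marginal costs (EUR/MWh)
    A_eq = np.hstack([np.eye(T), np.eye(T)])              # g_cheap + g_peak = demand in every hour
    bounds = [(0, 60)] * T + [(0, 200)] * T               # cheap unit capped at 60 MW (assumed)
    res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
    return res.fun

inflexible = np.array([40, 45, 90, 95, 50, 40.0])         # MW, peaky profile without DR
dr_shifted = np.array([55, 60, 70, 75, 55, 45.0])         # MW, same total energy, peak shaved by DR
print("operating cost without DR:", dispatch_cost(inflexible))
print("operating cost with DR   :", dispatch_cost(dr_shifted))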

Results & Conclusions
• Momentum towards implementation of residential DR is building. The enabler of this momentum is the rise of technology, as advanced metering, ICT and automation have taken a leap.
• Monetary benefits under RTP resulting from DR are substantial. If these benefits do not come at the cost of comfort, DR participation seems viable, especially for battery electric vehicle owners. Average benefits are lower for DR with wet appliances, yet they vary largely.
• DR proves an efficient means to integrate renewable energy resources.
• DR reduces generation operating costs significantly. Moreover, DR reduces the generation investment need in quantity and time.
• To reach these benefits, sufficient dynamics of the tariff schemes is required. Moreover, automation of demand seems essential.

Thesis overview

Major publications
B. Dupont, K. Dietrich, C. De Jonghe, A. Ramos, and R. Belmans, "Impact of residential demand response on power system operation: A Belgian case study," Applied Energy, vol. 122, pp. 1-10, June 2014.
B. Dupont, C. De Jonghe, L. Olmos and R. Belmans, "Demand response with locational dynamic pricing to support the integration of renewables," Energy Policy, vol. 67, pp. 344-354, April 2014.

Residential Demand Response Based on Dynamic Electricity Pricing: Theory and Practice

Thesis structure: Part 1, Fundamentals of demand response (DR) and dynamic pricing (DP) (Chapter 1, DR: theory & practice; Chapter 2, DP: theory & practice); Part 2, Residential DR based on DP (Chapter 3, Development of DP schemes; Chapter 4, DR simulation and practical evidence; Chapter 5, DR quantification with price elasticities); Part 3, Power system benefits of residential DR (Chapter 6, Impact on power system operation; Chapter 7, Impact on generation investment decisions).

Example of a real time pricing scheme, distinguishing between the different tariff components.

14

Hakan Ergun

Department Electrical Engineering (ESAT)

PhD defence 29 January 2015

Supervisor Prof. dr. ir. Ronnie Belmans

E-mail [email protected]

Introduction / Objective
Increased use of renewable energy sources and the creation of an internal electricity market have resulted in higher and more variable power flows in the transmission grid. Due to a sustained climate policy, the share of renewable energy sources in electricity generation will increase, making new transmission system investments inevitable.

This dissertation provides the building blocks of a planning methodology to optimize future investments in the transmission grid by considering several technical, spatial and temporal aspects. It delivers a stepwise transmission system investment plan containing the optimal time point, power rating, transmission route and transmission technology for new investments.

Research Methodology
The planning structure shown is used in order to deal with the large number of optimization variables and the non-linearity. In the first step, a market analysis is performed, using limited grid information and determining the inter-connection power requirement. In the second step, a network abstraction is performed based on the market analysis and using a detailed representation of the transmission grid; this way, the grid is reduced to a set of possible injections. In the last step, an optimization is performed in order to determine the best transmission topology, technology, routing and investment time point to fulfil the required inter-connection capacity.

Results & Conclusions
The developed methodology provides a stepwise investment plan indicating which transmission lines should be built where. Both overhead and underground HVAC and HVDC transmission are considered as possible technology options. The methodology has been tested at Elia, the Belgian Transmission System Operator, delivering satisfactory results.

Major publication
Ergun, H., Rawn, B., Belmans, R., Van Hertem, D. (2014). Technology and Topology Optimization for Multizonal Transmission Systems. IEEE Transactions on Power Systems, 29 (5), 2469-2477.

Grid Planning for the Future Grid
Optimizing Topology and Technology Considering Spatial and Temporal Effects

Structure of the developed methodology

Stepwise investment plan to establish additional 30 GW of transmission capacity between France and Spain. Red: HVAC, White: HVDC, Circles: overhead lines, solid lines: underground cables

15

Tassos Natsakis
Department Mechanical Engineering

PhD defence 30 January 2015

Supervisors Prof. dr. ir. Jos Vander Sloten

Prof. dr. Ilse Jonkers

Funding Baron Berghmans – dr. Dereymaeker research chair

E-mail [email protected]

Introduction / Objective
Osteoarthritis (OA) is a common degenerative joint disease of the ankle joint of the foot, with an important economic and societal burden. Its aetiology is poorly understood; however, a link between aberrant loading conditions and the onset of OA has been suggested. Furthermore, surgical treatment options for ankle OA (i.e. Total Ankle Arthroplasty (TAA)) exhibit high failure rates. We therefore quantified the intra-articular pressure distribution in the human ankle during in vitro gait simulations, to investigate whether the onset of OA or the failure of TAA are related to joint loading conditions.

Results & Conclusions
A significant increase of 3.16 MPa in peak pressure in the TAA joint, compared to the native ankle, was measured. Such an increase in peak pressure could partially explain the high failure rates reported for TAA. Furthermore, the force delivered by several muscle groups was found to affect significantly the pressure magnitude and distribution (figure 2). This information could assist in constructing muscle training strategies for creating more favourable loading conditions in the ankle joint, reducing the risk of OA development or decelerating its progression.

Figure 1: The gait simulator used for performing the measurements. A frame supports the carriage that the cadaveric specimens are mounted on. Using electric motors and pneumatic actuators, the motion and muscle forces are simulated in real speed.

Major publication
Natsakis, T., Burg, J., Dereymaeker, G., Jonkers, I., & Vander Sloten, J. (2015). Inertial control as novel technique for in vitro gait simulations. Journal of Biomechanics, 48(2), 392–395.

In vitro analysis of dynamic foot biomechanics using a gait simulator and intra-articular pressure measurements

Figure 3: Intra-articular pressure distribution projected on the articular surface of the talus

Figure 2 axes: normalised peak pressure versus normalised muscle force.

Figure 2: Effect of force from triceps surae muscle on peak pressure in the native (red) and TAA (red) ankle, in three positions (beginning, middle and end of stance phase).

Research Methodology
A custom-built cadaveric gait simulator (figure 1) was used to perform gait with cadaveric specimens. A specimen-specific kinematics model was developed, to accommodate for the geometric differences among specimens. Furthermore, as the goal of the research was to investigate TAA, a methodology to perform simulations for different conditions of the specimens was developed and used. The intra-articular pressure in the ankle joint was measured during the simulations with a Tekscan #5033 sensor. The differences before and after implanting the TAA were quantified. Furthermore, the influence of muscle force on the topology of the loading conditions was investigated.

16

David Jaluvka
Department Computer Science

PhD defence 02 February 2015

Supervisor Prof. dr. ir. Stefan Vandewalle

Co-supervisor Dr. ir. Gert Van den Eynde

Funding SCK•CEN

E-mail [email protected]

Introduction / Objective
This dissertation develops a core management tool capable of optimizing reactor-core fuel loadings for MYRRHA, the future fast-spectrum research facility currently under development at SCK•CEN, Belgium. Such a core management tool is needed for designing highly efficient loading patterns that reflect various performance objectives of the multipurpose machine. The optimization problem to be solved is a highly complex, multi-modal, non-convex, nonlinear combinatorial problem.

Research Methodology
• The MYRRHA loading pattern optimization problem (LPOP) is solved using two population-based metaheuristic optimization methods: Genetic Algorithm (GA) and Ant Colony Optimization (ACO).
• Special MYRRHA reactor-core neutronics and thermal-hydraulics models are developed that are used by the optimization methods to evaluate candidate loading patterns during the iterative optimization process. The employed models are sufficiently accurate and fast enough for optimization purposes.
• The optimization methods and reactor physics models are applied to solve a constrained MYRRHA LPOP that aims at maximizing the facility's irradiation performance, expressed in terms of the fast-neutron fluence achieved in reactor experimental channels (IPS). Three constraint types are included in the problem: a limited number of fuel assemblies (FAs) of different types, a maximum allowed fuel-cladding temperature, and an end-of-cycle criticality condition.
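The sketch below illustrates what a genetic algorithm with an elitist replacement strategy looks like on a toy loading-pattern problem; the core layout, fuel-assembly types, fitness function and constraint are invented stand-ins for the actual MYRRHA neutronics and thermal-hydraulics evaluation and are not taken from the thesis.

import random

POSITIONS, TYPES = 12, [0, 1, 2]           # 0 = fresh, 1 = once-burnt, 2 = twice-burnt (assumed)
CHANNEL_WEIGHT = [3, 1, 1, 3, 1, 1, 3, 1, 1, 3, 1, 1]   # assumed importance of each core position
MAX_FRESH = 5                               # toy constraint: at most 5 fresh assemblies

def fitness(pattern):
    penalty = 100 * max(0, pattern.count(0) - MAX_FRESH)        # soft constraint handling
    worth = sum(w * (2 - t) for w, t in zip(CHANNEL_WEIGHT, pattern))
    return worth - penalty

def evolve(pop_size=40, generations=200, elite=4, p_mut=0.1):
    pop = [[random.choice(TYPES) for _ in range(POSITIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [p[:] for p in pop[:elite]]                        # elitist population replacement
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)         # parents from the better half
            cut = random.randrange(1, POSITIONS)
            child = a[:cut] + b[cut:]                            # one-point crossover
            child = [random.choice(TYPES) if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("best toy loading pattern:", best, "fitness:", fitness(best))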

Results & Conclusions
• The RELOAD-M core management tool is developed that solves the MYRRHA LPOP using GA and ACO.
• It is found that the GA with an elitist population-replacement strategy gives the most consistent results and performs best when applied to the MYRRHA LPOP.
• The obtained results show that both GA and ACO provide feasible solutions that outperform intuitively designed loading patterns. The achieved improvement is very limited, however.

MYRRHA nuclear reactor.

Major publication
D. Jaluvka, G. Van den Eynde, S. Vandewalle (2013). Development of a core management tool for MYRRHA. Energy Conversion and Management, 74, 562–568.

Development of a core management tool for the MYRRHA irradiation research facility

RELOAD-M high-level design.

Best solution found. 1/3 core symmetry assumed. Different colors indicate different FA types.

17

Tim Willemen
Department Electrical Engineering (ESAT)

PhD defence 04 February 2015

Supervisors Prof. dr. ir. Jos Vander Sloten
Prof. dr. ir. Bart Haex
Prof. dr. ir. Sabine Van Huffel

Funding IWT; iMinds

E-mail [email protected]

Introduction / Objective
Sleep disorders such as sleep apnea, insomnia and restless legs syndrome disrupt people's healthy pattern of sleep. Most clinical diagnoses revolve around complaints of excessive daytime sleepiness. People usually wait quite long, however, before contacting professional help, and might only do so when complaints have gone from minor to serious. Current methods for objective diagnosis of sleep disorders are too costly, impractical and intrusive, or lack sufficient information and/or accuracy, to be used for long-term screening or follow-up after diagnosis. This PhD work hypothesizes that automated cardiac, respiratory and movement-based analysis could be able to bridge this gap, especially when all signals are monitored off-body in a mechanical way.

Research Methodology
The first part of this work investigated the ability to use cardiac, respiratory and movement activity for sleep monitoring in healthy subjects and subjects with sleep apnea (wake, REM, light sleep, deep sleep, apneic breathing). The models were trained with and validated against gold-standard polysomnography annotations, derived by sleep experts.
The second part of this work investigated the ability to monitor cardiac, respiratory and movement activity in an off-body mechanical way. A pressure-based ballistocardiographic setup was implemented inside a bed, measuring fluctuations in pressure difference between two air volumes underneath the chest area of the subject (cfr. figure). An adaptation of the Pan-Tompkins algorithm was proposed for accurate detection of cardiac inter-beat intervals.
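As an illustration of the classic Pan-Tompkins processing stages mentioned above (band-pass filtering, differentiation, squaring, moving-window integration and peak picking), the sketch below applies them to a synthetic pressure-like signal. The sampling rate, cut-off frequencies, thresholds and the signal itself are assumptions for illustration, not the adapted algorithm of the thesis.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250.0                                            # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
beats = np.maximum(0.0, np.sin(2 * np.pi * 1.2 * t)) ** 20      # synthetic ~72 bpm pulse train
signal = beats + 0.4 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)

b, a = butter(2, [0.7, 10.0], btype="band", fs=fs)    # band-pass around cardiac frequencies
x = filtfilt(b, a, signal)
x = np.diff(x, prepend=x[0]) ** 2                     # differentiate and square
win = int(0.15 * fs)                                  # 150 ms moving-window integration
x = np.convolve(x, np.ones(win) / win, mode="same")

peaks, _ = find_peaks(x, height=0.3 * x.max(), distance=int(0.4 * fs))
ibi = np.diff(peaks) / fs                             # inter-beat intervals in seconds
print("mean inter-beat interval: %.2f s" % ibi.mean())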

Results & Conclusions
With respect to sleep stage classification, the large amount of variability in cardiac and respiratory functioning among different subjects led to difficult-to-avoid misclassifications. For healthy subjects, agreement values around 80% still confirmed the potential of the method. For subjects with sleep apnea, the presence of apneic events proved to have a significant impact on the model's performance, with agreement values dropping to around 70%.
With respect to apneic breathing detection, accuracy varied with the proportion of hypopneic events present in the dataset, stressing the possible need of pulse oximetry for reliable detection of hypopneic events. Obtained results varied between 90% and 75%.
With respect to the ballistocardiographic setup, respiratory and movement activity could be easily detected. In between episodes of movement activity, an average correspondence of 97% with respect to inter-beat intervals detected by electrocardiogram proved the potential of the method.

Major publicationT. Willemen, D. Van Deun, V. Verhaert, M. Vandekerckhove, V. Exadaktylos, J. Verbraecken, S. Van Huffel, B. Haex, J. Vander Sloten (2013). An evaluation of cardio-respiratory and movement features with respect to sleep stage classification. IEEE Journal of Biomedical and Health Informatics, 18 (2), 661-669.

Biomechanics based analysis of sleep

18

Henk Motte
Department Electrical Engineering (ESAT)

PhD defence 05 February 2015

Supervisor Prof. dr. ir. Michiel Steyaert

Co-supervisor Prof. dr. ir. Lieven De Strycker, Ir. Olivier Chasles

Funding IWT

Introduction / Objective
As the demands in terms of performance and cost on digital communication networks continue to increase, alternatives for the current short- to mid-range interconnects (1-150 m) become more interesting. Today's wireless options suffer from a crowded shared medium, whereas wired (UTP, coax, PLC) solutions are often bulky and/or expensive. A cheaper, smaller wired alternative for these electrical interconnects could be found in the use of optical communication over large-core (1 mm) Step Index Plastic Optical Fiber (SI-POF). Currently available implementations of such links provide insufficient communication range and/or speed, while higher-performance experimental implementations use too complex and/or expensive techniques. Therefore, this PhD aims to improve the performance of available SI-POF links in an economically viable way and with a limited impact on existing systems.

Research Methodology
Using simulation models of this specific type of fiber, the performance limits for different modulation schemes and filtering techniques are examined. Further, these simulations help to determine the preconditions and optimize the critical design parameters to come to an improved system:
• Indicate the best suited system architecture
• Optimize the system design parameters
• Translate to a transistor-level implementation

Results & Conclusions
The best trade-off between increased performance and limited complexity and cost was found in receiver-side electronic equalization. A two-stage, adaptive analog equalizer was implemented and demonstrated the reduction of the SI-POF induced data rate limitation. This indicates the performance can be improved using relatively simple techniques. The limiting factor for further system improvement is shifted from the fiber's limited bandwidth to the receiver's random noise generation.
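The following sketch shows the equalization principle on a toy digital model: a least-mean-squares (LMS) adaptive FIR filter undoing the intersymbol interference of an assumed dispersive channel. The thesis implements a two-stage adaptive analog equalizer in hardware; this example only illustrates why equalization recovers the data rate, and the channel taps, noise level and filter length are made up.

import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 5000) * 2.0 - 1.0            # NRZ symbols +-1
h = np.array([0.5, 0.8, 0.4, 0.2])                      # assumed dispersive, fiber-like response
rx = np.convolve(bits, h)[:bits.size] + 0.02 * rng.standard_normal(bits.size)

taps = np.zeros(9)                                      # 9-tap adaptive FIR equalizer
buf = np.zeros(9)
mu, delay, raw_delay = 0.01, 4, 1                       # LMS step, equalizer delay, channel main tap
err_raw = err_eq = 0
for n in range(bits.size):
    buf = np.r_[rx[n], buf[:-1]]                        # shift register of received samples
    y = taps @ buf                                      # equalizer output
    if n >= delay:
        taps += mu * (bits[n - delay] - y) * buf        # trained LMS tap update
        err_eq += int(np.sign(y) != bits[n - delay])
    if n >= raw_delay:
        err_raw += int(np.sign(rx[n]) != bits[n - raw_delay])
print("symbol errors without equalizer: %d, with LMS equalizer: %d" % (err_raw, err_eq))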

Major publications
H. Motte, M. Steyaert, O. Chasles, J.-P. Goemaere, N. Stevens and L. De Strycker (2013). Linear equalization filter for PMMA fiber channels, in Semiconductor Conference (CAS), 2013 International, vol. 2, pp. 207-210, 2013.
H. Motte, M. Steyaert, O. Chasles, J.-P. Goemaere, N. Stevens and L. De Strycker (2013). Electronic Dispersion Correction Circuit for Plastic Optical Fiber Channels, in Intelligent Signal Processing and Communications Systems (ISPACS), 2013 International Symposium on, vol. 2, pp. 743-748.

Fully Integrated, High Performance Building Blocks for Multimedia Communication over POF

Simulated results of a 150 Mbps signal after 150 m of SI-POF (left) and the same signal after equalization with an optimized analog equalizer (right).

19

Pavlo Serhiienko
Department Electrical Engineering (ESAT)

PhD defence 10 February 2015

Supervisor Prof. dr. Guy Vandenbosch

Co-supervisor Prof. dr. Yuriy Prokopenko

Funding National Technical University of Ukraine “KPI”

E-mail [email protected]

Introduction / Objective
This Ph.D. thesis is devoted to the investigation of the influence of geometrical and electro-physical parameters of tunable microstrip resonators on their resonant frequency, quality factor, and coupling coefficient between the resonator and the microstrip line. The resonance frequency tuning was performed by introducing a tunable heterogeneity between the signal electrode and the substrate.

Research Methodology
An analysis method is proposed based on the effective permittivity, characteristic impedance and loss of the microstrip line with a tunable air heterogeneity. The scattering matrix is derived from finite element models. A verification of the method is performed through experiments. The influence of the physical and topological parameters of the microstrip line with air heterogeneity on its equivalent parameters is analyzed. Finite element and finite integration technique results are derived with CST Microwave Studio 2011.
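For orientation, the sketch below evaluates textbook closed-form (Hammerstad-type) approximations for the effective permittivity and characteristic impedance of a plain microstrip line. The dimensions are invented, the formulas assume w/h >= 1, and they do not include the tunable air heterogeneity, which in the thesis is handled with finite element models.

import math

def microstrip(w, h, eps_r):
    # Hammerstad-style approximations for a microstrip of width w on a substrate of height h
    u = w / h
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 / u)
    z0 = 120 * math.pi / (math.sqrt(eps_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))
    return eps_eff, z0

eps_eff, z0 = microstrip(w=1.1e-3, h=0.635e-3, eps_r=9.8)   # alumina-like substrate, assumed
print("eps_eff = %.2f, Z0 = %.1f ohm" % (eps_eff, z0))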

Results & Conclusions
Micromechanically tunable microstrip resonance elements are developed and experimentally investigated. Models of the microstrip resonance elements which simplify the calculation of the characteristics are created. The influence of the geometrical and electro-physical parameters of tunable microstrip stub and ring resonators on the resonance frequency, unloaded quality factor and coupling coefficient is analyzed.

Resonators based on microstrip lines with a tunable air heterogeneity provide smooth tuning in a wide frequency range without unloaded quality factor deterioration. This provides the opportunity to reduce the cost, weight and size of this type of devices while working in different frequency ranges.

Major publication
Serhiienko P. Novel Concept for Microstrip Stub Resonant Frequency Control / P. Serhiienko, Yu. Prokopenko, G. Vandenbosch // Electronics and Nanotechnology: ELNANO. — 2013. — P. 94—98.

Micromechanically tunable microwave resonators based on microstrip lines

20

Guoying Zhao
Department Mechanical Engineering

PhD defence 12 February 2015

Supervisor Prof. dr. ir. Paul Sas

Co-supervisor Dr. ir. Neven Alujevic

Funding CSC & IWT

E-mail [email protected]

Introduction / Objective
High-level noise not only affects hearing, it can also drive up blood pressure, disrupt sleep, and compromise the ability to work and learn. Nowadays, legal regulations have more and more restricted the allowable levels regarding workers' exposure to noise. In order to meet these requirements, effective noise control measures need to be developed. The presented research therefore focuses on suppressing the noise radiated from structural frames in rotating machinery using an active structural acoustic control strategy.

Research Methodology
The control of the noise radiation is implemented using a pair of piezo-based rotating inertial actuators (PBRIA). Experimental modal analysis and blocked force measurements have been performed to assess the dynamics of the PBRIAs and the test bed. Two control strategies, adaptive-passive control and feedforward control, have been investigated in this thesis. Theoretical analysis has firstly been conducted to demonstrate the working principles. Then, experimental validation has been carried out.
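A minimal numerical illustration of the adaptive-passive idea is given below: the receptance of a main structure with an attached absorber drops sharply once the absorber is retuned to the machine's running frequency. Masses, stiffnesses, damping values and the 35 Hz running frequency are invented numbers, not the parameters of the PBRIA prototype.

import numpy as np

def receptance(w, m1=10.0, k1=4.0e5, c1=20.0, m2=0.5, k2=2.0e4, c2=2.0):
    # 2-DOF dynamic stiffness: main structure (m1, k1, c1) with an absorber (m2, k2, c2) attached
    Z11 = -w**2 * m1 + 1j * w * (c1 + c2) + (k1 + k2)
    Z12 = -(1j * w * c2 + k2)
    Z22 = -w**2 * m2 + 1j * w * c2 + k2
    return abs(Z22 / (Z11 * Z22 - Z12**2))               # |x1 / F1|, response per unit force on m1

w_run = 2 * np.pi * 35.0                                  # machine running frequency (assumed)
detuned = receptance(w_run)                               # absorber left at its default stiffness
tuned = receptance(w_run, k2=0.5 * w_run**2)              # retuned so that sqrt(k2/m2) = w_run
print("response at 35 Hz: detuned %.2e m/N, retuned %.2e m/N" % (detuned, tuned))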

Results & Conclusions
• Development of a framework for active structural acoustic control of rotating machinery;
• Development of a piezo-based rotating inertial actuator prototype;
• Theoretical analysis and experimental implementation of an adaptive-passive control strategy and a feedforward control strategy;
• In principle, the proposed approaches and methods are applicable to suppress noise radiation from rotating machinery.

Major publication
Zhao, G., Alujevic, N., Depraetere, B., Sas, P. (2014). Dynamic analysis and H2 optimization of a piezo-based tuned vibration absorber. Journal of Intelligent Material Systems and Structures, doi: 10.1177/1045389X14546652.

Active Structural Acoustic control of Rotating Machinery using Piezo-Based Rotating Inertial Actuators

Figure: PBRIA blocked force frequency response functions, amplitude (dB re. 1 N/V) and phase (degrees) versus frequency (Hz), 0-2000 Hz; adaptive passive control and active control.

21

Figure 1 data (kg CO2-eq. per Mg poultry manure, for combustion min/max and land spreading LS min/max): contributions from N2O emission, lime production, NH3 production, heat from natural gas, transport, start-up fuel oil, sand production, NH4NO3 production, and saved emissions.

Pieter Billen
Department Chemical Engineering

PhD defence 17 February 2015

Supervisor Prof. dr. Carlo Vandecasteele

Co-supervisor Dr. ing. Jo Van Caneghem

E-mail [email protected]

Introduction / Objective
Due to excessive fertilization of agricultural land in areas known for intensive livestock breeding, a need for alternative manure treatment methods exists. Combustion of manure, a renewable fuel, in a fluidized bed allows to produce electricity, but is subject to technological problems related to the ash composition. More specifically, low-melting compounds in the ash cause agglomeration and deposition in the installation, potentially causing a loss of fluidization. This thesis shows that combustion of manure is, compared to land spreading, a sustainable technology, which is improved by avoiding severe ash problems.

Research Methodology
The environmental impact was evaluated in a life cycle perspective, using a zero-burden approach for the manure. For the energy recovery during combustion, a consequential approach was used, meaning that in the best case emissions from coal combustion are avoided and in the worst case emissions from natural gas combustion. Emission data were obtained from an operating combustion plant in Moerdijk (NL), and from the literature for land spreading.
The agglomeration/deposition of ash was investigated in a step-wise approach: element analysis of the non-agglomerated and agglomerated ash indicated the most important elements, and was used for thermodynamic calculations to determine which salts are formed. Phase diagrams predict the melt behavior. The thermodynamic findings were confirmed by lab experiments and full-scale tests.

Results & Conclusions
The environmental impact of combusting manure is lower than that of land spreading, because renewable, CO2-neutral electricity is produced, and due to the high NH3, N2O and NOx emissions of manure spread on land, as illustrated by the GHG accounting in Fig. 1.
Agglomeration/deposition was explained via two mechanisms: coating-induced and melt-induced agglomeration. A holistic theory of all occurring reactions and morphological consequences, as shown in Fig. 2 for coating-induced agglomeration, was developed and confirmed by experiments. Appropriate countermeasures were successfully tested.

Fig. 1. Impact in the category climate change for combustion (left) versus land spreading (LS, right)

Major publication
Billen, P., Creemers, B., Costa, J., Van Caneghem, J., Vandecasteele, C. (2014). Coating and melt induced agglomeration in a poultry litter fired fluidized bed combustor. Biomass & Bioenergy, 69, 71-79.

Fluidized bed combustion of manure: Technology improvement and sustainability assessment

Fig. 2. Sequence of reactions leading to ash deposition based on the coating induced agglomeration mechanism

22

Enrico Di Lello
Department Mechanical Engineering

PhD defence 18 February 2015

Supervisor Prof. dr. ir. Herman Bruyninckx

Co-supervisor Prof. dr. ir. Tinne De Laet

Funding N/A

E-mail [email protected]

Introduction / Objective
The main goal of this thesis is to investigate how Bayesian time-series models can be used in engineering/medical applications to leverage the available domain-specific expert knowledge. We focused on four applications that require the interpretation of multi-dimensional time-series: automatic segmentation of healthy human gait, classification of pathological gait patterns in children with Cerebral Palsy (CP), fault detection and recognition in industrial robotic tasks, and gas sensing in Open Sampling Systems (OSS) using Metal OXide (MOX) sensors.

Research Methodology
For all the previously mentioned applications, two Bayesian time-series models were used: the sticky Hierarchical Dirichlet Process Hidden Markov Model (s-HDP-HMM) was used for gait segmentation, gait classification and fault detection; the Augmented Switching Linear Dynamical System model (aSLDS) was used for the gas sensing application. In particular, the s-HDP-HMM was combined with a linear regression step to allow the model to decompose joint angle time-series in polynomial shape primitives. An ad-hoc transition model was developed for the aSLDS to model the switching dynamical behaviour of MOX sensors.
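To illustrate the basic mechanism behind HMM-based segmentation, the sketch below runs Viterbi decoding of a plain two-state Gaussian HMM on a synthetic joint-angle-like signal. It is deliberately much simpler than the nonparametric sticky-HDP-HMM of the thesis; the transition matrix, emission parameters and data are assumptions.

import numpy as np

rng = np.random.default_rng(0)
true = np.repeat([0, 1, 0, 1], 50)                        # alternating "phase" labels
means, std = np.array([0.0, 2.0]), 0.4
obs = means[true] + std * rng.standard_normal(true.size)  # synthetic 1-D observations

logA = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))      # sticky self-transitions
logpi = np.log(np.array([0.5, 0.5]))
loglik = -0.5 * ((obs[:, None] - means) / std) ** 2        # Gaussian log-likelihood (up to a constant)

# Viterbi recursion: most likely state sequence given the observations
delta = logpi + loglik[0]
back = np.zeros((obs.size, 2), dtype=int)
for t in range(1, obs.size):
    cand = delta[:, None] + logA                           # score of moving from state i to state j
    back[t] = cand.argmax(axis=0)
    delta = cand.max(axis=0) + loglik[t]
path = np.empty(obs.size, dtype=int)
path[-1] = delta.argmax()
for t in range(obs.size - 2, -1, -1):
    path[t] = back[t + 1, path[t + 1]]
print("segmentation accuracy vs ground truth: %.2f" % (path == true).mean())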

Results & Conclusions
For the gait segmentation problem, the HDP-HMM based approach is able to segment ankle joint time-series in clinically defined phases with an 11% error compared to a human expert. In the pathological gait classification problem, the HDP-HMM with polynomial shape primitives model (see figure on the right) outperforms the alternatives. The HDP-HMM model was also used to develop an on-line fault detection method in a robotic assembly task using force/torque measurements, validated on a real robot setup (see figure below).

The aSLDS model allows to identify the current MOX dynamical behaviour and to estimate the gas concentration concurrently (see figure below). The use of the aSLDS overcomes the slow dynamical response of MOX sensors, therefore extending their range of applicability.

Major publication
E. Di Lello, M. Trincavelli, H. Bruyninckx, T. De Laet (2014). "Augmented Switching Linear Dynamical System Model for Gas Concentration Estimation with MOX Sensors in an Open Sampling System", MDPI Sensors, 14 (7), 12533-12559.

Bayesian Time-Series Models: Expert Knowledge-Driven Inference and Learning for Engineering Applications

23

Donato Clausi
Department Mechanical Engineering

PhD defence 23 February 2015

Supervisor Prof. dr. ir. Dominiek Reynaerts

Co-supervisor Dr. ir. Jan Peirs

Funding FP6 European project “Q2M”

E-mail [email protected]

Introduction / Objective
Shape Memory Alloy (SMA) offers the highest work density of all microactuators. Despite its advantages, SMA is not yet a standard MEMS material, partially due to three main technical challenges: I) a strong mechanical bond of the actuator material to the target structures is needed for reliable actuation; II) mechanical and electrical connections should be batch-manufactured, to achieve an overall cost reduction; III) deforming the SMA in the martensitic state requires a bias mechanism, which is difficult to implement at the microscale.
This thesis focuses on wafer-level integration of SMA wires on arrays of Si structures for high-performance actuators.

Results & Conclusions
• The first silicon-biased SMA wire actuator.
• The first wafer-scale method to integrate SMA wires to silicon using adhesive bonding.
• Mechanical and electrical connection of SMA wires to silicon MEMS in the same processing step using electroplating.
• Record stroke of 354 μm with no performance degradation for over 150 thousand cycles.
• First front gate valve with integrated actuation. Robust flow control of more than 1600 sccm at a pressure drop of 200 kPa and at a power consumption of 90 mW demonstrated up to 10 Hz.

Wafer-level integration of SMA wires and actuator concept.

Major publications
D. Clausi, H. Gradin, S. Braun, J. Peirs, G. Stemme, D. Reynaerts and W. van der Wijngaart, Design and wafer-level fabrication of SMA wire microactuators on silicon. J. Microelectromech. Syst. 19, 2010, 982–991.
D. Clausi, H. Gradin, S. Braun, J. Peirs, G. Stemme, D. Reynaerts and W. van der Wijngaart, Robust actuation of silicon MEMS using SMA wires integrated at wafer-level by nickel electroplating. Sens. Actuat. A 189, 2013, 108–116.
H. Gradin, D. Clausi, S. Braun, J. Peirs, G. Stemme, W. van der Wijngaart, and D. Reynaerts, A low-power high-flow shape memory alloy wire gas microvalve. J. Micromech. Microeng. 22, 2012, 075002.

Microactuation using Wafer-level Integrated SMA Wires

Research Methodology
Two integration methods have been developed in this work:
• Adhesive bonding of SMA wires to silicon MEMS using SU-8
• Nickel electroplating to form mechanical and electrical connections of the SMA wires to the Si structures.
The developed actuators are used for high gas flow control.

Figure labels: wafer-level integrated SMA wires, array of silicon structures, fixed anchor, electric current path, silicon cantilevers, moving anchor, SMA wires, SU-8 or nickel; cold state and hot state.

Fabricated actuator and deflections upon long term cycling.

Valve concept, valve assembly and pneumatic measurements.

24

Nick Vannieuwenhoven
Department Computer Science

PhD defence 24 February 2015

Supervisor Prof. dr. Raf Vandebril

Co-supervisor Prof. dr. ir. Karl Meerbergen

Funding Ph.D. fellow of the FWO

E-mail [email protected]

Introduction / Objective
A tensor is an array whose elements are addressed by at least three indices. The tensor rank decomposition is an expression of a tensor as a linear combination of rank-1 tensors. It can be considered as a generalization of the singular value decomposition of matrices.

The tensor rank decomposition arises naturally in chemistry, algebraic statistics, signal processing, and machine learning.

The objective is to analyze certain unexpected properties that hold for tensors but not for matrices, or vice versa. In particular, we consider truncation and identifiability. The former holds if selecting a subset of rank-1 terms results in a best approximation to the tensor, and the latter holds if a tensor has only one tensor rank decomposition.
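In the usual notation (assumed here; the thesis' own notation may differ), a third-order tensor of rank r can be written as

\[
\mathcal{T} = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i, \qquad a_i \in \mathbb{R}^{n_1},\; b_i \in \mathbb{R}^{n_2},\; c_i \in \mathbb{R}^{n_3}.
\]

Truncation then asks whether keeping only s < r of these rank-1 terms yields a best rank-s approximation, as the Eckart-Young theorem guarantees for the truncated singular value decomposition of a matrix; identifiability asks whether the set of rank-1 terms is uniquely determined by the tensor.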

Research Methodology
The dimension of the set of tensor rank decompositions of fixed rank is investigated using a randomized algorithm that produces probabilistic statements about this dimension. As the dimension is as expected, it implies that a low-rank tensor has at most a finite number of tensor rank decompositions. Thus, we undertake a more refined analysis. An algorithm is proposed for proving that a general low-rank tensor admits only one tensor rank decomposition. We show that certain orthogonality conditions must hold on the rank-1 tensors appearing in a decomposition if truncation is to be feasible. These conditions are valid only on a set of strictly lower dimension than proved by the foregoing results.

Results & Conclusions
We showed that algebraic geometry may assist in analyzing mathematical properties of the tensor rank decomposition. By adopting this viewpoint, we demonstrated that
• a general low-rank tensor admits only one rank decomposition; and
• unfortunately, the tensor rank decomposition cannot be computed by means of "successive deflations," i.e., by successively computing best rank-1 approximations, in contrast to the matrix case.

Main publications
N. Vannieuwenhoven, J. Nicaise, R. Vandebril, K. Meerbergen, On generic nonexistence of the Schmidt–Eckart–Young decomposition for complex tensors, SIAM Journal on Matrix Analysis and Applications 35(3), pp. 886–903, 2014.
L. Chiantini, G. Ottaviani, and N. Vannieuwenhoven, An algorithm for generic and low-rank specific identifiability of complex tensors, SIAM Journal on Matrix Analysis and Applications 35(4), pp. 1265–1287, 2014.

The tensor rank decomposition: Truncation and identifiability

Top row: The approximation in the left image is obtained by truncating the right image from 50 rank-1 terms to 40 terms, using the singular value decomposition.
Bottom row: On the left, the approximation obtained by truncating the right image from 50 tensor rank-1 terms to 40 terms. Here, we interpreted the image as a three-dimensional array, each slice containing either the red, green, or blue constituent color components.

25

Mohammadali Aravand
Department Materials Engineering (MTM)

PhD defence 25 February 2015

Supervisor Prof. Stepan V. Lomov

Co-supervisor Dr. Larissa Gorbatikh

Funding GOA 10/004

E-mail [email protected]

Introduction / Objective
A recent approach to control the damage and fracture properties of FRP composites is designing multicomponent composites with a hierarchical structure (known as hierarchical or multi-scale composites). The focus of this work is on exploring controlled positioning, phase behavior, and morphological aspects of the matrix micro- and nanophase heterogeneities in relation to the mechanical properties and fracture behavior of the polymer matrices and the resulting hierarchically structured fibre reinforced nanocomposites.

Research Methodology
This study was composed of two main parts. In the first part, various aspects of the CNT and thermoplastic modified bulk resin systems were investigated in a systematic fashion. In the second part, the bulk resin systems with improved (or preserved) fracture toughness and mechanical properties were employed as the matrix material for the multi-component CFRP composite laminates. To this end, a new processing methodology based on resin transfer molding (RTM) for manufacturing of crystallizable thermoplastic modified laminates at high temperature was developed.

Results & Conclusions
Polyoxymethylene (POM) can efficiently enhance the mode I fracture toughness of the resulting epoxy/POM blends, mainly through a particle crack bridging mechanism. Carbon nanotubes (CNTs) strongly affect the final morphology of phase-separating POM/epoxy blends, and hence the resulting fracture toughness properties. Fiber reinforcement can significantly interfere in the reaction-induced phase separation of the thermoplastic particles and therefore the final properties of the hierarchically structured FRP composites.

Major publicationM. Aravand, S.V. Lomov, and L. Gorbatikh, “Morphology and fracture behavior of POM modified epoxy matrices and their CFRP composites”, Composites Science and Technology, 110, (2015) 8-16.

Micro and nano structured hierarchical carbon fiber composites

Figures: RTM mould schematic (top mould, spacer, bottom mould); load (N) versus displacement (mm) curves for 0, 5 and 10 wt% POM, with crack onset and arrest points marked.

26

Jan Deckers

Department Electrical Engineering (ESAT)

PhD defence 26 February 2015

Supervisor Prof. dr. ir. Jef Poortmans

Introduction / Objective
The cost of solar cells can be reduced on a per-unit-energy basis by making more efficient solar cells. Contact recombination currents are one of the major power loss mechanisms in certain high-efficiency silicon solar cells. Therefore, reducing contact recombination currents can result in efficiency gains. However, the characterization of contact recombination losses is convoluted. This provided the incentive for developing a new contact recombination characterization method.

Research Methodology
A dedicated characterization method for contact recombination measurements was developed. The characterization method is based on photoconductance measurements on point contact lattices having various contact fractions.

Results & Conclusions
Under limiting assumptions, contact saturation current densities can be extracted from the slope of the inverse lifetime as a function of the contact fraction.
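The sketch below shows, under an assumed textbook relation and with invented measurement values, how such a slope-based extraction could look: the inverse effective lifetime is fitted linearly against the contact fraction and the slope is converted into a contact saturation current density. Neither the numbers nor the exact conversion used in the thesis are reproduced here.

import numpy as np

q, n_i, W = 1.602e-19, 9.65e9, 0.018            # C, intrinsic carrier density (cm^-3), wafer thickness (cm)
N_A, dn = 1.5e16, 1.0e15                         # doping and excess carrier density (cm^-3), assumed

f = np.array([0.01, 0.02, 0.05, 0.10])           # contact fractions of the point-contact lattices
tau = np.array([480e-6, 410e-6, 290e-6, 190e-6]) # effective lifetimes (s), invented illustration data

# Assumed relation: 1/tau_eff = 1/tau_0 + f * J0c * (N_A + dn) / (q * n_i**2 * W)
slope, intercept = np.polyfit(f, 1.0 / tau, 1)   # linear fit of inverse lifetime vs contact fraction
J0c = slope * q * n_i**2 * W / (N_A + dn)        # contact saturation current density (A/cm^2)
print("J0,contact ~ %.1e A/cm^2" % J0c)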

Photoluminescence image of a finished test structure. Higher contact fractions correspond to lower effective lifetimes, which correspond to lower photoluminescence signals.

Major publication
J. Deckers, M. Debucquoy, I. Gordon, R. Mertens, J. Poortmans (2014). Avoiding Parasitic Current Flow through Point Contacts in Test Structures for QSSPC Contact Recombination Current Measurements, Journal of Photovoltaics, 5 (1), 276-281.

Contact recombination in silicon solar cells

27

Sylvie Van Loon
Department Chemical Engineering

PhD defence 02 March 2015

Supervisor Prof. dr. ir. Jan Vermant

Co-supervisors Prof. dr. ir. Jan Fransaer
Prof. dr. ir. Christian Clasen

Funding NanoDirect

E-mail [email protected]

28

Dixian Zhao
Department Electrical Engineering (ESAT)

PhD defence 03 March 2015

Supervisor Prof. dr. ir. Ing. Patrick Reynaert

Funding Catrene PANAMA project, ERC Advanced Grant (DARWIN), Analog Devices Inc.

E-mail [email protected]

Introduction / Objective
The rapid growth of mobile data and the use of smart phones are creating unprecedented challenges for wireless service providers to overcome a global bandwidth shortage. Millimeter-wave (mm-Wave) technology is widely considered as one of the key technologies that will continue to serve the consumer demand for increased wireless data capacity. This doctoral work focuses on realizing compact CMOS mm-Wave transmitters (TXs) and power amplifiers (PAs) towards more output power, higher efficiency and broader bandwidth for future high-speed wireless communications.

Research Methodology
Advanced CMOS can now operate well in mm-Wave bands, permitting the integration of a full transceiver in a low-cost, high-yield technology. However, the design of a mm-Wave transceiver in advanced CMOS still poses many challenges at device, circuit and architecture levels. To address these challenges at mm-Wave, novel design techniques have been proposed in this thesis, such as an optimal transistor layout, an enhanced amplifier stage and a broadband power combiner. Design methodologies are presented to deal with the long EM-simulation time and strict design rules. In addition, detailed design issues, such as common-mode stability and magnetic mutual coupling, are also covered in the thesis.

Results & Conclusions
All the proposed design techniques have been applied to five prior-art designs that are implemented and measured in the context of this doctoral work. These designs include (a) the first reported 60-GHz dual-mode Class AB PA, which achieves a record PAE of 30%; (b) the first reported 60-GHz outphasing TX; (c) a multi-Gb/s E-band TX; (d) the first CMOS PA which achieves uniform performance across the complete E-band; and (e) a 4-way E-band PA using an NBCA topology.

Major publicationsD. Zhao and P. Reynaert, "A 40-nm E-band Direct-Conversion Transmitter with 4.5-Gb/s 64-QAM and 14-Gb/s 16-QAM,"IEEE J. Solid-State Circuits, vol. 50, no. 11, Nov 2015 (invited from A-SSCC 2014).D. Zhao and P. Reynaert, "A 0.9V 20.9dBm 22.3%-PAE E-band Power Amplifier with Broadband Parallel-Series PowerCombiner in 40nm CMOS," in ISSCC Dig. Tech. Papers, pp. 248-249, Feb 2014.D. Zhao and P. Reynaert, "A 60-GHz Dual-Mode Class AB Power Amplifier in 40-nm CMOS," IEEE J. Solid-StateCircuits, vol. 48, no. 10, pp. 2323-2337, Oct 2013.D. Zhao, S. Kulkarni and P. Reynaert, "A 60-GHz outphasing transmitter in 40-nm CMOS," IEEE J. Solid-State Circuits,vol. 47, no. 12, pp. 3172-3183, Dec 2012 (invited from ISSCC 2012).

CMOS Millimeter-Wave Power Amplifiers and Transmitters

Neutralized amplifier with optimal mm-Wave transistor layout

Broadband compact 4-way parallel-series power combiner


29

Priyanko Guha Thakurta
Department Electrical Engineering (ESAT)

PhD defence 03 March 2015

Supervisor Prof. dr. ir. Ronnie Belmans

Co-supervisor Prof. dr. ir. Dirk Van Hertem

E-mail [email protected]

Introduction / Objective
The increased penetration of renewable energy into the existing transmission system in Europe is becoming a challenge for the Transmission System Operators (TSOs) when scheduling their power systems day-ahead, due to its intermittent nature. Limitations on the expansion of the transmission infrastructure force them to use the existing grid more flexibly. Power flow controlling devices (PFCs) help in a flexible operation of the power system. This dissertation addressed methodologies to incorporate such devices in day-ahead operational planning, taking into account the coordinated control of the devices. The main objectives are:
1. To manage contingencies in the system with the help of PFCs, the operation of which comes at zero cost to the TSOs.
2. To integrate more renewables with the help of their coordinated control.

Research Methodology
The methodologies incorporate deterministic approaches to consider PFCs in the day-ahead scheduling process. Linear optimizations were formulated to address the objectives.
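As a toy illustration of what a power flow controlling device does to the grid, the sketch below computes a DC power flow over two parallel corridors and shows how a phase-shifting transformer redistributes a fixed transfer away from an overloaded line. Reactances, the transfer level and the thermal limit are made-up numbers, and this is not the optimization model formulated in the thesis.

def flows(p_transfer, x1, x2, shift):
    # DC power flow over two parallel lines; a phase-shifting transformer (angle `shift`, rad)
    # sits in series with line 1. Solve for the bus angle difference so that f1 + f2 = P.
    dtheta = (p_transfer - shift / x1) / (1 / x1 + 1 / x2)
    f1 = (dtheta + shift) / x1
    f2 = dtheta / x2
    return round(f1, 2), round(f2, 2)

P, x1, x2, limit = 10.0, 0.1, 0.3, 7.0          # p.u. transfer, line reactances, thermal limit of line 1
print("without PST:", flows(P, x1, x2, 0.0))    # line 1 carries 7.5 p.u., above its 7.0 p.u. limit
print("with PST   :", flows(P, x1, x2, -0.25))  # a small negative angle shifts flow onto line 2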

Results & Conclusions
• Figure 1 shows the result of contingency management with PFCs for the whole month of January 2013. The x-axis shows the initial loading of the system (loading is defined as the ratio of initial to maximum flow through a critical transmission line). The y-axis shows the reduction of the loading of the system after applying the developed approach. It is seen that all the initially overloaded cases are secured by the approach.
• Figure 2 shows the additional wind in-feed into the system with coordinated control of PFCs. The carnation pink area shows the additional wind in-feed into the system above the forecasted values (represented by the cadet blue area).

Major publication
Guha Thakurta, P., Maeght, J., Belmans, R. and Van Hertem, D. (2014). Increasing Transmission Grid Flexibility by TSO Coordination to Integrate More Wind Energy Sources while Maintaining System Security. IEEE Transactions on Sustainable Energy, DOI 10.1109/TSTE.2014.2341640.

Increasing Transmission System Operation Flexibility using Power Flow Controlling Devices

30

Stijn Vandael
Department Computer Science

PhD defence 04 March 2015

Supervisor Prof. dr. Tom Holvoet

Co-supervisor Prof. dr. ir. Geert Deconinck

Funding IWT

E-mail [email protected]

Introduction / Objective
Electric vehicles (EVs) will play a key role in the electricity grid of the future. In recent years, the increase in the number of electric vehicles is gaining momentum, as global environmental concerns are getting stronger and automotive OEMs (Original Equipment Manufacturers) are preparing for mass production. This vast increase of grid-connected vehicles offers opportunities to use electric vehicles as a large-scale distributed storage system. In a liberalized electricity market, aggregators are typically seen as the actors who will manage this storage system. The central problem addressed in this dissertation is an aggregator's large-scale control of the power transfer between electric vehicles and the grid.

Research Methodology
This dissertation proposes three GIV (grid-integrated vehicle) control approaches, each designed to provide large-scale control of EVs in different business cases of an aggregator.
• The first GIV control approach is a three-step market-based approach to charge electric vehicles in response to a dynamic electricity pricing scheme.
• The second GIV control approach is a reinforcement learning approach to learn a cost-effective day-ahead schedule.
• The third GIV control approach is a bin-based scheduling approach to provide regulation services with electric vehicles. This approach has been validated and compared with other approaches in the EV fleet at the University of Delaware.
Each GIV control approach is based on a common blueprint for large-scale control, called "aggregate and dispatch". In this type of control, an aggregator calculates aggregated decisions for the EV fleet, which are translated to individual EV decisions by a dispatch mechanism.
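A minimal sketch of the aggregate-and-dispatch idea follows: one aggregated fleet setpoint is split over individual EVs by a simple proportional-to-urgency dispatch rule. The fleet data, the setpoint and the dispatch heuristic are invented for illustration; the three control approaches in the thesis are considerably more elaborate.

need_kwh   = [6.0, 2.0, 10.0, 0.5]           # energy still required per EV before departure (assumed)
hours_left = [4.0, 4.0, 2.0, 6.0]            # time until each EV leaves (assumed)
p_max_kw   = [7.4, 7.4, 11.0, 3.7]           # charger limit per EV (assumed)

fleet_setpoint_kw = 12.0                      # aggregated decision, assumed to come from a
                                              # market/day-ahead layer in a real aggregator

urgency = [n / h for n, h in zip(need_kwh, hours_left)]       # kW needed to finish just in time
total_urgency = sum(urgency)
dispatch = [min(p, fleet_setpoint_kw * u / total_urgency)     # proportional-to-urgency split,
            for p, u in zip(p_max_kw, urgency)]               # clipped at each charger's limit
print("per-EV charging power (kW):", [round(d, 2) for d in dispatch])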

Results & Conclusions
• Simulation results show that aggregate and dispatch control is able to combine the advantages of centralized and decentralized control approaches.
• In-field results show that aggregate and dispatch control is applicable in a real-world scenario.

Major publications
S. Vandael, B. Claessens, D. Ernst, T. Holvoet and G. Deconinck, "Reinforcement Learning of Heuristic EV Fleet Charging in a Day-Ahead Electricity Market," IEEE Transactions on Smart Grid, early access, 2015.
S. Vandael, B. Claessens, M. Hommelberg, T. Holvoet and G. Deconinck, "A scalable three-step approach for demand side management of plug-in hybrid vehicles," IEEE Transactions on Smart Grid, vol. 4, no. 2, pp. 720-728, May 2013.

Aggregate and dispatch control of grid-integrated electric vehicles

Figure: aggregate-and-dispatch setting, with the aggregator connected to the electricity grid, EVs and EVSEs through communication and electrical connections.

31

Introduction / Objective
In large wind farms, the vertical interaction of the farm with the atmospheric boundary layer plays an important role, i.e. the total energy extraction is dominated by the vertical turbulent transport of kinetic energy from higher regions in the boundary layer towards the turbine level. The current study investigates the use of optimal control techniques in large-eddy simulations of wind-farm boundary-layer interactions with the aim of increasing the total energy extraction in wind farms.

Research Methodology
• Large-eddy simulations are performed with the in-house SP-Wind code.
• The force due to a turbine is represented with an actuator-disk model.
• For optimization, individual turbines are considered as flow actuators whose energy extraction can be dynamically regulated in time so as to optimally influence the flow field (a much-simplified illustration follows below).
• A receding-horizon approach together with a gradient- and adjoint-based scheme is employed.
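As a much-simplified, static illustration of treating turbines as flow actuators, the sketch below maximizes the combined power of two turbines in a toy Jensen wake model by de-rating the upstream machine. This is not the LES-based, dynamic adjoint optimization of the thesis, and the wake constants and turbine spacing are assumed.

import numpy as np
from scipy.optimize import minimize

def farm_power(a, k=0.05, spacing=7.0):           # a = [a1, a2], axial induction factors
    a1, a2 = a
    cp = lambda ai: 4 * ai * (1 - ai) ** 2         # actuator-disk power coefficient
    ct1 = 4 * a1 * (1 - a1)                        # thrust coefficient of the upstream turbine
    deficit = (1 - np.sqrt(1 - ct1)) / (1 + 2 * k * spacing) ** 2   # Jensen wake velocity deficit
    u2 = 1.0 - deficit                             # normalized inflow at the waked turbine
    return cp(a1) + u2 ** 3 * cp(a2)

res = minimize(lambda a: -farm_power(a), x0=[0.33, 0.33], bounds=[(0.0, 0.4), (0.0, 0.4)])
gain = 100 * (farm_power(res.x) / farm_power([1 / 3, 1 / 3]) - 1)
print("optimal inductions:", res.x, "gain vs greedy operation: %.1f %%" % gain)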

Major publication
Goit, J. P., Meyers, J. (2015). Optimal control of energy extraction in wind-farm boundary layers. Journal of Fluid Mechanics, 768, 5-50.

Optimal control of energy extraction in large-eddy simulation of wind farms

Jay Prakash Goit
Department Mechanical Engineering

PhD defence 18 March 2015

Supervisor Prof. dr. ir. Johan Meyers

Funding KU Leuven, ERC (FP7-Ideas, grant no. 306471) and Flemish Science Foundation (FWO, grant no. G.0376.12)

E-mail [email protected]

Figure: optimization loop, with forward simulation of the velocity field, backward-in-time adjoint field, and gradient computation.

Results & Conclusions
Infinite farm:
• For optimal control without penalization, the gain in energy extraction is 16%, but the flow decelerates and dissipation increases.
• For two cases with penalization of turbulent dissipation, the gains are 11% and 6%.
Finite farm:
• The gain in energy extraction is 7%, i.e. lower than for the infinite farm, possibly because the front-row turbines are already operating close to optimal.

Gains and losses to the boundary layer for the unpenalized infinite farm case (farm power, dissipation).

32

Joris Gillis
Department Electrical Engineering (ESAT)

PhD defence 18 March 2015

Supervisor Prof. dr. Moritz Diehl

Co-supervisor Prof. dr. ir. Eric Van den Bulck & Jan Swevers

Funding FWO Vlaanderen

E-mail [email protected]

Introduction / Objective
Optimal control is a powerful paradigm to design and control nonlinear dynamical systems that are subject to constraints. This work considers periodic optimal control problems (OCP), for which the path constraints are robustified with respect to Gaussian disturbances injected along the limit cycle. Starting from the Lyapunov framework that provides a first-order approximation to such a robust periodic OCP, the goal is to improve convergence and complexity such that large-scale engineering-type problems can be tackled.

Research Methodology
Various formulations for robust OCP were proposed and explored in simulation. The approach in this thesis is both:
• Pragmatic: use of off-the-shelf numerical code as much as possible, without restricting the scope of problem classes.
• Generic: make the results of the work available in a generic open-source optimization framework: CasADi.

Results & Conclusions
The original formulation was improved in two major ways: a smarter discretization and a better complexity, O(n^6) → O(n^3). The approach was demonstrated on a 17-state nonlinear quadcopter model with invariants, with joint design of a linear time-varying controller.

Major publication
J. Gillis, G. Horn, M. Diehl (2014). Joint design of stochastically safe setpoints and controllers for nonlinear constrained systems by means of optimization. Proceedings of the 19th IFAC World Congress.

Practical Methods for Approximate Robust Periodic Optimal Control of Nonlinear Mechanical Systems

33

Durga Ananthanarayanan
Department Materials Engineering (MTM)

PhD defence 25 March 2015

Supervisor Prof. dr. ir. Nele Moelans

Co-supervisor Prof. dr. ir. Patrick Wollants

Funding OT/07/040, CREA/02/012, SoPPoM program

E-mail [email protected]

Introduction / Objective
The microstructure of a material largely determines its mechanical performance at the macroscale. Phase-field modelling is a tool to simulate microstructure evolution in a material under a given set of conditions. The goal of this work is to develop a phase-field model that can treat chemical diffusion and mechanical deformation as a step forward in enabling predictive simulations of multi-phase alloy systems in the solid state. The model is then applied to the growth of brittle intermetallic phases formed in Sn-Cu/Cu solder joints, which reduce their reliability.
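To give a flavour of the phase-field machinery, the sketch below time-steps a textbook one-dimensional Allen-Cahn equation for a single order parameter relaxing to a diffuse interface. The coupled chemo-mechanical, multi-phase model of the thesis is far richer; the mobility, gradient coefficient and barrier height here are arbitrary.

import numpy as np

n, dx, dt = 200, 1.0, 0.05
kappa, L_mob, W = 2.0, 1.0, 1.0                      # gradient coefficient, mobility, barrier height (assumed)
eta = np.where(np.arange(n) < n // 2, 1.0, 0.0)      # sharp initial interface between two phases
eta += 0.01 * np.random.randn(n)

for step in range(2000):
    lap = (np.roll(eta, 1) - 2 * eta + np.roll(eta, -1)) / dx**2     # periodic Laplacian of eta
    dfdeta = 2 * W * eta * (1 - eta) * (1 - 2 * eta)                 # derivative of the double-well potential
    eta -= dt * L_mob * (dfdeta - kappa * lap)                       # explicit Allen-Cahn update

mid = eta[n // 4: 3 * n // 4]                                        # look only at the central interface
width = np.sum((mid > 0.1) & (mid < 0.9)) * dx
print("diffuse interface width after relaxation: about %.0f grid spacings" % width)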

Research Methodology
• Analysis of local interfacial equilibrium conditions in existing phase-field models considering elastic effects
• Simulations for model systems with known analytical solutions of bulk chemical and mechanical properties
• Coupling with thermodynamic databases and elastic constants from ab initio calculations for the application to the Sn-Cu/Cu solder joint

Results & Conclusions
• Existing phase-field models considering elastic effects found to give rise to unphysical excess energy at the interfaces
• New model developed considering local equilibrium at the interfaces without giving rise to excess energy
• Coupled with a plastic deformation model and extended to multi-phase systems
• Applied to the growth of Cu3Sn and Cu6Sn5 phases in Sn-Cu/Cu solder joints subjected to internal and external strains

Major publication
A. Durga, P. Wollants, N. Moelans, A quantitative phase-field model for two-phase elastically inhomogeneous systems, Computational Materials Science, 99 (2015), 81–95.

Development of an elastoplastic phase-field model for multi-phase systems

Elastic strain profile along the dotted line from the ellipse centre (Analytical solution in black)

σ11 stress field of an elastically inhomogeneous system with an elliptical precipitate

Growth of Cu3Sn and Cu6Sn5 layers at the interface between Cu substrate and Sn-Cu solder

Local interfacial equilibrium: equal diffusion potential and mechanical equilibrium at the interface.
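For reference, the two conditions can be written in standard notation for such models; the symbols below are assumed for illustration and are not copied from the thesis.

```latex
% Assumed standard forms (notation illustrative): equal diffusion potentials of the
% coexisting phases, and continuity of traction across the interface.
\begin{align}
  \tilde{\mu}^{\alpha} &= \tilde{\mu}^{\beta}
      && \text{(equal diffusion potential)} \\
  \sigma^{\alpha}_{ij}\, n_j &= \sigma^{\beta}_{ij}\, n_j
      && \text{(mechanical equilibrium at the interface)}
\end{align}
```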

34

Vladimir Matic

Department Electrical Engineering (ESAT)

PhD defence 26 March 2015

Supervisor Prof. dr. ir. Sabine Van Huffel

Co-supervisor Prof. dr. ir. Maarten De Vos

Funding KU Leuven, IWT NeoGuard

E-mail [email protected]

Introduction / Objective
Within this thesis, automated algorithms for the EEG-based assessment of the brain functioning of asphyxiated infants have been developed. Their goal is to assess the severity of hypoxic brain injuries in asphyxiated infants. This estimate will assist clinicians to promptly diagnose and to guide further treatment decisions.

Research Methodology
An automated method for background EEG classification has been developed. As a first step, it maps the features of short, adaptively segmented EEG segments into a segment feature space, thereby creating a 3D distribution. Next, this 3D structure is represented as a data tensor that is used for further dimensionality reduction and robust classification (a toy sketch of the tensor step follows).
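A toy sketch of the tensor step, assuming the open-source tensorly package and a synthetic 16x16x16 feature histogram (both placeholders, not the thesis data or code):

```python
# Illustrative only: compress a discretized 3-D feature distribution with a
# Tucker decomposition and flatten the core as a compact descriptor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

hist = np.random.rand(16, 16, 16)                      # synthetic 3-D histogram
core, factors = tucker(tl.tensor(hist), rank=[4, 4, 4])
features = tl.to_numpy(core).ravel()                   # input for a classifier
```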

Results & Conclusions
The algorithms and their performances have been verified by expert EEG readers, demonstrating their potential. In addition, the efficient visualization developed within our project NeoGuard will enable fast insight into the algorithms' output and, hopefully, will soon be implemented in the NICUs.

Three blocks represent the parameterization of the cEEG data stream. A. First, the EEG signal is adaptively segmented. B. Segment features are calculated for each segment and, depending on the quantized feature index values (m1, m2, m3), they are mapped into the discretized segment feature space. C. The 3D distribution is parameterized using the tensor representation to effectively capture the structure of the distribution.

Major publication
Matic, V., Cherian, P. J., Koolen, N., Naulaers, G., Swarte, R. M., Govaert, P., ... & De Vos, M. (2014). Holistic approach for automated background EEG assessment in asphyxiated full-term infants. Journal of Neural Engineering, 11(6), 066007.

Neonatal EEG Signal Processing

Flow chart illustrating a classification procedure based on the Tucker decomposition and machine learning methods.

35

Bogdan Moldovan

Department Computer Science

PhD defence 27 March 2015

Supervisor Prof. dr. ir. Luc De Raedt

Funding IWT

E-mail [email protected]

Introduction / Objective
Affordances are used in robotics to model action opportunities of a robot on objects in the environment. They were used to model the relations between object properties, executed actions, and the effects of those actions for single objects. Our objective is the use of statistical relational learning to build relational affordance models, where the (spatial) relations between the different objects are taken into account, allowing us to model settings where objects interact during actions.

Research Methodology
Learning a relational affordance model:
- Table-top scenario with multiple objects
- Babbling phase with one or two objects
- Learn a Bayesian Network (BN) from data
- From the BN, build a ProbLog model (a toy illustration follows)
- Generalisation through the use of variables
- Add background knowledge as logical rules
- Model the joint probability distribution P(O, A, E) over object properties, actions and effects
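A toy example, purely for illustration, of querying a small ProbLog program from Python; the rule, object name and probability below are invented and are not taken from the thesis.

```python
# Illustrative only: a two-line probabilistic logic program queried with the
# open-source ProbLog 2 Python API.
from problog.program import PrologString
from problog import get_evaluatable

model = """
0.7::pushable(Obj) :- small(Obj).   % hypothetical affordance rule
small(box1).
query(pushable(box1)).
"""
result = get_evaluatable().create_from(PrologString(model)).evaluate()
print(result)   # e.g. {pushable(box1): 0.7}
```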

Results & Conclusions
Experiment setting:
- Table-top scenario with six objects
- Model learnt from data with 1 or 2 objects
- Random object types and positions
- Action prediction: argmax_A P(A|O,E)
- Compare relational model vs. BN model
- Relational model can be reused for other settings

Major publication
B. Moldovan, P. Moreno, M. van Otterlo, J. Santos-Victor, L. De Raedt. Learning Relational Affordance Models for Robots in Multi-Object Manipulation Tasks, in Proceedings of the 29th IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN, USA, 2012.

Relational Affordances and their Applications

Additional relational affordance applications:
- Two-arm robot models
- Multiple-action planning task
- Occluded object search

36


Luis Eduardo Pineda Ordoñez

Department Civil Engineering

PhD defence 27 March 2015

Supervisor Prof. dr. ir. Patrick Willems

Funding Erasmus Mundus (EMECW 19)

E-mail [email protected]

Introduction / Objective
Northwestern South America (NWSA) has been historically threatened by extreme or unexpected hydroclimatic conditions that resulted in serious social and economic losses. Predictions of the rainy season are therefore crucial to address the major climate variability impacts on sensitive sectors such as water management, agricultural planning, and disaster preparedness, among others. The objective of this dissertation was to develop an alternative and/or complementary probabilistic modeling framework for seasonal precipitation prediction subrogated to climate information.

Research Methodology
Analysis and modelling of precipitation and its relation with regional climatology in NWSA. The research made use of statistical methods, ground observations, remote sensing, numerical weather prediction (NWP) and global climate model (GCM) data, backed with a heuristic knowledge of local and regional processes. The key research axes are shown in Figure 1.

Results & Conclusions
Advances in the theoretical knowledge necessary for improving regional, physically based prediction of precipitation. New insights are formulated on sources of predictability for:
- Seasonal rainfall amounts.
- Monthly maxima of daily intensity within a season.
- Daily intensities and occurrences within the rainy season (Dec-May) (Figure 2, left).
A modeling framework that makes intelligent use of seasonal climate model predictions to produce precipitation estimates relevant for river basins in NWSA. The framework enables to:
- Extract the signal/noise pattern from the GCM forecast.
- Train a hidden Markov model (HMM) in a statistical, partially dynamic approach (Figure 2, center).
- Simulate space-time rainfall characteristics for the upcoming rainy season (Figure 2, right); a toy sketch of the HMM step follows.
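A hedged sketch of the HMM idea only, using the open-source hmmlearn package on synthetic daily intensities; it is not the non-homogeneous HMM, nor the data, of the thesis.

```python
# Illustrative only: fit a Gaussian HMM to synthetic daily rainfall intensities,
# recover a weather-state sequence and simulate an upcoming season.
import numpy as np
from hmmlearn import hmm

rain = np.random.gamma(shape=0.5, scale=5.0, size=(3000, 1))   # synthetic data
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(rain)
states = model.predict(rain)      # hidden weather-state sequence
season, _ = model.sample(180)     # simulated daily intensities for ~6 months
```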

Major publication
Pineda, L., and Willems, P.: Multisite downscaling of seasonal prediction to daily rainfall characteristics over Pacific-Andean river basins in Ecuador and Peru using a non-homogeneous hidden Markov model, J. Hydrometeor., submitted (February 2015).

Climate variability and rainfall response: Analysis and Predictability in the Pacific-Andean basin of Ecuador and Northern Peru


Fig 1. Scheme of research workflow: explore rainfall data; identification of processes controlling variability; assessment of current capabilities of GCM and NWP models; process-based downscaling of seasonal GCM forecasts.

Fig 2. Weather-state identification (left), probabilistic hindcast simulation (center), region-wide mean rainfall intensities (right).


37

Palamandadige Fernando

Department Electrical Engineering (ESAT)

PhD defence 30 March 2015

Supervisor Prof. dr. ir. Tinne Tuytelaars

Funding EC FP7 AXES, iMinds Beeldcanon

Introduction / Objective
The performance of computer vision recognition methods heavily relies on the chosen image or video representations. In this thesis we concentrate our efforts on (1) designing novel image representation pipelines utilizing state-of-the-art data mining algorithms and (2) developing novel data transformation strategies that result in effective image or video representations.

Research Methodology
In this thesis we focus on developing novel mid-level image representations using pattern mining. The first approach, called FLH, is suitable for image classification, while KRIMP-MQIR is particularly designed for effective instance retrieval. Using data mining based mid-level features we obtain state-of-the-art results in several image classification and image retrieval benchmarks. We also propose a novel subspace-based domain adaptation method which transforms the original representation such that the new transformed space is invariant to domain shifts, which allows object recognition systems to be applied in the wild. A novel video representation called VideoDarwin that allows capturing both video dynamics and appearance information of videos is also presented.

Results & Conclusions
Data mining is useful for discovering mid-level representations for image classification. It allows generic objects to be recognized accurately and flower species with an accuracy of more than 90%. The developed domain adaptation methods improve object recognition rates on several cross-domain object recognition tasks. The VideoDarwin method obtains good action recognition results on various benchmarks.

Why do we need good image representations?

Major publication
Fernando B., Fromont E., Tuytelaars T. 2014. Mining mid-level features for image classification. International Journal of Computer Vision, Kluwer Academic Publishers, nr. 108, pp. 186-203, ISSN 0920-5691.

Image Representations for Improving Object Recognition

Action recognition from videos

38

Niccolò Tosi

Department Mechanical Engineering

PhD defence 30 March 2015

Supervisor Prof. dr. ir. Herman Bruyninckx

Funding CEA LIST

E-mail [email protected]

Introduction / Objective
Touch-based sensing is relevant for a number of applications where cameras operate in non-optimal conditions, e.g. during underwater or tunnel-boring operations. Focusing on the industrial requirement of performing fast and reliable scene calibration, this doctoral project copes with the curse of dimensionality related to pose estimation and action-selection in high-dimensional space, localising objects up to industrial complexity.

Research Methodology
A test with 30 human subjects performing a touch-based localisation task has been carried out. The common behaviour pattern of decoupling the task into a sequence of lower-complexity problems was observed. Inspired by these results, the DOF Decoupling Task Graph was introduced as the model that allows task programmers to represent different strategies in the design of localisation tasks, as sequences of active-sensing subtasks with the lowest possible complexity. The act-reason algorithm was presented as an action-selection scheme designed to explicitly trade off information gain with execution and computation time (a toy sketch of this trade-off follows).
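Purely as an illustration of the trade-off (the function, names and numbers below are invented and not the thesis algorithm): pick the candidate touch action with the highest expected information gain per unit of execution plus computation time, within a time budget.

```python
# Hypothetical sketch of an information-gain vs. time trade-off for action selection.
def select_action(candidates, time_budget):
    """candidates: list of (name, expected_info_gain, exec_time, compute_time)."""
    feasible = [c for c in candidates if c[2] + c[3] <= time_budget]
    if not feasible:
        return None
    # score = expected information gained per second spent acting and reasoning
    return max(feasible, key=lambda c: c[1] / (c[2] + c[3]))[0]

best = select_action([("probe_top", 1.2, 3.0, 0.5), ("slide_edge", 0.8, 1.0, 0.2)], 5.0)
print(best)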

Results & Conclusions
- DOF Decoupling Task Graph introduced as a modelling primitive to design active-sensing localisation tasks.
- Objects up to industrial complexity localised with force-based sensing.
- Time-efficiency improvement using act-reason, setting the allocated time as a function of the current uncertainty.

Major publications
• N. Tosi, O. David, H. Bruyninckx (2014). Action Selection for Touch-based Localisation Trading Off Information Gain and Execution Time. In IEEE International Conference on Robotics and Automation.
• N. Tosi, O. David, H. Bruyninckx (2013). DOF-Decoupled Active Force Sensing (D-DAFS): A Human-inspired Approach to Touch-Based Localisation Tasks. In International Conference on Advanced Robotics.

Active Sensing for Touch-based Object Localisation

35% execution time reduction with act-reason

39

Karolien Kempen

Department Mechanical Engineering

PhD defence 31 March 2015

Supervisor Prof. dr. ir. Jean-Pierre Kruth

Co-supervisor Prof. dr. ir. Jan Van Humbeeck

E-mail [email protected]

Introduction / Objective
Selective Laser Melting (SLM) is an Additive Manufacturing technique in which a part is built up by selectively melting metal powder particles together in a layer-by-layer fashion with a high-power laser source. The number of available materials for this production technique, however, is still very limited. The overall goal of the thesis is to expand the materials palette for Selective Laser Melting in an empirical way, with high-demand materials fulfilling prerequisites like full density and conventional mechanical properties. Along the way, barriers need to be overcome that characterize the SLM process but prohibit it from reaching a higher technology readiness level, like thermal stresses, cracks and poor dimensional accuracy.

Research Methodology

Four different materials, divided into two material groups, were processed in this work. The first part describes the work on two aluminum alloys: a cast aluminum alloy, A360.0, and a wrought aluminum alloy, 7075. The second part handles the process capabilities of two types of tool steel: a low-carbon maraging steel 18Ni300 and a high-carbon M2 High Speed Steel. The primary goal of this thesis is to produce nearly fully dense parts in all four materials.

Results & Conclusions
Cracks are eliminated by the use of either baseplate pre-heating or the addition of alloying powders (e.g. silicon), depending on the origin of the crack formation. The composition, size and morphology of the base powder material are shown to be influential for the final part quality. After proper powder selection, the production of nearly fully dense parts can be achieved after optimization of scan parameters like laser power, scan speed, scan spacing and layer thickness. Laser remelting as an additional scan strategy can increase the part density and improve the top surface roughness significantly.

Major publicationsKempen, K., Vrancken, B., Buls, S., Thijs, L., Van Humbeeck, J., Kruth J.-P. (2014). Selective Laser Melting of crack-free high density M2 HSS parts by baseplate pre-heating. Journal of Manufacturing Science and Engineering, 136(6), art.nr. MANU-14-1285; doi: 10.1115/1.4028513.

Kempen, K., Thijs, L., Van Humbeeck, J., Kruth, J. (2014). Processing AlSi10Mg by Selective Laser Melting: Parameter optimization and material characterization. Materials Science and Technology, art.nr. 10.1179/1743284714Y.

Expanding the materials palette for Selective Laser Melting of metals

Furthermore, a preliminary experiment of single track scans offers a great amount of information and defines a process window in which a stable melt pool is formed. Material characterization in terms of mechanical properties and microstructure shows that the quality of as-built SLM parts is comparable to that of conventionally produced and heat-treated parts, or even exceeds it.

40

Marco Mercuri

Department Electrical Engineering (ESAT)

PhD defence 31 March 2015

Supervisor Prof. dr. ir. Dominique Schreurs

Co-supervisor Prof. dr. ir. Paul Leroux

E-mail [email protected]

Introduction / Objective
Fall incidents are among the most dangerous causes of accidents for elderly people. The rapid detection of a fall event can reduce the mortality risk, increasing the chance to survive the incident and to return to independent living. The aim of this Ph.D. research was to explore the base for future long-term health monitoring. More precisely, the objectives were the design of a contactless sensing device enabling multi-parameter characterizations, namely fall detection and tagless localization, and the integration of such sensors in a Wireless Sensor Network (WSN) for full room coverage.

Research Methodology
The detection of falls together with tagless indoor localization can be made contactless, and therefore non-invasive, by adopting radar techniques. The radar is used to transmit an RF signal to a target and to receive the reflected echo, on the basis of which the target's speed and absolute distance can be extracted. Moreover, the difference in speed signature can be used to distinguish a fall event from a normal movement (the basic relations are sketched below). The research methodology can be summarized as:
- to develop a radar-based sensor enabling fall detection and tagless localization;
- to investigate data processing algorithms, exploiting radar signals, to perform fall detection and tagless localization;
- to develop a WSN, integrating multiple sensors and a base station for real-time long-term health monitoring (Fig. 1).
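For illustration only, the textbook radar relations behind these steps: radial speed from the Doppler shift and distance from the round-trip delay. The carrier frequency and measured values below are invented and are not those of the developed sensor.

```python
# Illustrative back-of-the-envelope radar relations (not the thesis signal processing).
c = 3.0e8                         # speed of light [m/s]
f_c = 5.8e9                       # assumed carrier frequency [Hz]
wavelength = c / f_c

f_doppler = 75.0                  # example measured Doppler shift [Hz]
radial_speed = f_doppler * wavelength / 2.0    # target speed [m/s]

tau = 40e-9                       # example round-trip delay [s]
distance = c * tau / 2.0          # absolute distance to target [m]
print(radial_speed, distance)
```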

Results & Conclusions
Long-term health monitoring:
- development of a radar-based sensor (Fig. 2);
- real-time fall detection with a max. delay of 0.3 s (Fig. 3);
- indoor tagless localization;
- WSN integrating multiple radar sensors and a base station.

Major publication
M. Mercuri, P. J. Soh, G. Pandey, P. Karsmakers, G.A.E. Vandenbosch, P. Leroux, D. Schreurs, "Analysis of an indoor biomedical radar-based system for health monitoring," IEEE Trans. Microwave Theory Techn., vol. 61, no. 5, pp. 2061-2068, May 2013.

Development of contactless health monitoring sensors and integration in wireless sensor networks

Fig. 1: Proposed radar-based WSN (radar sensors and base station).
Fig. 2: Developed radar sensor.
Fig. 3: Real-time fall detection on a radar signal containing normal movements and a fall invoked at about 42 s. The fall is detected in about 300 ms.

41

Devy Widjaja

Department Electrical Engineering (ESAT)

PhD defence 01 April 2015

Supervisor Prof. dr. ir. Sabine Van Huffel

Funding IWT

E-mail [email protected]

Introduction / Objective
The rate at which our heart beats is a dynamical process enabling adaptive changes according to the demands of our body. One of the main short-term modulators of the heart rate is respiration. This phenomenon is called respiratory sinus arrhythmia (RSA) and comprises the rhythmic fluctuation of the heart rate at the respiratory frequency. It has also widely been used as an index of vagal outflow. However, research indicates that variations in respiratory rate and tidal volume change RSA independently of vagal control. Inspired by the polemic nature of this debate on the interpretation of RSA, this dissertation focuses on three topics: (1) the derivation of a respiratory signal from the electrocardiogram (ECG-derived respiration) such that respiration does not need to be recorded separately; (2) the separation of the tachogram (RRorig) into two components: one that is strictly related to respiration (RRresp), and another component that is unrelated to respiration (RRres); and (3) the characterization of common dynamics in heart rate variations and respiration. The impact of the latter two topics is evaluated on the application of mental stress monitoring.

Research Methodology
- ECG-derived respiration (EDR): an algorithm based on kernel principal component analysis (kPCA) was developed to derive a surrogate respiratory signal from single-lead ECGs (a toy sketch follows this list).
- Separation of respiratory influences from the tachogram: a thorough comparison study between several time domain separation methods was conducted. Additionally, the separation in the time-frequency domain was evaluated.
- Characterization of dynamics in cardiorespiratory time series: information theory was used to assess directional interaction in cardiorespiratory data, and measures of information transfer, information storage, cross information and internal information were proposed.
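A hedged sketch of the kPCA idea only, using scikit-learn on a synthetic matrix of beat-aligned ECG segments; the kernel, its parameter and the data are placeholders, not the thesis implementation.

```python
# Illustrative only: kernel PCA on beat-aligned ECG segments; the first
# component is taken as a surrogate (ECG-derived) respiratory signal.
import numpy as np
from sklearn.decomposition import KernelPCA

beats = np.random.randn(500, 120)            # 500 heartbeats x 120 samples each
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-3)
edr = kpca.fit_transform(beats).ravel()      # surrogate respiratory signal
```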

Results & Conclusions
- ECG-derived respiration (EDR): the method based on kPCA proved to outperform state-of-the-art EDR methods.
- Separation of respiratory influences from the tachogram revealed that, in contrast to what we hypothesized, especially the part unrelated to respiration is very important for mental stress monitoring.
- Information dynamics: the results demonstrate that especially the internal information is very informative for use in cardiorespiratory time series during mental stress monitoring.

Major publication
D. Widjaja, A. Caicedo, E. Vlemincx, I. Van Diest, S. Van Huffel (2014). Separation of respiratory influences from the tachogram: a methodological evaluation. PLoS ONE, 9 (7), e101713.

Cardiorespiratory dynamics: algorithms and application to mental stress monitoring

Fig. Respiratory signal (RESP), original tachogram (RRorig), the respiratory component (RRresp) and a residual component (RRres) of the tachogram.

42

Po-Kuan Chiang

Department Civil Engineering

PhD defence 01 April 2015

Supervisors Prof. dr. ir. Jean Berlamont

Prof. dr. ir. Patrick Willems

Funding Self-supporting

E-mail [email protected]

Introduction / Objective
Floods often cause huge economic and life losses, and flood hazards tend to increase. Water authorities - also in Belgium - have to face increasing challenges and need to build a better control strategy to mitigate the flood damages. Therefore, the overall objective of this research was to investigate real-time flood control of hydraulic structures. To achieve this overall objective, three specific objectives were set. Objective 1 was to review the flood models and develop a fast and precise model. Objective 2 was to develop a real-time control procedure, incl. an optimization model and objective functions. Objective 3 was to carry out real-time flood control combining all models and testing for a specific case study.

Research Methodology
A real-time flood control scheme integrates weather prediction, flood simulation and optimization models. This research develops a conceptual river model that can be well identified and calibrated to the results simulated by a full hydrodynamic model, applies an advanced control strategy by Model Predictive Control (MPC) and discusses its potential performance for flood mitigation along the Demer river system as case study. The research methodology had three steps:
- Development of a procedure for conceptual river model buildup
- Development of an efficient MPC procedure optimized by a Genetic Algorithm (GA)
- Application of the MPC + GA procedure to the case study

Results & Conclusions
The advanced MPC + GA procedure, applied to the extreme flood event of Sept. 1998, solved the flood damages at water level locations hzw2, hbgopw and hbg in combination with the current operating rules, and:
- It keeps all 20 selected water levels beneath their flood levels
- It can search for better control policies to delay or avoid flood occurrence and outperforms the current rule operation due to its better adaptability for flood emergencies

Fig. 1: Schematic overview of the conceptual model of the study area
Fig. 2: Comparison of the starting time (hour) of the flood within 48 hrs
Fig. 3: Comparison of total cost
Fig. 4: Current rule operation vs. MPC + GA for water level hzw2
Fig. 5: Current rule operation vs. MPC + GA for water level hbg

Major publications
1. Chiang, P.-K., Willems, P. (2015). Combine evolutionary optimization with model predictive control in real-time flood control of a river system. Water Resources Management, [accepted].
2. Chiang, P.-K., Willems, P. (2013). Model conceptualization procedure for river (flood) hydraulic computations: Case study of the Demer River, Belgium. Water Resources Management, 27(12), 4277-4289.

Flood Control Combining Optimization Techniques with Hydrologic-Hydraulic Modelling


43

Halil Kükner

Department Electrical Engineering (ESAT)

PhD defence 02 April 2015

Supervisor Prof. dr. ir. Rudy Lauwereins

Co-supervisor Prof. dr. ir. Liesbet Van der Perre

Funding imec vzw.

E-mail [email protected]

Introduction / Objective
This thesis targets the reliability modeling of the Bias Temperature Instability (BTI) phenomenon in CMOS digital circuits, while covering the scaling impacts from planar to advanced FinFET process technology nodes. Contributions of this thesis are 1) to propagate the BTI modeling from device to processor data-path and local memory level, 2) to propose fast and still accurate simulation frameworks at various design levels, 3) to investigate the impacts of the BTI degradation in CMOS circuits, and 4) to provide BTI-aware design guidelines at the device, gate and block level to the IC designers.

Research Methodology
- This thesis applies the state-of-the-art, world-wide recognized, physics-based models that have been developed at KU Leuven & IMEC: the trap-based model and the Capture/Emission Time Map-based model, due to their superior modeling capabilities of the BTI degradation, especially in decananometer devices.
- Simulation frameworks at various design levels have been constructed, e.g. transistor, gate, and netlist level. Workload-dependent, instance-based, NBTI aging-aware library characterization has been integrated within the typical STA flow.
- The impacts of the BTI degradation w.r.t. the logic gate type, drive strength, stress waveform properties (frequency, periodicity), architectural topology parameters on data-path blocks, the time-zero path replacement, etc. have been investigated.

Results & Conclusions
- Voltage scaling at a slower pace than the dimensional scaling results in a higher electrical field, triggering higher stress levels and higher BTI degradation.
- Deeply scaled FinFET devices degrade 2x faster than planar devices, due to the increased electrical field.
- BTI is workload- and circuit architecture-dependent. Parallel architectures lower the sensitivity of BTI to workload variations.

a) In the deeply scaled nodes, a limited number of defects (e.g. N = 12) makes the atomistic perspective of the BTI degradation highly visible, where the lifetime expectancies of devices are widely distributed. b) A complete CET map covers the entire space of defects, with short and long time constants.

Major publication
H. Kukner, S. Khan, P. Weckx, P. Raghavan, S. Hamdioui, B. Kaczer, F. Catthoor, L. Van der Perre, R. Lauwereins, and G. Groeseneken. Comparison of reaction-diffusion and atomistic trap-based BTI models for logic gates. IEEE Transactions on Device and Materials Reliability, 14(1):182–193, Mar 2014.

Bias Temperature Instability in CMOS Digital Circuits from Planar to FinFET Nodes


44

Roel Van Beeumen

Department Computer Science

PhD defence 21 April 2015

Supervisors Prof. dr. ir. Wim Michiels

Prof. dr. ir. Karl Meerbergen

E-mail [email protected]

Introduction
Eigenvalue problems arise in all fields of science and engineering. The mathematical properties and numerical solution methods for standard, linear eigenvalue problems are well understood. However, recent advances in several application areas resulted in a new type of eigenvalue problem, i.e., the nonlinear eigenvalue problem, which exhibits nonlinearity in the eigenvalue parameter.

Research Methodology
We developed new rational Krylov methods for solving both small-scale and large-scale nonlinear eigenvalue problems:
- Polynomial and rational interpolation results in globally convergent methods inside the region of interest,
- Linearization of the corresponding polynomial and rational eigenvalue problems results in linear pencils (a toy illustration follows this list),
- Exploitation of the special structure of the linearization pencils results in efficient and reliable software.
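As a toy illustration of the linearization idea only (not the structured CORK pencils of the thesis): a quadratic eigenvalue problem (lambda^2 M + lambda C + K)x = 0 rewritten as a linear pencil of twice the size and solved with a dense generalized eigenvalue solver; the matrices are random placeholders.

```python
# Illustrative only: companion-type linearization of a quadratic eigenvalue problem.
import numpy as np
from scipy.linalg import eig

n = 5
M, C, K = (np.random.randn(n, n) for _ in range(3))
# With y = lambda * x, (lambda^2 M + lambda C + K) x = 0 becomes A z = lambda B z:
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C      ]])
B = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), M               ]])
eigvals, eigvecs = eig(A, B)     # eigenvalues of the linear pencil (A, B)
```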

Results & Conclusions
We proposed the Compact Rational Krylov (CORK) method as a generic class of numerical methods for solving nonlinear eigenvalue problems. CORK is characterized by a uniform and simple representation of structured linearization pencils. The structure of these linearization pencils is fully exploited and the subspace is represented in a compact form. Consequently, we are able to solve problems of high dimension and high degree in an efficient and reliable way.

Major publications
R. Van Beeumen, K. Meerbergen, W. Michiels (2015). Compact rational Krylov methods for nonlinear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications.
S. Güttel, R. Van Beeumen, K. Meerbergen, W. Michiels (2014). NLEIGS: A class of fully rational Krylov methods for nonlinear eigenvalue problems. SIAM Journal on Scientific Computing, 36 (6), A2842–A2864.
R. Van Beeumen, K. Meerbergen, W. Michiels (2013). A rational Krylov method based on Hermite interpolation for nonlinear eigenvalue problems. SIAM Journal on Scientific Computing, 35 (1), A327–A350.

Rational Krylov methods for nonlinear eigenvalue problems

The family of CORK methods has a lot of flexibility for solving the nonlinear eigenvalue problem. We discuss three particular types of CORK methods. The first one is the Newton Rational Krylov method, which makes use of dynamic polynomial interpolation. The second one is the Fully Rational Krylov method, which uses rational interpolation and has three viable variants: a static, dynamic, and hybrid variant. The third one is the Infinite Arnoldi method, which uses an operator setting to solve the nonlinear eigenvalue problem. Finally, the proposed methods are used to solve applications from mechanical engineering, quantum physics, and civil engineering which were not solved earlier with the same efficiency and reliability.

Structure of linearization pencils and Krylov subspace

45

Geebelen Kurt

Department Mechanical Engineering

PhD defence 23 April 2015

Supervisor Prof. dr. ir. Swevers Jan

Co-supervisor Prof. dr. Diehl Moritz

E-mail [email protected]

Introduction / Objective
This research investigates a new method to harvest wind energy, known as Airborne Wind Energy (AWE). In the method explored in this thesis, an aeroplane flies a crosswind trajectory while it is tethered to a ground-based winch consisting of a drum connected to a motor/generator. The tether is wound up on the drum, and electricity is produced using the 'pumping cycle'. In the first phase of the pumping cycle, the aeroplane delivers a high traction force on the tether while it is being reeled out, causing the generator to produce electricity. Once the tether is fully unrolled, the aeroplane is controlled such that the force on the tether is reduced and the tether is reeled in using only a fraction of the electricity produced in the first phase. The first part of this research focuses on the development of experimental test set-ups for airborne wind energy.
Unfortunately the benefits of airborne wind energy come at a cost. While a wind turbine only needs to be aimed towards the wind to operate, an AWE system needs to be constantly controlled to fly a certain crosswind trajectory. Because of this, AWE systems need an automatic control system, which in turn needs a reliable estimate of the system state. The second part of this research investigates methods to fuse the different sensor measurements to form a reliable state estimate.

Development of experimental set-ups
The purpose of the set-ups is to perform the 'rotation start', a start-up method for AWE systems in which the tethered aeroplane is brought up to speed by an arm rotating around a central vertical axis. The set-ups are equipped with sensors and actuators to allow estimating and controlling the position and orientation of the aeroplane such that it can track the desired power-generating trajectory. The figure to the right shows the outdoor set-up which was developed in the course of the research and is capable of launching aeroplanes with a wing span of 3 meters.

Pose estimation
Estimation of the aeroplane's position and orientation is achieved by fusing the different available sensor measurements. This is done using a technique known as Moving Horizon Estimation (MHE), which can reliably fuse the information from the nonlinear system and measurement models, and is compared to traditional methods such as the extended and unscented Kalman filter using both simulations and experimental data obtained on the indoor set-up. A method to fuse sensor measurements that come at different time scales is presented. MHE is shown to have both a better start-up behaviour and a better average estimation performance than Kalman filtering techniques (a toy sketch of the moving-horizon idea follows).
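A toy, one-dimensional sketch of the moving-horizon idea only (not the thesis estimator): at every step the recent state trajectory is re-estimated by solving a small least-squares problem over the last N measurements, instead of a purely recursive Kalman-type update. The model and noise levels below are invented.

```python
# Illustrative moving-horizon estimate for a constant-velocity toy model.
import numpy as np
from scipy.optimize import least_squares

def mhe_window(y, N, dt=0.1):
    """Re-estimate [initial position, velocity] from the last N position measurements."""
    y = np.asarray(y)[-N:]
    def residual(z):
        p0, v = z
        return p0 + v * dt * np.arange(len(y)) - y   # model prediction minus data
    return least_squares(residual, x0=[y[0], 0.0]).x

meas = np.cumsum(np.full(50, 0.05)) + 0.01 * np.random.randn(50)
print(mhe_window(meas, N=20))
```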

Major publication
Geebelen K., Vukov M., Wagner A., Ahmad H., Zanon M., Gros S., Vandepitte D., Swevers J., Diehl M. (2013). An Experimental Test Setup for Advanced Estimation and Control of an Airborne Wind Energy System. In: Ahrens U., Diehl M., Schmehl R. (Eds.), bookseries: Green Energy and Technology, Airborne Wind Energy, Chapt. 27. Heidelberg, Germany: Springer, 459-471.

Design and Operation of Airborne Wind Energy Systems: Experimental Validation of Moving Horizon Estimation for Pose Estimation

46

Milan Vukov

Department Electrical Engineering (ESAT)

PhD defence 23 April 2015

Supervisor Prof. dr. Moritz Diehl

Co-supervisors Prof. dr. ir. Jan Swevers, Dr. Hans Joachim Ferreau

Funding FP7-EMBOCON, ERC HIGHWIND, Eurostars SMART

E-mail [email protected]

Introduction
The concepts of Model Predictive Control (MPC) and Moving Horizon Estimation (MHE) received widespread acceptance in both industry and academia. This is due to the ability to explicitly define objectives and constraints in the framework of dynamic optimization. Those key facts eventually lead to improved control performance. Progress in the area of optimization algorithms and computational hardware in the last two decades has extended the applicability of numerical optimization to mechatronics applications. Following the success convex quadratic programming (QP) solvers made in linear MPC, the ideas have been extended for nonlinear MPC and MHE.

Research Methodology
This thesis aims to further reduce the gap between academia and industry. With optimized software for nonlinear MPC and MHE and extended problem formulations we can efficiently handle complex nonlinear systems, possibly working under nonlinear constraints. We present recent extensions to the ACADO Code Generation Tool (CGT). Once specified, the problem structure is exploited offline by the tool that generates the tailored code optimized for execution in real-time environments. We demonstrate the strength of the newly developed features of the tool in numerical simulations and two real-world applications.

Results & Conclusions
Our numerical simulations show readiness to effectively treat problems on both short and long horizons. For systems with a few states and few controls, solution times in the microsecond range are observed. At the other end of the spectrum, a test case comprising 33 states and 3 controls and a prediction horizon of 50 steps can be solved on modern hardware in under 50 milliseconds. The first experimental study is the application of nonlinear MPC and MHE to a laboratory-scale overhead crane. Using the original implementation of the ACADO CGT and only an MPC controller, we achieved execution times close to 1 millisecond. With the recently optimized code, we attained nearly the same execution times, now with both nonlinear MHE and the MPC in the loop. The aim of the second real-world application is to validate the computational performance of the auto-generated MHE and MPC solvers on an experimental setup for rotational start-up of an airborne wind energy system. The results confirm that a nonlinear MPC formulation with more than 1500 optimization variables is solved in just less than 5 milliseconds.

Major publication
M. Vukov, S. Gros, G. Horn, G. Frison, K. Geebelen, J. B. Jørgensen, J. Swevers, and M. Diehl, "Real-time Nonlinear MPC and MHE for a Large-scale Mechatronic Application," 2015. (submitted to Control Engineering Practice).

Embedded Model Predictive Control and Moving Horizon Estimation for Mechatronics Applications

47

Results & Conclusions
The most important aspects in the behaviour of CFS built-up sections are identified:
- Different types of instabilities
- Slip in bolted connections
- Initial imperfections
An extension of the DSM towards arbitrary built-up sections was formulated and validated with experimental results and numerical (FEA) results. Practical applicability and design considerations were also considered.
Novel built-up shapes with greatly improved buckling response were proposed for implementation in industry (together with (inter)connecting elements). The applicability, added value, and limitations of FEM-based analysis were identified. It was shown that through stability-aware design, resistances can be notably increased and results scatter can be mitigated.

Iveta Georgieva

Department Civil Engineering

PhD defence 24 April 2015

Supervisor Prof. dr. ir. Lucie Vandewalle

Co-supervisor Prof. dr. ir. Luc Schueremans

Funding KU LEUVEN

E-mail [email protected]

Objective
This doctoral research addresses an increasing demand from the cold-formed steel (CFS) industry for more stable load-bearing elements. The limited stability of standard CFS structural profiles is resolved in the thesis by using primary elements composed of multiple profiles. Such elements are investigated due to their potential to achieve notably higher load-bearing capacity and avoid overall and distortional buckling occurrences that may compromise a structure's integrity.

Research Methodology
- The research contains a fundamental part on the theoretical behaviour of a number of built-up CFS cross-section shapes; analytical and numerical analysis is performed and comparisons with existing design methods are documented.
- An extensive experimental part aims at validation or disproof of the presented theoretical models. The experiments were executed in collaboration with Belgian CFS producers. Practical considerations had to be kept in mind - feasibility in terms of production, transport, storage, (dis-)assembly, and overall cost (including labour).
- Numerical analysis was performed to simulate all experiments that were performed as part of the doctoral thesis – full-scale tests and coupon tests to determine the material properties.

Major publications
I. Georgieva, L. Schueremans, L. Pyl (2012). Composed columns from cold-formed steel Z-profiles. Experiments and code-based predictions of the overall resistance. Engineering Structures, 37 (4), 125-134.
I. Georgieva, L. Schueremans, L. Pyl, L. Vandewalle (2012). Experimental investigation of built-up double-Z members in bending and compression. Thin-Walled Structures, 53 (4), 48-57.
I. Georgieva, L. Schueremans, L. Pyl, L. Vandewalle (2012). Numerical investigation of built-up double-Z members in bending and compression. Thin-Walled Structures, 60 (11), 85-97.

Behaviour and Design of Built-up Structural Elements Composed of Thin-walled Cold-Formed Steel Profiles

48

Mário Henrique Cruz Torres

Department Computer Science

PhD defence 27 April 2015

Supervisor Prof. dr. Tom Holvoet

Funding IWT, iMinds, KU Leuven

E-mail [email protected]

Introduction / Objective
Services computing facilitates the creation of large-scale applications. Services are relatively small and manageable software units with clearly defined interfaces. Applications then consist of orchestrated invocations of services, the so-called composite services. The services on which a composite service relies - called component services - have various quality of service (QoS) characteristics, such as performance, reliability, availability and accuracy. Such quality parameters can be used by a composite service to select component services when called for. Service selection and composition is particularly challenging when the system is large-scale - consisting of thousands of nodes, components and composite services - and dynamic - where QoS varies. The ambition of this thesis is to create a highly resilient system for dynamic service compositions.

Research Methodology
First we defined a decentralized software architecture for dynamic service composition using delegate MAS, which is a coordination mechanism originally targeted at large-scale coordination and control applications. We implemented a prototype of our solutions and deployed it on a computer cluster in order to perform experiments.

We performed experiments that were:
- large and huge in scale (up to tens of thousands of nodes and services);
- assessing the behavior of the system under failing conditions, including drastic failure scenarios.
These experiments show that the approach is effective, efficient, scales linearly, and can cope even with severe failures.

Results & Conclusions
Our delegate MAS approach was capable of creating better-quality compositions, in terms of composition time, at a higher communication cost than a purely reactive approach. We also show that our approach properly scales even with an exponential growth in the size of the network where it is executing. Based on our results, we conclude that our approach can help in the creation of large-scale service systems having thousands of nodes.

Major publicationCruz Torres, Mário Henrique; Holvoet, Tom. Self‐adaptive resilient service composition. Proceedings of the IEEE InternationalConference on Cloud and Autonomic Computing (ICCAC 2014), London, UK, pp. 141‐150.

Decentralized Service Selection and Composition

The graph on the left shows a sample of a service network with 1000 services. Below, we can see that our approach, DMAS, consistently creates service compositions with lower composition time than a reactive approach.

49

Xin Wang

Department Mechanical Engineering

PhD defence 27 April 2015

Supervisor Prof. dr. ir. Jan Swevers

Co-supervisor Prof. dr. ir. Joris De Schutter

Funding OPTEC, LeCoPro, DYSCO

E-mail [email protected]

Introduction
Over the last three decades, significant development of advanced control technologies has enlarged the application domain of mechatronic systems in industry. Due to increasing customer expectations, many mechatronic systems are facing challenging specifications with respect to energy consumption, production speed and positioning accuracy. This has led to the current design challenges of advanced control technologies. Model Predictive Control (MPC) is one of the most promising optimal control strategies because of its ability to take into account system constraints explicitly.

Energy-optimal MPC (EOMPC)
EOMPC aims at LTI mechatronic systems performing energy-optimal point-to-point (PTP) motions within a required motion time. The EOMPC approach is formulated as a two-layer optimization problem such that it is able to make a smart trade-off between on-time arrival and minimal energy consumption of the PTP motion. The developed EOMPC approach is validated experimentally on a badminton robot. The results show that EOMPC guarantees (a toy sketch of the energy-optimal PTP idea is given after the list):

- Energy optimality.
- On-time arrival.
- System constraints.
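Purely as an illustration of the underlying energy-optimal point-to-point idea (not the thesis EOMPC formulation), a toy convex program: minimize the input energy of a discrete double integrator that must reach a target position, at rest, within a fixed motion time. The horizon, limits and model are invented; cvxpy is used only for convenience.

```python
# Illustrative energy-optimal point-to-point motion for a toy double integrator.
import cvxpy as cp
import numpy as np

N, dt, target = 50, 0.02, 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([dt**2 / 2.0, dt])

x = cp.Variable((2, N + 1))          # [position; velocity] trajectory
u = cp.Variable(N)                   # input (force/current)

constraints = [x[:, 0] == 0, x[:, N] == [target, 0.0], cp.abs(u) <= 5.0]
constraints += [x[:, k + 1] == A @ x[:, k] + B * u[k] for k in range(N)]

cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints).solve()
print(u.value[:5])
```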

Offset-free EOMPC
Offset-free EOMPC improves the positioning accuracy of EOMPC. This is realized by adopting a 'disturbance model' strategy: the system state is augmented with disturbance variables. Based on the 'disturbance model', the disturbances are estimated and their effects are cancelled. This approach is experimentally validated on a linear motor test setup with Coulomb friction and cogging disturbances.

Major publication
X. Wang, J. Stoev, G. Pinte and J. Swevers, "Classical and modern methods for time-constrained energy optimal motion – Application to a badminton robot", Mechatronics, Volume 23, Issue 6, September 2013, pages 669-676.

Energy Optimal Model Predictive Control: Applications to point-to-point motions of linear time-invariant mechatronic systems

Method   Missed hits   Energy consumption
EOMPC    2             130 [kJ]
PEOS     7             137 [kJ]
TOMPC    2             238 [kJ]
PTOS     2             233 [kJ]

Method              RMS of error   Energy consumption   Motion time
EOMPC               0.5 mm         24.33                guaranteed
Offset-free EOMPC   4 µm           27.88                guaranteed
Offset-free MPC     4 µm           40.88                Approximated (Q)

50

Jan Verveckken

Department Electrical Engineering (ESAT)

PhD defence 28 April 2015

Supervisor Prof. dr. ir. Johan Driesen

Funding IWT (Agency for Innovation by Science and Technology)

E-mail [email protected]

Introduction / Objective
The desirable increase of switching frequencies in power electronics, promised by wide-bandgap semiconductors, shifts the bottleneck in control towards computational processing power. Sliding mode controllers are known to have low computation demands and high robustness, and their discrete control nature allows them to combine two control levels, external control goal and internal switching state decision, in one controller. We investigate if these controllers are able to reach state-of-the-art performance in power electronics applications.

Research Methodology
Specifically, we investigate sliding mode control of three-phase LCL-filter grid connections and of a series converter of Unified Power Flow Controllers (a toy sketch of the sliding-mode principle follows the results).
We adapt an analytic design method for analog filters to design an LCL filter optimised for total life-cycle cost including projected incurred power losses. We demonstrate it is cheaper than other design methods. We demonstrate the first sliding mode controller for a three-phase LCL-filter grid connection, using a detailed three-phase model including the power-electronic switches.
We derive a dynamic power flow model of a power line controlled by a UPFC, and its instantaneous derivations. Thereby we isolate the key instantaneous system dynamics to develop a Dynamic Inverse Model Controller and a Direct Power Controller. With a detailed simulation of a UPFC equipped with a multi-level inverter, including power-electronic switches, we compare the designed controllers to continuous controllers from literature, in balanced and unbalanced conditions. In a scaled laboratory model with a multi-level inverter, we demonstrate the Direct Power Controller in balanced conditions.

Results & Conclusions
• The LCL-filter designed by our method is significantly cheaper in the projected life cycle.
• The sliding mode controller for a three-phase LCL-filter grid connection is demonstrated in simulations.
• The developed controllers for the series converter of a UPFC perform better than the state-of-the-art in literature in simulation, in balanced and unbalanced conditions.
• Direct Power Control outperforms all other controllers, and is demonstrated in a scaled laboratory model.
• Sliding mode control can achieve state-of-the-art results in power-electronic applications, demonstrated by several types of control problems.
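A toy illustration of the sliding-mode principle only (the gains, plant and time step are invented, and this is not the converter controller of the thesis): the control input switches on the sign of a sliding variable, driving a simple first-order plant to its reference.

```python
# Illustrative sliding-mode control of a first-order toy plant.
import numpy as np

dt, K = 1e-4, 50.0            # time step and switching gain (illustrative)
x, ref = 0.0, 1.0             # state and reference
for _ in range(2000):
    s = x - ref               # sliding variable
    u = -K * np.sign(s)       # discontinuous switching control law
    x += dt * (-2.0 * x + u)  # plant: dx/dt = -2x + u
print(round(x, 3))            # x has been driven close to the reference
```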

Major publication
Direct Power Control of Series Converter of Unified Power-Flow Controller With Three-Level Neutral Point Clamped Converter, Verveckken, J. and Silva, F.A. and Barros, D. and Driesen, J., IEEE Transactions on Power Delivery, vol. 27, issue 3, 2012, June 25.

Sliding Mode Control for Power Electronic Convertors in Transmission and Distribution Grids - Applied to Three-Phase LCL-Filter Grid Coupling and Series Convertor of UPFC

Figure 1: Sliding mode control in αβ of LCL grid coupling. Grid side output currents.

Figure 2: UPFC Controller comparison: Simulation of controlled response to step in active power reference, unbalanced conditions.

51

Ye Tan

Department Mechanical Engineering

PhD defence 28 April 2015

Supervisor Prof. dr. ir. Jean-Pierre Kruth

Co-supervisor Prof. dr. ir. Wim Dewulf

Funding FWO

E-mail [email protected]

Introduction / Objective
Industrial CT, as an emerging technology for dimensional quality control, is increasingly favored by industry due to its capabilities to provide geometric information of inner and hidden structures of complex or assembled parts. However, industrial CT has not been widely accepted as an accurate measurement tool due to its high operator dependency and lack of traceability of its measurement accuracy.
This PhD study investigates various influence factors and their correlations throughout the entire measurement loop of CT dimensional metrology, including the workpiece's properties, the scanning settings and the post-processing parameters. Based on the results of this PhD research, optimization strategies in terms of parameters for scanning and post-processing have been proposed.

Research Methodology
Experimental investigation is the primary research method within this PhD study. Various academic test setups are developed to study different influence factors for CT dimensional metrology; furthermore, workpieces from the automotive industry are also scanned to search for additional influence factors related to industrial applications. In addition to the experimental approach, CT simulations are also performed so that one single influence factor can be well isolated and investigated.

Results & Conclusions

Major publication
Tan Y., Kiekens K., Welkenhuyzen F., Angel J., De Chiffre L., Kruth J.P., Dewulf W., "Simulation-aided investigation of beam hardening induced errors in CT dimensional metrology", Meas. Sci. Technol. 25 064014, doi:10.1088/0957-0233/25/6/064014, 2014.

SCANNING AND POST-PROCESSING PARAMETER OPTIMIZATION FOR CT DIMENSIONAL METROLOGY

Figure 1. Various academic experimental setups and workpieces from the industry

Figure 2. Left: Decision making flow chart for the major CT machine settings. Right: Case-dependent calibration procedures for CT dimensional metrology applications

Based on the experimental and simulation results throughout this PhD research, initial protocols are suggested for “optimizing” the operator dependent scanning parameters and for post-calibration strategy regarding CT dimensional metrology applications.

52

José Oramas Mogrovejo

Department Electrical Engineering (ESAT)

PhD defence 29 April 2015

Supervisor Prof. dr. ir. Tinne Tuytelaars

Co-supervisor Prof. dr. Luc De Raedt

Funding DBOF Research Fund KUL 3E100864, FP7 ERC grant 240530 COGNIMUND

Introduction / Objective
In recent years, contextual information has been successfully used for improving object detection precision by removing false hypotheses. Our objective is to investigate the potential of contextual information for improving object pose estimation performance. In addition, we analyze methods to discover underlying higher-order relations between objects. Finally, we analyze how to exploit object relations to improve object detection recall by retrieving object instances missed after an initial detection step.

Research Methodology
The first part of this thesis focuses on investigating the effect of contextual information to improve object pose estimation. Our first approach exploits pairwise relations between objects within a collective classification setting to estimate the pose of each object. Our second approach focuses on exploiting scene-driven contextual cues for the same task. In the second part of the thesis, we focus on exploiting object relations for improving object detection. We propose a cautious approach that uses the most certain/reliable object hypotheses as source of contextual information. In addition, we propose a Topic Model formulation to discover underlying higher-order relationships between objects. Finally, we propose a method to use relations-based methods to generate object proposals and improve object detection recall.

Contextual reasoning based on object relations

Key publications
Oramas M., J., De Raedt, L., Tuytelaars, T. (2013). Allocentric Pose Estimation. ICCV'13.
Oramas M., J., De Raedt, L., Tuytelaars, T. (2014). Towards Cautious Collective Inference for Object Verification. WACV'14.
Oramas M., J., Tuytelaars, T. (2014). Scene-driven cues for Viewpoint Classification of Elongated Object Classes. BMVC'14.

Context-based Reasoning for Object Detection and Object Pose Estimation

Results & Conclusions
- Relations between objects can be used as a cue to improve object pose estimation performance.
- Cautious inference increases the gains in performance brought by contextual information for object detection.
- The scene can serve as a source of contextual information for the task of object pose estimation.
- Assuming that objects are associated by underlying relationships increases the performance of relations-based methods.

a) Detector and b) Detector + Proposals. (Undetected object instances in red)

Top-view of the distribution of object-centered relations for cars with the same pose (a) and opposite pose (b), respectively.

53

Carolina Varon

Department Electrical Engineering (ESAT)

PhD defence 30 April 2015

Supervisor Prof. dr. ir. Sabine Van Huffel

Co-supervisor Prof. dr. ir. Johan Suykens

E-mail [email protected]

Introduction / Objective
The electrocardiogram (ECG) is a very well-known diagnostic tool and it is among the most preferred tests in clinical practice. Even though several studies in the literature have focused on its analysis, many challenges still need to be tackled before fully relying on an ECG monitoring system. In this context, the main goals of this research are twofold. On the one hand, it aims to develop algorithms for the extraction of informative features from the ECG that can be used for the quantification of cardiac and respiratory activities. On the other hand, it evaluates the application and interpretation of those informative features in sleep and epilepsy research.

Research Methodology
To achieve the main goals of this research, the whole track can be divided into different blocks as indicated in Figure 1. The main challenges of this work are:
- A model selection approach for kernel principal component analysis
- A scoring system to differentiate contaminated from "clean" ECG segments
- Quantification of morphological changes of the ECG signal by means of principal component analysis
- Evaluation of different ECG-derived respiration (EDR) algorithms on real and continuous datasets
- Quantification of cardiorespiratory interactions using only the ECG signal
- Quantification of cardiorespiratory interactions in epilepsy
- Development of seizure detection algorithms based on single-lead ECG
- Development of an algorithm for sleep apnea detection from single-lead ECG

Results & Conclusions

Major publication
Varon C., Caicedo A., Testelmans D., Buyse B., Van Huffel S. (2015). A novel algorithm for the automatic detection of sleep apnea from single-lead ECG. IEEE Transactions on Biomedical Engineering, in press.

Mining the ECG: Algorithms and Applications

Figure 1: Simplified diagram of a monitoring system based on ECG (artifact detection, cardiac activity, respiratory activity, cardiorespiratory interactions, decision making)

A step forward towards the monitoring of epileptic seizures and sleep apnea in a home environment:
- Children suffering from West syndrome and absence epilepsy have a reduced vagal tone during interictal activity.
- In temporal lobe epilepsy, patients only experience autonomic changes during ictal activity.
- Epileptic children have a more fixed heart rate and a reduced cardiorespiratory coupling, which can compromise their defense mechanisms against asphyxia and hypoxia.
- Partial epileptic seizures can be detected with a PPV larger than 80%.
- For generalized seizures, a PPV of 83% was reached, which until now was not achieved by any other algorithm based solely on ECG analysis.
- Novel features allow accuracies of 85% to be achieved for the detection of sleep apnea.

54

Leandro Fernandez

Department Civil Engineering

PhD defence 30 April 2015

Supervisor Prof. dr. ir. Jaak Monbaliu

Co-supervisor Prof. dr. ir. Alessandro Toffoli

Funding FWO

E-mail [email protected]

Introduction / Objective: This research investigates the combined effect of higher order nonlinearity, directional spreading and finite water depth on the statistical properties of surface gravity waves.
Research Methodology:
- Numerical simulations of the sea surface with random amplitudes and phases have been carried out using the higher order spectral method (HOSM) developed by West et al. (1987) to solve the truncated Euler equations of motion. Three up to five orders of the expansion were used. Several directional sea states were investigated, ranging from fairly long crested to short crested wave fields at different relative water depths kh, where k is the wavenumber of the main wave and h the water depth. (An illustrative form of the random-phase initial surface is given below.)
- The phenomenon of modulational instability, known as one of the main mechanisms for the formation of rogue waves, was assessed.
- The simulated data were compared with field measurements of short crested waves in water of finite depth at Lake George (Australia) and with experimental data obtained in the wave basin of Marintek (Norway).
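For illustration only, a standard way to write such a random-amplitude, random-phase initial surface; the discretization and symbols are generic and not necessarily those used in the thesis.

```latex
% Generic random-phase initial condition for the free-surface elevation:
% amplitudes drawn from the prescribed spectrum, phases uniform in [0, 2*pi).
\begin{equation}
  \eta(\mathbf{x}) \;=\; \sum_{n} a_n \cos\!\left(\mathbf{k}_n \cdot \mathbf{x} + \varphi_n\right),
  \qquad
  a_n = \sqrt{2\, S(\mathbf{k}_n)\, \Delta k_x \, \Delta k_y},
  \quad \varphi_n \sim \mathcal{U}[0, 2\pi).
\end{equation}
```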

Results & Conclusions:
1) Maximum elevation variation (Fig 1):
- The modulational instability phenomenon is suppressed for collinear wave fields in relative water depth kh < 1.36 (collinear case in Fig. 1, d, e, f, g, h, i).
- They can still trigger wave modulation when a directional wave field is considered, resulting in a wave elevation growth (directional case in Fig. 1, d, e, f, g, h, i).

Statistics of Directional Wave Fields in Water of Finite Depth

Major publication: L. Fernandez, M. Onorato, J. Monbaliu and A. Toffoli. "Modulational instability and wave amplification in finite water depth", Nat. Hazards Earth Syst. Sci., 14, 705-711, 2014.

Fig. 1.

2) Wave crest height distributions (Fig. 2 and 3):
- A deviation from linear and second order based statistics is observed when a unidirectional wave field in deep water is considered (Fig. 2, a).
- This deviation is suppressed when the directional spreading of the wave field is increased (Fig. 3, a, b and c) and for kh < 1.36 (Fig. 2 and Fig. 3, b and c).

Fig. 2. Fig. 3.

55

Prashant Agrawal

Department Electrical Engineering (ESAT)

PhD defence 30 April 2015

Supervisor Prof. dr. ir. Francky Catthoor

Co-supervisor Prof. dr. ir. Liesbet Van der Perre

Funding Imec

E-mail [email protected]

Introduction / Objective
Systems implementing embedded applications such as wireless communication, multimedia, etc., are expected to continuously push boundaries in terms of energy efficiency, performance, cost and supported functionality. This has led to the emergence of complex heterogeneous Multi-Processor System-on-Chip (MPSoC) based platforms. Design and implementation of these MPSoC platforms are non-trivial, given the exponentially increasing design space that they present, in terms of application, architecture and technology choices. Moreover, in deep-submicron technology nodes, it is now inevitable to directly include the strong impact of the technology on architecture choices. Thus, not only a systematic exploration approach but also a close coupling of different phases of system design, starting from high-level algorithm design to low-level physical design, has become unavoidable. It will allow understanding and analysis of implications of design choices across different design phases, in the face of increasing complexity of applications and architectures, and increasing uncertainty in the underlying technology.

Research Methodology
Application-Architecture Co-Exploration – This thesis has proposed a systematic methodology for application mapping and architecture exploration. The proposed methodology enables an early exploration of the partitioning and assignment (P&A) search space of streaming multi-mode applications "together" with the available platform architecture choices.
Architecture-Technology Co-Exploration – This thesis explores the architecture and interconnect technology implications of fine-grained 3D partitioning for complex MPSoC platforms instantiated for streaming multi-mode applications. The thesis presents a design framework to carry out 2D versus 3D integration evaluations and comparisons.

Results & Conclusions
Application-Architecture Co-Exploration – It has been shown that the proposed methodology achieves energy gains with negligible area overheads by carrying out fine-grained P&A exploration, considering the static and runtime dynamism across and within the application modes. The methodology generates multiple heterogeneous partitions such that the tasks assigned to a partition are well matched in complexity, parallelism, duty cycle and hardware requirements and do not have conflicting requirements. This ensures energy efficiency while minimizing the area overheads.
Architecture-Technology Co-Exploration – A 2-layer 3D-SIC based on memory-on-logic 3D partitioning has been carried out for an MPSoC instantiated for wireless baseband processing. 2D, 3D-SIC Face-to-Back (F2B) and 3D-SIC Face-to-Face (F2F) based integrations have been compared. It has been shown that Cu-Cu bonding based F2F stacking is a better option than TSV/RDL/µbump based F2B stacking, both from an interconnect and a system-level architecture perspective. The impact of variations in system-level architecture parameters, such as the on-chip communication structure, memory hierarchy, application performance constraints, etc., has been shown across 2D and 3D-SIC integration technologies.
Major publications
Milojevic, D., Agrawal, P., Raghavan, P., Van der Plaas, G., Catthoor, F., Van der Perre, L., Velenis, D., Varadarajan, R., Beyne, E. (2015). Ultra-Fine Pitch 3D-Stacked Integrated Circuits: Technology, Design Enablement and Application. Handbook of 3D Integration – Volume 4: 3D Design, Test, and Thermal.


Agrawal, P., Milojevic, D., Raghavan, P., Catthoor, F., Van der Perre, L., Beyne, E., Varadarajan, R. (2014). System Level Comparison of 3D Integration Technologies for Future Mobile MPSoC Platform. IEEE Embedded Systems Letters, 6(4), pp. 85-88, Dec 2014.

Agrawal, P., Raghavan, P., Hartmann, M., Sharma, N., Van der Perre, L., Catthoor, F. (2013). Early Exploration for Platform Architecture Instantiation with Multi-mode Application Partitioning. Proceedings of Design Automation Conference (DAC), pp. 1-8, June 2013.

Application Partitioning and Architecture Instantiation along with Technology Exploration for MPSoCs

56

Milica Milošević
Department Electrical Engineering (ESAT)

PhD defence 04 May 2015

Supervisor Prof. dr. ir. Sabine Van Huffel

Co-supervisor Prof. dr. ir. Bart Vanrumste

E-mail [email protected]

Introduction / Objective
Epilepsy is one of the most common neurological diseases; it manifests in repetitive epileptic seizures as a result of an abnormal, synchronous activity of a large group of neurons. There is no cure for epilepsy and sometimes even medication and other therapies, like surgery, do not control the number of seizures. In that case, long-term (home) monitoring and automatic seizure detection would enable the tracking of the evolution of the disease and improve objective insight in any responses to medical interventions or changes in medical treatment. Especially during the night, supervision is reduced; hence a large number of seizures is missed. In addition, an alarm should be integrated into the automated seizure detection algorithm for severe seizures in order to help the patient during and after the seizure. Frontal lobe and tonic-clonic seizures are accompanied by violent movements which could lead to injuries; there is also a danger of suffocation caused by vomiting or obstructed breathing. Combined video/electroencephalography (EEG) monitoring remains the gold standard for epilepsy monitoring, whereas EEG alone is traditionally used for automated seizure detection in specialized hospitals. However, EEG electrodes have to be attached to the scalp by a trained nurse, and wearing EEG long-term can become uncomfortable, which makes EEG-based home monitoring not feasible. In this thesis, we investigate the application of less intrusive sensors, namely accelerometers (ACM) attached to the wrists and ankles within wrist-bands, and surface electromyography (EMG) registering the muscle activity of the biceps at both arms, for the detection of epileptic seizures. This thesis aims at developing automated seizure detection algorithms using the aforementioned modalities in pediatric patients.

Major publication
Milošević, M., Van de Vel, A., Bonroy, B., Ceulemans, B., Lagae, L., Vanrumste, B., and Van Huffel, S. Detection of epileptic convulsions from accelerometry signals through machine learning approach. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing MLSP (2014), IEEE, pp. 1–6.

Automated detection of epileptic seizures in pediatric patients using accelerometry and surface electromyography

Results & Conclusions
A multimodal approach resulted in a more robust detection of short and non-stereotypical seizures, while the number of false alarms increased significantly compared with the use of the EMG modality alone. This thesis also showed that the choice of the recording system should be made depending on the prevailing pediatric patient-specific seizure characteristics and non-epileptic behavior.


Research Methodology
Machine learning techniques, including feature selection and least-squares support vector machine (LS-SVM) classification, were employed for the detection of tonic-clonic seizures from ACM and EMG signals in a leave-one-patient-out (LOPO) testing loop. In addition, the outputs of the ACM and sEMG-based classifiers were combined using a late integration approach.
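As an illustration of the evaluation setup described above, the following minimal Python sketch runs a leave-one-patient-out loop with late integration of an accelerometry-based and an EMG-based classifier. An ordinary SVM from scikit-learn stands in for the LS-SVM of the thesis; the probability averaging, threshold and all names are illustrative assumptions rather than the thesis implementation.

    # Minimal sketch: LOPO evaluation with late fusion of ACM and EMG classifiers.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.svm import SVC

    def lopo_late_fusion(X_acm, X_emg, y, patient_id):
        """X_*: feature arrays, y: seizure labels, patient_id: group index per sample."""
        fused = np.zeros(len(y))
        for train, test in LeaveOneGroupOut().split(X_acm, y, groups=patient_id):
            clf_acm = SVC(probability=True).fit(X_acm[train], y[train])
            clf_emg = SVC(probability=True).fit(X_emg[train], y[train])
            # late integration: average the two seizure probabilities
            fused[test] = 0.5 * (clf_acm.predict_proba(X_acm[test])[:, 1]
                                 + clf_emg.predict_proba(X_emg[test])[:, 1])
        return fused >= 0.5   # the decision threshold is a free parameter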

57

Atul JAIN
Department Materials Engineering (MTM)

PhD defence 04 May 2015

Supervisor Prof. dr. ir. Stepan Lomov

Co-supervisor Prof. dr. ir. Ignaas Verpoest, Prof. dr. ir. Wim Van Paepegem

Funding IWT Baekeland

E-mail [email protected]

Introduction / Objective
The main goal of this project is to develop, implement, and validate methodologies for the fatigue evaluation of short fiber reinforced composites (SFRC) that are based not only on material tests but on a combination of manufacturing simulation, micromechanical modeling and macroscopic fatigue behavior (Hybrid Multiscale Model).

Research Methodology
A four-step research strategy was used for the thesis:
Step 1: Choose the correct mean field homogenization scheme
Step 2: Micromechanics-based damage model for SFRC
Step 3: Damage at the constituent level is linked to the macroscopic fatigue properties
Step 4: Process integration and validations

Each of the four steps is validated by experimental tests and/or full FE calculations.
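For reference, Step 1 selects a mean field homogenization scheme; the two-phase Mori-Tanaka estimate identified as most appropriate (see Results) can be written in its textbook form (notation assumed here, not taken from the thesis), with angle brackets denoting the orientation/pseudo-grain average:

    \mathbf{C}^{\mathrm{MT}} = \mathbf{C}_m + v_f \left\langle (\mathbf{C}_f-\mathbf{C}_m):\mathbf{A}^{\mathrm{dil}} \right\rangle : \left[ v_m\,\mathbf{I} + v_f \left\langle \mathbf{A}^{\mathrm{dil}} \right\rangle \right]^{-1},
    \qquad
    \mathbf{A}^{\mathrm{dil}} = \left[ \mathbf{I} + \mathbf{S}:\mathbf{C}_m^{-1}:(\mathbf{C}_f-\mathbf{C}_m) \right]^{-1},

where C_m and C_f are the matrix and fiber stiffness tensors, v_m and v_f their volume fractions and S the Eshelby tensor of the inclusion.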

Results & Conclusions

• Mori-Tanaka formulation is found to be the most appropriate mean field homogenization scheme (Fig 1)
• EqBI concept for treating fiber-matrix debonding was developed and validated by full FE calculations (Fig 2)
• Master SN curve approach developed to predict the local SN curves; only one SN curve is needed as input
• Framework for fatigue simulation is developed and validated for the component "Pinocchio" (Fig 3)

Fig 1: Mori-Tanaka formulation predicts the stresses in individual inclusions correctly while PGMT fails

Major publication
Jain, A., Lomov, S.V., Abdin, Y., Verpoest, I., Van Paepegem, W., "Pseudo-grain discretization and full Mori-Tanaka formulation for random heterogenous media: Predictive abilities for stresses in individual inclusion and matrix", Composites Science and Technology, 87(0): p. 86-93.

Hybrid Multi-Scale Modelling Of Damage And Fatigue In Short Fiber Reinforced Composites

Fig 2: FE validation of the EqBI concept is performed by using contact surfaces of varying area
Fig 3: Stress contour and critical areas in Pinocchio

58

Federica Gencarelli

Department Materials Engineering (MTM)

PhD defence 05 May 2015

Supervisor Prof. dr. ir. Marc Heyns

Co-supervisor Prof. dr. Kristiaan Temst

email [email protected]

Introduction / Objectives
• Ge1-xSnx: emerging group IV semiconductor alloys with unique crystalline, optical and electrical properties.
• Several challenges to Ge1-xSnx growth: e.g. Sn equilibrium solid solubility in Ge below 1 at.%.
• Ph.D. work objectives: (i) develop a novel chemical vapor deposition (CVD) Ge1-xSnx growth approach, (ii) investigate the kinetics and the chemical reactions involved in the Ge1-xSnx epitaxial growth mechanism within the proposed CVD approach, (iii) investigate the Ge1-xSnx material properties.

Research Methodology

Major publication
F. Gencarelli, B. Vincent, L. Souriau, O. Richard, W. Vandervorst, R. Loo, M. Caymax, M. Heyns, "Low-temperature Ge and GeSn Chemical Vapor Deposition using Ge2H6", Thin Solid Films, vol. 520, p. 3211, 2012.

Epitaxial growth of GeSn compounds for advanced CMOS and Photonics applications

• Low-temperature (320 °C) atmospheric-pressure CVD process using a pioneering combination of Sn and Ge precursors: SnCl4 and Ge2H6.
• Kinetic study of the Ge1-xSnx growth process.
• Experimental & theoretical (density functional theory, DFT) investigation of the precursor-surface interaction.
• Study of the Ge1-xSnx material properties via different characterization techniques.

Results and conclusions

• Novel CVD approach: > 11% substitutional [Sn].
• Chemical reactions proposed explaining the Ge1-xSnx growth.
• Determination of the critical Ge2H6 partial pressure (PGe2H6,crit) to avoid phase separation.

• Sn atoms preferentially incorporated as α-Sn defects.
• Sn-alloying-induced strain preferentially accommodated via Ge-Sn bond bending.

• Positive deviation from Vegard's law: extraction of a new experimental bowing parameter (see the relation below).

• Main strain relaxation mechanism: misfit dislocations at the Ge1-xSnx/Ge interface.

Ge1-xSnx growth

Ge1-xSnx material properties

• Thick strain-relaxed Ge1-xSnx layers' growth is complicated by localized Sn precipitation and by the development of island features with an amorphous core (localized epitaxial breakdown).
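The bowing parameter mentioned above quantifies the deviation of the Ge1-xSnx lattice parameter from the linear Vegard interpolation. In the usual quadratic form (the sign convention for b is an assumption of this note),

    a_{\mathrm{Ge}_{1-x}\mathrm{Sn}_x}(x) = (1-x)\,a_{\mathrm{Ge}} + x\,a_{\mathrm{Sn}} + b\,x(1-x),

a positive deviation from Vegard's law corresponds to b > 0.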

59

Hasan Farrokhzad
Department Chemical Engineering

PhD defence 07 May 2015

Supervisor Prof. dr. ir. Bart Van der Bruggen

Co-supervisor Prof. dr. ir. Tom Van Gerven

Funding Iran University of Science and Technology

E-mail [email protected]

Introduction / Objective
Cation exchange membranes (CEMs) are widely used in ion separation technologies. The development of these membranes is based on excellence in their electrochemical, thermal and mechanical properties. The main objective of this project was to synthesize novel hybrid and composite CEMs with a high-performance salt removal for desalination (comparable to commercial membranes) and selective cation removal for chlor-alkali (caustic soda) and water softening applications.
Research Methodology
The novel composite CEMs were synthesized by sulfonation of a hydrophobic polymer (PVDF) and using the solution blending method (Fig. 1). The influence of the amount of sulfonated PVDF on CEM properties and performance was evaluated. Polyaniline (PANi) was used for surface modification of CEMs to make a monovalent cation selective membrane. Meanwhile, a composite membrane of PANi/S-PVDF/PVDF was synthesized to enhance bivalent selectivity. The main variable PANi parameters influencing the CEM performance were:
• PANi doping agents
• PANi molecular weight
• The amount of PANi in the composition

Results & Conclusions
• A novel composite CEM was synthesized with a high salt removal, better than the commercial CEM, by using 70% S-PVDF.
• A novel hybrid CEM was synthesized by surface modification of the S-PVDF/PVDF CEM with PANi using a specific doping agent, providing an excellent monovalent selective CEM for the chlor-alkali application.
• A novel composite CEM for water softening was synthesized by optimization of the PANi molecular weight.

Figure 1. Synthesis of S-PVDF polymer and S-PVDF/PVDF CEM

Major publication
H. Farrokhzad, T. Kikhavani, F. Monnaie, S. N. Ashrafizadeh, G. Koeckelberghs, T. Van Gerven, B. Van der Bruggen, Novel composite cation exchange films based on sulfonated PVDF for electromembrane separations, Journal of Membrane Science, 474 (2015) 167–174. [impact factor: 4.908]

Synthesis and modification of novel polymeric cation exchange membranes

60

Juan Van Roy
Department Electrical Engineering (ESAT)

PhD defence 07 May 2015

Supervisor Prof. dr. ir. Johan Driesen

Funding VITO (2y), EIT KIC InnoEnergy

E-mail [email protected]

Introduction / Objective
Electric vehicle (EV) charging in buildings has a non-negligible impact on the in-building and low-voltage (LV) distribution grid. It is widely accepted that the coordination of EV charging may reduce this grid impact, allowing more EVs to be charged through the power system without grid infrastructure investments. The literature mainly focuses on (large-scale) optimization-based coordination for a certain objective, which requires a relatively high EV penetration rate to be beneficial. However, local clustering of EVs in buildings or LV distribution grids might already occur in the near future, requiring local charging solutions. Therefore, this dissertation focuses on the following local EV charging solutions:
• local EV charging strategies (rule-based control) in large buildings (apartment & office building), which require minimal local or EV-internal knowledge, and minimal or no communication in and outside the building;
• the use of DC grids to connect and charge the EVs in buildings.
The objective is to assess how these solutions can already limit the grid impact (mitigation of problems), in order to allow a higher penetration rate of EVs and others, such as heat pumps and PV systems, in the system.
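As a purely illustrative sketch of the kind of rule-based, communication-free charging control studied here (the thresholds, the linear droop shape and all names are assumptions of this sketch, not the strategies developed in the thesis), a charger could derate its set-point using only the locally measured voltage:

    # Minimal rule-based charging sketch: derate power when the local voltage sags.
    def charging_power(v_local_pu, p_max, v_min=0.90, v_full=0.95):
        """Return a charging set-point from the locally measured per-unit voltage."""
        if v_local_pu <= v_min:       # voltage too low: pause charging
            return 0.0
        if v_local_pu >= v_full:      # healthy voltage: charge at full power
            return p_max
        # linear ramp between the two voltage thresholds
        return p_max * (v_local_pu - v_min) / (v_full - v_min)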

Research Methodology
In order to assess the grid impact of EV charging, two simulation tools have been developed:
• A mobility behavior simulation tool that creates realistic driving profiles for individual vehicles in the fleet, based on available statistical data for mobility behavior in Flanders.
• A Modelica library for electrical modeling, which can be used for the integration of different multidisciplinary energy systems in buildings and districts. The following models have been developed within the IDEAS framework: single/three-phase (unbalanced) AC grids and unipolar DC grids, and a battery and EV model.

Results & Conclusions
All EV charging strategies succeed in reducing the grid impact compared to uncoordinated charging: reduction of peak powers (demand and/or injection), voltage deviations and/or voltage unbalance. However, every adaptation to the uncoordinated charging profile may prolong or postpone the charging process, which may negatively impact the user comfort. Nevertheless, the results show that these local EV charging strategies, which do not require any optimization or any communication outside the building, already allow the EV penetration rate in buildings to be increased considerably.
The hybrid AC-DC grid topology interconnects the PV system, the heat pump and the EVs through a common DC bus. The main advantage of using DC grids is the balancing of the AC in-building grid. Both the voltage unbalance and the minimum occurring voltages are positively impacted. Therefore, DC grids allow more EVs to be charged in the building before the EN 50160 regulations regarding the voltage unbalance and deviations are violated.

Major publications
• J. Van Roy, N. Leemput, F. Geth, J. Büscher, R. Salenbien, and J. Driesen, "Electric vehicle charging in an office building microgrid with distributed energy resources," IEEE Trans. Sustain. Energy, vol. 5, no. 4, pp. 1389–1396, Oct. 2014.
• J. Van Roy, N. Leemput, F. Geth, R. Salenbien, J. Büscher, and J. Driesen, "Apartment building electricity system impact of operational electric vehicle charging strategies," IEEE Trans. Sustain. Energy, vol. 5, no. 1, pp. 264–272, Jan. 2014.
• J. Van Roy, B. Verbruggen, and J. Driesen, "Ideas for tomorrow: New tools for integrated building and district modeling," IEEE Power Energy Mag., vol. 11, no. 5, pp. 75–81, Sep. 2013.

Electric Vehicle Charging Integration in Buildings
Local Charging Coordination and DC Grids

IDEAS: Tool for integrated modeling

61

Maria Baka
Department Chemical Engineering

PhD defence 07 May 2015

Supervisor Prof. dr. ir. Jan Van Impe

Co-supervisor Dr. ir. Estefanía Noriega Fernández

Funding FLOF bursaal KU Leuven

E-mail [email protected]

Introduction / Objective
The effect of food (micro)structure on microbiological safety is not yet integrated in predictive models, which are decision supporting tools for risk assessments, HACCP systems and process and product design. The objective of this study was to investigate the effect of food matrix complexity on the growth dynamics of L. monocytogenes, with particular focus on the influence of (i) background microflora, (ii) physicochemical characteristics and (iii) food (micro)structure of Frankfurter sausages, in order to understand the most important factors necessary to be included in a model system for accurate estimation of microbial growth dynamics.
Research Methodology
• Characterisation of Frankfurter sausages for: (i) background microflora, (ii) composition and physicochemical characteristics.
• Development of model systems of variable (micro)structures as represented in Fig. 1, including previous research output.
• Comparison of the growth dynamics of L. monocytogenes in the different model systems by fitting the data with the Baranyi and Roberts model (1994) at 4, 8 and 12°C under vacuum.
Results & Conclusions
In Figure 2, the following results can be observed:
• The fastest growth of L. monocytogenes occurs on canned meat, independently of temperature.
• Liquids and aqueous gels exhibit similar dynamics for L. monocytogenes growth.
• The slowest growth occurs in/on emulsions and gelled emulsions.
• Model systems of this study underestimated growth, possibly due to the different source of proteins than in canned meat.
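For reference, the Baranyi and Roberts (1994) model used for the fits can be written in its usual differential form (standard notation, not specific to this study):

    \frac{dq}{dt} = \mu_{\max}\, q(t), \qquad
    \frac{dN}{dt} = \mu_{\max}\,\frac{q(t)}{1+q(t)}\left(1-\frac{N(t)}{N_{\max}}\right) N(t),

where N is the cell density, q the physiological state of the cells (governing the lag phase), \mu_max the maximum specific growth rate and N_max the maximum population density.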

Major publication
M. Baka, E. Noriega, E. Tsakali, J. Van Impe (2015). Influence of composition and processing of Frankfurter sausages on the growth dynamics of Listeria monocytogenes under vacuum. Food Research International, 70, 94-100.

Influence of food (micro)structure on growth dynamics of Listeria monocytogenes
Application to meat products

Figure 1. Different model systems of variable (micro)structures

Blue: liquids; Red: aqueous gels; Pink: emulsions; Green: gelled emulsions; Black: canned meat

Figure 2. L. monocytogenes growth dynamics in/on the five different model systems at three temperatures.

(4°C) (8°C) (12°C)

62

Carlos Gonzalez de Miguel

Department Electrical Engineering (ESAT)

PhD defence 12 May 2015

Supervisor Prof. dr. ir. Johan Driesen

Funding KIC-Active SubStations

E-mail [email protected]

Introduction / Objective
The increasing penetration of Distributed Generation (DG) in the distribution grid that is nowadays taking place challenges the Network Operators to maintain the reliability indexes achieved in prior years. The contribution of the DG units to the short-circuit currents can cause the mal-operation of devices that are not designed to operate under bi-directional current flows. However, in the isolated-neutral grounding system the bi-directional fault currents do not affect the performance of fault detection devices in case of phase-to-ground faults, the most common fault type.

For this fault type, this earthing system is known as highly reliable because (i) the line voltages remain unchanged during the fault and (ii) the fault currents are low (difficult to detect), with a high probability of becoming self-extinguishing faults. Because of this, the fault location is mostly based on a trial-and-error switching sequence, until the fault is isolated. This procedure is very time-consuming if performed with manually operated switches and sectionalizers.

Towards reducing the outage time, grid investment is required. One of the options is installing directional Fault Passage Indicators (FPI). The technology has already proven to be a cost-effective solution to improve reliability in other grounding systems. Because of the particular features of the isolated-neutral system, directionality is required in any case, with or without DG. Implementing directionality in the conventional way requires installing voltage sensors, and the devices are still susceptible to non-detection of high-impedance faults.

The objective is to propose new algorithms for FPIs to achieve directionality with low use of resources, so that the FPIs are reliable and cost-effective, with improved performance.

Research Methodology
In order to achieve directionality, the phenomena that take place in isolated-neutral grids during faulted conditions, as well as during non-faulted conditions, have been analyzed. Based on the reported phenomena, new algorithms have been developed following the structure of patent documents.

Results & Conclusions
The main contributions of the thesis are three different methods to detect the direction in phase-to-ground faults in isolated-neutral grids. The first two methods do not require the installation of voltage sensors, whereas the third method is designed for small-current fault detection, using zero-sequence voltage and current measurements.
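For background only, the third (zero-sequence based) option can be related to the classical varmetric directional criterion for isolated-neutral networks, sketched below; the sign that indicates a fault on the protected feeder depends on the measurement polarity convention and is an assumption here, and the two sensorless methods of the thesis are not reproduced.

    # Background sketch: zero-sequence (varmetric) directional criterion.
    def ground_fault_direction(v0, i0):
        """v0, i0: complex zero-sequence voltage/current phasors at the feeder head."""
        q0 = (v0 * i0.conjugate()).imag   # zero-sequence reactive power
        # assumed convention: residual current measured from busbar into the feeder
        return "fault on this feeder" if q0 > 0 else "fault elsewhere"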

Major publicationC. Gonzalez, E. Alvarez and M. Garcia, “Method and Apparatus for Detecting a Direction of a Ground Fault in aMultiphase Network, PCT/EP2015/052719, 10.02.2015 .

Directional Fault Passage Indicators for Isolated Neutral Distribution Grids

63

Begül Bilgin
Department Electrical Engineering (ESAT)

PhD defence 13 May 2015

Supervisor Prof. dr. ir. Vincent Rijmen, Prof. dr. Pieter Hartel

Co-supervisor Dr. Svetla Nikova

E-mail [email protected]

Introduction / Objective
Embedded devices are used pervasively in a wide range of applications, some of which require cryptographic algorithms to provide security. However, an attacker can use the physical behavior of the device, such as the instantaneous power consumption during execution, to reveal sensitive information. Threshold implementation (TI) is a countermeasure method used to remedy this problem. Our goal is to develop techniques such that this method can be applied to a wide range of algorithms and can be used to counteract stronger attack scenarios.

Research Methodology
We approached the problem from both theoretical and practical aspects. We focused on the typical building blocks of cryptographic algorithms such as permutations. We performed a mathematical investigation to determine to which of these permutations threshold implementations can be applied and what the implementation requirements are. We also theorized the application of this method with increasing attack resources. We then moved to performing analysis on FPGA implementations of cryptographic algorithms in order to test the practicality of our theory.
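As an illustration of the kind of building block this methodology analyses, the sketch below shows the well-known first-order, three-share threshold implementation of a single AND gate (a GF(2) multiplication). This is standard material rather than a construction taken from the thesis; uniformity of the sharing in general requires extra randomness, which is omitted here.

    # Three-share TI of z = x AND y: each output share omits one share index
    # (non-completeness) and the shares XOR to the unshared product (correctness).
    from itertools import product

    def ti_and(xs, ys):
        x1, x2, x3 = xs
        y1, y2, y3 = ys
        z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # no index-1 shares used
        z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # no index-2 shares used
        z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # no index-3 shares used
        return z1, z2, z3

    # exhaustive correctness check over all sharings of one-bit inputs
    for xs in product((0, 1), repeat=3):
        for ys in product((0, 1), repeat=3):
            z = ti_and(xs, ys)
            assert z[0] ^ z[1] ^ z[2] == (xs[0] ^ xs[1] ^ xs[2]) & (ys[0] ^ ys[1] ^ ys[2])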

Results & Conclusions
We described how all the 3- and 4-bit permutations, some of the cryptographically significant 5- and 6-bit permutations and the 8-bit inversion should be implemented to achieve a given security level. We extended our results to cryptographic algorithms such as AES, KECCAK and KATAN. Our analysis showed that increased security requires more area (in terms of NAND-gate equivalence, GE) and randomness, and in some cases causes slower implementations. However, this performance loss is minor considering the security gained, as shown in the figures. The left figures show the attack results using 50k (top) and 10 million (middle and bottom) traces. The right figures show the maximum correlation coefficient as a function of the number of traces used.
Major publications
Bilgin, B., Gierlichs, B., Nikova, S., Nikov, V., and Rijmen, V. Trade-offs for Threshold Implementations Illustrated on AES. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2015, 13 pages, to appear.
Bilgin, B., Nikova, S., Nikov, V., Rijmen, V., Tokareva, N., and Vitkup, V. Threshold Implementations of Small S-boxes. Cryptography and Communications, 7(1):3–33, 2015.

Threshold Implementations
As Countermeasure Against Higher-Order Differential Power Analysis


1st-order attack on unprotected AES implementation (2.6 kGE)

1st-order attack on 1st-order AES TI (8.2 kGE)

2nd-order attack on 1st-order AES TI (8.2 kGE)

No significant correlation, hence the attack is not applicable

64

Bruke Daniel Jofore
Department Chemical Engineering

PhD defence 13 May 2015

Supervisor Prof. dr. Christian Clasen

Co-supervisor Prof. dr. ir. Paula Moldenaers

Funding FWO

E-mail [email protected]

Introduction / Objective
The dynamics of complex fluids close to solid boundaries is of great interest in different processes from physiology to industry. The details of complex fluid behavior depend on the nature of the underlying microstructure as well as on the inter-particle and particle-wall interactions. The aim of this work is to study the effect of confinement on the rheology and morphology of fluids containing deformable particles. It involves experimental investigation of the flow of model deformable particles under confinements that reach the length scale of the fluid microstructure. This work will provide an insight into the effect of confinement and particle deformability on the dynamics of complex fluid flows and, finally, utilize these experimental observations in the development of scaling arguments for micro-scale flow phenomena.

Research Methodology
The second generation of the flexure-based microgap rheometer (N-FMR) is used to carry out micro-gap rheological experiments. This instrument enables rheological exploration over a gap range of 1 to 400 µm. In this thesis, in order to form complex fluids containing deformable particles, model microgel particles with controllable elasticity, size and morphology are used. Furthermore, suspensions of red blood cells were used to study the effect of confinement in physiological systems.

Results & Conclusions
The rheology of complex fluids is strongly gap dependent, especially when the measuring gap is close to the characteristic micro-structural length scale. The strong gap-dependent slip flows observed in smooth shearing geometries can be described by elastohydrodynamic theory. Confinement induces structural freezing and postpones flow to higher yield stresses.

The microscopic mechanisms that dictate the flow of deformable particles in confinement can be explained in terms of coupled elastic and elastohydrodynamic viscous forces.

Major publication
Jofore, Bruke D., Philipp Erni, Paula Moldenaers, and Christian Clasen. "Rheology of microgels in single particle confinement." Rheologica Acta (2015).

Rheology and morphology of confined fluids containing deformable particles

Universal scaling argument:

Effect of slip: Effect of geometrical confinement:

Build-up of stress with decreasing gap

65

Vincent De Smet
Department Electrical Engineering (ESAT)

PhD defence 13 May 2015

Supervisor Prof. dr. ir. Luc Van Gool

Funding IWT

iMinds

E-mail [email protected]

Introduction / Objective
This doctoral thesis deals with the enhancement of digital images by increasing their resolution, a field commonly referred to as super-resolution. Whether an image or a video is being used in common multimedia channels like television, printed media and the internet, or in scientific research domains such as computer vision, a high-resolution image is almost always preferable to a low-resolution image. Our goal in this thesis is to introduce novel single-image super-resolution methods that improve over current methods in terms of execution speed and output quality.

Research Methodology
• To improve the execution speed we formulate the sparse super-resolution problem as a Tikhonov regularization, which has a closed-form solution. We can then use this to calculate projection matrices offline and store them, so that at run-time we can apply the stored projections very efficiently. We call this method ANR (Anchored Neighborhood Regression).
• We then propose using raw image patches to calculate the projection matrices rather than dictionary atoms. The execution speed remains the same as ANR but the output quality improves significantly. We named this method A+ (Adjusted Anchored Regression).
• We improve the output quality further by using semantic information known about the scene from automatic detection or manual segmentation and creating specialized training dictionaries.
• We also propose a generalized framework for super-resolution and image denoising that allows nonuniform image patches to be used. We make this possible within realistic time scales by using integral images to calculate the ideal patch size and shape.
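A minimal NumPy sketch of the closed-form (Tikhonov/ridge) regression behind ANR and A+ as described above: for every anchor, a projection matrix is precomputed offline from low-resolution (Nl) and high-resolution (Nh) neighbourhood samples and simply applied at run time. The shapes, anchor-selection rule and regularisation value are illustrative assumptions.

    import numpy as np

    def anchored_projection(Nl, Nh, lam=0.1):
        """Nl: (d_lr, k) LR neighbourhood, Nh: (d_hr, k) HR neighbourhood."""
        # closed-form ridge regression: P = Nh (Nl^T Nl + lam I)^-1 Nl^T
        k = Nl.shape[1]
        return Nh @ np.linalg.solve(Nl.T @ Nl + lam * np.eye(k), Nl.T)

    def super_resolve_feature(y, anchors, projections):
        """Pick the most correlated anchor and apply its stored projection to y."""
        idx = int(np.argmax(anchors.T @ y))   # anchors: (d_lr, n) unit-norm atoms
        return projections[idx] @ y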

Results & Conclusions
• ANR improves execution speed 10x-100x while retaining the output quality of other state-of-the-art methods.
• When keeping the same speed as ANR, the output PSNR of A+ improves 0.2 - 0.7 dB over ANR.
• Adding semantic information improves some semantic classes more than others, but almost always improves overall results.
• Allowing nonuniform patches improves average PSNR 0.6 dB for denoising and 0.2 dB for super-resolution.
• We also show that super-resolution can be useful for minimally invasive surgery and forensic image restoration.

Major publications
De Smet, V., Namboodiri, V. and Van Gool, L. "Nonuniform image patch exemplars for low level vision." Applications of Computer Vision (WACV), 2013 IEEE Workshop on. IEEE, 2013.
Timofte, R., De Smet, V. and Van Gool, L. "Anchored neighborhood regression for fast example-based super-resolution." Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.

Learned Regressors and Semantic Priors for Efficient Patch-Based Super-Resolution

66

Katrien Van Nimmen
Department Civil Engineering

PhD defence 19 May 2015

Supervisor Prof. dr. ir. Guido De Roeck

Supervisor Prof. dr. ir. Peter Van den Broeck

Supervisor Prof. dr. ir. Geert Lombaert

Funding Agency for Innovation by Science and Technology

Introduction / Objective
For footbridges, human-induced vibrations are a matter of growing concern, often constituting the critical design requirement. Currently, designers are forced to rely on what are assumed to be 'conservative' equivalent load models, upscaled from single-person force measurements. The concerns of vibration comfort and safety are strengthened by unexplored human-structure interaction (HSI) phenomena. This work addresses the experimental identification and analytical modelling of crowd-induced loading with a specific focus on the vertical component and corresponding HSI phenomena.
Research Methodology
An extensive experimental study is performed to identify the relevant dynamic properties of the footbridge and the human body. In addition, a methodology is developed for the ambulatory characterisation of the walking behaviour (figure 2), providing an essential input for the simulation and verification of the human-induced loads. A comprehensive parametric study is performed to investigate the mechanical interaction between the crowd and the supporting structure. The numerical findings are validated by means of comprehensive full-scale experimental observations. A numerical model for pedestrian excitation including HSI is proposed (figure 3). The impact of HSI on the structural response is evaluated for various pedestrian densities and footbridge parameters.
Results & Conclusions
It is found that HSI effects are primarily determined by the natural frequency of the footbridge and the crowd-to-structure mass ratio. The most significant effect of HSI is in the effective damping ratio of the coupled system, which is much higher than the inherent structural damping. It is concluded that the mechanical interaction with the crowd is relevant for the vertical low-frequency dynamic behaviour of footbridges. Moreover, the associated reduction in structural response is sufficiently large for consideration in design. Design procedures which disregard HSI are believed to lead to over-conservative designs.
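A generic single-mode sketch (textbook form, not the crowd model developed in this work) of why the coupled system shows extra effective damping: the crowd is lumped into a mass-spring-damper attached to the structural mode,

    m_s\,\ddot{u}_s + c_s\,\dot{u}_s + k_s\,u_s = F(t) + c_h(\dot{u}_h-\dot{u}_s) + k_h(u_h-u_s),
    \qquad
    m_h\,\ddot{u}_h + c_h(\dot{u}_h-\dot{u}_s) + k_h(u_h-u_s) = 0,

where the subscripts s and h denote the structural mode and the lumped human body; the added damping grows with the mass ratio m_h/m_s, in line with the findings above.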

Figure 1: The Eeklo footbridge subject to human-induced vibrations.

Major publication
Van Nimmen, K. and Lombaert, G. and Jonkers, I. and De Roeck, G. and Van den Broeck, P., Characterisation of walking loads by 3D inertial motion tracking, Journal of Sound and Vibration 333(2), 5212-5226, (2014).

Numerical and experimental study of human-induced vibrations of footbridges

Figure 2: Characterisation of human-induced loading by 3D inertial motion tracking.

Figure 3: The developed moving crowd model

67

Bart Ons
Department Electrical Engineering (ESAT)

PhD defence 20 May 2015

Supervisor Prof. Hugo Van hamme

Co-supervisor Dr. Jort Florent Gemmeke

Funding IWT-SBO 100049

E-mail [email protected]

Introduction / Objective
In speech-enabled command-and-control applications, the spoken commands are usually restricted to a predefined list of phrases and grammars. These conventions work well as long as the system does not have to stray too far from the training material and the conditions considered by the designer. Speech technology would benefit from training during usage, thus learning the vocalizations and the expressions of the user. Designing a vocal user interface (VUI) model from this developmental perspective is not a trivial problem. The VUI should learn to understand speech from learning examples. A learning example consists of two sources of information: the command spoken by the user and the demonstration of the commanded action. We aim at learning to understand a command from a few incrementally demonstrated learning examples (one command at a time) without phonetic transcriptions or segmentation annotations. We also aim at widening accessibility to users with non-standard speech such as dysarthric speech.
Research Methodology
To this end, we introduce and adopt different procedures in our VUI model that learns from a few learning examples. The followed approach links the acoustic patterns (embedded in the spoken commands) to concepts (referring to device actions) by using joint non-negative matrix factorization (NMF). The method represents the data by its recurrent acoustic and semantic patterns and the incidence of these patterns in the data. Recurrent patterns are more easily spotted by batch learning in which all data is available at once. More difficult is to learn these patterns incrementally from piecewise presented data in epochs. Besides batch learning, we develop procedures for incremental and adaptive learning by exploiting maximum a posteriori (MAP) estimation and implementing forgetting factors.
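A minimal sketch of the joint NMF idea described above, assuming that acoustic features and semantic (action) labels are simply stacked per utterance and factorised with standard multiplicative updates for the Kullback-Leibler divergence; the sizes, names and stacking choice are assumptions of this sketch, not the thesis implementation.

    import numpy as np

    def joint_nmf(V_acoustic, V_semantic, rank=20, n_iter=200, eps=1e-9):
        """Factorise stacked acoustic+semantic data as V ~ W H (all non-negative)."""
        V = np.vstack([V_acoustic, V_semantic])       # (d_ac + d_sem, n_utterances)
        d, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((d, rank)) + eps               # recurring patterns
        H = rng.random((rank, n)) + eps               # pattern activations
        for _ in range(n_iter):                       # Lee-Seung KL updates
            W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1) + eps)
            H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        return W, H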

Results & Conclusions
The learning curves are an assessment of the quality of learning (accuracy, Y-axis) as a function of the number of learning examples (X-axis). We analyze the learning curves through numerous experiments in realistic learning scenarios implemented on a computer.

Major publication
B. Ons, J. F. Gemmeke, H. Van hamme (2014). Fast vocabulary acquisition in an NMF-based self-learning vocal user interface. Computer Speech & Language, 28(4), 997-1017. http://dx.doi.org/10.1016/j.csl.2014.03.004

The Self-taught Speech Interface

Figure 2. Here, we used the Domotica-3 corpus containing dysarthric speech. Incremental learning (pink line) is a feasible alternative to former batch learning procedures. It is also fully adaptive.

Figure 2 axes: F-score (30 to 90) versus number of utterances (0 to 150), shown for 10 and 27 commands. Legend: incremental learning (adaptive speaker-dependent GMM, adaptive speaker-dependent NMF) and batch learning (speaker-dependent codebook, speaker-dependent NMF).

Figure 1. The full line is the final result on the Acorns corpus. Improvements are obtained by speaker-dependent training, soft clustering and empirical selection of the codebook sizes. The dashed line is the first obtained result on the same data.

Figure 1 axes: accuracy (50 to 100) versus number of learning examples (50 to more than 1750).

68

Christos Trompoukis
Department Electrical Engineering (ESAT)

PhD defence 20 May 2015

Supervisor Prof. dr. ir. Jozef Poortmans

Co-supervisor Prof. dr. ir. Robert Pierre Mertens

E-mail [email protected]

Introduction / Objective
In order to tackle the problem of incomplete light absorption of thin crystalline silicon (c-Si) solar cells, advanced light trapping concepts based on photonic nanostructures have been proposed. However, the limited number of photonic-assisted c-Si solar cells is an indication of how challenging their fabrication is, due to a trade-off between their optical and electrical properties. The objective of this thesis is to fabricate and integrate 2D photonic nanostructures in thin c-Si slabs so as to increase the light absorption of thin c-Si slabs without compromising the material's electrical properties.

Research Methodology
In this thesis we demonstrated the fabrication of 2D photonic nanostructures by nanoimprint lithography and dry plasma etching and their integration in thin c-Si solar cells. A significant absorption enhancement resulted in an increase in the energy conversion efficiency of the photonic-assisted thin solar cells (Figure 1) [1]. However, silicon etching by dry plasma etching caused a decrease in the material quality, while the resulting topographies posed limitations on the conformality of subsequently deposited thin films. In order to avoid these issues, we developed 2D periodic inverted nanopyramids fabricated by nanoimprint lithography and wet chemical anisotropic etching as an alternative [2], resulting in:
i) low surface recombination velocity and
ii) good contacting properties

Results & Conclusions
2D photonic nanostructures were fabricated by developing and fine-tuning two lithography (nanoimprint and hole mask colloidal) and two etching (dry plasma and wet chemical) techniques. We achieved the fabrication of nanopatterns with topographies which range from periodic to random nanostructures and from inverted nanopyramids to parabolic hole profiles (Figure 2). The source of the material degradation seen after dry plasma etching was identified in the presence of:
i) interface states and a high density of dangling bonds due to the surface roughness
ii) sub-surface defects
resulting in high surface recombination velocities and low carrier lifetimes. The inverted nanopyramids developed as an alternative offer the potential to be successfully integrated in a high-efficiency solar cell.

Major publication
[1] C. Trompoukis et al., Appl. Phys. Lett. 101 (2012) 103901.
[2] C. Trompoukis et al., Progr. Photovolt. Res. Appl. 23 (6), 734-742 (2014), DOI: 10.1002/pip.2489.
[3] C. Trompoukis et al., Phys. Status Solidi (a) (2014), DOI: 10.1002/pssa.201431180.

Photonic nanostructures for advanced light trapping in thin silicon solar cells

Figure 1. 2D periodic nanostructures integrated in a 1 um thin c-Si solar cell, enhancing its efficiency from 4.4% to 4.8% due to the better light trapping.

Figure 2. An overview of the fabrication possibilities of 2D photonic nanostructures studied in this thesis.

69

Van Nieuwenhuyse Anneleen

Department Electrical Engineering (ESAT)

PhD defence 26 May 2015

Supervisor Prof. dr. ir. Nauwelaers Bart

Co-supervisor Prof. dr. ir. De Strycker Lieven

E-mail [email protected]

Introduction / Objective
This research aimed at localizing transmitting objects in an indoor setup where the occurrence of signal reflections may disturb the measurements. An object is localized using the Angle of Arrival (AoA) technology performed on multiple anchor nodes. The three major tracks of this research are:
• The development of a linear phased antenna array, operating at 2.435 GHz, with a limited number of antennas and using off-the-shelf components.
• Defining the resolution of the system: how far must two sources be apart to detect both of them, and what are the influences of the design parameters?
• Determining which localization errors are common for this system for different setups and using different angle of arrival detection algorithms. Theoretical simulations are verified with practical data.

Research Methodology
• Design of a practical antenna array with four antennas, based on I/Q demodulation to detect incident angles using the AoA algorithms Beamscan, MUSIC and ESPRIT (a generic MUSIC sketch is given below).
• The resulting main beam of the beampattern has a -3 dB beamwidth varying with the direction. Closer sources are indistinguishable and represent the resolution. Theoretical beamwidths are used in simulations to predict the resolutions, verified with practical beamwidths.
• The measured incident angle differs from the true incident angle, which leads to localization errors. A simulation tool predicts the localization errors, verified in practical setups: an anechoic room and an empty room with and without shielding.
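A generic NumPy sketch of the MUSIC estimator referenced above, for a uniform linear array with half-wavelength spacing; the snapshot model, spacing and angle grid are assumptions, and Beamscan and ESPRIT are not reproduced here.

    import numpy as np

    def music_spectrum(X, n_sources, d_over_lambda=0.5):
        """X: (n_antennas, n_snapshots) complex baseband samples."""
        m = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
        _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
        En = vecs[:, : m - n_sources]                # noise subspace
        angles = np.linspace(-90.0, 90.0, 361)
        p = np.empty(angles.shape)
        for i, a in enumerate(angles):
            phase = 2j * np.pi * d_over_lambda * np.arange(m) * np.sin(np.deg2rad(a))
            sv = np.exp(phase)                       # ULA steering vector
            p[i] = 1.0 / np.real(sv.conj() @ En @ En.conj().T @ sv)
        return angles, p                             # spectrum peaks give the AoAs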

Results & Conclusions
Resolution:
• Better with extra antennas / extra anchors
• Linearly related to the room area
• Definition of a reference value

Major publication
A. Van Nieuwenhuyse, L. De Strycker, N. Stevens, J.-P. Goemaere, and B. Nauwelaers. Analysis of the Realistic Resolution with Angle of Arrival for Indoor Positioning. International Journal of Handheld Computing Research, 4(2):1–16, 2013.

Feasibility of Indoor Localization using Angle of Arrival with Low Complexity Hardware

Beamscan ESPRIT MUSIC

Anechoic Room 29 cm 35 cm 29 cm

Empty Room

1 angle 95 cm 91 cm 120 cm

2 angles - 45 cm 51 cm

Empty room with shielding

1 angle 54 cm 48 cm 53 cm

2 angles - 36 cm 34 cm

Localization Errors • Match between theoretical and practical results

70

Emre Yılmaz

Department Electrical Engineering (ESAT)

PhD defence 26 May 2015

Supervisor Prof. dr. ir. Hugo Van hamme

Co-supervisor Prof. dr. ir. Dirk Van Compernolle

E-mail [email protected]

Introduction / Objective
The main objective of this thesis is to investigate the feasibility of obtaining a noise robust exemplar matching (N-REM) system by combining two data-driven acoustic modeling approaches, exemplar matching and exemplar-based sparse representations. Such a system can be achieved by replacing the fixed-length exemplars of the exemplar-based sparse representations with the exemplars of the traditional exemplar matching. In this way, we create an exemplar matching technique that is intrinsically noise robust. This thesis investigates the recognition performance of the proposed technique under various noise scenarios and describes a time warping scheme and several data selection techniques.

Research Methodology
The N-REM framework models noisy speech mixtures as a weighted sum of non-negative speech and noise exemplars. The speech exemplars are organized in separate dictionaries based on the associated speech unit and duration to have a more accurate model for each speech unit in the feature space. The exemplar weights are obtained by solving a convex optimization problem. After finding the exemplar weights, the reconstruction error (RE) provided by each dictionary is calculated and the recognition output is found by applying dynamic programming to find the dictionaries yielding the minimum RE.
Results & Conclusions
The performance of the proposed system is compared with the conventional GMM-HMM and exemplar-based sparse representation recognizers. The first exemplar-based sparse representation technique, sparse classification (SC), infers state likelihood estimates for an HMM system from the exemplar weights and performs a modified Viterbi decoding to find the most likely state sequence. The second technique, feature enhancement (FE), reconstructs the speech component using the speech exemplars and their weights and recognizes the enhanced features using a conventional GMM-HMM recognizer trained either on original or enhanced training data. The N-REM technique has provided impressive results at lower SNR levels on small vocabulary noisy recognition tasks compared to the other systems.
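In a standard exemplar-based sparse representation form (the exact divergence and sparsity penalty are assumptions of this note), the noisy-speech model referred to in the methodology can be written as

    \mathbf{y} \approx \mathbf{A}_s\,\mathbf{x}_s + \mathbf{A}_n\,\mathbf{x}_n,\qquad \mathbf{x}_s,\mathbf{x}_n \ge 0,\qquad
    \min_{\mathbf{x}\ge 0}\; d\!\left(\mathbf{y}\,\middle\|\,[\mathbf{A}_s\;\mathbf{A}_n]\,\mathbf{x}\right) + \lambda\,\lVert\mathbf{x}\rVert_1,

with A_s and A_n the speech and noise exemplar dictionaries and x the non-negative exemplar weights; the per-dictionary reconstruction error then drives the dynamic-programming decoding.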

Major publication
E. Yılmaz, J. F. Gemmeke, H. Van hamme (2014). Noise Robust Exemplar Matching Using Sparse Representations of Speech. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22 (8), 1306-1319.

Noise Robust Exemplar Matching for Speech Recognition and Enhancement

The extension of the N-REM technique to larger vocabulary tasks remains future work. Combining the proposed technique with other statistical acoustic models is expected to benefit from the noise robustness of exemplar-based acoustic modeling and the better discrimination of the statistical models, providing improved recognition performance at all SNR levels.

71

72

Research Methodology
We modeled end-to-end encryption for OSNs using cryptographic building blocks to develop different private sharing schemes, such as symmetric key cryptography, broadcast encryption, identity-based encryption, and secret sharing for the key management. In addition, we modeled undetectable communication and subsequently designed a general covert information scheme delivering provable undetectability. Finally, taking advantage of the users' friendship connections in the OSN, we developed a system for browsing OSNs anonymously, while taking advantage of the high-availability storage and communication tools from modern OSNs. In this way we enforce privacy as content confidentiality for multiple-recipient and group scenarios, such that OSN providers are kept oblivious of the shared content and its intended recipients, as illustrated in the following figure.

Figure: Alice publishes content m, end-to-end encrypted as c, to the OSN; only her friends (R_Alice) can retrieve and decrypt it.

Filipe BEATO

Department Electrical Engineering (ESAT)

PhD defence 27 May 2015

Supervisor Prof. dr. ir. Bart Preneel

Funding Fundação para Ciência e Tecnologia (FCT)

E-mail [email protected]

Introduction / Objective
Online social networks (OSNs) have taken the world by storm, boasting users in the hundreds of millions. At the same time, OSNs create treasure troves of sensitive information, collecting and processing large amounts of data about the users and their activities, leading to several privacy concerns. Whilst service providers try to mitigate this by restricting access, the information published online is persistent and quickly spread. Currently, there are a limited number of mechanisms that allow individuals to enforce their own privacy controls over information uploaded to be shared among OSNs. In this thesis, we propose privacy-enhancing solutions that provide users with more control over the shared content on OSNs, while enforcing privacy by means of practical and efficient cryptographic primitives.

Major publication
Beato, F., Kohlweiss, M., and Wouters, K., Scramble! your social network data. In PETS 2011 (Jul. 2011), S. Fischer-Hübner and N. Hopper, Eds., vol. 6794 of LNCS, Springer, pp. 211–225.
Beato, F., Conti, M., Preneel, B., and Vettore, D., Virtual friendship: Hiding interactions on online social networks. In IEEE CNS 2014 (Oct. 2014), Y. Chen and R. Poovendran, Eds., IEEE, pp. 328–336.

Private Information Sharing in Online Communities

Results & Conclusions
Summarizing, we proposed the following, along with a practical evaluation achieving a limited computation overhead.

• We proposed a collaborative joint protocol based on secret sharing that achieves confidentiality and allows collaborative joint access control definitions for OSNs.

• We designed privacy-enhancing schemes for privately sharing information among multiple recipients on OSNs based on cryptographic primitives, that keep any user oblivious of the content and the identity of the intended recipients.

• We modeled undetectability in the context of OSNs and suggested a general covert sharing scheme achieving undetectable communication.

• We also devised a system that allows users to browse OSNs while keeping their traces anonymous towards the provider, by relying on friendship connections.
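For background, one of the key-management primitives listed in the methodology, secret sharing, is sketched below in its textbook Shamir form (the prime, encoding and threshold handling are illustrative; this is not the collaborative protocol proposed in the thesis).

    # Textbook k-of-n Shamir secret sharing over a prime field (Python 3.8+).
    import random

    P = 2**127 - 1   # field modulus (an illustrative Mersenne prime)

    def share(secret, k, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(points):
        secret = 0
        for i, (xi, yi) in enumerate(points):        # Lagrange interpolation at 0
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret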

73

Jan Knopp

Department Electrical Engineering (ESAT)

PhD defence 27 May 2015

Supervisor Prof. dr. ir. Luc Van Gool

Funding 3D Coform, FWO, ERC Cognimund

E-mail [email protected]

Introduction / Objective
We focus on these three points: i) automatic categorization of previously unseen objects; ii) retrieval of a 3D query from a large database; iii) detecting, segmenting and gleaning objects in 3D scenes.

Proposed method
We extend the popular 2D SURF feature detector/descriptor to 3D and take advantage of representing objects by a set of local features associated with the geometry and relative position to the center. Using this, we show state-of-the-art results in classification and we also introduce new constraints that help in retrieval. We used our previous findings and combined them for joint detection and segmentation. Completion was achieved by investigating the power of deep learning (especially Restricted Boltzmann Machines) in 3D.

Shape search results on archaeological data. Given a query shape, our algorithm is capable of retrieving relevant shapes from the large dataset. Shape search was tested on the real-life dataset from museums (shown above) as well as on the synthetic benchmarks.

Major publication
J. Knopp, M. Prasad, G. Willems, R. Timofte, L. Van Gool. Hough Transform and 3D SURF for robust three dimensional classification. In Proceedings of the IEEE European Conference on Computer Vision, 2010.

Large-scale Classification and Retrieval of 3D Shapes

Shape completion. Object structure is automatically learnt to complete extremely occluded objects. The missing regions are highlighted by cubes in column 1 and are completed in voxel representation in column 2 (the meshed result is also plotted in column 3).

Segmentation and labeling on Ottawa dataset. Each color corresponds to a class. Points detected as background are not shown.

(a) Training data (b) 3D ISM (c) Classification

Illustration of the 3D ISM model. (a) Features are highlighted by the color points that represent different visual words. (b) For each visual word, we store its relative position to the object center. (c) In the test part, visual words cast votes where they expect the center of the object.


74

Marco Patrignani
Department Computer Science

PhD defence 27 May 2015

Supervisor Prof. dr. Dave Clarke, Prof. dr. ir. Frank Piessens

Funding FWO

E-mail [email protected]

Introduction / Objective
A compiler is a complex software artefact that, among other things, translates programs written in a source-level language into programs written in a target-level one. To be secure, a compiler must preserve source-level security policies in the target-level programs it generates. This thesis presents a secure compiler from an object-oriented Java-like language to untyped assembly code extended with protected module architectures (PMA), an isolation mechanism of modern processors. Moreover, it studies the behaviour of assembly code extended with PMA by means of fully abstract trace semantics.

Results & Conclusions
As the compiler is proven to be fully abstract, it is proven to be resilient to malicious attackers injecting A+I code. The fully abstract trace semantics for A+I serves as a basis for reasoning and proving properties about A+I code.

Major publications
Patrignani, M., Agten, P., Strackx, R., Jacobs, B., Clarke, D., and Piessens, F. Secure compilation to protected module architectures. ACM Transactions on Programming Languages and Systems (TOPLAS), 2015.
Patrignani, M., and Clarke, D. Fully abstract trace semantics for protected module architectures. In Computer Languages, Systems & Structures, 2015.

The Tome of Secure Compilation: Fully Abstract Compilation to Protected Module Architectures

The prototype implementation of the secure compiler shows that the overhead it introduces is negligible and proportional to PMA (Fides) boundary crossings.

Research Methodology
To reason about the compiler, its source and target language are formalised:
• The source language is called J+E
• The target language is called A+I

The compiler between J+E and A+I is a function [[ ]] that maps programs in J+E to programs in A+I.

To prove that [[ ]] is secure, it is proven to be fully abstract [Abadi '99]:
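In its standard form, the full abstraction property states that compilation preserves and reflects contextual equivalence:

    \forall P_1, P_2.\quad P_1 \simeq_{\mathrm{ctx}} P_2 \;\iff\; [\![P_1]\!] \simeq_{\mathrm{ctx}} [\![P_2]\!]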

To simplify this proof, contextual equivalence at the A+I level is replaced with trace equivalence, after A+I is extended with a fully abstract trace semantics:
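In its standard form, a fully abstract trace semantics lets contextual equivalence of compiled programs be replaced by equality of their trace sets:

    [\![P_1]\!] \simeq_{\mathrm{ctx}} [\![P_2]\!] \;\iff\; \mathrm{Traces}([\![P_1]\!]) = \mathrm{Traces}([\![P_2]\!])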

75

Sen Yan

Department Electrical Engineering (ESAT)

PhD defence 27 May 2015

Supervisor Prof. dr. ir. Guy A. E. Vandenbosch

E-mail [email protected]

Results & Conclusions
• Three types of metamaterial are proposed, i.e. the coated dielectric sphere-based metamaterial with a wide negative refractive index (NRI) band, the encapsulating meta-molecule with high quality factor, and the planar metasurface with huge chirality.
• Several antennas based on metamaterials are designed, including dual-band textile patch antennas based on the CRLH-TL, radial patch antennas operating at zeroth-order mode and negative modes, and low-profile antennas loaded with an AMC plane.

Major publications
1. S. Yan and G.A.E. Vandenbosch. "Compact circular polarizer based on chiral twisted double split-ring resonator," Applied Physics Letters, 102.10 (2013): 103503.
2. S. Yan, P.J. Soh and G.A.E. Vandenbosch. "A wearable dual-band composite right/left-handed (CRLH) waveguide textile antenna for WLAN applications," Electronics Letters, 50.6 (2014): 424-426. (colour feature article)
3. S. Yan, P.J. Soh, and G.A.E. Vandenbosch. "Low-profile dual-band textile antenna with artificial magnetic conductor plane," IEEE Transactions on Antennas and Propagation, 62.12 (2014): 6487-6490.

METAMATERIAL DESIGN AND ITS APPLICATION FOR ANTENNAS

Introduction / Objective
Metamaterials are materials engineered to have properties that have not yet been found in nature. They have received increasing attention due to their unique electromagnetic properties, and are widely used in the design of microwave devices and antennas with novel performances. However, due to the fast development of flexible portable devices, antennas with multiple functions, based on variable structures, are still in urgent demand. This is the motivation of this PhD project. We aim at designing several novel metamaterials, and use them to further improve the performance of antennas. This will give more design freedom and a better performance for wireless communication systems.

Research Methodology
• Metamaterial design:
  • Working principles and circuit models for three types of metamaterials with different functions
  • Full-wave simulation, parameter study, and measurement
  • Physical explanations behind the phenomena
  • Potential applications in antennas and other devices
• Antenna design:
  • Operational theory and design schedule
  • Modeling, simulation, fabrication and measurement
  • Dual-band textile antennas with high front-to-back ratio (FBR) and low specific absorption rate (SAR)
  • Wearable antenna performances on the human body
  • Zeroth-order mode radial patch antennas with arbitrary size, flexible shape, and omnidirectional radiation pattern
  • Low-profile patch antennas integrated with slot dipole based on artificial magnetic conductor (AMC) plane

Fig. 1. Compact circular polarizer based on planar chiral metasurface [1].

Fig. 2. Textile dual-band antenna based on CRLH TL [2].

76

Yi Li
Department Electrical Engineering (ESAT)

PhD defence 27 May 2015

Supervisor Prof. dr. ir. Guido Groeseneken

Co-supervisor Prof. dr. ir. Liesbet Lagae

Funding imec

E-mail [email protected]

Research MethodologyWe proposed a novel device consisting of a solid-statenanopore and an integrated metallic nanocavity, supportingboth ionic and optical readout. We experimentallycharacterized the ionic performance of the nanocavities andnanopore-in-cavity devices, respectively both in darkconditions and upon laser illumination.

Results & Conclusions
We interpreted the characterization results and developed a numerical model to describe the ionic transport and heat transport in our metallic nanopore system. We conclude by presenting the results of DNA transport through these devices without and with laser illumination. We analyzed the light-induced ionic noise and identified an optimal, low-noise device geometry (Figure 1). In addition, for the particular case of a dielectric nanopore embedded in a metallic nanocavity, we observed light-induced switching of the ionic current, which we were able to explain qualitatively by invoking nanobubble generation effects (Figure 2).

Summarizing, the nanopore-in-cavity devices pave the way to simultaneous ionic and optical readout of single biomolecules, which can contribute to the realization of high-resolution optical spectroscopy of single molecules.

Figure 1. Plasmonic enhanced ionic noise of metallic nanopores.

Major publications
(1) Yi Li, Chang Chen, Sarp Kerman, Pieter Neutens, Liesbet Lagae, Guido Groeseneken, Tim Stakenborg, and Pol Van Dorpe. Harnessing Plasmon Induced Ionic Noise in Metallic Nanopores. Nano Lett., 13(4):1724–1729, 2013.
(2) Yi Li, Francesca Nicoli, Chang Chen, Liesbet Lagae, Guido Groeseneken, Tim Stakenborg, Henny W. Zandbergen, Cees Dekker, Pol Van Dorpe, and Magnus P. Jonsson. Photoresistance Switching of Plasmonic Nanopores. Nano Lett., 15(1):776–782, 2015.

Metallic nanopores for single-molecule DNA sensing

Introduction / Objective
In the emerging field of nanopore-based biosensors, DNA and other biomolecules are detected at the single-molecule level by monitoring the resistive blockade that occurs upon threading single molecules through a nanopore. These single-molecule ionic transport blockades contain many biophysical properties, including sequence information and structural and mechanical features of a single biomolecule. The development of nanopore technology could hence enable revealing the DNA sequence or genomic mapping positions, or simply the size/length discrimination of biomolecules. There are, however, still major challenges, related to the molecular specificity and the resolution of the read-out, and to the control of the biomolecule translocation process.

Figure 2. Photoresistance switching of plasmonic nanopores.

77

Introduction / Objective
In current radiotherapy treatment, real-time measurements of the absorbed radiation dose directly in the tumor are seldom performed. The proposed UWB radar will give radiotherapists the ability to measure this absorbed dose, which improves quality assurance. The measurement system relies on the radiation-induced changes in the complex permittivity of organic tissue. The hardware implementation in CMOS is considered for its low-cost properties.

Maarten Strackx
Department Electrical Engineering (ESAT)

PhD defence 01 June 2015

Supervisor Prof. dr. ir. ing. Patrick Reynaert

Co-supervisor Prof. dr. ir. Paul Leroux

Funding SCK•CEN AWM

E-mail [email protected]

Research Methodology
This work investigates the targeted application using:
• Proof-of-concept contactless permittivity measurements.
• Custom UWB antenna design for remote sensing.
• 1D target modeling for hardware specification estimation.
• Hardware implementation in CMOS.

Results & Conclusions
With the UWB radar setup, it is demonstrated that changes in permittivity can be detected using gel dosimeters. Changes in H2O caused by adding C12H22O11 or NaCl were also detected without any contact. For an improved target response, a UWB Vivaldi antenna array with 11 dB gain was developed.
A second main research activity focuses on the hardware implementation level. For the transmitter, a flexible FPGA-based UWB pulse generator was developed, capable of generating pulses of just 670 ps duration, corresponding to a -10 dB bandwidth of 2.8 GHz. On the receiver side, two 40 nm CMOS chips were designed:
• A 5.5 GHz ERBW, 5.8-b T/H with bulk switching and a 50 Ω output driver with only 10 fF of input capacitance.
• A 1.6 GHz ERBW, 4-b SAR ADC with a novel T/H replica feedback technique, reducing the INL and increasing the pulse fidelity factor.

(left) FPGA based UWB pulse generation, (right) 40 nm CMOS SAR ADC with T/H replica feedback.

Major publication
M. Strackx, E. D'Agostino, P. Leroux and P. Reynaert, "Direct RF Subsampling Receivers for Breast Cancer Detection with Impulse-Based UWB Signals," IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 62, no. 2, pp. 144-148, Feb. 2015.

Pulsed UWB radar design for remote sensing

Application measurement setup using lab devices.

Figure labels: transmitter, receiver.

78

Xue Wang
Department Materials Engineering (MTM)

PhD defence 02 June 2015

Supervisor Prof. dr. ir. Bart Blanpain

Co-supervisor Prof. dr. ir. Jan Degrève

Funding FWO G.0433.10N

E-mail [email protected]

Introduction / Objective
Gas bubble-melt interaction plays an important role in non-ferrous and ferrous metallurgical processes. Due to the opacity of the liquid metal and the high-temperature conditions, bubbles can be better visualized in a quasi-two-dimensional Hele-Shaw cell (Fig. 1). Meanwhile, a numerical simulation can overcome the experimental difficulties and provide a partial understanding of the related multiphase phenomena. The main objective of the study is therefore to simulate the two-dimensional bubble dynamics and evaporation quantitatively.

Results & Conclusions
• The model can predict similar bubble behavior (shape, terminal velocity, shape oscillation and path instability).
• The pressure and velocity distribution in the liquid are quantified.
• The effect of gap thickness h on the terminal velocity and drag coefficient is evaluated.
• The effects of the evaporation parameter, shape oscillation, bubble size and temperature on the interface evaporation are discussed.

Fig. 2: Bubbles in experiment and simulation.

Major publicationX. Wang, B. Klaasen, J. Degrève, B. Blanpain, F. Verhaeghe (2014). Experimental and numerical study of buoyancy-driven single bubble dynamics in a vertical Hele-Shaw cell. Physics of Fluids, 26:123303.

Numerical Simulation of Two-Dimensional Bubble Dynamics and Evaporation

Research Methodology
A water model was used because of its similar kinematic viscosity. The bubble dynamics were simulated by a 2D volume-of-fluid method coupled with a continuum surface force model and a wall friction model. By adjusting the viscous resistance values, bubbles in different gap thicknesses h were simulated and validated against experimental results (Fig. 2). An interface mass transfer model was coupled in to simulate the evaporation-induced bubble growth.

Fig. 1: Illustration of a Hele-Shaw cell.

79

Roel De Coninck

Department Mechanical Engineering

PhD defence 04 June 2015

Supervisor Prof. dr. ir. Lieve Helsen

Funding KU Leuven Energieinstituut

E-mail [email protected]

Introduction / Objective
The implementation of model predictive control (MPC) in buildings could enable improved thermal comfort, lower operational costs and lower CO2 emissions. Moreover, such a controller can offer services to the energy market by using the flexibility of the building energy system to shift its loads. Unfortunately, MPC has not yet been applied to many buildings. The main reason is the large implementation effort, in particular for developing the control model.

Research Methodology
The objective of this work is to develop and demonstrate a tool chain for automated deployment of MPC in buildings based on data-driven, grey-box building models. The tool chain serves two purposes in order to facilitate the transition to a low-carbon society:
1. energy efficient building operation and
2. optimal use of building flexibility
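To make the grey-box MPC idea concrete, the sketch below sets up a deliberately simple, hypothetical one-resistance/one-capacitance (1R1C) zone model and solves one receding-horizon step as a linear program. The parameter values, tariff profile and comfort bound are invented for illustration; the actual tool chain relies on identified Modelica-based grey-box models and dedicated optimization software rather than this toy formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical first-order grey-box (1R1C) building model, hourly steps over a 24 h horizon.
dt = 3600.0                      # s
H = 24                           # horizon length (steps)
R, C = 0.005, 1.0e7              # K/W and J/K: assumed identified grey-box parameters
a = 1.0 - dt / (R * C)           # discrete-time decay factor of the zone temperature
b = dt / C                       # temperature rise (K) per W of heating during one step
T0, T_amb = 20.0, 5.0            # initial indoor and (constant) outdoor temperature, degC
T_min, Q_max = 20.0, 5000.0      # comfort lower bound (degC) and heater capacity (W)
price = np.where(np.arange(H) % 24 < 7, 0.10, 0.25)   # assumed night/day tariff, EUR/kWh

# Indoor temperature is affine in the heating powers Q: T[k] = free[k] + M[k, :] @ Q
free = np.array([a**k * T0 + (1 - a**k) * T_amb for k in range(1, H + 1)])
M = np.zeros((H, H))
for k in range(1, H + 1):
    for j in range(k):
        M[k - 1, j] = a**(k - 1 - j) * b

# Minimise energy cost subject to comfort (T[k] >= T_min) and actuator limits (0 <= Q <= Q_max)
cost = price * dt / 3.6e6        # EUR per W held during one step
res = linprog(c=cost, A_ub=-M, b_ub=free - T_min,
              bounds=[(0.0, Q_max)] * H, method="highs")
print("optimal heating plan (kW):", np.round(res.x / 1000.0, 2))
```

Because storage in the thermal capacitance is cheap at night in this toy tariff, the optimizer tends to preheat during low-price hours, which is exactly the load-shifting flexibility quantified in the results below.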

Result: flexibility
A methodology is proposed to quantify the flexibility of a building. The methodology returns both the amount of electricity that can be shifted and the associated costs for the building operator. This information is represented on a cost curve. While for most of the day the studied building can deliver flexibility at a lower cost than the imbalance price in the Belgian power system, there are several hours where the flexibility is more expensive.

Major publicationR. De Coninck, F. Magnusson, J. Åkesson, and L. Helsen, “Toolbox for development and validation of grey-box building models for forecasting and control”, Journal of Building Performance Simulation, 2015, Accepted on 28/04/2015

Grey-Box Based Optimal Control for Thermal Systems in Buildings: Unlocking Energy Efficiency and Flexibility

Result: energy efficiency
The implementation of MPC on a pilot project shows heating cost savings of 30% to 40% compared to a conventional controller.

80

Iris Van Steenwinkel
Department Architecture

PhD defence 10 June 2015

Supervisor Prof. dr. ir. arch Ann Heylighen

Co-supervisor Prof. dr. Chantal Van Audenhove

Funding ERC & Research Fund KU Leuven

E-mail [email protected]

Introduction / Objective
Due to memory loss, most people with dementia are increasingly disorientated in space, time, and identity. The built environment is expected to hold potential for offering support in orientation, but adequate design knowledge is still lacking. This PhD research offers architects insights into experiences of people with dementia, and explores how architecture can support them in orientating.

Research Methodology
A novel approach was developed to inscribe this PhD research in current emancipatory discourses on housing and caring for people with dementia, and to bring the findings closer to the discipline of architecture. The research is built up around three case studies: two private housing settings and one residential care facility. In each case study, ethnographic techniques are combined with an architectural analysis.

Results & Conclusions
The case studies give voice to people with dementia and provide insights into their experiences in a format that allows architects to develop affinity with their perspective.

Points of attention in designing architecture for people with dementia:
• Design strategic places that allow people to be occupied with a daily life activity in a comfortable, more or less active way;
• Include architectural qualities often found in contemporary housing: light, roominess, openness to the exterior;
• Take into account the social dynamics of people living together;
• Articulate proper boundaries and connections between different spatial entities and domains.

Mary's house contains "little worlds", like her armchair in the living room: spaces that are narrow enough to provide a sheltering environment and that offer personal places where Mary has her belongings ready-to-hand.

Major publicationVan Steenwinkel, I., Van Audenhove, C., & Heylighen, A. (2014). Mary’s Little Worlds: Changing Person-Space Relationships When Living With Dementia. Qualitative Health Research, 24(8), 1023–1032.

Offering architects insights into living with dementia

This PhD research could enhance dialogues between architects and their clients, and broaden their view on possible roles of architecture in the daily lives of people with dementia.

81

Thomas Suetens
Department Materials Engineering (MTM)

PhD defence 11 June 2015

Supervisor Prof. dr. ir. Bart Blanpain

Co-supervisor Prof. dr. ir. Karel Van Acker

Funding CR³: Center for resource recovery and recycling

Introduction
When galvanized steel scrap is recycled, zinc-containing Electric Arc Furnace Dust (EAFD) is produced. Since more than 50% of all produced zinc is used in galvanizing, EAFD plays a key role in the life cycle of zinc. In this work we considered and compared different dust treatment technologies. After identifying In-Process Separation as the technology with the highest potential, the underlying thermodynamics and kinetics were examined.

Research Methodology
Three technologies were considered: the Waelz Kiln (reference technology), the Rotary Hearth Furnace (new emerging technology), and In-Process Separation (new concept). In order to compare them objectively, an exergy analysis was performed.

Results & Conclusions
This work led to 4 major results:
 In-Process Separation drastically outperforms the other technologies
 One reaction can hinder Zn recovery: the gas-solid reaction of Zn vapor with iron oxide
 The high-temperature diffusion kinetics were determined (Figure 3)
 A model was developed that can predict the Zn recovery by applying In-Process Separation

Figure 1: EDS mapping of EAFD highlighting Zn (green) and Fe (orange)

Major publication
T. Suetens, B. Klaasen, K. Van Acker, and B. Blanpain (2014). Comparison of electric arc furnace dust treatment technologies using exergy efficiency. Journal of Cleaner Production, 65, 152-167.

The Recovery of Zn and Fe from Electric Arc Furnace Dusts
The Feasibility of In-Process Separation

The second part of the PhD focused mainly on identifying potential risks for the In-Process Separation technology and providing the science to strengthen the concept. The following methods were used (a toy diffusion sketch follows below):
 Electron-microscopic analysis of EAFD (Figure 1)
 Thermodynamic calculations (e.g. FactSage)
 Kinetic reaction experiments (Figure 2)
 MATLAB feasibility model based on the diffusion results
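As a flavour of the diffusion-based feasibility modelling (done in MATLAB in the thesis, with measured diffusion coefficients), the snippet below integrates Fick's second law in one dimension with an explicit finite-difference scheme. The diffusion coefficient, geometry and boundary conditions are hypothetical illustration values, not the measured data of Figure 3.

```python
import numpy as np

# Explicit 1D finite-difference integration of Fick's second law, dc/dt = D * d2c/dx2.
# D, the geometry and the boundary conditions are illustrative assumptions only.
D = 1e-13                        # m^2/s, assumed diffusion coefficient of Zn in the oxide
L = 100e-6                       # m, pellet half-thickness
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D             # respects the explicit stability limit dt <= dx^2 / (2D)

c = np.zeros(nx)                 # normalised Zn concentration profile
c[0] = 1.0                       # constant Zn activity at the gas/solid interface

t_end = 3600.0                   # simulate one hour of exposure
for _ in range(int(t_end / dt)):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = 1.0                   # Dirichlet boundary at the surface
    c[-1] = c[-2]                # zero-flux boundary at the pellet centre

depth = np.argmax(c < 0.01) * dx  # depth where c drops below 1 % of the surface value
print(f"approximate Zn penetration depth after 1 h: {depth * 1e6:.0f} um")
```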

Figure 2: Reacted Fe2O3 pellet. Figure 3: The diffusion coefficients of various elements in magnetite.

82

Sorna Khakzad
Department Civil Engineering

PhD defence 17 June 2015

Supervisor Prof. dr. ir. Koenraad Van Balen

Co-supervisor Prof. dr. ir. Luc Verpoest

E-mail [email protected] or [email protected]

Introduction / Objective
The maritime and coastal cultural landscape is an important part of our cultural resources in coastal areas. Although integrated coastal zone management (ICZM) has theoretically addressed the importance of cultural ecosystems, cultural resources have mostly been overlooked in holistic management plans, resulting in the loss of many benefits of cultural resources, due to the lack of a proper definition and evaluation of coastal cultural heritage. The present research offers new methods for defining and evaluating coastal cultural heritage with the aim of including it in ICZM.

Research Methodology
An interdisciplinary method of investigation is applied in order to achieve the objectives of this research. Learning from experiences in natural resources management, this research applies the theory of Integrative Complexity, which bridges disciplines such as the social, economic and natural sciences, to investigate the specific frontier situation as it emerges in coastal areas between the sea and the land.

Major publication
Khakzad, S. and Van Balen, K. (2012), Complications and Effectiveness of In Situ Preservation Methods for Underwater Cultural Heritage Sites. Journal of Conservation and Management of Archaeological Sites, Vol. 14, 69–78.

Integrated Approach in Management of Coastal Cultural Heritage

Results & Conclusions
The following tools were developed through this research and were tested for the coastal area of Ostend, Belgium:
1. General guidelines to include coastal cultural heritage in ICZM & MSP.
2. Integrative Evaluation Tool.
3. Defining coastal cultural heritage: the coastal cultural middle-ground.

Coastal cultural middle ground: links & connections between heritage & people.

Ostend Coastal Cultural Middle Ground

Integrative Evaluation

83

Stijn Jonckheere
Department Mechanical Engineering

PhD defence 17 June 2015

Supervisor Prof. dr. ir. Wim Desmet

Co-supervisor Prof. dr. ir. Dirk Vandepitte

Funding IWT Vlaanderen

E-mail [email protected]

Introduction / Objective
Over the past years, vibro-acoustic behaviour has become a key design feature of products. This evolution is instigated by growing customer expectations and ever tightening regulations. Moreover, the trend towards lightweight materials increases the need for multilayered damping treatments to keep the vibro-acoustic properties within requirements. To avoid time-consuming physical prototyping, engineers need the tools to develop their products virtually.

Research Methodology
This dissertation aims at the development of user-friendly, highly efficient numerical techniques that allow the modelling and simulation of vibro-acoustic problems with complex, multilayered damping treatments. These should serve a double purpose:
 User-friendly numerical techniques for quick predictions: incorporation of Transfer Matrix (TM) models for damping treatments in vibro-acoustic Wave Based (WB) models (a minimal transfer-matrix sketch is given below)
 Efficient and accurate models for detailed insight: combining the Finite Element Method (FEM) for complex damping treatments with the vibro-acoustic WBM
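The transfer-matrix idea referred to above can be illustrated in a few lines of code: each layer of a flat, infinite multilayer is represented by a 2x2 matrix relating pressure and normal velocity on its two faces, and the matrices are multiplied to obtain the oblique-incidence transmission loss. The sketch below treats every layer as an equivalent fluid with invented properties; it ignores the poroelastic (Biot) behaviour and the WB coupling that the dissertation actually addresses.

```python
import numpy as np

def layer_matrix(omega, theta0, rho, c, d, rho0=1.21, c0=343.0):
    """2x2 transfer matrix of a homogeneous fluid-equivalent layer relating
    (pressure, normal velocity) on both faces, at incidence angle theta0 from air."""
    kx = omega / c0 * np.sin(theta0)              # trace wavenumber, conserved across layers
    kz = np.sqrt((omega / c) ** 2 - kx ** 2 + 0j)  # normal wavenumber inside the layer
    Z = rho * omega / kz                          # normal impedance of the layer waves
    return np.array([[np.cos(kz * d), 1j * Z * np.sin(kz * d)],
                     [1j * np.sin(kz * d) / Z, np.cos(kz * d)]])

def transmission_loss(omega, theta0, layers, rho0=1.21, c0=343.0):
    """Transmission loss of a multilayer (list of (rho, c, d)) between two air half-spaces."""
    T = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        T = T @ layer_matrix(omega, theta0, rho, c, d, rho0, c0)
    kz0 = omega / c0 * np.cos(theta0)
    Z0 = rho0 * omega / kz0                       # normal impedance of the surrounding air
    tau = 2.0 / (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
    return -20 * np.log10(abs(tau))

# Hypothetical two-layer treatment: a heavy septum-like layer + an equivalent-fluid absorber
layers = [(1200.0, 150.0, 0.002), (30.0, 220.0, 0.05)]   # (rho kg/m3, c m/s, d m), assumed
for f in (125, 250, 500, 1000, 2000):
    print(f, "Hz :", round(transmission_loss(2 * np.pi * f, np.deg2rad(40), layers), 1), "dB")
```

The explicit dependence of every layer matrix on the trace wavenumber kx is the angle dependency that the dissertation exploits when coupling such TM models to the Wave Based framework.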

Results & Conclusions
 Extension of the WBM for vibro-acoustic simulation through the inclusion of TMM schemes, and improvement of their use in a WB framework through exploitation of their angle dependency (Figure 1).
 Extension of efficient and flexible hybrid FE-WB schemes for vibro-acoustics with complex damping treatments (Figure 2).

Figure 2 – Improved efficiency of the hybrid FE-WBM for acoustic-poroelastic problems

Major publications
1. S. Jonckheere, E. Deckers, B. Van Genechten, D. Vandepitte, W. Desmet. A direct hybrid Finite Element – Wave Based Method for the steady-state analysis of acoustic cavities with poro-elastic damping layers using the coupled Helmholtz-Biot equations. Computer Methods in Applied Mechanics and Engineering, 263:144–157, 2013.
2. S. Jonckheere, D. Vandepitte, W. Desmet. A Wave Based approach for the dynamic bending analysis of Kirchhoff plates under distributed deterministic and random excitation. Computers & Structures, 156:42–57, 2015.

Wave based and hybrid methodologies for vibro-acoustic simulation with complex damping treatments

Figure 1 – Improved accuracy of TM models in a WB framework through exploitation of angle dependency

84

Jef Maerien
Department Computer Science

PhD defence 19 June 2015

Supervisor Prof. dr. ir. Wouter Joosen

Funding Agency for Innovation by Science and Technology

E-mail [email protected]

Introduction
Networked embedded systems are slowly becoming ubiquitous. Everything from the lights in our buildings and the locks on the doors to the coffee machines in the kitchen will be equipped with tiny embedded computers. Often we want to share these devices with others, which clearly makes security a major issue. In these shared embedded environments we see two major questions:
1) How can the different stakeholders express their security requirements for these networked embedded systems?
2) What is the minimal size of an embedded security framework that enables this sharing in a secure fashion?

Research Methodology
To answer these questions, we identified the necessary set of interactions our framework must support, based on a threat model. Then, for each of these interactions, we:
• Developed a set of abstractions, identifying and modelling the information necessary to capture the requirements of all relevant stakeholders.
• Designed a protocol or system securing that interaction.
• Implemented a prototype and validated the approach.

Results & Conclusions
We validated this work further by integrating the different systems in one large integrated framework for securing the entire lifecycle of networked embedded systems. Next, we used this framework to build a secure smart office environment. In this smart office, embedded nodes continuously monitor temperature, light, and motion, and provide access control to the cupboard containing the valuable coffee pads for the local smart coffee machine.
This validation shows that even resource-constrained devices can support the necessary infrastructure to enable secure sharing of embedded services. Additionally, this work has shown the necessity of having good management and security abstractions to significantly decrease the complexity and effort required to manage these large deployments of shared networked embedded systems.

Major publication

Jef Maerien, Sam Michiels, Danny Hughes, Christophe Huygens, and Wouter Joosen. SecLooCI: A comprehensive security middleware architecture for shared wireless sensor networks. Ad Hoc Networks, Volume 25, Part A, February 2015, Pages 141-169.

A Secure Framework for Shared Networked Embedded Systems

Smart office environment

Software architecture of the security framework

Overview of the different stakeholder roles

85

Sareh Rezaei Hosseinabadi
Department Chemical Engineering

PhD defence 12 June 2015

Supervisor Prof. dr. ir. Bart Van der Bruggen

Co-supervisor Dr. Anita Buekenhoudt

Funding Flemish Government agency for Innovation by Science and Technology (IWT) (IWT 110019).

E-mail [email protected]

Introduction / Objective
 Collect a matrix of experimental results on the new functionalized ceramic membranes.
 Explore the full application potential of the new functionalized membranes in organic solvent nanofiltration (OSN).
 Better understand, and be able to predict, the transport process in OSN.

Research Methodology
Two types of characterization were performed on both modified and unmodified membranes: physico-chemical characterization (contact angle measurements and micro-ATR/FTIR spectroscopy) and performance characterization (flux and retention measurements). A 4-day test with a mixture of PS in acetone was done to show the stability of the modified membrane performance. Moreover, to understand how to tune solvent-membrane-solute interactions in a controlled way to enhance OSN performance, an extensive retention study was carried out with three PEG molecules (PEG-600, partially methyl-capped PEG and fully methyl-capped PEG) and polystyrene as solutes, all with almost the same size but different polarities, in a wide range of solvents including water, ethanol, dimethylformamide, isopropanol, acetone, dichloromethane, tetrahydrofuran, methyl ethyl ketone, toluene, ethyl acetate, methyl isobutyl ketone, cyclohexane and methylcyclohexane. To unravel the transport mechanism properly, the effect of pressure on flux and retentions was thoroughly investigated. The Spiegler-Kedem theory, taking into account both diffusion and convection transport mechanisms, was used as a basis for a fundamental explanation of the results and for explaining the competing contributions of diffusion and convection in solute transport.

Results & Conclusions
 Innovative/flexible grafting using Grignard reagents: FunMem®
 Amphiphilic (CA < 90°): high fluxes for water + apolar solvents; MWCO FunMem = MWCO native
 Affinity-based separations possible: enhanced performance in organic solvent nanofiltration, explained by changed solvent – solute – membrane affinities; separation of solutes with identical size but different polarity
 The Spiegler-Kedem theory offers an elegant way of interpreting all results

Major publication
S. Rezaei Hosseinabadi, K. Wyns, V. Meynen, R. Carleer, P. Adriaensens, A. Buekenhoudt, B. Van der Bruggen, "Organic solvent nanofiltration with Grignard functionalized ceramic nanofiltration membranes," Journal of Membrane Science, 454 (2014) 496–504. [impact factor: 4.908]

Organic solvent nanofiltration (OSN): Unraveling the fundamentals of OSN

Figure 1. Grignard modification of ceramic membranes.

Spiegler-Kedem theory fit: reflection coefficient σ = 0.95, solute permeability Ps = 1.5.
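For reference, the classical Spiegler-Kedem retention expression behind such a fit can be evaluated directly. The sketch below uses the reported σ and Ps values (the permeate flux Jv is assumed to be in units consistent with Ps) and shows how the observed retention approaches the reflection coefficient at high flux.

```python
import numpy as np

# Spiegler-Kedem retention model: R = sigma * (1 - F) / (1 - sigma * F),
# with F = exp(-Jv * (1 - sigma) / Ps). Parameter values taken from the fit above.
sigma = 0.95          # reflection coefficient (-)
Ps = 1.5              # solute permeability (same units as the flux Jv, assumed)

def retention(Jv):
    F = np.exp(-Jv * (1.0 - sigma) / Ps)
    return sigma * (1.0 - F) / (1.0 - sigma * F)

for Jv in (5.0, 20.0, 50.0, 100.0):          # example permeate fluxes
    print(f"Jv = {Jv:5.1f} -> R = {retention(Jv):.3f}")
```

At low flux diffusion dominates and retention drops; at high flux convection dominates and retention tends to σ, which is the competing-contribution picture used to interpret the experiments.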

86

Rutger Claes
Department Computer Science

PhD defence 23 June 2015

Supervisor Prof. dr. Tom Holvoet

Co-supervisor Prof. dr. ir. Wouter Joosen

Funding IWT, KULeuven & iMinds

E-mail [email protected]

Introduction / Objective
Delegate multi-agent systems based coordination could provide the basis for an Advanced Traffic Information System (ATIS) that helps drivers make better decisions by taking into account the effects of the choices they and their fellow road users make. The main research question handled in my thesis is: "Can delegate multi-agent systems be used as the core mechanism of an ATIS for large-scale coordination of traffic? And if so, what adaptations are necessary for delegate multi-agent systems to work in traffic?"

Research Methodology
To evaluate the use of delegate multi-agent systems in traffic, we developed a proof-of-concept ATIS called AntTIS. The effects of AntTIS on traffic on an urban and national scale were evaluated using traffic simulations. The percentage of drivers participating in the AntTIS system was varied throughout the simulations to analyze the effects of partial participation.

Results & Conclusions
The simulations show that the ATIS manages to assist its users in making route choices. While it is not an alternative to the experience drivers gain from daily commuting, it can help drivers make better decisions when faced with unknown situations.
Given sufficient participation, the predictions generated based on the intention propagation are accurate enough to help drivers avoid unnecessary congestion.
 Delegate multi-agent systems can be used as the basis for an ATIS
 Intention propagation can be used to predict future travel times (a toy sketch of this mechanism follows below)
 Sufficient participation is required for meaningful predictions
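The intention-propagation mechanism can be illustrated with a toy sketch: exploration "ants" roll a virtual clock over a candidate route using the current per-time-slot forecasts, and intention "ants" book the chosen route so that later vehicles see updated predictions. The network, congestion function and parameters below are invented for illustration and are far simpler than the AntTIS simulations.

```python
from collections import defaultdict

SLOT = 60.0  # s, width of one prediction time slot

class Segment:
    """A road link that keeps per-time-slot intention counts (a toy forecast table)."""
    def __init__(self, free_time, capacity):
        self.free_time = free_time          # free-flow traversal time in s
        self.capacity = capacity            # vehicles per slot before delays build up
        self.intentions = defaultdict(int)  # slot index -> number of announced vehicles

    def predicted_time(self, arrival):
        load = self.intentions[int(arrival // SLOT)] / self.capacity
        return self.free_time * (1.0 + 0.5 * load)   # simple assumed congestion function

    def announce(self, arrival):
        self.intentions[int(arrival // SLOT)] += 1

def explore(route, depart):
    """Exploration ant: roll a virtual clock over the route using current predictions."""
    t = depart
    for seg in route:
        t += seg.predicted_time(t)
    return t - depart

def commit(route, depart):
    """Intention ant: book the chosen route so later vehicles see the updated forecast."""
    t = depart
    for seg in route:
        arrival = t
        t += seg.predicted_time(t)
        seg.announce(arrival)

# Two alternative routes (hypothetical network); vehicles pick the faster forecast and commit.
A = [Segment(90, 5), Segment(120, 5)]
B = [Segment(150, 8), Segment(100, 8)]
for i in range(20):
    depart = i * 10.0
    route = A if explore(A, depart) <= explore(B, depart) else B
    commit(route, depart)
    print(f"vehicle {i:2d} -> route {'A' if route is A else 'B'}")
```

As more vehicles announce their intentions on the initially faster route, its forecast degrades and later vehicles divert, which is the anticipatory behaviour the simulations evaluate at scale.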

Major publications
R. Claes and T. Holvoet. Traffic Coordination Using Aggregation-Based Traffic Predictions. IEEE Intelligent Systems, 29(4): 96-100, 2014.
R. Claes, T. Holvoet, and D. Weyns. A decentralized approach for anticipatory vehicle routing using delegate multiagent systems. IEEE Transactions on Intelligent Transportation Systems, 12(2):364–373, 2011.

Anticipatory Vehicle Routing

Figures: simulated travel time (min) on the original and alternative routes as a function of simulation time; predicted travel time versus measurements for prediction horizons of 0, 1, 2, 5 and 10 minutes.
87

Bogaerts Bart
Department Computer Science

PhD defence 24 June 2015

Supervisor Prof. dr. Marc Denecker

Co-supervisors Prof. dr. Joost Vennekens, Prof. dr. Jan Van den Bussche

E-mail [email protected]

Introduction / Objective
In the field of knowledge representation and reasoning, many different logics are developed. Often, these logics exhibit striking similarities, either because they emerged from related ideas, or because they use similar underlying fundamental principles. We aim to formalise these common intuitions in a unifying framework.

Major publication
B. Bogaerts, J. Vennekens and M. Denecker (2015). Grounded fixpoints and their applications in knowledge representation. Artificial Intelligence, 224, 51–71.

Groundedness in logics with a fixpoint semantics

Intuitively, a set x is grounded for an operator if whenever we remove some objects from x, at least one of these objects is re‐derived by the operator.
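On a finite powerset lattice this intuition can be checked by brute force, as in the toy sketch below; this is only an illustration of the definition, not the algebraic approximation-fixpoint machinery developed in the dissertation.

```python
from itertools import chain, combinations

def subsets(x):
    """All subsets of a frozenset x."""
    x = list(x)
    return [frozenset(c) for c in chain.from_iterable(combinations(x, r) for r in range(len(x) + 1))]

def is_grounded(x, operator):
    """Intuitive groundedness check on a powerset lattice: for every non-empty set S
    of objects removed from x, applying the operator to x \\ S must re-derive at
    least one of the removed objects."""
    for s in subsets(x):
        if s and not (operator(x - s) & s):
            return False
    return True

# Toy operator: immediate-consequence operator of the logic program { p.  q :- p. }
def T(interpretation):
    derived = {"p"}
    if "p" in interpretation:
        derived.add("q")
    return frozenset(derived)

print(is_grounded(frozenset({"p", "q"}), T))        # True: every removed part is re-derived
print(is_grounded(frozenset({"p", "q", "r"}), T))   # False: "r" is never derived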

Research Methodology
In this text, we focus on the domains of logic programming, autoepistemic logic, default logic and abstract dialectical frameworks. In these domains, researchers have made use of a similar intuition: that facts (or models) can be derived from the ground up. We provide a formal definition of groundedness in lattice theory and study how it relates to concepts defined in approximation fixpoint theory, an abstract algebraical framework that unifies the semantics of the aforementioned logics.

Results & Conclusions
The main contributions of this dissertation are as follows:
• We define grounded lattice points and grounded bilattice points and discuss the relationship with other fixpoints studied in AFT.
• We find a new characterisation of the A-well-founded fixpoint as the least precise A-grounded fixpoint.
• We discuss the meaning of groundedness in logic programming, autoepistemic logic, default logic, AFs and ADFs; we show that in these contexts groundedness often formalises existing intuitions.
• We define a class of autoepistemic theories with a clear intended model and show that the well-founded semantics fails to identify this model. We generalise this observation to the algebraical setting, resulting in the class of locally monotone lattice operators.
• We define, algebraically, a refined version of the Kripke-Kleene and the well-founded semantics and show that the latter, applied to AEL, succeeds in identifying the intended model for the aforementioned class of autoepistemic theories.

Figure: grounded versus ungrounded points.

88

Ben Jeuris
Department Computer Science

PhD defence 24 June 2015

Supervisor Prof. dr. Raf Vandebril

Supervisor Prof. dr. Johannes Nicaise

Funding Fonds Wetenschappelijk Onderzoek - Vlaanderen

E-mail [email protected]

Introduction
Large data collections often need to be represented by an average value which upholds certain properties, such as reducing the noise level of repeated measurements or representing the central location of the data in case of clustering. Averaging operations are applicable to a wide variety of data types and structures. We focus on the set of positive definite matrices as a whole and on subsets containing all matrices of a desired structure. The geometric mean of positive numbers possesses various useful properties in the context of averaging operations, which stimulated the search for a generalization of this mean towards positive definite matrices.

Research Methodology
The smooth manifold structure of the set of positive definite matrices can be exploited in the theory of Riemannian optimization. We apply this rich theory to the setting of the Karcher mean, the main instance of the matrix geometric mean, and investigate a large number of first- and second-order optimization techniques. An adaptation of the Karcher mean which accounts for additional matrix structure is introduced and fully analyzed. In this analysis, an appealing link between linear algebra and differential geometry was found. Finally, we consider an application-inspired geometry for positive definite Toeplitz matrices and its associated averaging operation. Both the geometry and the averaging operation are generalized towards the set of positive definite (Toeplitz-Block) Block-Toeplitz matrices.
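As a small illustration of Riemannian optimization on this manifold, the sketch below computes the Karcher mean of a set of symmetric positive definite matrices with the classical fixed-point iteration, which coincides with a Riemannian steepest-descent step of unit step size. It is a plain textbook variant, not the structured or Toeplitz-adapted algorithms of the thesis.

```python
import numpy as np

def _sym_fun(A, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def karcher_mean(mats, tol=1e-10, max_iter=100):
    """Karcher (Riemannian geometric) mean of SPD matrices via the classical
    fixed-point iteration
        X <- X^{1/2} exp( (1/N) sum_i log(X^{-1/2} A_i X^{-1/2}) ) X^{1/2},
    i.e. Riemannian steepest descent with unit step size."""
    X = sum(mats) / len(mats)                          # arithmetic mean as starting point
    for _ in range(max_iter):
        Xh = _sym_fun(X, np.sqrt)                      # X^{1/2}
        Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))  # X^{-1/2}
        # negative Riemannian gradient, pulled back to the tangent space at X
        S = sum(_sym_fun(Xih @ A @ Xih, np.log) for A in mats) / len(mats)
        X = Xh @ _sym_fun(S, np.exp) @ Xh
        if np.linalg.norm(S, "fro") < tol:             # small gradient means convergence
            break
    return X

# Two-matrix check: the Karcher mean of {A, B} is the matrix geometric mean A#B
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
print(karcher_mean([A, B]))
```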

Results & Conclusions
We briefly highlight a few of our main contributions:
 An extensive overview of the various instances of the matrix geometric mean has been given, accompanied by a detailed analysis of the Karcher mean and its computation.
 The Karcher mean and a newly introduced approximation thereof have been used in an application in bioinformatics, providing an improvement over state-of-the-art protein fold classification methods.
 We have introduced the structured geometric mean, an adaptation of the Karcher mean which can preserve additional matrix structure.
 A generalization of an application-inspired mean towards the set of positive definite Toeplitz-Block Block-Toeplitz matrices has been proposed. We have also provided an efficient, greedy approximation to this generalization.

Major publication
D. A. Bini, B. Iannazzo, B. Jeuris, R. Vandebril (2014). Geometric means of structured matrices. BIT Numerical Mathematics, 54(1), 55-83.

Riemannian Optimization for Averaging Positive Definite Matrices

Riemannian steepest descent method.

Representation of a Toeplitz-Block Block-Toeplitz matrix.

89

Maria Josefina Carbone
Department Chemical Engineering

PhD defence 25 June 2015

Supervisor Prof. dr. ir. Peter Van Puyvelde

Co-supervisor Prof. dr. ir. Bart Goderis

Funding TOTAL

E-mail [email protected]

Introduction / Objective
In the last decades, scientists and industrial partners have been pushed to search for substitutes for petroleum-based plastics due to economic and social reasons related to environmental pollution and the overexploitation of finite fossil resources. Poly(lactic acid) (PLA), a synthetic thermoplastic aliphatic polyester, is one of the most commercially interesting biopolymers. However, despite its many attractive qualities, there are still some key aspects which need to be improved in order to be competitive with respect to conventional polymers. The aim of this work is to tune PLA material properties so that its production becomes more profitable, by decreasing production costs and enlarging the application spectrum.

Research Methodology
This PhD thesis focused on two important PLA issues: its poor melt strength and its slow crystallization behavior.

Results & Conclusions
• The nucleating effect of polyglycine was confirmed. Low concentrations were sufficient to obtain a significant improvement without compromising PLA's bio-advantages and rheological properties.
• The critical experimental conditions under which the crystallization process is accelerated and significant changes in morphology happen were determined using parameters from molecular rheology and a criterion based on the applied mechanical work. There was a fair coincidence with the experimentally measured transitions.
• The self-made PLA-based additives solved both issues at once. Six PLA-based additives with various characteristics such as architecture and stereoregularity were screened. The additive combining long chain branching with the presence of HoSCo crystals formed during the extrusion step had an enhancing effect on both properties, the strain hardening as well as the crystallization behavior of PLA. The study was then expanded to evaluate the effect of the monomer-inimer ratio and the additive content on the degree of enhancement.

Major publicationCarbone M. J., Vanhalle, M., Goderis, B., Van Puyvelde, P., Journal of Polymer Engineering (2014) “Amino acids and poly(amino acids) as nucleating agents for poly(lactic acid)”

Poly(lactic acid): characterization and enhancement

Material property issues, proposed solutions, and measurement techniques & analyses:
• Poor melt strength & no strain hardening behavior. Proposed solution: PLA-based additive. Measurement techniques & analyses: Entrance Flow Method (EFM) + Cogswell's analysis; Extensional Viscosity Fixture (EVF).
• Slow crystallization rates. Proposed solutions: bio-based nucleating agents (polyamino acids such as polyglycine, and a PLA-based additive) and flow-induced nucleation. Measurement techniques & analyses: calorimetry, shear rheology, rheo-optics (inverted turbidity) and polarized light microscopy; isothermal Lotz nucleation efficiency; rheological classification vs mechanical work.

90

Cynthia R SUSILO
Department Architecture

PhD defence 25 June 2015

Supervisor Bruno De Meulder

Co-supervisor Peter J.M. Nas, Sudaryono Sastrosasmito

Funding The Interfaculty Council for Development Cooperation (IRO)

E-mail [email protected]

Introduction / Objective
There has been a growing interest in the many large and mega commercial projects that have been introduced into medium-sized and small Indonesian cities. Among these newly established mega commercial projects is the Boulevard Commercial Project (BCP) in Manado, Indonesia. Its construction has generated a sudden transformation of the surrounding urban context and set a physical development precedent for other cities in eastern Indonesia to follow. The pride that local citizens have for the project, however, is mixed with local, and growing, concerns about its impact. This research unravels the interplay between the project and local users through their interactions with the physical space, uses, practices, activities, discourse and the user experience. It explores different perspectives on the (re)production of space generated by the project and examines its influence on the city through the transformation of its public realms. In other words, this dissertation addresses the rise of new collective spaces in contemporary urban Manado.

Research Methodology
The aim of this dissertation is to fill the absence of empirical observation concerning the lived space of the BCP. This undertaking involves linking the analysis of the built environment of the BCP and of the city of Manado in space, which requires thinking about the relationships between the built environment of the BCP and the economy, society, history and cultural sensitivity of Manado. This research therefore uses qualitative research methods, combining literary research, fieldwork observations and spatial mapping.

Results & Conclusions
This research finds that a commercial megaproject and ordinary local people significantly influence each other while at the same time being co-dependent on each other. The BCP shows the success of ordinary citizens in taking back an urban space. It has become machinery that, while offering a new and inviting scene, intensifies and assembles the local, its representations, manifestations and demonstrations. Everyone presents and projects oneself in this showcase of modernity. Surprisingly, they do not do so to become instantly modern. On the contrary, they are reproducing their ordinary habits and daily life practices. The BCP becomes the focal point of a super-local urban culture. It is a scene that makes the ordinary larger than itself. Nevertheless, since ordinary people do not have a strong 'formal' decision-making position and are only capable of appropriating a commercial megaproject spontaneously at a grassroots scale, leaving the physical development of the city to the grassroots appropriations of ordinary people alone is insufficient to rebalance the massive domination of the upcoming megaprojects of the near future.

PUBLIC SPACE TRANSFORMATION IN A SECONDARY CITY
The role of collective space in the Boulevard Commercial Project of Manado – Indonesia

91

Ismail Cheikh Hassan
Department Architecture

PhD defence 26 June 2015

Supervisor Hilde Heynen

Co-supervisor Bruno de Meulder

E-mail [email protected]

Introduction / Objective
The central question of this dissertation revolves around the dilemma of negotiating 'professional-urbanist' and 'political-activist' roles within the context of Palestinian camps. It aims to illustrate both the potential and the limitations of this kind of practice while developing a theory of urbanist-activist practice within the conditions of Palestinian camps.

Research Methodology
There are two main research lines in this research. The first is action research based on a reflection on the researcher's experiences as a professional and activist in Palestinian camps, with a particular focus on the case of the reconstruction of Nahr el Bared. The second is more theoretical and is concerned with situating Palestinian camps within the discourses of urbanism. This includes charting the history of urban projects within the context of Palestinian camps, particularly in relation to the different forms of activism that evolved in these places.

Results & Conclusions
Historical research on the idea of activist-professionals typically concluded that these two notions are irreconcilable, and that actors have to choose to be one or the other. This research illustrates the strategic importance and necessity of such juxtapositions and coalitions in confronting 'extra-ordinary' realities that exist outside the political context of liberal-democratic societies. However, although an urbanist-activist practice is possible, it can only be manifested within temporal conditions and particular circumstances that need to be carefully negotiated. The possibilities and limitations of such a practice within Palestinian camp realities are thus developed within the frame of three kinds of projects: Camp Reconstruction, Camp Improvement and Return to Palestine.

Major publicationSheikh Hassan, Ismael, and Sari Hanafi. “(In)Security and Reconstruction in Post-Conflict Nahr Al-Barid Refugee Camp.” Journal of Palestine Studies 40, no. 1 (November 2010): 27–48. doi:10.1525/jps.2010.XL.1.027.

On Urbanism and Activism in Palestinian Refugee Camps: The Reconstruction of Nahr el Bared

92

Ling Qin
Department Materials Engineering (MTM)

PhD defence 26 June 2015

Supervisor Prof. Dr. ir. Paul Van Houtte

Co-supervisor Prof. Dr. -ing. Marc Seefeldt

Funding IAP & M2I

E-mail [email protected]

Introduction / Objective
Aluminum alloys have attracted enormous attention in the automobile industry as a way to save weight and reduce fuel consumption. But the plates do not only have to be strong and light; they must also be good looking. A problem is then the so-called "roping" phenomenon: an unpleasant and easily visible imperfection at the surface. It is fairly common due to the metal forming process used to produce automotive body panels. It manifests itself as a series of ridges and valleys along the rolling direction and results from an inherent inhomogeneity of plastic deformation. It was first believed to be caused by the most obvious inhomogeneity present in the material, namely the grains of which it consists, which can be seen by a special instrument ("EBSD"). However, surface investigations have shown that the length scale of the grain structure is smaller than that of the roping or ridging pattern.

Research Methodology
In the present study, it is proposed that 'clusters' of grains exist which, due to their combined crystal orientations ("texture"), as a whole 'collaborate' to cause either a ridge or a valley. The EBSD measurements of the surface have then been analyzed by a newly designed method (the "moving window" method) in order to detect these clusters and predict the corresponding ridges or valleys.

Results & Conclusions
The simulation results of the "moving window" roping model matched well with the experimental measurements of surface profiles (see Fig. 4). Roping can be interpreted as a result of the existence of "sub-volumes" with contrasting textures. Both roping wavelength and amplitude can be predicted.

Fig. 1 Schematic illustration of the simulation procedure of the "moving window" roping model

Major publicationQin, L., Seefeldt, M., Van Houtte, P. (2015). Acta Materialia, 84, 215-228.

Multi-scale modeling of roping of Al alloys
Effect of meso-scale texture on surface roughening

Fig. 2 Surface EBSD maps

Fig. 3 Surface topography

Fig. 4 Simulation (MW) vs. experiment (wyko)

93

Dominique Vercammen
Department Chemical Engineering

PhD defence 26 June 2015

Supervisor Prof. dr. ir. Jan Van Impe

Co-supervisor Prof. dr. ir. Filip Logist

Funding IWT

E-mail [email protected]

Introduction / Objective
Mathematical models for the growth, survival, inactivation and product formation of microbial organisms are becoming increasingly important for the model-based design, optimization and control of bioreactors in (industrial) biotechnology, and for the assessment of food safety and quality in predictive microbiology. However, existing models mostly focus on describing these systems from a macroscopic point of view. To further enhance the applicability of these models, integration of mechanistic knowledge is a necessity. The objective of this research is the development of novel dynamic estimation methodologies, based on a metabolic reaction network, that are able to answer the question: "How do microorganisms change their metabolic state when their environment changes over time?"

Research Methodology
Based on the dynamic metabolic flux analysis model structure, two methodologies were developed during this PhD research: an offline methodology, more suited to lab-scale, fundamental research environments, and an online methodology that can be used in an industrial setting to generate real-time flux information and to control bioreactors using continuously updated predictions of cell and metabolite concentrations.
• The offline methodology is based on B-spline flux parameterizations, and uses an adaptive knot insertion strategy to gradually increase the exoticity of the resulting flux profiles and the complexity of the resulting estimation problem (a minimal sketch of such a parameterization is given below).
• In the online methodology, two black-box predictive flux models are combined with the dynamic metabolic flux analysis model structure. Using the moving horizon estimation technique, continuously updated flux estimates are generated.
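The B-spline flux parameterization can be sketched in a few lines: a single flux profile is represented by a cubic B-spline, the corresponding metabolite concentration is obtained by integration, and the spline coefficients are fitted to (here synthetic) concentration measurements by least squares. The knot grid, data and single-metabolite model are hypothetical simplifications; the thesis methodology handles full metabolic networks and refines the knots adaptively.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares
from scipy.integrate import cumulative_trapezoid

# --- Toy setup (hypothetical data): one metabolite C with dC/dt = v(t) ---
t_meas = np.linspace(0.0, 10.0, 25)
true_flux = lambda t: 1.0 + 0.8 * np.sin(0.6 * t)            # "unknown" flux profile
C_meas = cumulative_trapezoid(true_flux(t_meas), t_meas, initial=0.0)
C_meas += 0.05 * np.random.default_rng(0).normal(size=t_meas.size)

# --- Cubic B-spline parameterization of the flux v(t) ---
k = 3
interior = np.linspace(0.0, 10.0, 6)                          # knot grid (refined adaptively in the thesis)
knots = np.concatenate(([0.0] * k, interior, [10.0] * k))     # clamped knot vector
n_coef = len(knots) - k - 1

def residuals(coef):
    v = BSpline(knots, coef, k)                               # flux profile for these coefficients
    C_model = cumulative_trapezoid(v(t_meas), t_meas, initial=0.0)
    return C_model - C_meas

fit = least_squares(residuals, x0=np.ones(n_coef))
v_hat = BSpline(knots, fit.x, k)
print("estimated flux at t=5:", float(v_hat(5.0)), "true:", true_flux(5.0))
```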

Results & Conclusions
Both methodologies are illustrated using multiple simulated case studies, to clarify their operation and to test their performance in realistic scenarios. Both methodologies show accurate estimation performance, and significant improvements over previously published methods regarding smoothness, applicability and extendability. Furthermore, the offline algorithm was also tested on a real-life case study, in which an E. coli population is subjected to a sudden shift in temperature, resulting in an induced lag phase. The algorithm successfully estimates the fluxes during this induced lag phase, and does so in a reasonable time frame, with an indication of the confidence intervals on the estimates. This is the first time fluxes have been estimated in such a case.

Major publicationD. Vercammen, F. Logist, J. Van Impe (2014). Dynamic estimation of specific fluxes in metabolic networks using non-linear dynamic optimization. BMC Systems Biology, 8, p.132

Optimization-based methodologies and algorithms for dynamic metabolic flux analysis

Estimated fluxes using the B-spline-based offline algorithm, as opposed to the simulated fluxes, and with indication of confidence intervals.

94

95

Satyakiran Munaga
Department Electrical Engineering (ESAT)

PhD defence 29 June 2015

Supervisor Prof. dr. ir. Francky Catthoor

Funding IMEC

E-mail [email protected]

Introduction / Objective
Modern cost-conscious dynamic systems incorporate knobs that allow run-time trade-offs between system metrics of interest. In such adaptive systems, regular knob tuning to minimize costs while satisfying hard system constraints is an important aspect. The goal of this work is to propose a systematic framework to help design proactive run-time controllers for nonlinear self-adaptive systems with uncertainties and hard constraints.

Research Methodology
Nonlinear systems with uncertainties display time-linkage behavior, i.e., knob selection choices made to optimize the present may adversely impact the cost and even the system viability in the future, depending upon how the uncertainties unfold. Hence such systems require optimizing the present and future together for the predicted likely dynamic situation, while ensuring that the system will meet all current and future hard constraints, such as deadlines, even in the unlikely worst-case situation. We also propose to bound uncertainties at run-time with the help of suitable models which utilize the additional information available at the time of decision making. This dynamic bounding will limit the scope of the worst-case situation and increase the freedom for more cost-saving knob selections.

Results & Conclusions
 State-of-the-art mode scheduling methods achieve on average 2x lower energy gains than an Oracle.
 The proposed proactive scheduler with a practical predictor consistently outperforms all state-of-the-art schedulers with an average gain of 40% and has an average deviation of only 11% from the Oracle. This is a remarkable result given the maturity of the scheduling research domain.

Major publication
S. Munaga, F. Catthoor (2011). Systematic Design Principles for Cost-Effective Hard Constraint Management in Dynamic Nonlinear Systems. International Journal of Adaptive, Resilient, and Autonomic Systems, 2(1), 18-45.

Proactive Hard Constraint Management in Cost-conscious Nonlinear Dynamic Computing Systems

Diagram: the proposed truly-proactive controller design, combining bounded uncertainty sources, reliable future event look-ahead, bound tightening, conditional re-optimization, cost trade-off analysis, constraint-driven and bound-driven search space pruning, proactive and CTM conditioning, speculative and worst-case-optimal slack management, and likely-future prediction, contrasted with reactive and pseudo-proactive approaches.

We applied the proposed methodology on a video decoder case study where the controller decides the processor mode at macroblock granularity to minimize overall processor energy consumption, including mode switching overhead. Each macroblock has release time constraints and deadlines to avoid buffer underflow and overflow. We developed a C++ model to evaluate and compare the proposed mode scheduler against the ones in the literature. A stripped-down sketch of this kind of proactive mode selection is shown below.
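The sketch picks, for every macroblock, the cheapest processor mode that still leaves enough worst-case headroom for the remaining blocks at the fastest mode, so the frame deadline holds even if the uncertainty unfolds badly. The mode table, cycle counts and deadline are invented, and mode-switching overhead and the dynamic bounding models of the thesis are ignored; it is only a flavour of the idea, not the proposed scheduler.

```python
# Hypothetical processor modes: (clock in cycles per ms, power in mW), slow/low-power to fast.
MODES = [(50_000, 40.0), (100_000, 110.0), (200_000, 320.0)]

def schedule_blocks(blocks, deadline_ms):
    """Greedy proactive mode selection per macroblock (simplified sketch).
    'blocks' is a list of (likely_cycles, worst_case_cycles). For each block, pick the
    cheapest mode such that, even if every remaining block needs its worst case at the
    fastest mode, the frame deadline still holds."""
    f_max = MODES[-1][0]
    t, energy, plan = 0.0, 0.0, []
    for i, (likely, _) in enumerate(blocks):
        wc_rest = sum(wc for _, wc in blocks[i + 1:]) / f_max   # worst-case tail at fastest mode
        for f, p in MODES:                                      # try the cheapest mode first
            if t + likely / f + wc_rest <= deadline_ms:
                break
        else:
            f, p = MODES[-1]                                    # fall back to the fastest mode
        t += likely / f
        energy += p * (likely / f)                              # mW * ms = uJ
        plan.append(f)
    return plan, energy, t

blocks = [(900, 1500), (1200, 2000), (800, 1400), (1000, 1800)]  # cycles, hypothetical
plan, energy, t = schedule_blocks(blocks, deadline_ms=0.08)
print(plan, round(energy, 2), "uJ,", round(t, 4), "ms")
```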

96

Raghvendra Mall
Department Electrical Engineering (ESAT)

PhD defence 30 June 2015

Supervisor Prof. dr. ir. Johan Suykens

Funding ERC

E-mail [email protected]

Introduction / Objective
In this thesis we have explored the role of sparsity in large scale kernel models. The two primary goals have been to observe the role of sparsity in obtaining good generalization power for supervised and unsupervised predictive models under the least squares support vector machines (LSSVM) primal-dual optimization framework, and the scalability of these models for large scale datasets.

Research Methodology
We explored sparsity in the case of LSSVMs using fixed-size methods and convex relaxations of L0-norm penalties. An important aspect of kernel-based methods is the selection of a subset on which the model is built. We propose a unique representative subset selection technique for large scale graphs which retains the inherent community structure, and explore its applicability for big data analysis. We utilize this subset for kernel spectral clustering (KSC) of big data networks and propose several scalable and computationally efficient techniques for its model selection. We also propose a multilevel hierarchical kernel spectral clustering (MH-KSC) technique which overcomes issues like the resolution limit suffered by state-of-the-art hierarchical community detection techniques, and we also perform sparse reductions on the KSC model. We explored the role of a reweighted L1-norm penalty for feature selection using LSSVMs in the case of high-dimensional classification problems. Finally, we developed a visualization toolkit (Netgram) to track and visualize the evolution of communities in time-evolving networks.
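For context, the sketch below trains a plain (non-sparse) LS-SVM classifier by solving its single dual linear system with an RBF kernel on synthetic data; because every training point ends up with a non-zero support value, it also illustrates why the sparse reductions and subset selection studied in this thesis matter for large scale data. Data and hyperparameters are arbitrary illustration values.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving its dual linear system:
        [ 0   y^T              ] [b]       [0]
        [ y   Omega + I/gamma  ] [alpha] = [1]
    with Omega_ij = y_i y_j K(x_i, x_j)."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]                      # bias b, support values alpha (all non-zero)

def lssvm_predict(X_train, y, alpha, b, X_test, sigma=1.0):
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y) + b)

# Tiny toy problem (hypothetical data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print("training accuracy:", (lssvm_predict(X, y, alpha, b, X) == y).mean())
```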

Results

Major publications
1. Mall R., Suykens J.A.K., "Very Sparse LSSVM Reductions for Large Scale Data", IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, Mar. 2015, pp. 1086-1097.
2. Mall R., Langone R., Suykens J.A.K., "Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks", PLOS One, e99966, vol. 9, no. 6, Jun. 2014.

Sparsity in Large Scale Kernel Models

Challenges faced by LSSVM based methods

Original hierarchical network (left) and estimated hierarchical network (right) by MH-KSC for a network with 10,000 nodes.

Group Lasso based reduced set (left) comprising just 2 red points and its image segmentation result (right).

97

Ioannis Pitropakis
Department Materials Engineering (MTM)

PhD defence 30 June 2015

Supervisor Prof. dr. ir. Martine Wevers

Co-supervisor Dr. Helge Pfeiffer

Funding European Commission’s Project “AISHA II”

E-mail [email protected]

Introduction / Objective
The safe use of aircraft can only be guaranteed when efficient means of damage assessment are in place. In recent years there has been an increasing interest in Structural Health Monitoring (SHM) systems for aircraft. Structural health monitoring is a technology in which integrated sensors are used to enable continuous monitoring of the structural integrity. The main target of this research was to find systems that can be embedded on the aircraft and monitor its structural health.

Research Methodology
The PhD research covers the field of SHM using non-destructive testing methods with advanced sensors, focussing on sensor implementation and data analysis. The sensors used in this research are electrical, chemical, electrochemical, electromagnetic, optical as well as piezoelectric, and are applied as follows:
 Flat coil sensors for SHM of aircraft components using eddy current technology
 Embedded electrical crack gauges for continuous crack monitoring
 Detection of acoustic impact in composite materials using optical fibres
 SHM using Lamb waves and the application of pseudo defects for signal validation
 Detection of aqueous corrosive liquids in confined parts using percolation sensors

Results & Conclusions
The experimental results showed successful ways to detect cracks, small structural discontinuities or delaminations. Defect detection was achieved from impedance measurements using flat coil sensors, signal analysis of acoustic waves using piezoelectric sensors and single-mode optical fibres in a polarimetric setup, the interruption of electrical conductivity using electrical crack gauges, and the collapse of percolation conductivity using percolation sensors.

The sensors were embedded on aluminium 2024-T3 plates, a Eurocopter EC135 tail boom made from honeycomb composite, an Airbus A320 slat-track and Carbon Fibre Reinforced Epoxy (CFRE) plates.

Major publication
Crack detection in aluminium plates for aerospace applications by electromagnetic impedance spectroscopy using flat coil sensors, I. Pitropakis, H. Pfeiffer, M. Wevers, Sensors and Actuators A: Physical, Vol. 176, April 2012, p. 57-63.

Dedicated Solutions for Structural Health Monitoring of Aircraft Components

References
(1) H. Assler, Design of Aircraft Structures under Special Consideration of NDT, presented by J. Telgkamp, 9th ECNDT, Berlin, Germany, 25-29 September 2009.
(2) Cranfield University online, Aircraft Fatigue and Damage Tolerance course (35496 flight hours and 89680 flight cycles).
(3) http://english4aviation.pbworks.com/w/page/24012191/Bad%20weather
(4) J. Kaletka, H. Kurscheid and U. Butter, FHS, the new research helicopter: Ready for service, Aerospace Science and Technology, Vol. 9, 2005, p. 456-467.
(5) R. Longo, S. Vanlanduit and P. Guillaume, Laser vibrometer measures surface acoustic waves for nondestructive testing, International Society for Optical Engineering, Sensing and Measurement, 29 November 2006, SPIE Newsroom. DOI: 10.1117/2.1200611.0377.
(6) H. Speckmann and H. Roesner, Structural Health Monitoring: A contribution to the intelligent Aircraft Structure, European Conference in Non-Destructive Testing, 25-29 Sep., Berlin, 2006.


98

Joseph C. Szurley
Department Electrical Engineering (ESAT)

PhD defence 30 June 2015

Supervisor Prof. dr. ir. Marc Moonen

Co-supervisor Prof. dr. ir. Alexander Bertrand

Funding Fonds Wetenschappelijk Onderzoek

E-mail [email protected]

Introduction / Objective
In recent years, there has been a proliferation of wireless devices for individual use, to the point of being ubiquitous. Recent research has started to exploit the increased processing power of these wireless devices to perform tasks pertaining to audio signal acquisition and processing, forming wireless acoustic sensor networks (WASNs). The research objectives of this thesis included such topics as: the improvement in noise reduction performance with shared multi-channel signals, efficient allocation of communication bandwidth, prolonging the lifetime of WASNs, topology construction, scalability and self-healing.

Research Methodology
The foundation of this work is the multi-channel Wiener filter, which estimates a desired audio signal that has been corrupted by noise (a toy single-node sketch follows below). This was first studied for a listener assumed to have two bilateral hearing prostheses (forming a binaural hearing system) that communicated with a distributed microphone. This was extended to multiple devices, or nodes, each with a set of microphones performing distributed audio signal estimation. This type of audio signal estimation was performed in:
 Fully connected networks (each node has a direct connection with every other node).
 Heterogeneous networks composed of different devices.
 Ad-hoc topologies deployed in a random fashion.
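The single-node building block can be sketched as follows: a time-domain multi-channel Wiener filter estimated from the signal and noise correlation matrices, applied to toy instantaneous mixtures. Real WASN processing works per frequency band and distributes the computation over the nodes; the scenario, mixing model and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Toy multi-channel scenario (hypothetical): M microphones, one desired source ---
M, N = 4, 20_000
s = rng.normal(size=N)                      # desired (speech-like) source signal
a = rng.normal(size=M)                      # acoustic transfer from source to each microphone
noise = 0.8 * rng.normal(size=(M, N))       # spatially white sensor noise
y = np.outer(a, s) + noise                  # microphone signals, channels x samples

# --- Multi-channel Wiener filter for the desired component in reference mic 0 ---
Ryy = y @ y.T / N                           # speech-plus-noise correlation matrix
Rnn = noise @ noise.T / N                   # noise correlation (estimated in noise-only periods in practice)
Rxx = Ryy - Rnn                             # desired-signal correlation estimate
e0 = np.zeros(M); e0[0] = 1.0
w = np.linalg.solve(Ryy, Rxx @ e0)          # w = Ryy^{-1} Rxx e0
d_hat = w @ y                               # estimate of the desired component at mic 0

d_ref = a[0] * s                            # true desired component at mic 0
snr_in = 10 * np.log10(np.mean(d_ref ** 2) / np.mean((y[0] - d_ref) ** 2))
snr_out = 10 * np.log10(np.mean(d_ref ** 2) / np.mean((d_hat - d_ref) ** 2))
print(f"signal-to-error ratio: {snr_in:.1f} dB at mic 0 -> {snr_out:.1f} dB after MWF")
```

Adding more microphone channels improves the estimate, which is the effect the distributed algorithms reproduce while each node only broadcasts compressed signals.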

Results & Conclusions
The audio signal estimation was performed in a distributed fashion, where each device broadcasts a compressed version of its microphone signals. It was shown that:
 By communicating with one another, the nodes were able to increase their audio signal estimation performance.
 Independent of the underlying topology, each device converged to the same estimate as if all of the nodes had broadcast all of their microphone signals to one another.

An envisaged wireless acoustic sensor network with a desired speech source and background noise.

Major publicationSzurley J., Bertrand A., Moonen M., "Distributed Adaptive Node-Specific Signal Estimation in Heterogeneous and Mixed-Topology Wireless Sensor Networks”, Accepted for publication in Signal Processing, 2015.

Distributed Signal Processing Algorithms for Acoustic Sensor Networks

Figure labels: desired speech, noise, hearing prosthesis, mobile phone, wireless device, distributed microphone, binaural hearing system. Plot: increase in output signal-to-noise ratio when including additional microphone signals.

99

Fei Zhang
Department Materials Engineering (MTM)

PhD defence 30 June 2015

Supervisor Prof. dr. ir. Jozef Vleugels

Co-supervisor Prof. dr. Ignace Naert

Funding KU Leuven OT/10/052, FWO G.0431.10

E-mail [email protected]

Research Methodology
Unravelling the relationships between the mechanisms and kinetics of hydrothermal ageing and different critical parameters, such as grain size, grain boundary chemistry, dopant type and dopant content.

Results & Conclusions
Ageing kinetics
• Linear ageing kinetics.
• Tensile stress accumulation at the transformation front is responsible for ageing propagating into the material (Fig. 1).
Ageing mechanism
• Annihilation of oxygen vacancies (Fig. 2 & Fig. 3).
• The zirconia grain boundaries play a key role in the hydrothermal ageing behavior of 3Y-TZP ceramics (Fig. 2 & Fig. 4).

Major publication
F. Zhang, K. Vanmeensel, M. Batuk, J. Hadermann, M. Inokoshi, B. Van Meerbeek, I. Naert, J. Vleugels (2015). Acta Biomaterialia, 16, 215-222.

Ageing-resistant zirconia ceramics for dental restorations

Fig.4. STEM-EDS elemental maps of zirconia grain boundaries

Fig. 2. Ageing kinetics by XRD. Fig. 3. Ionic conductivity.

Introduction / Objective
Zirconia ceramics are becoming highly attractive in prosthetic/restorative dentistry. However, they suffer from the spontaneous tetragonal-to-monoclinic phase transformation in the presence of water (hydrothermal ageing). This work aims to design ageing-resistant zirconia ceramics while retaining their high strength, fracture toughness and aesthetics.

Ageing-resistant zirconia ceramics
• Large trivalent dopant cations such as La3+ or Nd3+ with a strong segregation to the ZrO2 grain boundary are preferred.
• 0.2 mol% La2O3 and 0.1-0.25 wt.% Al2O3 co-doped 3Y-TZP ceramics combined high translucency, superior hydrothermal stability and excellent mechanical properties.

Fig.5. Translucency

Fig.1. Phase and stress maps by micro-Raman spectroscopy

100

Syed Ali Abbas Shirazi
Department Mechanical Engineering

PhD defence 01 July 2015

Supervisor Prof. dr. ir. Liliane Pintelon

Funding KU Leuven

E-mail [email protected]

Introduction / Objective
Quality tools have proven successful in the manufacturing industry for improving process efficiency, and there is now increasing focus on applying quality improvement tools in healthcare. Using quality tools that originate in industry is challenging in a healthcare setting: hundreds of tools are available on the market, but without a proper guideline or procedure for applying them in healthcare, so selection is typically based on experience and trial and error. This dissertation develops a decision-based framework for quality tool selection in healthcare.

Research Methodology
The “Design Methodology” of Simon (1996) was selected for this dissertation. It consists of four steps: design objective, design criteria, design development, and design iteration & evaluation.
Design Objective: develop a working decision framework that allows novice users to select useful quality tools in healthcare.
Design Criteria: the performance of the framework is evaluated against design criteria using test results and experts’ opinions.
Design Development: a pilot framework was developed and tested, and the results were used for iteration.
Design Iteration & Evaluation: Prototype I was developed, tested and evaluated against the design criteria by experts and project owners.

Results & Conclusions
• A selection framework that works, with a successful evaluation
• A methodology for setting up such a framework
• User skill assessment
• Comprehensive tool classification
• Generic and easy to customize for other sectors

Major publication
Shirazi Syed Ali Abbas, Pintelon Liliane (2012). “Lean thinking and Six Sigma: proven techniques in industry. Can they help healthcare?” International Journal of Care Pathways, vol. 16, no. 4, pp. 160-167.

Framework for Quality Tool Selection in Healthcare

101

Piet Callemeyn

Department Electrical Engineering (ESAT)

PhD defence 01 July 2015

Supervisor Prof. dr. ir. Michiel Steyaert

E-mail [email protected]

Introduction / Objective
Monolithic integration of electronic systems is one of the major techniques to reduce cost, size and power consumption in consumer applications. This trend has been present in RF CMOS and is now also continuing in the field of Power CMOS. A major driver here is cost reduction by reducing the bill of materials. The use of on-chip converters provides an elegant and compact solution with a minimum of external components. This work sets out to take the next leap in Power CMOS by exploring the different possibilities to realize fully-integrated DC-AC conversion.

Research Methodology
This research explores the possibilities for monolithic DC-AC conversion. The presented architectures lend themselves ideally to on-chip integration, which has been validated by several chip implementations. The major bottlenecks identified are the step towards high output voltages in a standard low-voltage CMOS technology, the elimination of external components and the generation of a very low frequency on-chip. Techniques were introduced in the presented designs to stack several chips and so achieve a higher output voltage and power. By moving towards higher switching frequencies on-chip, the size of the passives can be reduced, allowing a designer to put all passives on-chip. Using modulation techniques, a very low frequency (50 Hz) output was achieved on-chip.
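A minimal, hypothetical illustration of the modulation idea (not the author's specific circuit): in sine-triangle PWM a 50 Hz reference is compared with a high-frequency triangular carrier, so that the low-frequency content of the switching waveform is the desired 50 Hz output after filtering. All frequencies and amplitudes below are assumed for illustration only.

    import numpy as np

    fs = 10e6            # simulation sample rate (assumed)
    f_carrier = 100e3    # high switching frequency (assumed)
    f_out = 50.0         # desired low-frequency output
    t = np.arange(0, 0.04, 1 / fs)

    reference = 0.8 * np.sin(2 * np.pi * f_out * t)           # 50 Hz reference
    # Triangular carrier between -1 and 1 at the switching frequency
    carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1
    pwm = np.where(reference > carrier, 1.0, -1.0)             # half-bridge output

    # A simple moving average stands in for the output filter: the averaged
    # PWM waveform approximates the 50 Hz reference.
    window = int(fs / f_carrier) * 20
    filtered = np.convolve(pwm, np.ones(window) / window, mode="same")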

Results & Conclusions
The research has been validated in three chip implementations. Several topologies were explored in which the buck converter serves as a basic building block: a resonant topology was implemented, a series-stacked topology is presented, and a self-contained photovoltaic DC-AC converter was designed and measured.

The major contributions of this work are:
• The realization of a first fully-integrated DC-AC converter
• High output voltage in standard CMOS, achieved using a series-stacked topology
• The elimination of bulky external passives
• Bipolar output voltage achieved on-chip
• On-chip very low frequency output
• First scavenging integrated DC-AC converter

Major publication
P. Callemeyn and M. Steyaert, High Voltage DC-AC conversion in Standard 1.2V CMOS Technology, May 2015, Analog Integrated Circuits and Signal Processing, Springer.

Fully-integrated CMOS DC-AC converters

A half-bridge DC-AC block, series-stacked using inter-die bonding

An on-chip photovoltaic class-D DC-AC converter

102

Sam Weckx
Department Electrical Engineering (ESAT)

PhD defence 01 July 2015

Supervisor Prof. dr. ir. Johan Driesen

Funding FWO - Vito

E-mail [email protected]

Introduction / Objective
The increasing amount of solar power generation challenges future grid operation. Furthermore, electric vehicles are gaining popularity, and charging them can lead to large and undesirable peaks in electrical consumption. New control algorithms are needed to successfully integrate these solar panels and electric vehicles into the network. With the introduction of an automatic metering infrastructure and a two-way communication infrastructure, several new algorithms can be implemented.

Research Methodology
The research was structured around the following questions:
• What is the impact of solar power on the network? What are typical grid topologies?
• Can we develop a model of a distribution grid based on smart meter data, if no model is available yet?
• How can we use this model and the new communication infrastructure to develop new voltage control strategies?
• What are the advantages and disadvantages of algorithms that rely on a communication infrastructure?
• How do we keep the algorithms optimal and scalable, so that they can effectively handle the growing amount of solar generation and electric vehicles?

Results & Conclusions
The main contributions of the work (a sketch of the pricing loop is given after the figure captions below):
• A detailed analysis and an improved modelling of three-phase, four-wire distribution grids
• Control rules for both single-phase and three-phase PV units and loads, designed specifically for three-phase, four-wire distribution grids
• Development of a scalable real-time pricing scheme to mitigate voltage problems in distribution grids
• The extension of a scalable multi-agent demand response system to provide voltage control and frequency control

Transferring power from one phase to another to improve grid conditions

Major publication
Weckx, S., González de Miguel, C., Driesen, J. (2014). Combined Central and Local Active and Reactive Power Control of PV Inverters. IEEE Transactions on Sustainable Energy, vol. 5, no. 3, pp. 776-784, July 2014.

Optimization of Network Support by Distributed Energy Resources

The neutral displacement due to single-phase loads in unbalanced distribution grids

A real-time pricing scheme to resolve voltage problems and to control the consumption: the provider and DSO send out a price, customers send back their response, and the provider and DSO update the price.
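A minimal, hypothetical sketch of such an iterative price-response loop (illustrative only; the thesis' actual price signals, customer models and convergence rules are not reproduced here): the coordinator raises the price at time steps where aggregate demand exceeds a feeder limit and lowers it otherwise, while flexible customers scale their consumption in response.

    # Hypothetical illustration of an iterative real-time pricing loop.
    def update_prices(prices, total_demand, capacity, step=0.05):
        # Raise the price when aggregate demand exceeds the feeder capacity,
        # lower it otherwise (a crude stand-in for a voltage-driven rule).
        return [p + step * (d - capacity) / capacity
                for p, d in zip(prices, total_demand)]

    def customer_response(base_load, prices, flexibility=0.3):
        # Each customer scales its flexible consumption down when prices rise.
        return [[load * (1 - flexibility * p) for load, p in zip(profile, prices)]
                for profile in base_load]

    base_load = [[2.0, 3.5, 5.0, 4.0], [1.5, 2.5, 4.5, 3.0]]   # kW per time step (assumed)
    capacity = 7.0                                             # feeder limit in kW (assumed)
    prices = [0.0, 0.0, 0.0, 0.0]

    for _ in range(20):                                        # provider/DSO <-> customer iterations
        response = customer_response(base_load, prices)
        total = [sum(step_load) for step_load in zip(*response)]
        prices = update_prices(prices, total, capacity)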

103

Yuyi Wang
Department Computer Science

PhD defence 01 July 2015

Supervisor Prof. Maurice Bruynooghe

Co-supervisor Dr. Jan Ramon

Funding European Research Council

E-mail [email protected]

Introduction / Objective
Data mining and machine learning techniques deal with discovering interesting knowledge from data and improving the performance of methods to do so. However, most traditional techniques cannot be applied to networked data, which is usually represented by graphs or hypergraphs, e.g. social networks, traffic networks and biological networks. Our goal is to build statistically sound and efficient methods to mine and learn from networked data. In order to design a practical graph mining system, one needs an efficiently computable and reliable graph support measure that quantifies the frequency of a given pattern in a large network. For statistical learning from networked data, a natural question is how to combine the data so as to obtain a large effective sample size.

Research Methodology
Many graph support measures are based on the concept of overlap graphs. However, all existing overlap-graph-based support measures are expensive to compute. We introduced the notion of overlap hypergraphs and studied the conditions under which an overlap-hypergraph-based support measure has good properties. A small illustration of the overlap-graph idea is sketched below.
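To make the overlap-graph idea concrete, a minimal, hypothetical sketch in plain Python (not the thesis' own measure): each occurrence of a pattern is a set of network nodes, two occurrences are connected in the overlap graph when they share a node, and counting pairwise non-overlapping occurrences is one classical (but expensive) way to define support.

    from itertools import combinations

    # Hypothetical pattern occurrences in a large network, each given as a set of node ids.
    occurrences = [{1, 2}, {2, 3}, {3, 4}, {5, 6}]

    # Overlap graph: an edge between two occurrences that share at least one node.
    overlap_edges = [(i, j) for i, j in combinations(range(len(occurrences)), 2)
                     if occurrences[i] & occurrences[j]]

    # A greedy count of pairwise non-overlapping occurrences, as a crude stand-in for
    # independent-set-based support measures (exact versions are NP-hard in general).
    chosen, used_nodes = [], set()
    for idx, occ in enumerate(occurrences):
        if not (occ & used_nodes):
            chosen.append(idx)
            used_nodes |= occ

    print(overlap_edges, len(chosen))   # [(0, 1), (1, 2)] and a support of 3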

Major publications
Y. Wang, J. Ramon, T. Fannes (2013). An efficiently computable subgraph pattern support measure: Counting independent observations. Data Mining and Knowledge Discovery, 27 (3), 444-477.
J. Ramon, Y. Wang, Z. Guo (2015). Learning from networked examples. Journal of Machine Learning Research. (Accepted)

From graph patterns to networked statistics

To investigate the learnability of networked data, we used hypergraphs to model this data, and then made reasonable assumptions for this model. We considered combining these data points by weighting them and derived concentration bounds to guarantee the quality.

An example of a subgraph pattern P, a large network D, the occurrences of P in D and the corresponding overlap graph and overlap hypergraph.

Example data (movie ratings): objects are Movie (genre, actor popularity, ...), Person (age, gender, ...) and Theater (location, ...); the hyperedges (examples) are:
1: movie1, person1, theater1
2: movie1, person2, theater2
3: movie2, person2, theater1
4: movie3, person3, theater3
5: movie4, person4, theater3
6: movie5, person4, theater3

Results & Conclusions
We proposed an overlap-hypergraph-based graph support measure that is efficiently computable. This new support measure has several good properties, e.g. anti-monotonicity, so it allows us to prune the search space effectively and save time. We can efficiently minimize the variance when performing U-statistics (e.g. mean-value estimation) on networked data. We derived Chernoff-Hoeffding-type inequalities for networked random variables, which achieve a higher effective sample size and help us design algorithms to learn from networked data.

A movie rating problem: a rating is given by a person who watched a movie in a theater. These ratings are not independent of each other, because a person can watch and then rate several different movies, and a movie can be watched and then rated by different persons. This data can be represented by a table, but a more natural representation is a hypergraph whose nodes are objects (persons, movies and theaters) and whose hyperedges are examples.

104

Adi Xhakoni

Department Electrical Engineering (ESAT)

PhD defence 01 July 2015

Supervisor Prof. dr. ir. Georges Gielen

Funding IWT Hipercim, IWT 3SIS

E-mail [email protected]

Introduction / Objective
Research in the imaging field typically focuses on increasing the spatial resolution, the dynamic range and the frame rate. Several existing methods achieve each of these characteristics individually, but combining them simultaneously in the same sensor is very challenging. To maintain the same frame rate at increasing spatial resolution, higher-bandwidth readout circuits are needed, which increases the thermal noise and reduces the dynamic range. The goal of this research was the development of new architectures to increase the imager's performance metrics.

Methodology
We explored various solutions, both in standard CMOS image sensor (CIS) technology and in stacked technology. In CIS technology we developed circuits which efficiently reduce the thermal and 1/f noise of the pixel, circuits which reduce the parasitic capacitance of the column bus, and circuits which improve the parallelism of the readout. In stacked technology we developed highly parallel architectures which allow a high frame rate, a high spatial resolution and a high dynamic range. In addition, we developed the first incremental Sigma-Delta (ISD) ADC with a photon-transfer-curve quantization step, which fits perfectly with the stacked imager architectures.
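For readers unfamiliar with incremental Sigma-Delta conversion, a minimal first-order model (a generic textbook version, not the photon-transfer-curve variant developed in this thesis, whose higher resolution relies on a more advanced design): the input is integrated against 1-bit feedback for a fixed number of clock cycles, and the count of ones forms the digital output.

    def incremental_sigma_delta(x, n_cycles=40):
        """First-order incremental Sigma-Delta ADC model for a constant input x in [0, 1]."""
        integrator, count = 0.0, 0
        for _ in range(n_cycles):
            integrator += x            # integrate the input sample
            bit = 1 if integrator >= 1.0 else 0
            integrator -= bit          # 1-bit DAC feedback
            count += bit
        return count / n_cycles        # digital estimate of x

    # Example: a 0.37 full-scale pixel value resolved with 40 clock cycles (assumed value).
    print(incremental_sigma_delta(0.37, n_cycles=40))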

Results & Conclusions
Several test chips were designed to verify the validity of the proposed imager readout architectures.
• The designed imager with column-parallel ISD ADCs achieved a low readout noise of 0.2 mV and 12-bit low-light resolution with 40 clock cycles, vs. the 110 clock cycles of the state of the art.
• The designed 1D-decoding readout architecture achieved 1.4 e- noise and 80 dB DR at 730 frames per second, ideally constant at any spatial resolution.
• The test chip with global-shutter 64 x 64 sub-pixels achieved 132 dB. If applied to an 8K-format sensor, the readout architecture can achieve 475 frames per second.

Fig. 1. Test chip of imager with column-parallel incremental Sigma-Delta ADCs (left), and test chip with global shutter sub-pixels for stacked imagers (right).

Major publication
Xhakoni A., Le Thai H., Gielen G., "A Low-Noise High-Frame-Rate 1D-Decoding Readout Architecture for Stacked Image Sensors," in IEEE Sensors Journal, vol. 14, no. 6, pp. 1966-1973, 2014.

High-Frame-Rate and High-Dynamic-Range Imager Readout Circuits for CIS and Stacked Technology

Fig. 2. Test setup for electrical/optical measurements (left), and test image captured by the designed chip (right).

105

Siamak Mehrkanoon

Department Electrical Engineering (ESAT)

PhD defence 02 July 2015

Supervisor Prof. dr. ir. Johan Suykens

Funding ERC: European Research Council

E-mail [email protected]

Introduction / Objective
In many practical applications, some form of additional prior knowledge is available. Incorporating this prior knowledge into the data-driven modeling task can potentially improve the performance of the model. Exploiting and incorporating the available prior information into the learning framework is therefore the scope of this thesis.

Research Methodology
This thesis explores the possibilities of incorporating the available side-information in the learning process. One can start with a suitable core model corresponding to the given task and integrate the prior knowledge of the task into the model by adding a set of constraints or a regularization term, as sketched below. The core models considered in this thesis are Least Squares Support Vector Machines (LSSVM) and Kernel Spectral Clustering (KSC).
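As a hedged illustration of this idea (generic LSSVM regression notation, not the thesis-specific formulations): prior knowledge can enter either as an extra regularization term or as hard constraints on the primal problem,

    \min_{w,\,b,\,e}\;\; \tfrac{1}{2}\, w^{\top} w \;+\; \tfrac{\gamma}{2} \sum_{i=1}^{N} e_i^{2}
        \;+\; \tfrac{\eta}{2}\, \Omega_{\mathrm{prior}}(w, b)
    \quad \text{s.t.} \quad y_i = w^{\top} \varphi(x_i) + b + e_i,\;\; i = 1, \dots, N,
    \qquad g_j(w, b) = 0 \ \ \text{(prior-knowledge constraints)},

where \varphi is the feature map, \Omega_{\mathrm{prior}} is an assumed placeholder for soft prior knowledge (e.g. agreement with a known differential equation) and the constraints g_j encode hard side-information such as known boundary values.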

Results & Conclusions
• Learning the solution of a dynamical system using an LSSVM-based model.
• Development of an integration-free approach for parameter estimation of dynamical systems.
• Introduction of a novel semi-supervised learning algorithm, MSS-KSC, that can learn from both labeled and unlabeled data points.

Major publications
S. Mehrkanoon, J.A.K. Suykens (2012). "LSSVM approximate solution to linear time varying descriptor systems", Automatica, 48(10), pp. 2502-2511.
S. Mehrkanoon, S. Mehrkanoon, J.A.K. Suykens (2014). "Parameter estimation of delay differential equations: an integration-free LSSVM approach", Communications in Nonlinear Science and Numerical Simulation, 19(4), pp. 830-841.
S. Mehrkanoon, C. Alzate, R. Mall, R. Langone, J.A.K. Suykens (2015). "Multi-class semi-supervised learning based upon kernel spectral clustering", IEEE Transactions on Neural Networks and Learning Systems, 26(4), pp. 720-733.

Incorporation of Prior Knowledge into Kernel based

Diagram: a core model (LSSVM or KSC) combined with side information, incorporated via regularization or constraints, yields the optimal model.

Semi-supervised clustering on the two-moons data set; parameter estimation of ODEs.

106

Maheshi Danthurebandara
Department Materials Engineering

PhD defence 03 July 2015

Supervisor Prof. dr. Karel Van Acker (KU Leuven)

Co-supervisor Prof. dr. Steven Van Passel (UHasselt)

Funding IWT, R&D project with Group Machiels

E-mail [email protected]

Introduction / Objective
Enhanced Landfill Mining (ELFM) is an innovative concept which allows the recovery of land, the reintroduction of materials back into the material cycles and the recovery of energy from a considerably large stock of resources held in landfills. Knowledge about the critical factors for the environmental and economic performance of ELFM is necessary in order to propel ELFM from the conceptual to the operational stage. Hence the objective of this work was to investigate the environmental and economic performance of the novel ELFM concept.

Research Methodology
• Develop the general process flow diagram
• Develop a model based on life cycle assessment (LCA) and life cycle costing (LCC)
• Assess the overall impact of the entire ELFM system, of the individual processes, and of the trade-off between the environmental and economic performances

Major publication
Danthurebandara, M., Van Passel, S., Vanderreydt, I., Van Acker, K. (2015). Assessment of environmental and economic feasibility of Enhanced Landfill Mining. Waste Management. DOI: 10.1016/j.wasman.2015.01.041

Environmental and Economic Performance of Enhanced Landfill Mining

Scenario 1: Plasma gasification with landfilling of plasmastone
Scenario 2: Incineration with landfilling of bottom ash
Scenario 3: Incineration with aggregate production out of bottom ash
Scenario 4: Plasma gasification with aggregate production out of plasmastone
Scenario 5: Plasma gasification with inorganic polymer cement production out of plasmastone
Scenario 6: Plasma gasification with inorganic polymer block production out of plasmastone
Scenario 7: Plasma gasification with blended cement production out of plasmastone
Scenario 8: Plasma gasification with blended cement block production out of plasmastone

Results and Conclusions
• ELFM shows clear environmental benefits against the landfill's existing situation (Figure 1).
• The thermal treatment process (plasma gasification) is the most contributing process (Figure 2).
• The environmental performance of the plasma gasification process can be improved by valorizing the residues (plasmastone) (Figure 3).
• There is a clear trade-off between the environmental and economic performances of the plasma gasification scenarios (Figure 4).

Figure 1: Environmental profile of ELFM vs the do-nothing scenario
Figure 2: Contribution of ELFM processes to the total environmental impact
Figure 3: Environmental profile of different thermal treatment scenarios
Figure 4: Trade-off analysis of different thermal treatment scenarios


107


108

Vincent Debonne
Department Architecture

PhD defence 20 August 2015

Supervisor Prof. dr. Thomas Coomans de Brachène

Funding FWO Vlaanderen

E-mail [email protected]

Introduction
Although medieval brick architecture in Flanders has held the attention of architectural historians since the mid-19th century, the current view is essentially the same as in the 1930s: brick is believed to be the indigenous architecture of the polders of West Flanders, where it was little more than a cheap substitute for stone. However, this determinist view does not suffice to explain the sudden emergence and spread of brick architecture in Flanders in the 13th and 14th centuries.

Research methodology
To answer why builders and patrons used brick, a renewed look at the buildings themselves was necessary. These were researched through building archaeology. To assemble a chronological framework of medieval brick architecture in Flanders, scientific dating techniques were applied, most notably dendrochronology and, to a lesser extent, 14C dating of anthropogenic CO2 in mortar.

Results & Conclusions
Brick came into use in Flanders around 1225, not because clay was available, but because of all local materials brick was the most suited for Gothic architecture. As such, the use of brick was not limited to the Flemish coastal plain; by 1300 it was produced and used in the entire area between the North Sea coast and the river Scheldt. Flanders was not the first region in Europe where brick was used, but it was the first region to see the use of brick in Gothic design. In other words, Flanders was at the crossroads of the Northern European brick tradition and French Gothic architecture.

Major publication
DEBONNE V. & HANECA K. 2012: Damme (Flandre occidentale). Analyse dendrochronologique du choeur-halle de l’église Notre-Dame, Bulletin Monumental 170.1, 60-62.
DEBONNE V., BAILIFF I., BLAIN S., ECH-CHAKROUNI S., HUS J., VAN STRYDONCK M. & HANECA K. 2015: Wase baksteen gedateerd. Natuurwetenschappelijk dateringsonderzoek in de Sint-Andreas- en Sint-Gislenuskerk in Belsele (Sint-Niklaas), Relicta. Archeologie, Monumenten- en Landschapsonderzoek in Vlaanderen 12, 181-218.

Out of clay, laid in bond. Building with brick in the county of Flanders, 1200-1400

Oak samples awaiting dendrochronological research. Roof of Belsele church, felling date 1266-1271 AD.

St. John’s church in Poperinge. Previously thought to have been built around 1300, dendrochronology of the roof places construction in ca. 1350.

109

Wouter Mathues
Department Chemical Engineering

PhD defence 20 August 2015

Supervisor Prof. dr. Christian Clasen

Funding ERC Starting grant 203043 NANOFIB

E-mail [email protected]


Introduction / Objective
Liquid jets, dripping faucets and stretched filaments between two solid surfaces are common examples of free-surface flows in which a low-viscosity liquid filament is destabilised by surface tension to create spherical fluid droplets. For applications such as inkjet printing, fertiliser spraying or dispensing of pharmaceuticals, it is important to control the droplet size and size distribution in order to prevent unwanted phenomena such as satellite droplets or misting. Adding specific components such as polymers or particles to the fluid changes the breakup dynamics in these flows. The resulting solution or dispersion is called a complex fluid, and these fluids can have either a stabilising or a destabilising effect on the filament breakup process. This research aims for a better understanding and an improved characterisation of drop formation for these liquids.

Research Methodology
To study the dynamics of the pinching process, the performance of the commercial Capillary Breakup Extensional Rheometer (CaBER) is enhanced in two ways:
• The resolution is improved by using a high-speed camera equipped with a custom-made microscopic tube lens.
• The image processing routines can locate the filament edges with sub-pixel precision.

Results & Conclusions

Jets of dilute polymer solutions are studied as a model system with a filament-stabilising effect. The polymer molecules unravel due to the high strain rates and induce strong elastic stresses that balance the surface tension. Very fast polymer relaxation processes can be quantified in complex liquids that are used in commercial spraying and printing applications. A new, shorter time scale is discovered during the jetting experiments, which implies that jets break up faster than static CaBER filaments of the same liquid. The classical thinning law underlying this kind of quantification is recalled below.
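For context, the standard elasto-capillary thinning result for a dilute polymer solution (the textbook Entov-Hinch form, not a thesis-specific derivation): in the elastic regime the mid-filament radius decays exponentially, so the relaxation time \lambda follows from the slope of \ln R(t),

    \frac{R(t)}{R_0} \;\approx\; \left( \frac{G\, R_0}{2\sigma} \right)^{1/3}
        \exp\!\left( -\frac{t}{3\lambda} \right),

with \sigma the surface tension and G the elastic modulus of the polymer contribution; fitting the exponential region of the measured radius therefore yields \lambda.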

Major publications
D. Vadillo, W. Mathues, C. Clasen (2012). Microsecond relaxation processes in shear and extensional flows of weakly elastic polymer solutions. Rheologica Acta, 51 (8), 755-769.
W. Mathues, C. McIlroy, O.G. Harlen, C. Clasen (2015). Capillary breakup of suspensions near pinch-off. Physics of Fluids.

Filament stabilisation in free-surface flows of complex fluids

Filament destabilisation is investigated for suspensions of non-colloidal particles. The amplification of particle density fluctuations generates a heterogeneous filament with diluted zones that exhibit faster thinning rates.

110

Rana Habibi
Department Architecture

PhD defence 24 August 2015

Supervisor Prof. dr. ir. Bruno De Meulder

Co-supervisor Prof. dr. ir. Viviana d’Auria

Funding Architecture Department

E-mail [email protected]

Objective
The modern mass housing of Tehran (1945-1979) appeared as an agent of change in terms of social life, economy and urban structure. The new neighborhoods were both a reflection of modern life and a response to a housing shortage, and embodied different aspects of modernism and internationalism. This research mainly investigates the indigenous modernization process through mass housing practices.

Research Methodology
This dissertation aims to demonstrate how the process of adaptation of modernity unfolded in the context of Tehran, using five modern, middle-class neighbourhood case studies: Chaharsad Dastgah (1946), Narmak (1952), Kuy-e Farah (1961), Kuy-e Chaharom-e Aban (1969) and Ekbatan (1975). This research is part of the modern history of architecture. Through the case studies, it offers a closer observation of the history of Tehran's modernist neighborhoods. The research documented the maps and photography of the projects and reinterpreted some projects through drawings.

Results & Conclusions
This dissertation also looks at the influence of international models and cultures, and at how the house and the urban neighborhood can be seen as a cultural production. The mixing of ideas, lifestyles and other socio-economic conditions affected the form of architectural models and urban structures. This process of translation and integration of new models into existing ones is, most of the time, a creative and realistic process that can result in cultural change. In the modern mass housing projects of Tehran, the architecture of the buildings drew heavily on other cultures, while the land design and landscape remained local and indigenous. In Tehran, most current developments are based on modern urban rules and structures that were created in this half of the 20th century. In this respect, understanding the features and implications of these urban rules and neighborhood structures is crucial for understanding the contemporary condition of Tehran.

Major publicationHabibi, R. & De Meulder, B. (2015) Architects and Architecture without Architects- Iranian Housing Modernization and the Birth of New Urbanization, Cities Journal, pp.29-41.

Modern Mass Housing in Tehran- Episodes of Urbanism 1945-1979

111

Wim Buyens
Department Electrical Engineering (ESAT)

PhD defence 24 August 2015

Supervisor Prof. dr. ir. Marc Moonen / Prof. dr. Jan Wouters

Co-supervisor Dr. Bas van Dijk

Funding IWT

E-mail [email protected]

Introduction / Objective
A cochlear implant (CI) is a medical device that enables profoundly hearing-impaired people to perceive sounds by electrically stimulating the auditory nerve using an electrode array implanted in the cochlea. Most CI users perform quite well in terms of speech understanding; music perception and appreciation, on the other hand, are generally very poor. The main goal of this PhD project was to investigate and to improve the poor music enjoyment of CI users.

Research Methodology
An initial experiment with multi-track recordings and a mixing console was carried out to examine the music mixing preferences of CI users for the different instruments in polyphonic or complex music. Based on this knowledge, a music pre-processing scheme for mono and stereo recordings was developed. Subsequently, the music pre-processing scheme was evaluated in a take-home experiment with postlingually deafened CI users and different genres of music.

Major publication
Buyens, W., van Dijk, B., Wouters, J., and Moonen, M. (2015). A stereo music pre-processing scheme for cochlear implant users. IEEE Transactions on Biomedical Engineering (in press).

Music pre-processing for cochlear implants

Results & Conclusions
1. Music mixing preference: in general, a preference for clear vocals and attenuated instruments was found, with preservation of bass and drums. Individual differences across subjects were observed.
2. A music pre-processing scheme was developed for mono and stereo recordings which is capable of balancing vocals/bass/drums against the other instruments, based on the representation of harmonic and percussive components in the spectrogram and on the spatial information of the instruments in typical stereo recordings (a minimal sketch of this separation idea follows below).
3. Take-home evaluation of the scheme, implemented on an iPhone, with CI users and different genres of music provided encouraging results for building a tool for music training or rehabilitation programs.
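A minimal, hypothetical sketch of harmonic/percussive separation on a magnitude spectrogram (a common median-filtering approach, not necessarily the exact scheme used in the thesis): percussive events appear as vertical structures (broadband, short) and harmonic content as horizontal structures (narrowband, sustained), so median filtering along time vs. frequency separates them and the two components can be re-balanced before resynthesis.

    import numpy as np
    from scipy.ndimage import median_filter
    from scipy.signal import stft, istft

    def hpss_masks(x, fs, n_fft=2048, kernel=17):
        """Soft masks separating harmonic and percussive energy (illustrative only)."""
        f, t, X = stft(x, fs, nperseg=n_fft)
        mag = np.abs(X)
        harm = median_filter(mag, size=(1, kernel))   # smooth along time -> harmonic
        perc = median_filter(mag, size=(kernel, 1))   # smooth along frequency -> percussive
        mask_h = harm / (harm + perc + 1e-12)
        return X, mask_h, 1.0 - mask_h

    # Example: attenuate the harmonic accompaniment while keeping percussion,
    # roughly analogous to re-balancing instruments for a CI listener.
    fs = 16000
    x = np.random.randn(fs * 2)                       # stand-in for a music excerpt
    X, mask_h, mask_p = hpss_masks(x, fs)
    _, remixed = istft(mask_p * X + 0.3 * mask_h * X, fs)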

112

Bart Verbruggen

Department Electrical Engineering

PhD defence 26 August 2015

Supervisor Prof. dr. ir. Johan Driesen

E-mail [email protected]

Introduction / Objective
To implement nearly zero-energy buildings (NZEB), buildings can no longer be viewed as uncontrollable loads; increased control should mitigate a negative impact on the grid. This research investigates ways to facilitate the further evolution towards active buildings, which implement local control, by looking at the regulations and offering tools for simulation and for the evaluation of control algorithms.

Research Methodology
In a first step, the electricity systems in active buildings are presented. Next, the applicable regulations are reviewed, among which Synergrid C10/11 and the regulations imposed by the distribution system operators (DSOs). Object-oriented models are developed for a full electrical grid (single-phase equivalent or three-phase), for an in-home grid, and, based on the 5-parameter model, for a generic photovoltaic system. Grid impact indicators, to evaluate the performance of control algorithms on the interaction of a building with the grid, were developed and benchmarked using single-building simulations and control algorithms.

Results & Conclusions

Grid impact indicators are benchmarked and can be used to evaluate the performance of control algorithms. They are best combined, for instance one for evaluation and one for monitoring. A hedged illustration of such indicators follows.
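As an illustration of what a simple grid impact indicator can look like (hypothetical definitions, not the specific indicators proposed in this thesis): from a building's net power exchange profile one can, for example, report the peak import/export and the net energy exchanged with the grid.

    def grid_impact_indicators(net_power, dt_h=0.25):
        """net_power: building net exchange in kW per time step (positive = import)."""
        peak_import = max(net_power)
        peak_export = -min(net_power)
        net_energy = sum(net_power) * dt_h          # kWh exchanged with the grid
        return {"peak_import_kW": peak_import,
                "peak_export_kW": peak_export,
                "net_energy_kWh": net_energy}

    # Example 15-minute profile of a dwelling with PV (assumed values).
    profile = [0.4, 0.3, -1.2, -2.0, -0.5, 1.8, 2.5, 0.9]
    print(grid_impact_indicators(profile))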

Major publicationB. Verbruggen and J. Driesen, “Grid Impact Indicators for Active Building Simulations,” IEEE Trans. Sustain. Energy, vol.6, no. 1, pp. 43–50, Jan. 2015.

Grid impact of active electricity systems in residential buildings

Models are developed to be used in a multi-disciplinary environment. They are included in the OpenIDEAS library, a result of the collaboration with other departments.

DG systems larger than 4 kVA should move towards true three-phase systems. Large loads can have a significant effect on the grid; they should use a three-phase connection. When combining heat pumps and small DG, their collaboration, mitigating negative effects on the distribution grid, should be required.

Regulations are not yet ready for, nor supportive of, an evolution towards active buildings. Prudent steps have been taken, but further evolution is needed.

The developed models are used, in simulations of a three-phase distribution grid with NZEBs, to evaluate the limit on the single-phase connection of small distributed generation (DG) units.

113

Lorena Siguenza-Guzman
Department Mechanical Engineering

PhD defence 27 August 2015

Supervisor Prof. dr. ir. Dirk Cattrysse

Co-supervisor Prof. dr. Henri Verhaaren

Funding VLIR-UOS, SENESCYT

E-mail [email protected]

Introduction / Objective
The aim of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. Four major research questions are posed: 1) What data need to be collected? 2) How can the cost of library services be calculated? 3) What architecture is adequate to store the collected data? 4) What tools and strategies can be used to visualize and analyze strategic information to support libraries in decision-making?

Research Methodology
1. A holistic structure and the required toolset to assess libraries holistically are proposed to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspective of users and internal stakeholders.
2. A data warehousing approach is proposed to integrate, process, and store the holistically collected data.
3. Several techniques to visualize and analyze the stored data that can help libraries in their decision-making, such as reporting and using data mining tools and optimization models, are proposed and tested.

Results & Conclusions
Budget allocation is a core problem faced by all academic libraries, independent of their size and funding mechanism. Although resource allocation is a complex process, it is ever more necessary, especially in environments of constant change and budget adjustments. By proposing this holistic approach, the research contributes an integrated solution that assists library managers in making economic decisions based on an "as realistic as possible" perspective of the library situation.

Major publication
Siguenza Guzman, L., Van den Abbeele, A., Vandewalle, J., Verhaaren, H., Cattrysse, D. (2015). A Holistic Approach to Supporting Academic Libraries in Resource Allocation Processes. The Library Quarterly: Information, Community, Policy, 85(3), 295-318.

Optimal Resource allocation and Budgeting in Libraries

114

Maarten Sonnaert
Department Materials Engineering (MTM)

PhD defence 02 September 2015

Supervisor Prof. dr. ir. Jan Van Humbeeck

Co-supervisors Dr. ir. Jan Schrooten, Prof. dr. Frank P. Luyten, Dr. ir. Inge Holsbeeks

Funding IWT

E-mail [email protected]

Introduction / Objective
The development of automated procedures for the production of stem-cell-based advanced therapy medicinal products will be essential for their clinical implementation. In this context the main objective of this dissertation was to determine the influence of a 3D perfusion bioreactor system on an osteogenic progenitor cell population to assess the potential use of this system as a platform for automated cell expansion.

Research Methodology
Methodological developments
• Quantitative on-line monitoring of cell proliferation
• Quantitative analysis of neo-tissue formation
• Automated method for on-line cell recovery

Results & Conclusions

Major publication
Sonnaert, M., Luyten, F.P., Schrooten, J., Papantoniou, I. (2015). Bioreactor-based online recovery of human progenitor cells with uncompromised regenerative potential: a bone tissue engineering perspective. PLoS ONE. DOI 10.1371/journal.pone.0136875

Towards an Advanced Therapy Medicinal Product: 3D bioreactor culture of periosteal progenitor cells

Influence of the in vitro culture environment on cell characteristics
• Proliferation kinetics
• Neo-tissue formation
• Cell differentiation
• Post-expansion functionality

• Proliferation was shown to be significantly enhanced due to perfusion bioreactor culture, but no flow-rate-dependent influence was observed in the assessed operating window (Fig. 1).
• Alizarin Red staining of cells expanded in the 3D perfusion bioreactor in normal growth medium (Fig. 2B) and osteogenic medium (Fig. 2C) showed no significant mineralization in the growth medium, confirming the gene expression data that showed no major changes in the expression of key markers (black) (Fig. 2A).
• Contrast-enhanced nanofocus computed tomography enabled neo-tissue visualization through the constructs cultured in different conditions (Fig. 3).
• The in vivo bone-forming capacity of 3D expanded and recovered cells was uncompromised in comparison to standard 2D expanded cells, confirming the potential use of this system as a platform for automated cell expansion (Fig. 4).

Figure 1 Figure 2 Figure 3

Figure 4

115

Minxian Wu
Department Materials Engineering (MTM)

PhD defence 03 September 2015

Supervisor Prof. dr. ir. Jan Fransaer

Prof. dr. Koen Binnemans

Funding FWO‐Flanders (G0B9613N) IWT‐Flanders (SBO‐project 80031 “MAPIL”)

E-mail [email protected]

Introduction / Objective
Electrodeposition is a cost-efficient and convenient method to prepare thin-film materials on conductive substrates. The film morphology, composition and deposition rate can be easily controlled by changing deposition conditions such as applied potential, current, temperature, agitation and bath composition. This PhD research investigated the electrodeposition of semiconductor materials: electrodeposition of bismuth-tellurium and bismuth-antimony-tellurium thermoelectric materials from ethylene glycol solutions, and electrodeposition of germanium from ionic liquids.

Research Methodology
The electrochemical behavior during deposition was studied by means of cyclic voltammetry, linear scan voltammetry, electrochemical quartz crystal microbalance, rotating (ring-)disk electrode measurements, etc. The deposited films were characterized by scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray diffraction, transmission electron microscopy, Auger electron spectroscopy, inductively coupled plasma optical emission spectroscopy, etc.

Results & Conclusions
Thermoelectric materials
Equilibrium alloy deposition of Bi2Te3 was achieved in chloride-free ethylene glycol solutions, which means the film composition can be easily controlled by the bath composition. Thermoelectric multilayers were deposited from a single bath (Fig. 1). Such a multilayer structure can reduce the thermal conductivity and improve the thermoelectric performance.

Fig.1 Cross-section of thermoelectric multilayer and a cake

Germanium
A new germanium compound, [GeCl4(BuIm)2], was synthesized and used for germanium deposition in [BMP][DCA].

Major publication
M. Wu, G. Vanhoutte, N.R. Brooks, K. Binnemans, J. Fransaer (2015). Electrodeposition of germanium at elevated temperatures and pressures. Phys. Chem. Chem. Phys., 17, 12080-12089.

Electrodeposition of semiconductor materials from non-aqueous electrolytes

Fig.2 Color change of the electrolyte (left) and counter electrode (right) before and after electrolysis.

The films deposited from [BMP][DCA] do not contain any halide impurities. Electropolymerization of [DCA]- anions occurred on the anode during electrodeposition (Fig. 2). At 180 °C, grey shiny Ge films could be electrodeposited from [BMP]Cl containing GeCl4 (Fig. 3). A high deposition rate of 6 μm/h could be obtained.

Fig.3 Optical (left) and SEM (middle) images of the top view of Ge film. The cross-section of a thick Ge film (right).

before after

116

Goals of the Research
• Improve probabilistic inference: we researched methods to optimize ProbLog performance.
  – Different pipeline implementations
  – Optimization of intermediate results
• Extend the ProbLog language: we investigated new methods to support additional language constructs for ProbLog:
  – Annotated disjunctions
  – First-Order Logic constraints

Results & Conclusions
Although the knowledge compilation component has the highest complexity, the type and the complexity of the formula generated by the Boolean formula conversion component have a crucial impact on the overall system performance. A new pipeline proposed in this work shows very promising results on our benchmarks.

Major publications
Theofrastos Mantadelis, Dimitar Shterionov, and Gerda Janssens, Compacting Boolean formulae for inference in probabilistic logic programming, LPNMR 2015, 13 pages, 27-30 September 2015, Lexington, Kentucky, USA.
Dimitar Shterionov and Gerda Janssens, Implementation and performance of probabilistic inference pipelines, PADL 2015, pp. 90-104, 18-19 June 2015, Portland, Oregon, USA.

Design and Development of Probabilistic Inference Pipelines

Introduction
ProbLog is a probabilistic logic programming framework: a language and an inference system. In this thesis we study the pipeline architecture of the system, called a ProbLog inference pipeline. We discuss existing implementations, present new ones and introduce optimizations. We then focus on the extension of the ProbLog language with constraints and annotated disjunctions, together with the methods to support inference with these language constructs.

Research Methodology
• Analysis:
  – Separate pipeline components
  – Data structure of intermediate results
• Design & implementation:
  – New pipelines
  – Optimization methods
• Evaluation

Our Boolean formula compaction improves knowledge compilation to ROBDDs and to sd-DNNFs with c2d; a relaxed version of our algorithm improves knowledge compilation to sd-DNNFs with DSHARP. We show that Boolean formula optimization should be performed with respect to the type of application and the tool used for knowledge compilation. We developed ProbLog inference for the language cProbLog, the extension of ProbLog with constraints. We defined constraints as a generalization of evidence, and our implementation is based on this property. We also proposed a new encoding of Annotated Disjunctions (ADs) that uses a form of cProbLog constraints. This encoding allows us to correctly solve the MARG and COND tasks, as well as the newly introduced MPE inference task, for ProbLog programs with ADs. The final step of such a pipeline, weighted model counting, is sketched below.
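To give a feel for the last stage of an inference pipeline, a minimal, hypothetical weighted model counting example in Python (brute-force enumeration over a toy formula; real pipelines instead compile the formula to ROBDDs or sd-DNNFs and evaluate it there):

    from itertools import product

    # Toy ground program: the query 'alarm' holds if burglary OR earthquake.
    weights = {"burglary": 0.1, "earthquake": 0.2}   # hypothetical probabilistic facts

    def formula(assignment):
        # Boolean formula obtained from grounding the query (illustrative only).
        return assignment["burglary"] or assignment["earthquake"]

    prob = 0.0
    for values in product([True, False], repeat=len(weights)):
        assignment = dict(zip(weights, values))
        if formula(assignment):
            w = 1.0
            for fact, p in weights.items():
                w *= p if assignment[fact] else (1 - p)
            prob += w

    print(prob)   # 1 - 0.9 * 0.8 = 0.28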

Dimitar ShterionovDepartment Computer Science

PhD defence 08 September 2015

Supervisor Prof. dr. ir. Gerda Janssens

E-mail [email protected]

117

Kjell Cnops
Department Electrical Engineering (ESAT)

PhD defence 08 September 2015

Supervisor Prof. dr. ir. Paul Heremans

Co-supervisor Prof. dr. ir. Jan Genoe

Funding imec

E-mail [email protected]

Introduction / Objective
Phthalocyanines (Pc) and subphthalocyanines (SubPc) are explored as active materials in organic photodiodes. By exploiting the unique optoelectronic properties of these organic semiconducting molecules, this work aims to enhance the performance of organic photovoltaics (OPV) and organic photodetectors (OPD).

Research Methodology
Organic thin films and devices were fabricated by vacuum thermal evaporation. Characterization of the material properties and study of the device physics provide guidelines for deliberate modifications of the photodiode architecture.

Results & Conclusions

Major publications
1. K. Cnops, G. Zango, J. Genoe, P. Heremans, M.V. Martinez-Diaz, T. Torres, D. Cheyns, Energy Level Tuning of Non-fullerene Acceptors in Organic Solar Cells, J. Am. Chem. Soc. 137, 8991-8997 (2015).
2. K. Cnops, B.P. Rand, D. Cheyns, B. Verreet, M.A. Empl, P. Heremans, 8.4% efficient fullerene-free organic solar cells exploiting long-range exciton energy transfer, Nat. Commun. 5, 3406 (2014).

Phthalocyanine-based organic solar cells and photodetectors

1. Non-fullerene acceptors in OPV
Four SubPc derivatives are employed to systematically study the effect of heterojunction energetics on OPV performance. Optimization of the device structure yields OPV cells with an efficiency of 6.9% and a VOC above 1 V, without the use of fullerenes as acceptor.

2. Cascade architectures
Multilayer cascade architectures are developed, in which an efficient interlayer exciton energy transfer process enhances the photocurrent generation. Cascade OPV cells with efficiencies of 8.4% are demonstrated.

3. Near-infrared OPD
Near-infrared (NIR) sensitive OPDs are obtained by templating the crystal growth of the non-planar PbPc molecule. Successful application as a NIR OPD, however, requires a reduction of the dark current density.

118

Yueqi Wang
Department Materials Engineering (MTM)

PhD defence 11 September 2015

Supervisor Prof. dr. ir. Paul Van Houtte

Co-supervisor Prof. dr. ir. Dimitri Debruyne; Prof. dr. Pascal Lava

Funding IDO Project

E-mail [email protected]

Introduction
The digital image correlation (DIC) technique is a full-field, non-contact optical deformation measurement tool. However, as a measurement method, the accuracy of DIC is still a problem. Many studies have addressed the performance of DIC, but as yet there is no solid solution. This thesis provides a thorough study of the uncertainty of DIC measurements and of the impact of the measurement error on material identification by finite element model updating (FEMU).

Research Methodology
Following the route of DIC applications, errors in the raw data (digital images), which originate from experimental imperfections, pass to the DIC measurements and finally to the material properties. The links between these errors are studied analytically and/or experimentally.

Results & Conclusions
Analytical solutions show the relationships between the error of the digital images and the error on the DIC measurements. Using virtual DIC tests associated with FEM is a generic approach to assess measurement errors in practical applications. The DIC measurement error has an obvious impact on the material properties identified by FEMU; using appropriate weighting matrices to include the impact of measurement errors leads to a more accurate identification (see the cost function sketched below).
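As a hedged illustration of the role of the weighting matrix (generic weighted least-squares FEMU notation, not the thesis' exact formulation): the material parameters p are found by minimising the discrepancy between measured and simulated strain fields,

    p^{*} \;=\; \arg\min_{p}\;
    \big( \varepsilon^{\mathrm{DIC}} - \varepsilon^{\mathrm{FE}}(p) \big)^{\top}
    W\,
    \big( \varepsilon^{\mathrm{DIC}} - \varepsilon^{\mathrm{FE}}(p) \big),

where choosing W as (an approximation of) the inverse covariance of the DIC measurement error down-weights unreliable measurement points.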

Typical 2D DIC setup.

Major publication
Y. Wang, P. Lava, S. Coppieters, M. De Strycker, P. Van Houtte, D. Debruyne (2012). Investigation of the uncertainty of DIC under heterogeneous strain states with numerical tests. Strain, 48 (6), 453-462.

Uncertainty quantification of digital image correlation and the impact on material identification

Swift hardening laws identified from the standard tensile test (benchmark) and different FEMU approaches. The FEMU curves are adjusted to the benchmark by using weighting matrices. FEMU_2D: no weighting; FEMU_2D_W / FEMU_2D_NW: using different weighting matrices to include the DIC measurement error.

Strain fields from DIC and finite element simulation at the optimized material properties identified by FEMU, and the residual field.

119

120

Anđelo Martinović
Department Electrical Engineering (ESAT)

PhD defence 14 September 2015

Supervisor Prof. dr. ir. Luc Van Gool

Funding European FP7 projects V-City, 3D-COFORM, VarCity, and KU Leuven Research Fund

E-mail [email protected]

Introduction / Objective
3D city modeling is a thriving area of research, as high-quality models of real-world cities are in ever-rising demand, not only among architects and urban planners, but also in virtual tourism and the entertainment industry. Manual modeling of individual buildings usually provides good results, but the process is very time-consuming and expensive. Current automatically-built models provide inadequate 3D visual perception and lack any semantic knowledge about the scene. Yet, adding a good understanding of what it is that needs to be modeled is a strong cue, not only to improve the visual and 3D quality of the model, but also to substantially widen its usage. Conversely, procedural modeling provides an effective way to create detailed and realistic 3D building models that do come with all the semantic labels. This elegant yet powerful framework represents models such as buildings through instantiations of a series of parameterized rules, forming a grammar. The resulting procedural models are compact, rich in terms of semantics, and allow for more aesthetic rendering than would be possible from pure 3D capturing.

Research Methodology
We investigate how procedural models can be used in the context of urban reconstruction. Our ultimate goal is to automatically create procedural models of structures as-built (inverse procedural modeling). The main challenge is to determine the rules and parameters of the procedural grammars, which typically results in a large search space.
In the first part of the thesis we develop a system for 3D building reconstruction based on a known grammar, and select the appropriate style grammar based on the recognition of the architectural style of the observed building.
In the second part of the thesis we simplify the prior knowledge necessary for building reconstruction to a set of general and style-independent architectural principles.
In the third part of the thesis, we address the problem of procedural grammar scarcity by proposing to learn the grammars from data.

Results & Conclusions
The proposed models have been evaluated on several datasets of urban scenes, advancing the state of the art in terms of accuracy and speed. More importantly, it is the conclusion of this thesis that the problem of inverse procedural modeling of buildings could be solved with grammar learning from labeled and noisy data, obviating the need for a human in the loop, and opening up novel directions for future research.

Major publications
A. Martinović, M. Mathias, J. Weissenberg, and L. Van Gool. "A Three-Layered Approach to Facade Parsing," in European Conference on Computer Vision (ECCV), 2012.
A. Martinović and L. Van Gool. "Bayesian Grammar Learning for Inverse Procedural Modeling," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

Inverse Procedural Modeling of Buildings

121

Glenn Reynders

Department Civil Engineering

PhD defence 14 September 2015

Supervisor Prof. dr. ir.- Arch. Dirk Saelens

Funding VITO

E-mail [email protected]

Introduction
Large-scale integration of renewable energy is often suggested as a key technology to counter climate change. In combination with the fluctuating demand, a wide-spread integration of intermittent renewable energy sources introduces new challenges in terms of grid stability. In that context, active demand response (ADR) using thermal energy storage shows to be an effective technology to provide the flexibility for optimizing the electricity market. This work explores the structural thermal mass embedded in residential buildings as an active storage capacity in such ADR programs. The main goal is to quantify the building's potential for ADR and to assess the impact of building design parameters on this potential.

Conclusions
Residential buildings show a strong potential for short-term thermal storage. High capacities (12-60 kWh) are found for typical Belgian dwellings. Nevertheless, ADR always results in higher heat demands, as efficiencies typically vary between 60 and 95%. The parameter study showed that the latter mainly depend on the heat loss coefficient and the available thermal mass, and on the dynamic boundary conditions. The developed, simulation-based quantification framework is a comprehensive tool to quantify the instantaneous flexibility and supports optimization of the use of structural storage.

Major publicationReynders G, Diriken J, Saelens D (2014). “Quality of grey-box models and identified parameters as function ofthe accuracy of input and observation signals,” Energy & Buildings, vol 82, pp 263-274

Quantifying the impact of building design on the potential of structural storage for active demand response in residential buildings

Research Methodology
I. Quantifying potential for ADR
Four performance indicators are defined for a generic quantification of flexibility:
- Available storage capacity [kWh]
- Storage efficiency [-]
- State of charge [-]
- Power shifting capability [s]
An extensive parameter study quantifies the impact of building design parameters on the ADR potential (a numerical sketch of the first two indicators follows part III below).

II. Characterizing existing buildings
As the main potential for ADR is expected in existing buildings, a grey-box modelling method is presented to characterize the dominant thermal properties from measurements on existing buildings. The main focus is on the relation between the model structure, the experiment design and the physical interpretation of the model parameters.

III. Applying to the Belgian building stock
The grey-box modelling framework and the quantification method are combined to quantify the ADR potential of the Belgian residential building stock. An integrated operational model of the Belgian electricity sector is used to quantify the CO2-emission savings and operational aspects related to ADR using heat pumps and thermal energy storage in a high-renewables scenario.
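As a hedged illustration of the first two indicators (definitions simplified from the text; the heat-demand profiles are made up), the available storage capacity can be read as the extra heat stored during the ADR event and the storage efficiency as the share of that heat that is effectively recovered afterwards:

import numpy as np

dt = 0.25                                    # time step [h]
q_ref = np.full(96, 2.0)                     # reference heat demand [kW] over one day
q_adr = q_ref.copy()
q_adr[32:40] += 4.0                          # upward modulation during the ADR event
q_adr[40:72] -= 0.8                          # reduced demand afterwards (discharge of the thermal mass)

event = slice(32, 40)
capacity = np.sum((q_adr - q_ref)[event]) * dt      # available storage capacity [kWh]
extra_use = np.sum(q_adr - q_ref) * dt              # net extra heat used over the day [kWh]
efficiency = 1.0 - extra_use / capacity             # storage efficiency [-]
print(capacity, efficiency)                         # e.g. 8.0 kWh stored at 80% efficiency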



Figure 1. Concept of activating thermal mass for ADR by temporary increase of indoor temperature within comfort range

Figure 2. Available capacity (top) and efficiency (bottom) as a function of the duration of the ADR-event for different typologies of the Belgian building stock.

122

Leandro Dantas de Santana
Department Mechanical Engineering

PhD defence 14 September 2015

Supervisor Prof. dr. ir. Wim Desmet

Supervisor Prof. dr. ir. Christophe Schram

Funding Brazilian Coordination for Improvement of Higher Education Personnel – CAPES

E-mail [email protected]

Introduction / Objective
The noise generated aerodynamically by airfoil-shaped parts is a major issue in applications of large societal interest, such as office and home appliances, wind power generation, air and ground transportation vehicles, etc. At early design stages, semi-analytical noise prediction methodologies are preferred over more CPU-intensive methods, and have recently gained considerable accuracy through advanced physical modeling. This thesis work aims to push further the accuracy and reliability of state-of-the-art semi-analytical techniques for the prediction of incoming-turbulence airfoil noise.

Research Methodology
To validate the proposed techniques, a novel experimental rig was developed and used to collect detailed databases for rod-airfoil and turbulence-airfoil interaction cases. The new facility was characterized from aerodynamic and acoustic viewpoints, and physical aspects relevant to semi-analytical noise prediction were quantified.

Results & Conclusions
For the frequency range where the acoustic wavelength is larger than the airfoil chord, i.e. when the airfoil is acoustically compact, this work proposes an extension to the Amiet theory, adding two extra applications of the Schwarzschild theorem to improve the convergence and, consequently, increase the noise prediction accuracy. Results show that the computation of two extra iterations significantly impacts the predicted noise spectrum in the frequency range of interest; the predictions are verified against the experimental results obtained in this thesis and show improved agreement. To address geometrical effects, this work develops a technique which applies the Boundary Element Method (BEM) to solve the linearized flow equations. This procedure is verified against analytical results, given by the Amiet technique for a flat-plate geometry, and applied to noise computations of generic airfoil geometries. Results show that the airfoil shape impacts the acoustic prediction.

Schematic representation of the flow facility.

Major publication
Leandro D. Santana, C. Schram, Wim Desmet (2015). Low-frequency extension of Amiet's theory for compact airfoil noise predictions. Journal of Sound and Vibration, accepted for publication.

Semi-analytical methodologies for airfoil noise prediction

Acoustic prediction by the proposed 3rd and 4th iteration compared with experimental results.

123

Sayeh Mirzaei
Department Electrical Engineering (ESAT)

PhD defence 14 September 2015

Supervisor Prof. dr. ir. Hugo Van hamme

Co-supervisor Prof. dr. ir. Yaser Norouzi

Funding OT & GOA

E-mail [email protected]

Research Methodology
First, we introduced a Bayesian and later a Bayesian non-parametric NMF framework for word discovery and recognition, and showed advantages in automatically determining the number of words present in a collection of utterances. We developed a Bayesian NTF framework for separating underdetermined stereo instantaneous mixtures and showed performance improvements over the standard KL-NTF framework; an angular-spectrum-based method for source counting and channel estimation was proposed for this case. We introduced a complex NMF framework for separating stereo anechoic mixtures, together with a 2-D spectrum-based method for estimating both the channel attenuation and delay parameters and the number of sources. For separating reverberant mixtures, we improved the performance of existing methods based on full-rank spatial covariance matrix modeling by introducing proper prior distributions for the parameters, hence solving a Bayesian problem; the temporal continuity constraint is also taken into account in the developed model.
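As background, the sketch below shows the standard (non-Bayesian) KL-divergence NMF with multiplicative updates on which the Bayesian extensions build; it is an illustrative baseline, not the thesis code.

import numpy as np

def kl_nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    # Factorize a non-negative matrix V (e.g. a magnitude spectrogram) as W @ H.
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + eps      # spectral dictionary
    H = rng.random((rank, n_frames)) + eps    # activations over time
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H

# Usage: W, H = kl_nmf(np.abs(spectrogram), rank=10); in the Bayesian variants the
# rank (number of word patterns or sources) is inferred rather than fixed.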

Results & Conclusions
The experiments are generally performed on mixtures of speech or music sources of the dev2 dataset of SiSEC. The separation quality is measured in terms of well-known blind source separation evaluation metrics, including SDR, SIR, ISR and SAR. The average performance metrics over all sources and mixtures are calculated and depicted against different reverberation time values. The performance improvement achieved by the first two Bayesian NMF based approaches over the other methods is manifest in these graphs.

Major publication
S. Mirzaei, H. Van hamme, Y. Norouzi, "Blind audio source counting and separation of anechoic mixtures using the multichannel complex NMF framework." Signal Processing 115 (2015): 27-37.

Application of Bayesian matrix factorization methods in audio signal processing

Introduction / Objective
This research work is dedicated to the application of Bayesian or Bayesian non-parametric matrix factorization methods in decomposing an audio signal into its components. In the first phase of the work, the effectiveness of some proposed Bayesian Non-negative Matrix Factorization (NMF) frameworks is investigated when applied to the task of word recognition in speech utterances. These Bayesian approaches are used for estimating the number of word patterns needed for modeling speech utterances. The second and main phase of our research is dedicated to underdetermined blind source separation of stereo audio mixtures. We have proposed approaches based on Bayesian NMF, Bayesian Non-negative Tensor Factorization (NTF), complex NMF or extended versions of these methods for addressing these tasks under different mixing scenarios, including instantaneous, anechoic and convolutive (reverberant) mixtures. The goals of this study can be summarized as an investigation of how two signal processing problems can benefit from Bayesian approaches to matrix factorization. In word finding we want to show that the model order selection property of Bayesian methods leads to an automatic inference of the number of words and pronunciation variants. In source localization and segregation we want to show the benefits of the same property in building a parsimonious model of real sources, leading to superior source segregation performance.

124

Bo Wang
Department Materials Engineering (MTM)

PhD defence 15 September 2015

Supervisor Prof. dr. ir. Ingrid De Wolf

Co-supervisor Prof. dr. ir. Martine Wevers

Funding imec

E-mail [email protected]

Introduction / Objective
Several classes of MEMS are sensitive to the internal pressure inside their packages. From a functionality and reliability point of view, knowing and monitoring this pressure is very important. This thesis demonstrates a new method to measure the internal pressure of the package and, in addition, its leakage rate. It is based on an innovative use of the Focused Ion Beam (FIB) together with a measurement of pressure-sensitive parameters.

Looking inside MEMS packages: investigation of the hermeticity of MEMS thin film packages

Conclusion
We demonstrated a new FIB open/sealing methodology to measure the package leakage rate and internal pressure. It can be applied to various MEMS packages: instead of monitoring the capacitance, one can also optically monitor the package curvature or measure the quality factor of a MEMS inside the package.

Figure: experiments and results, summarized per experiment in terms of purpose, method and conclusions.

125

Baetens Ruben
Department Civil Engineering

PhD defence 15 September 2015

Supervisor Prof. dr. ir. Arch. Dirk Saelens

Funding FWO

E-mail [email protected]

Introduction / Objective
Implementing heat pumps as an energy-efficient method to provide heat, and rooftop photovoltaic installations as a source of renewable energy in dwellings, has a possible impact on the distribution, transmission, billing and trading of electricity. Evaluating this electrification of building energy services from a building perspective as such unwittingly externalizes costs. Excluding these effects underestimates the overall societal cost of possible systems, resulting in a disproportionate trade-off between different possible policy measures.

Research Methodology
The presented work estimates the externalised effects of low-energy dwellings at the low-voltage distribution grid based on comprehensive building and energy system simulations. The comprehensive framework at neighbourhood level was effectuated through the development of two novel modelling environments in the open-source OpenIDEAS framework. The IDEAS Library allows district energy simulations which integrate all main electric and thermal aspects of the energy systems in and between buildings. The StROBe Module provides stochastic occupant behaviour as boundary conditions for all main variables, i.e. the receptacle loads, the hot water tap flows and the space heating set-points.
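A toy sketch of the kind of stochastic occupancy boundary condition StROBe provides (the two-state transition probabilities below are made up, not StROBe's calibrated values):

import numpy as np

rng = np.random.default_rng(3)
p_stay_home, p_stay_away = 0.95, 0.90       # assumed 10-minute transition probabilities
occupancy = [1]                              # 1 = at home, 0 = away
for _ in range(143):                         # one day in 10-minute steps
    if occupancy[-1] == 1:
        occupancy.append(1 if rng.random() < p_stay_home else 0)
    else:
        occupancy.append(0 if rng.random() < p_stay_away else 1)
# The resulting profile can drive receptacle loads, hot water tapping and set-points.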

Results & Conclusions
The heating system design, average insulation level, system sizing and feeder strength are identified as potential system variables which influence the overall impact of the integration, and are thus kept variable. In a rural context, the possible external costs for maintaining electricity distribution at the low-voltage grid are found to lie in the same order of magnitude as the marginal present worths of the considered dwelling designs, and cost-effective design options are available which lower or avoid the externalities. In contrast, the impact of heat pump based dwellings on the low-voltage grid has been found negligible in an urban context when we consider their implementation in up to sixty percent of the dwellings.

Figure. Summed marginal (present worth of the) revenue requirements and total cost of ownership PWTCOr + dPWRRr expressed in relation to the total cost of ownership PWTCOr per dwelling, excluding the increased costs for the distribution system operator, for the rural cases.

Major publicationBaetens, De Coninck, Van Roy, Verbruggen, Driesen, Helsen & Saelens (2012). Assessing electrical bottlenecks atfeeder level for residential net zero-energy buildings by integrated system simulation. Applied Energy 96, 74-83.

On Externalities of Heat Pump-based Low-Energy Dwellings at the Low-Voltage Distribution Grid

126

Valentin Romanov
Department Materials Engineering (MTM)

PhD defence 22 September 2015

Supervisor Prof. Stepan V. Lomov

Co-supervisors Dr. Larissa Gorbatikh, Prof. Ignaas Verpoest

Funding “IMS&CPS” FP7 European project

E-mail [email protected]

Introduction / Objective
Possessing excellent stiffness and strength, carbon fiber reinforced polymers (CFRPs), however, have a limited toughness. The first damage in CFRPs usually occurs in transverse plies, where stiff carbon fibers are microscopic stress concentrators in the matrix. The toughness of CFRPs can be enhanced by adding carbon nanotubes (CNTs) – nano-reinforcements of a high aspect ratio and exceptional stiffness – into the polymer. CNTs are believed to redistribute matrix stresses by lowering the matrix stress concentration scale from micro-level – around carbon fibers – to nano-level – around CNT tips – thereby hindering damage onset. The aim of this work was to understand the effect of CNTs on the stress distribution in CFRPs using a numerical approach.

Research Methodology
A novel finite element (FE) model was developed that represents thousands of individual CNTs with a "true-to-life" morphology in a composite with microscopic fibers in a single simulation. For this, a numerically efficient Embedded Elements method, which superimposes the FE meshes of the CNTs onto the mesh of the matrix, was used.

Results & Conclusions
The discovered heterogeneity of the matrix stress fields in nano-engineered fiber reinforced composites with CNTs (nFRCs) was found to be strongly affected by the length, position, orientation, waviness and concentration of the CNTs.

Major publication
Romanov, V.S., S.V. Lomov, I. Verpoest, and L. Gorbatikh, "Modelling evidence of stress concentration mitigation at the micro-scale in polymer composites by the addition of carbon nanotubes". Carbon, 2015. 82: p. 184-194. DOI: 10.1016/j.carbon.2014.10.061

Modeling tools for micro-scale stress analysis of nano-engineered fiber-reinforced composites

A novel concept of intelligent hierarchical nFRCs was proposed and modeled. Combining precise localization and orientation of CNTs, a complete elimination of microscopic inter-fiber stress concentrations was achieved by aligned CNT "bridges" constructed interdependently with the fiber positions in the FRC.

The developed model captured the matrix stresses between individual CNTs, thereby allowing the microscopic matrix stresses within the CNT-rich matrix regions to be captured as well.

CNT agglomerates were shown to behave as stiff microscopic particles and to exacerbate the existing stress concentrations. CNTs introduced at fiber surfaces by fiber grafting or sizing/coating with CNTs were found to increase stresses in resin-rich zones between the fibers. CNTs grown on fibers were shown to effectively suppress stress concentrations in the matrix close to the fiber surface.

127

Frederik Debrouwere
Department Mechanical Engineering

PhD defence 23 September 2015

Supervisor Prof. dr. ir. Jan Swevers

Co-supervisor Prof. dr. ir. Joris De Schutter,Prof. dr. Moritz Diehl

Funding IAP-DYSCO, GOA

E-mail [email protected]

Introduction / Objective
In industry, almost every production line involves one or more robotic manipulators performing a variety of path following tasks, where a geometric path needs to be followed. Increasing the (time-)optimality of each robot task is hence of great significance. The optimal path following problem then finds an optimal trajectory along the path. The aim of this thesis is to extend the existing path following approach such that it applies to a large array of realistic and industrially relevant applications such as manipulation of objects, realistic robot models and collision avoidance.

Research Methodology
This research proposed to extend the existing path following formulation. This formulation was developed to render basic path following tasks for simple robot models into a convex optimization problem by projection onto the path and a non-linear transformation of variables. The extensions proposed in this thesis are mainly non-convex; however, by using a convex-concave decomposition of the non-convex parts, a solution can be obtained efficiently with guaranteed convergence of the algorithm. The aim of this thesis is hence to formulate the practical path following applications such that they can be solved efficiently. Furthermore, research has gone into the development of a more numerically efficient algorithm to solve these convex-concave problems.

Results & Conclusions
This research presented 22 industrially relevant extensions to the basic path following formulation. Twenty of them are convex or convex-concave, making them efficient to solve using a robust algorithm, which is of great significance to industry. Two of the proposed extensions are not convex, but can still be solved efficiently by exploiting the properties of the original path following formulation. Figure 1 shows the results of an experiment where jerk constraints were introduced to reduce vibrations in the motion (due to the excitation of resonance frequencies in the robot structure). These vibrations would result in the object tipping over (top part of the video frame), while the reduction of these vibrations results in the object not tipping over (lower part).

Figure 1. Experimental results of jerk-constrained motion.

Major publicationF. Debrouwere, W. Van Loock, G. Pipeleers, Q. Tran Dinh, M. Diehl, J. De Schutter, J. Swevers, "Time-Optimal Path Following for Robots with Convex-Concave Constraints using Sequential Convex Programming", IEEE Transactions on Robotics, 29 (6), 1485-1495. 2013

Optimal Robot Path Following

128

Johan Kerkhofs
Department Mechanical Engineering

PhD defence 28 September 2015

Supervisor Prof. dr. ir. Liesbet Geris

Supervisor Prof. dr. ir. Hans Van Oosterwyck

Funding FWO, ERC

E-mail [email protected]

Introduction / Objective
Each year 5 to 10% of the 6 million fractures in the US suffer from mal- or non-union. The development of tissue engineered (TE) bone products would give surgeons more and better options to treat these bone defects. The lack of a rational methodology for the development of TE processes, and the poor characterisation thereof, often result in a poor predictability and consistency of current TE concepts. As a result, the translation of these concepts towards clinical application is often limited. This PhD focuses on modelling the intricate molecular machinery underlying the developmental process of bone formation. The overarching goal is to increase the predictability and the consistency of the behaviour of bone TE products by exploiting the robustness and modularity of in vivo bone formation.

Research Methodology
In this PhD an additive framework, geared towards qualitatively modelling regulatory networks, is developed. Through an analysis of the canalisation and stability to perturbation, the network model allows for an in silico screening of the effect of incorporated genes in various stages of cartilage differentiation.
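A minimal, purely illustrative sketch of an additive, qualitative network update (the genes, weights and threshold are assumptions, not the thesis network): each gene's next activity is a thresholded weighted sum of the activities of its regulators.

import numpy as np

W = np.array([[ 0.0, 1.0, -1.0],    # signed influence of gene j (column) on gene i (row)
              [ 0.5, 0.0,  0.0],
              [-1.0, 0.5,  0.0]])
x = np.array([1.0, 0.0, 1.0])        # initial activity of three genes

for _ in range(10):
    x = (W @ x > 0).astype(float)    # synchronous additive update with a zero threshold
print(x)                             # state after 10 updates; repeated runs reveal fixed points or cycles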

Results & Conclusions
The developed models identify and elucidate vital signalling pathways and their interactions in bone development. In addition, they highlight factors that have a crucial role in determining cell behaviour and can consequently instruct bone and cartilage tissue engineering strategies. They also suggest therapeutic targets in cartilage-associated diseases, such as osteoarthritis.

Regulatory network and outcome of (partial) in silico screening

Major publicationJ. Kerkhofs, S. Roberts, F. P. Luyten, H. Van Oosterwyck, L. Geris. (2012) Relating the Chondrocyte Gene Network toGrowth Plate Morphology: from Genes to Phenotype. PLoS One, 7(4), e34729.

Chondrogenic differentiation in the growth plate: A computational modelling approach

129

Wyffels Jeroen
Department Electrical Engineering (ESAT)

PhD defence 29 September 2015

Supervisor Prof. dr. ir. Nauwelaers Bart

Co-supervisor Prof. dr. ir. De Strycker Lieven

Funding IWT

E-mail [email protected]

Introduction / Objective
This research proposes a methodology for indoor localization in any healthcare environment, based on signal strength measurements. We focus on distributed localization with room-level accuracy (or region-level accuracy in corridors). Bluetooth Low Energy chips are used as wireless extensions to the existing wired nurse call networks, giving rise to cost effectiveness. Patients and mobile staff are handed mobile devices which are able to locate themselves adequately. Healthcare facilities are characterized by long corridors and more or less equally sized rooms at each side of the corridor. This typical layout is exploited in the localization process, as well as the limited number of possible access points to the nurse call network: near a bed, near a door to a patient room, and inside corridors.

Research Methodology
By letting all wireless extensions broadcast information regarding their position of installation, a mobile node can quickly obtain information about its current position. The present research focusses on the development of an easy to implement, decision tree based algorithm, which enables the mobile devices to locate themselves up to the envisioned level of localization accuracy. By means of an RSS simulation framework, the developed localization algorithm can be validated for different kinds of topologies, construction materials and room sizes. A method for ensuring that the broadcasted data of the wireless extensions contain valid information regarding their position of installation is developed within the scope of this research.
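A hypothetical sketch of the decision-tree idea: the mobile node reads the broadcast installation type and RSS of nearby fixed nodes and decides its room from simple rules (the beacon categories and the -70 dBm threshold are assumptions for illustration).

def locate(beacons):
    # beacons: list of (installation_type, location_id, rss_dBm) broadcast frames
    best = max(beacons, key=lambda b: b[2])             # strongest received beacon
    kind, location, rss = best
    if kind in ("bed", "door") and rss > -70:
        return location                                  # room-level decision
    corridor = [b for b in beacons if b[0] == "corridor"]
    if corridor:
        return max(corridor, key=lambda b: b[2])[1]      # region-level decision in a corridor
    return "unknown"

print(locate([("bed", "room_12", -62), ("corridor", "zone_B", -75)]))   # -> room_12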

Results & Conclusions
• Distributed indoor localization by means of a decision tree based on signal strength measurements is possible. The strength of the approach taken is the fact that typically no RSS-to-distance conversions are used.
• The performance of the algorithm does not depend on the construction materials or dimensions of the building.
• The algorithm is robust against NLOS conditions between the mobile device and one or more wireless extensions.
• The processing power imposed on the mobile devices is limited, which contributes to a low-cost solution.

Major publicationWyffels, J., De Brabanter, J., Crombez, P., Verhoeve, P., Nauwelaers, B., De Strycker, L. (2014). Distributed, Signal Strength based Indoor Localization Algorithm for Use in Healthcare Environments. IEEE Journal of Biomedical and Health Informatics, Volume 18 Issue 6, pp 1887 – 1893.

Indoor Localization Aspects of a Personalized Mobile Communication System for Intelligent Healthcare Facilities

130

Deurinck Mieke
Department Civil Engineering

PhD defence 30 September 2015

Supervisor Prof. dr. ir.-arch. Roels Staf

Co-supervisor Prof. dr. ir.-arch. Saelens Dirk

E-mail [email protected]

Introduction
Energy savings in the residential building sector are typically predicted by means of simplified, normative calculation tools, relying on standardized user behaviour. In reality, however, actual energy savings prove to be only 20 to 60% of those predicted, seriously questioning the use of these tools in reliable cost-efficiency analyses and robust policy making. Additionally, the tools are mostly conceived deterministically, giving no insight into the uncertainties inherent to predicting energy savings. The main aim of this work is to provide a more reliable energy saving prediction method, embedded in a probabilistic framework.

Research Methodology
An evidence-based probabilistic behavioural model is developed, reflecting the large variety in dwelling use. Key aspects of the final behavioural model are (i) the use of time-dependent occupancy profiles and (ii) the implementation of space-dependent heating patterns. As the simple thermal building models of the normative tools are no longer suitable to implement this behavioural model, a transient zonal building model is set up as well. By using the well-known Monte Carlo technique, energy saving predictions can be generated in terms of probability distributions.
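A hedged sketch of the Monte Carlo idea described above: sample behavioural parameters, evaluate a toy heat-demand model before and after a retrofit, and collect the distribution of the savings (the model and numbers are illustrative, not the thesis building model):

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
setpoint = rng.normal(20.0, 1.5, n)            # heating set-point [degC]
heated_frac = rng.uniform(0.3, 1.0, n)         # space-dependent heating pattern (share of dwelling heated)

def annual_demand(UA, setpoint, frac):
    # toy steady-state heat demand [kWh/year] for a mean outdoor temperature of 8 degC
    return UA * frac * (setpoint - 8.0) * 8760.0 / 1000.0

savings = annual_demand(300.0, setpoint, heated_frac) - annual_demand(180.0, setpoint, heated_frac)
print(np.percentile(savings, [5, 50, 95]))     # probability distribution of the predicted savings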

Results & Conclusions
When applied to an existing case study district, the results show that the above methodology is able to predict energy use estimates that are very comparable to measured data (both in average values and statistical spread), confirming its overall reliability. The methodology is capable of capturing typical retrofitting effects like the temperature takeback, in contrast to the currently used and simplified calculation tools. The probabilistic setup proves to be worthwhile in assessing energy savings at a large-scale building stock level (district, city, region, ...). Because the building parameters are conceived probabilistically as well, it allows for an incorporation of the global uncertainty of statistical building stock data within the final energy saving estimates.

Figure 1 – Energy savings [kWh/year] following a roof insulation measure: the probabilistic behavioural model, in combination with the stochastic assessment of the housing stock characteristics (full black line), is able to reflect the high variability in energy savings, all depending on which user lives in which dwelling.

Major publicationDeurinck, M., Saelens, D., and Roels S. (2012). Assessment of the physical part of the temperature takeback forresidential retrofits. Energy and Buildings, 52:112-121.

Energy savings in the residential building sector: an assessment based on stochastic modelling

131


Joost Lauwers
Department Chemical Engineering

PhD defence 01 October 2015

Supervisor Prof. dr. ir. Jan Van Impe

Co-supervisor Prof. dr. ir. Filip Logist, prof. dr. Kris Willems

Funding KU Leuven

E-mail [email protected]

Introduction / Objective
Dynamic models for biochemical systems generally contain a large number of parameters. When parameters in a model need to be estimated from data, two questions can be asked. Can the parameters be estimated at all, even when perfect measurement and experiment conditions are available? This is the question of structural identifiability. If so, can the parameters be estimated well enough, considering limitations on the experiments and measurements? This is the question of practical identifiability. This work evaluates the structural and practical identifiability of large-scale bioprocess models. The Anaerobic Digestion Model No. 1 (ADM1) is used as a case study, with emphasis on the structural aspects causing (non-)identifiability.

Research Methodology
Structural identifiability: two approaches are studied.
• C-identifiability approach: via a transformation, the non-linearity in the equations is eliminated. This gives partial results for the stoichiometric parameters.
• Non-linear observability: parameters are treated as constant states of which the observability is tested. This gives local results, but for all parameters.

Practical identifiability: a Monte Carlo parameter estimation procedure (a small sketch follows this list):
• Experiments are designed such that there is sufficient variation in state values.
• Measurements are simulated and the parameters are estimated on these data.
• The variation in the parameter estimates is a measure for practical identifiability.
• Solved in an orthogonal collocation framework: a large-scale optimization problem (+/- 38,000 variables).
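An illustrative sketch of the Monte Carlo procedure on a toy first-order model (not ADM1): measurements are simulated with noise, the kinetic parameter is re-estimated, and the spread of the estimates quantifies practical identifiability.

import numpy as np
from scipy.optimize import curve_fit

def model(t, k):
    return 10.0 * np.exp(-k * t)        # first-order substrate decay from an initial value of 10

t = np.linspace(0.0, 5.0, 20)
rng = np.random.default_rng(0)
estimates = []
for _ in range(500):
    y = model(t, 0.8) + rng.normal(0.0, 0.3, t.size)   # simulated noisy measurement
    k_hat, _ = curve_fit(model, t, y, p0=[0.5])
    estimates.append(k_hat[0])
print(np.std(estimates) / np.mean(estimates))          # relative spread = practical identifiability measure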

Results & Conclusions
Structural identifiability:
• Both methods are applicable to large-scale bioprocess models.
• Nearly all parameters of ADM1 are structurally locally identifiable.
• Non-identifiability can be attributed to dead-ends in the system.
• Interconnectivity makes the model harder to analyze but yields better identifiability properties.
Practical identifiability:
• Sufficient variation in state values is necessary.
• First-order kinetic parameters and parameters associated with directly excited states are identifiable.
• The interconnectivity of the model is reflected in the estimation results.

Major publication
J. Lauwers, P. Nimmegeers, F. Logist, and J. F. Van Impe (2015). Identifiability of large-scale non-linear dynamic network models. Application to the ADM1 case study. (In preparation.)

Advanced methods for the structural and practical identification of large-scale non-linear biochemical systems. Application to anaerobic digestion models

Visualization of the error between the model predictions and the measurements for different parameter combinations. The black cross/line indicates where the error is minimal: (1) structural non-identifiability: all parameter combinations on the black line are equivalent; (2) structurally but not practically identifiable; (3) structurally and practically identifiable.

Anaerobic Digestion Model No. 1 (ADM1)
• Benchmark model
• 29 differential and 10 algebraic states
• 78 parameters

Figure: workflow linking the goal, design of experiments, experiment, parameter estimation and model use, with structural and practical identifiability checks deciding whether an accurate estimate is obtained.

132

Boudewijn Decrop
Department Civil Engineering

PhD defence 01 October 2015

Supervisor Prof. dr. ir. Erik Toorman

Co-supervisor Prof. dr. ir. Tom De Mulder

Funding IWT Baekeland

E-mail [email protected]

Introduction / Objective
Turbidity plumes are an important topic in the environmental aspects of dredging. Turbid sediment plumes can cause adverse effects when they reach environmentally sensitive areas such as coral reefs, sea grass fields and wetlands. The main source of turbidity while employing Trailer Suction Hopper Dredgers (TSHD) is the release of excess water through the overflow shaft. The objective of the research is to improve the prediction and mitigation of these plumes.

Research Methodology
In this research, Computational Fluid Dynamics (CFD) simulations are used as a tool to determine the three-dimensional flows of water, sediment and air bubbles directly after release from the overflow shaft. In order to develop and validate a reliable numerical model, the following steps are followed:
• Laboratory experiments on scaled sediment plumes in crossflow
• CFD model of the laboratory-scale plumes
• CFD model of the full-scale sediment plumes, including ship propellers and air bubbles
• Oceanographic measurements in open-sea dredging plumes, to provide validation data

Results & Conclusions
The CFD model simulations have been validated against results of laboratory experiments and field observations. These field observations consisted of measurements of the sediment concentration in the overflow plume behind a TSHD at work at sea.

Based on a batch of CFD model results, a simplified near-field plume dispersion model has been developed, based both on analytical solutions and on empirical models (grey-box model). This model allows for the nearly instantaneous assessment of most standard cases, with a limited reduction in accuracy.

Both the CFD model and the simplified model allow for a more accurate prediction of the plumes and a more adequate design of mitigation measures.

Major publications
- Decrop, B., De Mulder, T., Toorman, E. and Sas, M. (2015). Large-Eddy Simulations of turbidity plumes in crossflow. European Journal of Mechanics - B/Fluids (53), p. 68-84.
- Decrop, B., De Mulder, T., Toorman, E. and Sas, M. (2015). New methods for ADV measurements of turbulent sediment fluxes – Application to a fine sediment plume. Journal of Hydraulic Research 53 (3), p. 317-331.

Numerical and Experimental Modelling of Near-Field Overflow Dredging Plumes

133

• Pre-treatment techniques: microwave, ultrasound, electrokinetic disintegration, electrolysis
• Anaerobic digestion: lab-scale (1 L) and small pilot-scale (50 L)

Houtmeyers Sofie
Department Chemical Engineering

PhD defence 06 October 2015

Supervisor Prof. dr. ir. Van Impe Jan

Co-supervisor Prof. dr. ir. Appels Lise and Prof. dr. Willems Kris

Funding IWT Tetra project (IWT100196): 2011-2012

E-mail [email protected]

Introduction / Objective
Anaerobic digestion as a sewage sludge treatment step is widely known. Organic matter in the thickened waste activated sludge (WAS) is transformed into an energy-rich biogas (55-70% methane), which results in a reduction of the amount of biosolids that needs to be disposed of. Biogas production via anaerobic digestion is often limited by the slow hydrolysis rate. The objective of this research was to investigate whether pre-treating the WAS before introduction into the anaerobic digesters can improve the digestion efficiency.

Research Methodology

Results & Conclusions
• Only techniques applying a low specific energy (< 4.54 kJ/kg sludge), i.e. the electrokinetic and electrolysis pre-treatments, seem economically viable.
• A model to predict the biogas production was built with PLS, based on a dataset (14 data points) of solely non-treated sampling points (2 components and 7 variables); a fitting sketch follows the equation below:

Biogas production = 0.37 − 9.0×10⁻⁴(TS) − 1.2×10⁻³(VS) + 7.4×10⁻⁸(COD) − 3.1×10⁻⁷(sCOD) − 2.8×10⁻⁷(Carbs) − 3.4×10⁻⁵(sCarbs) − 0.010(pH)
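A hedged sketch of fitting a two-component PLS model of this form, assuming a small dataset with the seven predictors named above (the values are random placeholders, not the measured data):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.random((14, 7))            # columns: TS, VS, COD, sCOD, Carbs, sCarbs, pH
y = rng.random(14)                 # biogas production per sampling point

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.coef_.ravel())           # regression coefficients per predictor
print(pls.score(X, y))             # R^2 on the training data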

Major publications
Houtmeyers S., Degrève J., Willems K., Dewil R., Appels L. (2014). Comparing the influence of low power ultrasonic and microwave pre-treatments on the solubilisation and semi-continuous anaerobic digestion of waste activated sludge, Bioresource Technology, 171, 44-49.
Appels L., Houtmeyers S., Degrève J., Van Impe J., Dewil R. (2013). Influence of microwave pre-treatment on sludge solubilization and pilot scale semi-continuous anaerobic digestion, Bioresource Technology, 128, 598-603.
Appels L., Houtmeyers S., Van Mechelen F., Degrève J., Van Impe J., Dewil R. (2012). Effects of ultrasonic pre-treatment on the sludge characteristics and anaerobic digestion, Water Science and Technology, 66 (11), 2284-2290.

The influence of pre-treated waste activated sludge on anaerobic digestion efficiency and microbial community structure

Figure: pilot-scale digester setup (50 L working volume) with a 16 L buffer tank holding the (un)treated WAS, a feed pump, a gas meter, a water lock and the digestate outlet.

• The results from 454 amplicon pyrosequencing (Bacteria & Archaea) and qPCR (Archaea) allowed us to gain insight into the structure and dynamics of the microbial community found in anaerobic digesters fed with (un)treated WAS.
• The hydrogenotrophic Methanobacteriales (Archaea) were most abundant at the start of the digestion but were thereafter overtaken by the acetoclastic Methanosarcinales.
• The largest change in community structure (nMDS ordination) was achieved in the first phase of the digestion test run (before pre-treated WAS was fed to the digesters), so the observations cannot be linked directly to disintegration or solubilisation of the WAS caused by the applied pre-treatments.

134

Leqi Zhang
Department Electrical Engineering (ESAT)

PhD defence 08 October 2015

Supervisor Prof. dr. ir. Guido Groeseneken

Co-supervisor Prof. dr. ir. Dirk J. Wouters

Funding imec

E-mail [email protected]

Introduction / Objective
Resistive Random Access Memory (RRAM) is a promising candidate for future non-volatile memory technology. However, implementing resistive memory in high-density cross-point (X-point) arrays requires an additional non-linear selector device connected in series with each resistive memory element, in a one-selector one-resistor (1S1R) configuration, to suppress the potential leakage currents. This Ph.D. focuses on the selector element and aims at addressing the following questions: What are the selector requirements for achieving an acceptable 1S1R memory array performance? How to make a selector device that fulfills these requirements?

Research Methodology
• Top-down approach: the performance requirements for the selector device are derived from a circuit perspective (under predefined array performance constraints), by employing a hybrid circuit simulation and analytical analysis approach.
• Bottom-up approach: demonstrate a Metal/amorphous-Silicon (a-Si)/Metal (MSM) tunneling selector structure experimentally.

Results & Conclusions

Major publication
L. Zhang, B. Govoreanu, et al., "High-Drive Current (>1MA/cm2), Highly Nonlinear (>10^3) TiN/Amorphous Silicon/TiN Scalable Bidirectional Selector with Excellent Reliability and Its Variability Impact on the 1S1R Array Performance", in Proc. IEDM, pp. 6.8.1-6.8.4, 2014.

Study of the selector element for resistive memory

Figures: a resistive memory element (1R), the leakage-current issue in an X-point memory array built from 1R cells only, and the generic selector (1S) I-V behavior: high current enables the write/read operation of the selected cell, low current enables a large array size, and leakage paths are suppressed by the non-linear selector. Measured J-V characteristics (current density [A/cm2] versus voltage [V]) of the fabricated selectors are also shown.

The extracted selector parameter requirements targeting a 1Mbit array at the 10nm scale (simulation):

Switching current                  1 µA       10 µA
Program/Erase voltage              ±1.5 V     ±1.5 V
Read voltage                       |0.1 V|    |0.1 V|
Max. current density (A/cm2)       >10^6      >10^7
|Operating voltage (Vop)| ($)      >1.5 V     >2.4 V
Half-bias non-linearity (NL1/2)    >800       >2000

#) Reference: HfOx-based resistive memory characteristics [1]
*) Simulated performance target for the selector
$) Voltage at which the maximum current density is achieved
[1] S. Hyun et al., "Next generation Nonvolatile Memory: Its impact on computer system", 2013.

Demonstrated MSM selector:
• CMOS-friendly process
• Voltage compatible with resistive memory
• Non-linearity enhancement (NL1/2 > 1500) by anneal at 400C to 600C (defect density reduction in a-Si)
• Non-linearity improvement (NL1/2 > 6000) by tunneling barrier engineering (ultra-thin undoped a-Si with a SiNx tunneling barrier as non-linearity booster)

Figure: TiN/a-Si/TiN band diagram (barrier height Фb, a-Si thickness tSi) with trap-assisted conduction in the a-Si (electron capture and emission), and the resulting X-point memory array (1S1R).

135

Figure: BER versus SNR [dB] for the studied equalization schemes (OFDM-CP (MMSE), OFDM-MMSE-FD-EXT, ZFE-FD-EXT, SC-CP (MMSE), MMSE-FD-FOLD, MMSE-FD-EXT, OFDM-MMSE-ZR, ZFE-ZR, MMSE-ZR, ZFE-TD, MMSE-TD).

The duality between Chebyshev polynomials and cosines leads to an efficient interpolation algorithm.

Gert Cuypers
Department Electrical Engineering (ESAT)

PhD defence 09 October 2015

Supervisor Prof. dr. ir. Marc Moonen

Funding IWT

E-mail [email protected]

Introduction / Objective
The research revolves around efficient frequency-domain techniques to equalize digital transmissions (e.g. WiFi, xDSL) in a challenging environment, leading to a higher bit rate. An important goal was the avoidance of interference towards other users, and the handling of received interference to allow a co-existence of different systems.

Research Methodology
Most results were obtained by computer simulation in Matlab.
• We developed a smart type of windowing function for both the transmitter and the receiver, to make the signals more spectrally contained.
• We proposed a technique, "zero restoration", to restore data that is lost when frequency components are missing due to destructive interference. This is a common occurrence in a wireless environment.
• We also proposed an interpolation technique, based on the discrete cosine transform, implemented using Chebyshev polynomials (a small sketch follows this list).
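A small sketch (assumed, not the thesis code) of Chebyshev interpolation via the DCT, exploiting the duality T_n(cos θ) = cos(nθ):

import numpy as np
from scipy.fft import dct
from numpy.polynomial import chebyshev as C

def cheb_interpolate(f, N):
    k = np.arange(N)
    x = np.cos(np.pi * (k + 0.5) / N)        # Chebyshev nodes on [-1, 1]
    c = dct(f(x), type=2) / N                # DCT-II yields the Chebyshev coefficients
    c[0] *= 0.5
    return C.Chebyshev(c)                    # polynomial sum of c_n * T_n(x)

p = cheb_interpolate(np.exp, 16)
print(abs(p(0.3) - np.exp(0.3)))             # interpolation error is tiny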

Results & Conclusions

Major publicationCuypers, G., Vanbleu, K., Ysebaert, G., Moonen, M., Vandaele, P. Combined per tone equalization and receiver windowing in DSL receivers: WiPTEQ. Elsevier Signal Processing 85, 10 (2005), 1921–1942.

Equalization, windowing and zero restoration for OFDM and single-carrier block transmission

Figure: spectra of the proposed windowing functions (Wr,s, W3,opt, W5,opt and Wopt).

A special class of windowing functions, derived from an inverse raised cosine, leads to reduced out-of-band radiation while preserving the desired signal's integrity.

Comparing the bit error rate of zero-restoration (in red; blue is the baseline) and other equalization schemes shows a nice improvement for challenging environments; lower is better.

Figure: geometric illustration of the Chebyshev-cosine duality, mapping x in [-1, 1] to the angle θ in [0, π].

136

Alessandro Chiumento
Department Electrical Engineering (ESAT)

PhD defence 09 October 2015

Supervisor Prof. dr. ir. Sofie Pollin

Co-supervisor Prof. dr. ir. Liesbet Van der Perre

Funding Imec

E-mail [email protected]

Introduction / Objective
Cellular networks become more and more complex with each new generation. A modern network needs to serve a massive number of users in dynamic and challenging environments. In order to optimize the spreading of the resources and deliver the best possible service, each user should be catered to adaptively, considering its requirements at all times. This comes at a cost, both in distributing the resources efficiently and in making sure that the overhead caused by this allocation control is minimized. This thesis addresses these two challenges.

Research Methodology
The results obtained in this work are the fruit of simulated heterogeneous LTE-A downlink environments. First, the inter-cell interference problem for co-tiered large and small base stations has been analysed and a distributed, low-complexity interference management solution has been proposed. The solution makes use of inter-base-station communication when possible and spectrum sensing when necessary. Afterwards, the control information overhead, usually necessary to operate the network, has been analysed and modelled.

Results & Conclusions
The interference management results for a heterogeneous network are shown in Figure 2. The proposed solution grants the interfered users a 50% throughput improvement at a very limited cost for the non-interfered users, for both large and small cells (Macro and Pico). The distributed small cells (Femto) use spectrum sensing to minimize losses.

The channel quality of a mobile user can be predicted: Figure 3 presents results for this estimation using Gaussian Process Regression, and Figure 4 summarises the results.
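An illustrative sketch (with synthetic data, not the simulated LTE-A traces) of predicting a user's CQI a few steps ahead with Gaussian Process Regression, as Figure 3 does for a mobile user:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
t = np.arange(0, 300, 10.0)[:, None]                                     # time [ms]
cqi = 8 + 3 * np.sin(t[:, 0] / 40.0) + rng.normal(0, 0.5, t.shape[0])    # synthetic CQI trace

gpr = GaussianProcessRegressor(kernel=RBF(30.0) + WhiteKernel(0.25))
gpr.fit(t, cqi)
t_future = np.arange(300, 400, 10.0)[:, None]
mean, std = gpr.predict(t_future, return_std=True)       # predicted CQI with uncertainty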

Major publicationChiumento, A.; Pollin, S.; Desset, C.; Van der Perre, L.; Lauwereins, R., "Impact of CSI Feedback Strategies on LTE Downlink and Reinforcement Learning Solutions for Optimal Allocation", IEEE Transactions on Vehicular Technology, 2015.

Dynamic Resource Allocation and Self-Organizing Signalling Optimisation in LTE-A Downlink

Machine learning solutions have been proposed to steer the network elements into good operating points while requiring little control information, both in time and frequency.

Figure 1: modelled ideal cell division (left) vs. real cell division due to shadowing.

Figure 3 plot: real and predicted CQI values versus time [ms]. Figure 4 plot: gain over resource-allocation throughput versus percentiles for Macro, Pico, Femto and Macro+Pico cells.

Figure 2: Throughput gain of the proposed solution over the interfered scenario without coordination

Figure 3: Predicted and simulated channel quality for a user moving at 10 km/h

Figure 4: Amounts of control information with time prediction with 2 different resource allocation methods

137

Lieven Vervecken
Department Mechanical Engineering

PhD defence 14 October 2015

Supervisor Prof. dr. ir. Johan Meyers

Co-supervisor Dr. ir. Johan Camps

Funding SCK•CEN

E-mail [email protected]


Introduction / ObjectiveThis thesis investigates how Computational Fluid Dynamics (CFD) can be applied in the context of nuclear emergency preparedness and response. The focus is on both improving the accuracy of CFD as a base model, and on the formulation of fast reduced order models (ROMs) that retain the accuracy of CFD.

Research Methodology
We first focus on improving RANS modeling of atmospheric dispersion. A simple approach is introduced to estimate, based on experiments, the correct level of variability in wind direction that is required as an additional boundary condition for the simulations. In the second part of the work we make use of LES to study the variability of the radiological dose rate at ground level due to instantaneous turbulent mixing processes. For this, the LES model is coupled with dose rate models for beta and gamma radiation. The third part of the work aims at constructing fast ROMs that retain full CFD accuracy. We focus on deriving a ROM by projection of the CFD model onto a Krylov subspace that is produced by the Arnoldi algorithm. The ROMs are applied to the forward simulation and the source reconstruction problem.
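A minimal Arnoldi sketch on a stand-in operator (an assumption for illustration, not the CFD model): the orthonormal Krylov basis V and the projected matrix H define a reduced-order model of a much larger linear system.

import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

A = np.diag(np.arange(1.0, 101.0))           # stand-in for a large dispersion operator
b = np.ones(100)                             # stand-in source/initial vector
V, H = arnoldi(A, b, 10)                     # 10-dimensional reduced model of a 100-state system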

Results & Conclusions
This dissertation demonstrates the large potential of CFD in the framework of nuclear emergency preparedness and response. The main achievements are:
• A very effective model reduction methodology is developed for the fast and accurate simulation of pollutant dispersion and source reconstruction [1].
• The variability of the radiological dose rate from cloud shine due to instantaneous turbulent mixing processes was examined [2].
• The accuracy of near-range dispersion simulations using RANS was improved [3].

Top figure: Application of the ROM to dispersion at the Doel nuclear power station. The CFD simulation time is reduced by a factor of 2500 to only 38.6 ms per second of real time [1].
Left figure: In gray: instantaneous Argon-41 concentration; back plane: concentration in the stream-wise cross section through the point of release [2].

Major publications
[1] L. Vervecken, J. Camps, J. Meyers (2015). Stable reduced-order models for pollutant dispersion in the built environment. Building and Environment, 92, 360-367.
[2] L. Vervecken, J. Camps, J. Meyers (2015). Dynamic dose assessment by Large Eddy Simulation of the near-range atmospheric dispersion. Journal of Radiological Protection, 35, 165-178.
[3] L. Vervecken, J. Camps, J. Meyers (2013). Accounting for wind-direction fluctuations in Reynolds-averaged simulation of near-range atmospheric dispersion. Atmospheric Environment, 72, 142-150.

Improved modeling and real-time simulation of near-range atmospheric dispersion of radioactive gases

138

Introduction / Objective
This thesis focuses on the analysis of privacy in composite services, whereby individuals interact with a service front-end calling on underlying sub-services that perform specific tasks. Individuals disclose personal data to services. As many services have ambiguous privacy policies, it remains unclear to individuals to what extent the acquired information is distributed and processed, and what other personal data is included and/or merged in user profiles. Moreover, much more personal data is often revealed than strictly necessary. As is well known, many authentication technologies, such as X.509 certificates, disclose personal data and make transactions linkable. Service providers must be limited in collecting an individual's personal data. In this context, data protection legislation, such as the EU General Data Protection Regulation, aims for data handling practices that are transparent to individuals. One of the key data protection principles is privacy by design, which requires service providers to build privacy safeguards into each design stage, from the earliest design stage onwards. Privacy-by-design engineering is complex, as designers must not only consider functional requirements; their design must satisfy privacy requirements as well. Computer-aided support tools assisting designers with their decisions are necessary.

Koen Decroix
Department Computer Science

PhD defence 21 October 2015

Supervisor Prof. dr. ir. Bart De Decker

Supervisor Prof. dr. Vincent Naessens

Funding IWT

E-mail [email protected]

Research Methodology
A logic-based privacy analysis framework (see Figure 1) is defined in this thesis. First, the key concepts necessary for capturing privacy in composite services are identified. The concepts are part of a blueprint that is defined for modeling composite services. Next, a framework containing the aforementioned concepts is defined that supports the automated analysis of privacy in services. Finally, queries that are defined provide qualitative feedback meaningful for both end-users and designers. The framework is realized using IDP, a knowledge-base system that automates the privacy analysis.

Results & Conclusions
The IDP realization of the framework is applied to multiple case studies from different application domains. The framework is flexible as it supports:
• detecting conflicts between the designed services and privacy requirements
• comparing users with different trust perceptions
• comparing the impact of selected credential technologies
• comparing alternative services

Major publication
Decroix, K., Lapon, J., De Decker, B. and Naessens, V. (2013). A Framework for Formal Reasoning about Privacy Properties Based on Trust Relationships in Complex Electronic Services. In Bagchi, A. (Ed.), Ray, I. (Ed.), Information Systems Security: Vol. 8303. ICISS 2013, Kolkata, pp. 106-120. Berlin Heidelberg: Springer-Verlag.

Model-Based Analysis of Privacy in Electronic Services

Figure 1: Logic-based framework, consisting of a system-independent modeling part (vocabulary, behavior and inference rules), input models (a user model with trust perception, credentials, identities and pseudonyms; an identifier model; and a system model with organizations, services and service policies covering access, storage, distribution and output) and a logic component that derives the conclusions.

139

Pieter Jacqmaer

Department Electrical Engineering (ESAT)

PhD defence 21 October 2015

Supervisor Prof. dr. ir. Johan Driesen

E-mail [email protected]

Introduction / Objective
The doctoral work researches the feasibility of using increased switching frequencies and fast components in power electronic converters. Not only are the fast wide-bandgap semiconductor components studied and characterized, but a lot of attention is also devoted to optimizing resonant topologies. In the next parts of the thesis, the possible problems in power converters operating with fast components are investigated: ringing, signal degradation, overvoltages and overcurrents, crosstalk and the production of electromagnetic interference. A tool is developed based on the PEEC method to accurately predict these phenomena.

Research Methodology
• A figure of merit is used to rank wide-bandgap power switching components according to their performance. It is based on the gate charge and the dynamic on-resistance; the latter is measured using a novel voltage-clamping circuit [1] (a small ranking sketch follows this list).
• The different resonant converter topologies are discussed. The conditions for zero-voltage switching are mathematically derived. An iteration analysis is performed to determine the optimal parameters of the resonant tank in an LLC converter.
• The full-wave PEEC technique is used to model PCB interconnections, and Jefimenko's equations are employed for the numerical field calculations.
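An assumed illustration of such a ranking, here taking the figure of merit as the inverse of the gate charge times the dynamic on-resistance (device names and values are placeholders):

devices = {
    "GaN_A": {"Qg_nC": 6.0, "Ron_dyn_mOhm": 70.0},
    "GaN_B": {"Qg_nC": 10.0, "Ron_dyn_mOhm": 45.0},
}
for name, d in sorted(devices.items(),
                      key=lambda kv: kv[1]["Qg_nC"] * kv[1]["Ron_dyn_mOhm"]):
    fom = 1.0 / (d["Qg_nC"] * d["Ron_dyn_mOhm"])   # smaller Qg*Ron means a better high-frequency switch
    print(name, round(fom, 5))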

Results & Conclusions
• A measurement methodology is developed to determine a high-frequency figure of merit. With it, fast GaN components can be compared to each other.
• A 50 W LLC converter, switching at 2.5 MHz, is developed, realizing zero-voltage switching and an efficiency of 84%.
• A freeware PEEC software tool, operating in the time and frequency domain, is developed.
• The near and far fields around a converter can be predicted with software receiving input from the PEEC tool.

Major publicationR. Gelagaev, P. Jacqmaer and J. Driesen, " A Fast Voltage Clamp Circuit for the Accurate Measurement of the Dynamic On-Resistance of Power Transistors," IEEE Transactions on Industrial Electronics, vol. 62, issue 2, February 2015, pp. 1241-1250

Hard- and Soft-Switching High-Frequency Power Electronics: Modelling of Parasitics and Electromagnetic Fields

140

Jiuyang Lin

Department Chemical Engineering

PhD defence 22 October 2015

Supervisor Prof. dr. ir. Bart Van der Bruggen

Funding China Scholarship Council (CSC)

E-mail [email protected]

Introduction / Objective
Textile wastewater is generated during the dye production and textile dyeing process and typically has a high salinity. Conventional approaches (i.e., adsorption, coagulation, biological treatment and advanced oxidation processes) mainly focus on dye removal or destruction, reducing the possibility for dye recovery and salt reuse from textile wastewater. Therefore, state-of-the-art treatment technologies focusing on the recovery of valuable resources from textile wastewater are on the research agenda.

Research Methodology
• Application of a loose nanofiltration membrane for the fractionation of dye/NaCl mixtures.
• Integration of loose nanofiltration (NF) and a bipolar membrane electrodialysis (BMED) process for the resource recovery (i.e. dye, acid, base and pure water) from textile wastewater.
• Application of a tight ultrafiltration (UF) membrane for the separation of dye/Na2SO4 mixtures.

Results & Conclusions
The loose nanofiltration membrane can effectively fractionate the dye/NaCl mixtures, yielding a >99.96% dye rejection and ~97.5% NaCl permeation (Fig. 1), indicating an alternative to dense NF membranes for textile wastewater treatment.
The loose NF and BMED hybrid process can technically recover the resources (i.e. dye extraction, acid/base and pure water production) from the textile wastewater, closing the material loop and realizing zero liquid discharge in textile wastewater treatment (Fig. 2).
Employment of a tight UF membrane can effectively fractionate the dye/Na2SO4 mixtures, yielding a >99.0% dye retention efficiency and complete Na2SO4 permeation, exhibiting a new opportunity for the separation of dye/salt mixtures (Fig. 3).
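As a small illustration of the fractionation figures quoted above, the sketch below computes the observed rejection R = 1 - c_permeate / c_feed from hypothetical feed and permeate concentrations; the concentration values are assumptions, not measurements from the thesis.

# Illustrative sketch (values hypothetical): observed rejection and permeation
# for a dye/salt mixture, R = 1 - c_permeate / c_feed.
def rejection(c_feed, c_permeate):
    return 1.0 - c_permeate / c_feed

dye_R  = rejection(c_feed=1.00, c_permeate=0.0004)   # ~99.96% dye rejection
salt_R = rejection(c_feed=50.0, c_permeate=48.75)    # ~2.5% rejection -> ~97.5% NaCl permeation
print(f"dye rejection: {dye_R:.2%}, NaCl permeation: {1 - salt_R:.1%}")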

Fig. 2: Schematic of resource recovery from textile wastewater by the loose NF and BMED process

Major publication
Jiuyang Lin, Wenyuan Ye, Huiming Zeng, Hong Yang, Jiangnan Shen, Siavash Darvishmanesh, Patricia Luis, Arcadio Sotto, Bart Van der Bruggen. Fractionation of direct dyes and salts in aqueous solution using loose nanofiltration membranes. Journal of Membrane Science 2015, 477, 183-193.

Membrane technologies for fractionation of dye/salt mixture and resource reuse in textile industry

Fig. 1: Schematic of fractionation of dye/NaCl mixture by loose NF membrane

Fig. 3: Schematic of fractionation of dye/Na2SO4 mixture by tight UF membrane

141

Wauman Barbara
Department Civil Engineering

PhD defence 23 October 2015

Supervisor Prof. dr. ir. arch. Saelens Dirk

Co-supervisor dr. ir. Breesch Hilde

E-mail [email protected]

Introduction / Objective
In Flanders, a monthly, quasi-steady-state calculation tool is used for energy rating and certification of building designs. Due to the implemented monthly averaged input data or inaccurate model simplifications, calculation results differ significantly from dynamic simulation results, restricting the effectiveness of the energy building policy. Therefore, the accuracy of the currently applied calculation method is analysed, focusing on school buildings in Flanders in particular.

Major publication
B. Wauman, H. Breesch, D. Saelens (2013). Evaluation of the accuracy of the implementation of dynamic effects in the quasi-steady-state calculation method for school buildings, Energy and Buildings, 65, 173-184.

Evaluation of the quasi-steady-state method for the assessment of energy use in school buildings

Research Methodology
An uncertainty analysis through the Monte Carlo Latin Hypercube sampling technique reveals the impact of the input data. A sensitivity analysis, using the elementary effect method of Morris, determines the predominant boundary conditions, for which more representative values are set based on survey data.

The influence of typical school buildings' characteristics on the energy demand is studied by dynamic simulations. The correlation-based correction factors used in the monthly method are adapted accordingly using regression analysis techniques.

A series of integrated, dynamic building and HVAC system simulations is performed to assess the reliability of the simplified, sequential subsystem calculation approach used for energy use calculations.
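A minimal sketch of the uncertainty-analysis step is given below; the input parameters, their ranges and the placeholder demand model are assumptions for illustration only, not values from the study.

# Minimal sketch (assumed inputs) of an uncertainty analysis with Latin Hypercube
# sampling, in the spirit of the methodology described above.
import numpy as np
from scipy.stats import qmc

rng_bounds = {            # hypothetical input parameters and their ranges
    "infiltration_ach":   (0.1, 0.6),
    "internal_gains_Wm2": (2.0, 8.0),
    "setpoint_C":         (19.0, 22.0),
}
lower, upper = map(np.array, zip(*rng_bounds.values()))

sampler = qmc.LatinHypercube(d=len(rng_bounds), seed=0)
samples = qmc.scale(sampler.random(n=200), lower, upper)  # 200 LHS samples

def heating_demand(x):
    # placeholder for the (quasi-steady-state or dynamic) building model
    infiltration, gains, setpoint = x
    return 40 + 25 * infiltration - 3 * gains + 5 * (setpoint - 20)

demands = np.apply_along_axis(heating_demand, 1, samples)
print(f"mean = {demands.mean():.1f} kWh/m2, std = {demands.std():.1f} kWh/m2")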

Results & Conclusions
The following modifications are suggested, ordered based on their priority:
The implementation of a more diverse room type profile, including representative boundary conditions
A revision of the tabulated subsystem efficiencies based either on the results of the dynamic simulations or on the alternative calculation approach of EN 15316
New values for the utilisation and intermittency factor, specifically adapted to the typically Flemish school use

Fig: Sankey diagram of heating energy flows

Fig: impact of suggested changes on the calculation results (a) heating demand (b) final energy use for heating

142

Andy Gijbels
Department Mechanical Engineering

PhD defence 26 October 2015

Supervisor Prof. dr. ir. Dominiek Reynaerts

Co-supervisor Prof. dr. ir. Jos Vander Sloten

Funding IWT

E-mail [email protected]

Introduction / Objective
Retinal Vein Occlusion is an eye disorder where clots formed inside retinal vessels cause severe loss of vision. The disease affects an estimated 16.4 million people worldwide. Retinal vein cannulation is a promising treatment where the surgeon would insert a needle via a small incision in the eye into the affected veins and subsequently inject an adequate dose of a clot-dissolving drug (Fig. 1). Given the scale (30 µm - 400 µm in diameter) and the fragility of retinal veins on one side and the surgeon's limited positioning precision and force perception on the other side, this procedure is considered too risky to perform manually. This work reports on the development and evaluation of dedicated robotic technology to enable surgeons to perform retinal vein cannulation in a safe and successful manner.

Research Methodology
Starting from an early prototype, a fully dedicated robotic system for retinal surgery is developed in this research (Fig. 2). The surgeon and the robot simultaneously hold the instrument. The surgeon retains full control over the instrument motion. Viscous forces generated by the robot minimize the surgeon's hand tremor such that the needle can be precisely inserted into the targeted vessel. Further, the world's thinnest stainless steel injection needle is developed, having a tip diameter of only 80 µm (Fig. 3). Additionally, the needle is equipped with an optic force sensor which is shown to be capable of automatically detecting 98% of all puncture events. Auditory feedback is used to inform the surgeon of such an event. Finally, the robot can lock the needle into position once inserted into the vessel. This enables a prolonged hands-free injection of the drug.

Results & Conclusions
A junior retinal surgeon was invited to perform retinal vein cannulation on retinal vessels of enucleated porcine eyes with the aid of the developed technology. The results are extremely encouraging, as the technology enabled the surgeon to perform twenty successful cannulations and injections out of twenty attempts.

Fig.1: Retinal vein cannulation.

Major publication
A. Gijbels, E.B. Vander Poorten, B. Gorissen, A. Devreker, P. Stalmans and D. Reynaerts (2014). Experimental Validation of a Robotic Comanipulation and Telemanipulation System for Retinal Surgery. Proceedings of the International Conference on Biomedical Robotics and Biomechatronics, 144-150.

Development and evaluation of robotic technology for safe and successful retinal vein cannulation

Fig. 3: A conventional needle (upper) and the developed 80 µm-diameter force-sensitive cannulation needle (lower).

Fig. 2: Surgeon controlling the robotic system.

143

Wenyuan Ye
Department Chemical Engineering

PhD defence 26 October 2015

Supervisor Prof. dr. ir. Bart Van der Bruggen

Co-supervisor Prof. dr. ir. Patricia Luis Alconero

Funding China Scholarship Council (CSC)

E-mail [email protected]

Introduction / Objective
The prospect of climate change due to global warming has attracted international concern, due to the increase in atmospheric CO2 concentration. CO2 capture by an absorbent, i.e. monoethanolamine (MEA), is feasible, but its high volatility and high cost for regeneration may significantly hinder its application. NaOH can be an alternative to MEA due to the faster reaction and low price. Therefore, state-of-the-art technologies for NaOH production from wastewater are on the research agenda, in view of both cost and energy requirements.

Research Methodology
Implementation of bipolar membrane electrodialysis (BMED) in the treatment of glyphosate neutralization liquor for NaOH production, in view of CO2 capture.
Application of a hollow fiber (HF) membrane for Na2CO3 crystallization in a CO2 capture scenario.
Employment of dense membranes (i.e., reverse osmosis and forward osmosis membranes) for Na2CO3 crystallization in a CO2 capture scenario.

Results & Conclusions
The BMED process can effectively recover the glyphosate and produce the base from the glyphosate neutralization liquor. The base with high concentration shows a high potential for CO2 capture (Fig. 1).
The HF membrane shows an excellent performance for the crystallization of pure Na2CO3, which can exclude the impurities (i.e., Na2SO4, NaCl, NaNO3) (Fig. 2). However, it exhibits a low water flux.
Compared to the HF membrane, dense membranes offer a higher water flux, which can improve the crystallization efficiency of Na2CO3 without sacrificing crystal purity (Fig. 3).

Major publication
W. Ye, J. Wu, F. Ye, H. Zeng, A.T. Tran, J. Lin, P. Luis, B. Van der Bruggen. Potential of osmotic membrane crystallization using dense membranes for Na2CO3 production in a CO2 capture scenario. Crystal Growth & Design, 15 (2015), 695-705.

Hybrid membrane technologies with CO2 mitigation and resource recovery

Fig. 1: Schematic of BMED for NaOH production and glyphosate recovery from wastewater

Fig. 2: Schematic of HF membrane for Na2CO3 crystallization with different impurities

Fig. 3: Schematic of dense membrane for Na2CO3 crystallization in a CO2 capture scenario

144

Annemans Margo
Department Architecture

PhD defence 30 October 2015

Supervisor Prof. dr. ir. Ann Heylighen,

Co-supervisor Prof. dr. Chantal Van Audenhove, Arch. Hilde Vermolen

Funding Agency for Innovation by Science and Technology

E-mail [email protected]

Introduction / Objective
Hospital buildings tend to be experienced by patients from a perspective that is atypical for architects, namely lying in a hospital bed, statically and in motion. This altered perspective has a significant impact on patients' spatial experience. Gaining insight into this experience is for most architects not trivial, but crucial if they are to design truly patient-centred hospitals. This PhD research started from a twofold aim. The first aim was to gain insight into patients' spatial experience. To this end, aspects relevant to architectural practice that have an impact on patients' spatial experience of a hospital environment are investigated. The second aim was to inform hospital design on this experience to anticipate the needs of patients and other users. Therefore it was investigated how insight into patients' spatial experience can be translated into a format that is applicable in architectural practice.

Research Approach
A sensory-rich, experience-oriented, and flexible research approach addressing motion was developed. Combining different methods that take into account the different sensory modalities involved in patients' spatial experience allowed tailoring the approach to patients' particular situation. Four research settings covering three patient profiles were studied: in-patients being transported along a familiar route to the dialysis; patients arriving at the emergency department; and patients at two day surgery centres with a distinct managerial and spatial concept.

Results & Conclusions
The PhD contains specific contributions for architects, healthcare providers, and researchers. Apart from offering architects guidance to conduct fieldwork themselves, it formulates explicit recommendations on how to design more patient-centred hospitals. Healthcare providers are shown how to pay more explicit attention to the impact of the built environment on managerial organisation and patients' experience, both in care practice and during design briefing. For researchers, the PhD documents a research approach specifically addressing motion, a topic that is underresearched on a building scale. It also sheds a new light on the impact of space on patients' experience, static and in motion, which could add to existing research on patient experience, mostly from a nursing perspective. Finally it contributes to design research by pointing at the added value of experiential information for architectural practice. By adequately translating the insight gained into patients' spatial experience to these three groups, this PhD contributes to realising truly patient-centred hospital buildings.

Combination of techniques showing how a patient in a bed is transported through the hospital

Major publication
Annemans, M., Van Audenhove, C., Vermolen, H., Heylighen, A., 2012. Hospital Reality from a Lying Perspective: Exploring a Sensory Research Approach. In: Langdon P., Clarkson P., Robinson P., Lazar J., Heylighen A. (Eds.), Designing Inclusive Systems, Chapt. 1. Springer-Verlag, London, pp. 3-12. (Awarded CWUAAT2012 Best Paper Prize)

The experience of lying: Informing the design of hospital architecture on patients’ spatial experience in motion

145

Jeroen Tacq
Department Materials Engineering (MTM)

PhD defence 03 November 2015

Supervisor Prof. dr. Marc Seefeldt

Co-supervisor Prof. dr. ir. Bert Verlinden

Funding FWO

Introduction / Objective
Many engineering materials consist of multiple phases, whose very presence results in superior material properties. During deformation, residual stresses and strains will develop in the material, and will either positively or adversely impact the properties of the material. It is therefore essential to understand how and why residual stresses develop in engineering materials, if we want to predict and improve their properties. As a case study, pearlitic steel will be investigated because of (1) its industrial relevance: sawing wire, cabling for bridges, train rails; (2) the daunting scientific questions that remain unanswered; and (3) its mesmerizing microstructure, consisting of an intricate assembly of soft ferrite (dark) and hard cementite (bright) lamellae. The ultimate goal of this fundamental research is to measure, but above all explain, the evolution of the residual strain with ongoing plastic deformation and, in the process, learn more about the deformation behavior of this beautiful, but elusive material.

Research Methodology
Neutron and synchrotron diffraction studies on both cold drawn and cold rolled pearlite were performed at various international laboratories in order to measure the residual lattice strain evolution. The microstructure of the material was also investigated. These results were compared to an intuitive model, in which the lamellar microstructure of pearlite is idealized as a perfect stack of flat, alternating ferrite and cementite lamellae.
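To make the measured quantity concrete, the sketch below shows how a residual lattice strain follows from a diffraction peak position via Bragg's law, eps = (d - d0) / d0; the wavelength, peak position and stress-free spacing are assumed example numbers, not data from the thesis.

# Minimal sketch (assumed numbers, not thesis data) of extracting a residual lattice
# strain from a diffraction measurement: eps = (d - d0) / d0, with d from Bragg's law.
import math

wavelength = 1.54e-10        # X-ray wavelength [m], assumed (Cu K-alpha)
two_theta  = 44.70           # measured peak position [deg], assumed
d0         = 2.0268e-10      # stress-free spacing of the ferrite {110} plane [m], assumed

d = wavelength / (2.0 * math.sin(math.radians(two_theta / 2.0)))   # Bragg's law
eps = (d - d0) / d0
print(f"residual lattice strain: {eps * 1e6:.0f} microstrain")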

Results & Conclusions
Three stages in the residual lattice strain (RS) evolution were found: (1) a quick increase of RS, (2) a saturation of RS and (3) strong intergranular RS development. Following the approach outlined above, a full explanation for all three stages is suggested. A number of particularly interesting conclusions could be drawn:
Ferrite and cementite have to deform by a different amount, even within the (imperfect) lamellar microstructure.
The cementite phase likely exhibits some type of strain hardening, linked to increasingly difficult dislocation nucleation.
A dislocation rich layer develops at the ferrite-cementite interface.

Residual and internal stress development resulting from plastic deformation of multi-phase alloys – The case of pearlite

Figure annotations: deformation along the rolling direction; cementite strain hardening occurs.

146

Despoina Vriami
Department Materials Engineering (MTM)

PhD defence 05 November 2015

Supervisor Prof. dr. ir. Omer Van der Biest

Co-supervisor Prof. dr. ir. Jozef Vleugels

Funding IWT (SBO-PROMAG 60056), KU Leuven GOA/2008/007

E-mail [email protected]

Introduction / Objective
Texture research aims to develop materials with favorable properties. Texturing of materials is very important since many material properties are orientation specific. The main goal of this research work is to develop methods for texturing polycrystalline ceramics with special properties (electrical, piezoelectric and mechanical).

Research Methodology
The employed methods to achieve texturing were templated grain growth (TGG) and magnetic alignment, by colloidal processing in a strong magnetic field (Fig 1 & 2). The colloidal processes used were electrophoretic deposition (EPD) and slip casting, and the materials investigated were BaTiO3, α-Al2O3, 3Y-TZP and 12Ce-TZP.

Results & Conclusions
Templated Grain Growth (TGG): highly textured BaTiO3 achieved by the TGG process (Fig. 3). The piezoelectric constant of the textured BaTiO3 increased by 47% compared to the randomly oriented ceramic.
Magnetic Alignment: high texture achieved in α-Al2O3 by slip casting in a strong magnetic field of 14 T and in BaTiO3 in a 17.4 T strong magnetic field. Highly textured zirconia (3Y-TZP and 13Ce-TZP) aligned in 17.4 T.

Major publication
D. Vriami, E. Beaugnon, J-P. Erauw, J. Vleugels, O. Van der Biest (2015). Journal of the European Ceramic Society, 35, 3959-3967.

Application of a strong magnetic field for texturing of technical ceramics

Fig. 1: Schematic of the alignment of the platelets by the doctor blade

Fig. 2: Magnetic alignment during EPD in a strong magnetic field

Fig. 3: 100 and 101 pole figures of sintered BaTiO3 aligned with BaTiO3 platelets

Fig. 4: Vickers indentations and radial crack pattern on the sintered zirconia slip cast outside a magnetic field (a) and the surface parallel (b) to the 17.4 T field.

Mechanical properties of 3Y-TZP are significantly enhanced by texturing achieved by colloidal processing (EPD or slip casting) in 17.4 T (Fig. 4).


147

Benjamin Gorissen
Department Mechanical Engineering

PhD defence 06 November 2015

Supervisor Prof. dr. ir. Dominiek Reynaerts

Co-supervisor Prof. dr. ir. Michaël De Volder

Funding Research Foundation – Flanders (FWO)

E-mail [email protected]

Introduction / Objective
Modern industrial and medical applications require miniature actuators that are able to generate large strokes without the risk of damaging their surroundings. Flexible fluidic actuators are especially interesting for applications in close proximity to living organisms due to inherent safety features caused by material compliance. This research focusses on flexible fluidic actuators with a bending and twisting deformation under pressurization. Covered topics range from analysis and optimization to production, augmentation and application.

Research Methodology
The large bending deformation that an inflatable asymmetric structure exhibits when pressurized is analyzed using a new model consisting of an axial deformation step and a longitudinal deformation step. Prototyped actuators, fabricated using a newly developed single step micromolding process and a full lithography production process, are used to validate the suggested analytical model. These production processes lead to new application possibilities: a ciliary propulsion mechanism and a flexible endoscope. Further, flexible fluidic twisting actuators are analyzed, following the same methodology, resulting in a two degrees of freedom tilting mirror platform application.

Results & Conclusions
The proposed bending deformation model shows good correspondence with measurements on actuator prototypes, as can be seen in the top figure on the right. This model is used to design optimal actuators that experience minimal internal stresses for a given bending deformation. By avoiding manual process steps, the new full lithography production process is used to further minimize bending actuator dimensions, with a lower limit in feature size of 4 µm. The deformation of a prototyped bending actuator made using this process is shown in the left image of the 2nd figure; the minimal attainable size in the right image. Optimal flexible fluidic twisting actuators, made out of a single material, are produced that can achieve a twisting deformation of 70°. The deformation of such an actuator is shown in the 3rd figure. These optimal twisting actuators are incorporated in a tilting mirror platform, capable of a tilting deformation from -25° to +25° about two axes. Optimal bending actuators are incorporated in a ciliary propulsion system that shows a max net fluid flow of 19 mm/s. Finally, a bending actuator is equipped with a small camera to form a flexible endoscope with a diameter of 1.66 mm and an increase in field of view of 45°, as is shown in the last figure.

Major publication
B. Gorissen, M. De Volder, D. Reynaerts (2015). Pneumatically-actuated artificial cilia array for biomimetic fluid propulsion. Lab on a Chip.

Pneumatic and Hydraulic Microactuators: Research and Development

148

Emmanuel Midheme
Department Architecture

PhD defence 10 November 2015

Supervisor Prof. Frank Moulaert

Co-supervisor Prof. Maarten Loopmans

Funding Interfacultaire Raad voorOntwikkelingssamenwerking (IRO)

E-mail [email protected]

Introduction / Objective
This research examines the social production of urban space within two rapidly transforming secondary cities of Kenya, namely Voi and Kisumu. The objective is to assess the institutional capacity and practices of both 'official' and 'popular' agents of space production, with respect to how their respective knowledges, demands, and practices are unrolled and integrated in (re)producing various urban spatialities.

Research Methodology
The study conceptualizes urban space as a 'social construction' that is contingent upon experiences, practices and power geometries that shape the relations between various social groups and institutional logics in the city. The research leverages the Lefebvrian concepts of the production of space and the right to the city to provide a critical reading of how marginalized groups employ various forms of social innovation and insurgent urbanism to appropriate and defend crucial spaces of livelihoods, shelter and urban services. The empirical study employs a multiple case design using ethnographic techniques of data collection and analysis.

Results & Conclusions
The findings dismantle formal/informal duality discourses, and raise key questions of legitimacy and transcending divides in the everyday practices of producing urban space within rapidly transforming cities. Nevertheless, the focus on everyday agency does not obscure structural relations and material dimensions that continue to shape urban spatial production. Socially innovative practices, for example, suffer cooptation by the state and other powerful agents and may unwittingly reproduce the liberal discourse that celebrates the entrepreneurial and absolves the state from its core responsibilities. Similarly, insurgent urbanism may not always be progressive, especially where powerful actors usurp the process, reproducing the very structures of exclusion that insurgency aims to dismantle.

Major publications
Midheme, E. and Moulaert, F. (2013) 'Pushing back the frontiers of property: community land trusts and low-income housing in urban Kenya', Land Use Policy, 35: 73-84.
Midheme, E. (2013) 'Venturing off the beaten path: Social innovation and settlement upgrading in Voi, Kenya' in F. Moulaert, D. MacCallum, A. Mehmood and A. Hamdouch (eds). The International Handbook on Social Innovation. Cheltenham, UK and Northampton, MA, US: Edward Elgar.
Midheme, E. (2013) '(Re)designing urban land tenure to meet housing needs of the poor: implementing community land trusts in urban Kenya', Planum: The Journal of Urbanism 1(26): 1-12.

Modalities of space production within Kenya's rapidly transforming cities: Cases from Voi and Kisumu

149

Niels Leemput
Department Electrical Engineering (ESAT)

PhD defence 13 November 2015

Supervisor Prof. dr. ir. Johan Driesen

Funding IWT (2012-2015)

IET-KIC InnoEnergy (2013-2015)

E-mail [email protected]

Introduction / Objective
The number of plug-in electric vehicles (PEVs) on the road is growing significantly, which allows reducing the consumption of greenhouse gas emitting fossil fuels, such as gasoline and diesel. The absence of tailpipe emissions reduces the local concentrations of harmful pollutants, which benefits human health. As their number increases, the grid impact of PEV charging is observed more widely. Due to the clustering of PEV users, high local concentrations may already occur in the near-term future, which will already impact certain distribution grids.
The objective of this dissertation is to analyze how to locally control the charging process of PEVs, to mitigate their distribution grid impact. Local active and reactive power charging strategies, requiring no external inputs or merely a user input (next departure time), are investigated. The distribution grid impact and sizing requirements of fast charging infrastructure are assessed, this being an indispensable option for battery electric vehicles, complementary to slow charging.

Research Methodology
An unbalanced load flow model is used for the distribution grid impact assessment of the investigated charging strategies and fast charging infrastructure. The main contributions are:
Improved PEV load modeling, by taking into account mobility behavior, fleet composition, battery capacity, standardized charging power rating, and charging opportunities.
The identification of the need for local PEV charging strategies, to mitigate the distribution grid impact, to manage local clusters of PEVs that will occur prior to widespread PEV penetration.
The combined modeling of slow and fast charging behavior, as they complement each other.

Results & Conclusions
The investigated PEV charging strategies allow to substantially mitigate the distribution grid impact of PEV charging, with limited adaptations compared to their current implementation. The active power control strategies could be implemented on all of the currently used onboard PEV chargers. The reactive power control strategies can be implemented on onboard PEV chargers with a full-bridge active rectifier topology, as used for several PEVs. The distribution grid impact of the slow charging control strategies is more significant than the presence of fast charging infrastructure. Therefore, the distribution grid impact of fast charging infrastructure can even be compensated for by implementing the proposed control strategies for slow charging.
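As a toy illustration of what a local charging strategy without external inputs can look like, the sketch below implements a simple voltage-droop rule that derates the charging power when the locally measured voltage sags; the droop thresholds and power rating are assumptions and the rule is only illustrative, not the specific strategies investigated in the dissertation.

# Illustrative sketch (assumed parameters) of a local, measurement-only charging strategy:
# a voltage-droop rule that reduces the PEV charging power when the local grid voltage sags.
def droop_charging_power(v_pu, p_max_kw=3.3, v_full=0.97, v_min=0.90):
    """Full power above v_full, linear reduction between v_full and v_min, zero below v_min."""
    if v_pu >= v_full:
        return p_max_kw
    if v_pu <= v_min:
        return 0.0
    return p_max_kw * (v_pu - v_min) / (v_full - v_min)

for v in (1.00, 0.96, 0.93, 0.89):
    print(f"V = {v:.2f} pu -> charging power = {droop_charging_power(v):.2f} kW")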

Grid-Supportive Charging Infrastructure for Plug-In Electric Vehicles

Major publications N. Leemput, F. Geth, J. Van Roy, J. Büscher, and J. Driesen, “Reactive power support in residential LV distribution grids through electric vehicle

charging,” Sust. Energy, Grids and Networks, vol. 3, pp. 24-35, Sept. 2015. N. Leemput, F. Geth, J. Van Roy, P. Olivella-Rosell, J. Driesen, and A. Sumper, “MV and LV residential grid impact of combined slow and fast charging

of electric vehicles,” Energies, vol. 8, no. 3, pp. 1815-1822, Mar. 2015. N. Leemput, F. Geth, J. Van Roy, A. Delnooz, J. Büscher, and J. Driesen, “Impact of electric vehicle on-board single-phase charging strategies on a

Flemish residential grid,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 1815-1822, Jul. 2014.

Impact of local charging strategy on the aggregated PEV charging behavior (top), and on the total MV feeder load (bottom).

150

Gabrijel Smoljkic
Department Mechanical Engineering

PhD defence 19 November 2015

Supervisor Prof. dr. ir. Jos Vander Sloten

Co-supervisor Prof. dr. ir. Dominiek Reynaerts

Funding FP7

E-mail [email protected]

Introduction / Objective
A growing interest can be seen in the use of continuum robots in tasks where interaction with fragile structures in the environment is required. The reason for this interest is that the compliant nature of these devices brings in an inherent level of safety. Continuum robots and instruments would, at least initially, bend away upon contact or impact. Work in this thesis treats the problems in the domain of modelling and control of continuum robots.

Research Methodology
Modeling of a continuum robot relates to finding the shape of the robot when it is loaded with various forces. Following these derivations, a novel framework for the derivation of robot differential kinematics has been developed. Having these key ingredients, the work proceeds towards control of continuum robots. Here a constraint-based control principle has been found suitable due to its ability to combine control of various tasks. Constraint-based control is a control principle commonly used to control rigid link robots. Here this framework is applied to control continuum robots in tasks which require robot positioning and force control. Finally, an integrated system for applications in robotic surgery has been developed. The integrated system features a hybrid combination of rigid and continuum robots. This combination allows for increased dexterity of the robotic system and compliant behavior.
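A generic sketch of the constraint-based control idea is given below: several task or constraint errors are stacked and mapped to joint velocities through a damped least-squares inverse of the stacked Jacobian. The Jacobians, gains and errors are made-up numbers; the thesis framework, and its handling of continuum-robot differential kinematics under external loads, is richer than this illustration.

# Minimal sketch (generic, not the thesis framework) of a constraint-based control step:
# task errors are stacked and mapped to joint velocities through a damped least-squares
# inverse of the stacked task Jacobian, qdot = J^T (J J^T + lambda^2 I)^-1 * K * e.
import numpy as np

def control_step(J_tasks, errors, gains, damping=1e-2):
    J = np.vstack(J_tasks)                      # stacked constraint/task Jacobians
    e = np.concatenate([k * err for k, err in zip(gains, errors)])
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, e)        # joint velocity command

# toy example: a 3-DOF device with a 2D position task and a 1D force-like task
J_pos   = np.array([[1.0, 0.5, 0.0],
                    [0.0, 1.0, 0.3]])
J_force = np.array([[0.2, 0.0, 1.0]])
qdot = control_step([J_pos, J_force], [np.array([0.01, -0.02]), np.array([0.5])], gains=[5.0, 0.1])
print("commanded joint velocities:", qdot)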

Results & Conclusions
The developed framework allows for calculation of the differential kinematics as a function of externally applied loads. Through the application of the constraint-based control framework, the hybrid rigid-continuum robot achieves smooth motions in the presence of various constraints present in the robotic system.

The developed continuum robot (top figure) and the integrated hybrid rigid-continuum robot (right figure)

Major publication
Smoljkic, G., Borghesan, G., Reynaerts, D., Vander Sloten, J., Vander Poorten, E. (2015). Constraint-Based Interaction Control of Robots Featuring Large Compliance and Deformation. IEEE Transactions on Robotics, 31 (5), 1252-1260.

Modelling and constraint-based control of continuum robots

151

José Luis Santos
Department Computer Science

PhD defence 20 November 2015

Supervisor Prof. dr. ir. Erik Duval

Co-supervisor Prof. dr. Katrien Verbert

Funding weSPOT - IST (FP7/2007-2013)

E-mail [email protected]

Introduction / Objective
Learning Analytics is the field that tackles the challenge of making sense out of the collected data in educational environments. Learning analytics dashboards can support students and teachers to steer the learning process. In this dissertation we tackle questions that discuss what kind of learning traces can be visualised for learners and teachers as well as the affordances and problems of using manual and automatic trackers in learning settings. We also discuss what are the key components of a simple architecture to collect, store and manage learning activity.

Research Methodology
Design-based research is the methodology we follow to explore the field. A methodology that relies on iterative cycles of design, enactment, analysis, and redesign of the software. The researcher does not have control of the experiment. Therefore, the interventions are contextually dependent. They are not performed solely based on the research goals. The goal is to document and connect outcomes with the development process and the authentic setting.

Results & Conclusions
We designed, developed and deployed three learning dashboards. These were evaluated with 128 students. We analysed two language learning MOOC datasets; 56876 students enrolled in these courses. The architecture supporting the learning dashboards was also deployed in more than fifteen case studies.

The results point out:
• The usefulness of dashboards to compare students' activity with their peers.
• Manual and automatic trackers have benefits and drawbacks.
• This research also identifies three components to deploy a simple and flexible architecture to collect data in open learning environments.

We also point out the opportunity we have in Learning Analytics to enhance the interaction on big and small displays as well as to integrate different sensors in the learning process.

Major publication
Santos, Jose Luis; Verbert, Katrien; Klerkx, Joris; Charleer, Sven; Duval, Erik; Ternier, Stefaan. 2015. "Tracking data in Open Learning Environments", Journal of Universal Computer Science, 21(7), 976-996.

Exploring Learning Analytics and Learning Dashboards from a HCI Perspective

152

Geert Bauwens
Department Civil Engineering

PhD defence 23 November 2015

Supervisor Prof. dr. ir.-arch. Staf Roels

Co-supervisor Prof. dr. ir. Geert Lombaert and Prof. dr. ir. Henrik Madsen

E-mail [email protected]

Introduction / Objective
The building sector accounts for about 40 % of Europe's total final energy use, with most of this attributable to space heating. Several studies indicate a discrepancy between designed and actual performance of building envelopes. We thus need reliable methods to verify the latter. In this work, we investigate the thermal performance characterisation of whole building envelopes on the basis of specific heating experiments, with a focus on their overall heat loss coefficient, H.

Research Methodology
We develop a building physical framework to describe a building's behaviour during specific heating experiments. From this framework, we derive simplified thermal models that are suitable to be fitted to data collected from quasi-stationary and dynamic heating experiments. We define four scenarios that provide tailored strategies to estimate H. The developed methodology is applied to three quasi-stationary and three dynamic test cases. We suggest a flowchart to guide practitioners in reliably characterising the overall heat loss coefficient of buildings.

Results & Conclusions
The direct link between simplified models and the building physical framework allows to correctly interpret model parameters. We show that black-box ARX models comprise physically relevant parameters. The selection of a suitable scenario underpins the reliability and physical relevance of obtained estimates. Scenarios where H is estimated as the stationary gain with regard to Ti yield very reliable and robust results. A dynamic test analysed with ARX modelling proves not more robust than a quasi-stationary test analysed with linear regression or ARX. The combination of a dynamic test and grey-box modelling yields reliable results for short measurement durations.
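The simplest of these estimation scenarios can be illustrated with a few lines of code: for a quasi-stationary (co-heating) test, H follows from a linear regression of the heating power on the indoor-outdoor temperature difference, Q ≈ H·(Ti − Te). The numbers below are invented daily averages and the regression ignores solar and other gains, so this is only a sketch of the idea, not the full framework.

# Minimal sketch (assumed data) of estimating the overall heat loss coefficient H
# from a quasi-stationary co-heating test via linear regression: Q = H * (Ti - Te).
# Solar and internal gains are neglected in this toy example.
import numpy as np

Ti = np.array([20.1, 20.0, 19.9, 20.2, 20.0, 20.1])   # indoor temperature [C], assumed
Te = np.array([5.2,  3.8,  7.1,  1.5,  4.4,  6.0])    # outdoor temperature [C], assumed
Q  = np.array([1180, 1295, 1030, 1490, 1240, 1120])   # heating power [W], assumed

dT = Ti - Te
H = np.linalg.lstsq(dT.reshape(-1, 1), Q, rcond=None)[0][0]   # regression through origin
print(f"estimated H = {H:.1f} W/K")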

Building physical framework.

Major publication
Bauwens, G. and Roels, S. (2014). Co-heating test: A state-of-the-art. In: Energy and Buildings 82, October 2014, pp. 163-172.

In situ testing of a building's overall heat loss coefficient: Embedding quasi-stationary and dynamic tests in a building physical and statistical framework

Deviation of estimates from their mean, in %

Quasi-stationary test setup in investigated house

153

Mario Zanon
Department Electrical Engineering (ESAT)

PhD defence 26 November 2015

Supervisor Prof. dr. ir. Moritz Diehl

Co-supervisor Prof. Alberto Bemporad, Ass. Prof. Sébastien Gros

Funding SADCO, HIGHWIND

E-mail [email protected]

Introduction / Objective
This thesis is concerned with optimal control techniques for optimal trajectory planning and real-time control and estimation. The framework of optimal control is a powerful tool which enjoys increasing popularity due to its applicability to a wide class of problems and its ability to deliver solutions to very complicated problems which cannot be intuitively solved.

Research Methodology
Fast and reliable implementations hinge on suitable problem formulations, careful implementations and the use of tailored algorithms. In this thesis, we address the problem of applying optimal control in real time for fast nonlinear constrained dynamic systems. On the one hand, we develop a modelling approach for multibody systems which yields optimisation-friendly models of reduced complexity. On the other hand, we address the formulation of optimal control problems and propose reliable and robust problem formulations as well as techniques for initialising the algorithms. Moreover, we extend stability theory for economic MPC and propose a tuning strategy for tracking MPC so as to locally approximate the feedback control law yielded by economic MPC. We apply our developments to autonomous driving and tethered airfoils for energy harvesting.
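For readers unfamiliar with model predictive control, the sketch below shows the bare receding-horizon idea on a double integrator: at every sample an open-loop optimal control problem is solved and only the first input is applied. The model, horizon, weights and solver are arbitrary choices for illustration; they are not the tailored formulations or algorithms developed in the thesis.

# Illustrative sketch (not the thesis formulation): a basic receding-horizon
# tracking MPC loop for a double integrator, solved with a generic NLP solver.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 20                      # sample time and prediction horizon
x_ref = np.array([1.0, 0.0])         # position/velocity reference

def simulate(x0, u_seq):
    # forward-simulate x+ = [p + v*dt, v + u*dt] over the horizon
    xs, x = [], x0.copy()
    for u in u_seq:
        x = np.array([x[0] + x[1] * dt, x[1] + u * dt])
        xs.append(x)
    return np.array(xs)

def cost(u_seq, x0):
    xs = simulate(x0, u_seq)
    return np.sum((xs - x_ref) ** 2) + 1e-2 * np.sum(u_seq ** 2)

x = np.array([0.0, 0.0])
for _ in range(50):                  # closed-loop simulation
    res = minimize(cost, np.zeros(N), args=(x,), bounds=[(-2.0, 2.0)] * N)
    u0 = res.x[0]                    # apply only the first input (receding horizon)
    x = np.array([x[0] + x[1] * dt, x[1] + u0 * dt])
print("final state:", x)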

Results & Conclusions
We validated the theoretical developments by applying them to nontrivial systems, e.g. tethered airfoils: tuned tracking MPC (dashed blue) is a good approximation of economic MPC (red), while standard tracking MPC (black) has a different behaviour.

Dual airfoils are more advantageous than single airfoils because they can reach the height having the maximum available wind power.

Economic MPC can be stabilising also for periodic systems: we illustrate the theory by applying it to the control of an autonomous car.

Major publication
M. Zanon, S. Gros and M. Diehl. Indefinite Linear MPC and Approximated Economic MPC for Nonlinear Systems. Journal of Process Control, 2014, (24) 1273-1281.

Efficient Nonlinear Model Predictive Control Formulations for Economic Objectives with Aerospace and Automotive Applications

154

Sijia Jiang
Department Materials Engineering (MTM)

PhD defence 26 November 2015

Supervisor Prof. dr. ir. Marc Heyns

Co-supervisor Prof. dr. ir. Marc Seefeldt

Funding imec

E-mail [email protected]

Introduction / Objective
According to the International Technology Roadmap for Semiconductors (ITRS), high mobility semiconductors (e.g. Ge, InGaAs) will replace the conventional Si to serve as channel material at the sub-10nm technology node, in order to further boost the performance of complementary metal-oxide-semiconductor (CMOS) devices. In order to be compatible with the well-established Si CMOS technology, the hetero-integration of these high mobility semiconductors onto large scale Si wafers is utilized to realize this replacement. The selective area growth (SAG) of these semiconductors onto patterned shallow trench isolation (STI) Si substrates is proved as one valuable solution to achieve the hetero-integration. Based on the current technology of CMOS devices, the selective growth of high mobility III/V semiconductors using metal-organic vapor phase epitaxy (MOVPE) into extremely scaled trenches (e.g. 10nm wide or narrower) is needed.

Research Methodology
In order to address the challenges of selective epitaxial growth induced by the down-scaling trench width, this PhD thesis is devoted to understanding the size effects of the down-scaling trench width on the behavior of III/V SAG both theoretically and experimentally.

Results & Conclusions
• A down-scaling trench width results in a longer critical inter-island distance (2L*) if the trench width becomes comparable to or smaller than 2L* on blanket substrates.
• The lateral growth rate of well-faceted 3D islands decreases since, on the side facet, the nucleation of 2D islands becomes the limiting kinetics and the supersaturation for 2D nucleation decreases with the down-scaling trench width.
• The inter-facet (e.g. between facet (001) and facet (111)) surface migration is facet size dependent and becomes enhanced in the down-scaling trenches, which can lead to a self-limiting crystal shape instead of the equilibrium one during the practical growth.
• The average growth rate inside submicron trenches becomes trench width dependent, since the Knudsen diffusion of precursor molecules through the trenches limits the mass transport and the Gibbs-Thomson effect decreases the supersaturation, i.e. the driving force for the epitaxial growth.
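The Knudsen-diffusion argument in the last point can be made tangible with the textbook expression D_K = (w/3)·sqrt(8RT/(πM)), which shrinks linearly with the trench width w. The temperature and molar mass below are assumed illustrative values for a generic precursor, not parameters from the thesis.

# Illustrative sketch of the trench-width dependence mentioned above: the Knudsen
# diffusion coefficient D_K = (w/3) * sqrt(8RT / (pi*M)) shrinks with trench width w.
# Values are assumptions for a generic III/V precursor molecule, not thesis data.
import math

R = 8.314          # gas constant [J/(mol K)]
T = 873.0          # growth temperature [K], assumed
M = 0.150          # molar mass of precursor [kg/mol], assumed

for w_nm in (100.0, 50.0, 20.0, 10.0):
    w = w_nm * 1e-9
    D_K = (w / 3.0) * math.sqrt(8.0 * R * T / (math.pi * M))
    print(f"trench width {w_nm:5.1f} nm -> D_K = {D_K:.2e} m^2/s")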

Major publication
S. Jiang, C. Merckling, A. Moussa, N. Waldron, M. Caymax, W. Vandervorst, N. Collaert, K. Barla, R. Langer, A. Thean, M. Seefeldt, and M. Heyns, "Nucleation Behavior of III/V Crystal Selectively Grown Inside Nano-Scale Trenches: The Influence of Trench Width," ECS J. Solid State Sci. Technol. 4, N83 (2015).

Selective Area Growth of III/V Compounds on Si substrates Using Metal-Organic Vapor Phase Epitaxy

155

Azamat Shakhimardanov
Department Mechanical Engineering

PhD defence 27 November 2015

Supervisor Prof. dr. ir. Herman Bruyninckx

Co-supervisor Prof. dr. Ing. Gerhard Kraetzschmar

Funding FP7

E-mail [email protected]

Introduction / Objective
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices, including such fundamental entities as "reference frames", let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.

Research Methodology
The composable semantic models allow decoupling of physical primitives from their coordinate-specific representations. This feature is achieved by explicitly separating structural and behavioral aspects of the models and follows the Model Driven Engineering (MDE) methodology. The research also exemplifies the relations between the components of a Whole Body Control Architecture (WBCA) and their software representations in the form of Domain Specific Languages (DSL) in the motion programming stack.

Results & Conclusions
The main challenge in the process of the DSL development is the determination of the domain specific semantic constraints on the domain primitives and operations.
The functionality of the Popov-Vereshchagin solver is extended to cope with priority and weighting-based control approaches. The advantage of using these extensions of the solver over other similar approaches is that the presented solver does not require explicit nullspace projection of the constraint Jacobian to implement prioritization.

Fig 2: A generic WBCA consists of constraints, constraint controllers in one of the constraint spaces and a kinematic or a dynamic model solver.

Major publication
A. Shakhimardanov, H. Bruyninckx. Design and development of a composable DSL for robot kinematics and dynamics conforming to formal semantic models: lessons learned, submitted to Journal of Software Engineering for Robotics (JOSER), 2015.

Composable Robot Motion Stack: Implementing constrained hybrid dynamics using semantic models of kinematic chains

Fig 1: The orthogonal relationship between the task programming stack of DSLs and their models in MDE

156

Yuanyuan GUAN
Department Materials Engineering (MTM)

PhD defence 30 November 2015

Supervisor Prof. dr. ir. Nele Moelans

Funding Grants OT/07/040 and CREA/12/012

E-mail [email protected]

Results & Conclusions
This research led to 3 major results:
The growth-rate coefficient of an intermetallic phase increases linearly as a function of the square root of its solubility range multiplied with its interdiffusion coefficient at a specific temperature.
The solubility ranges of the IMCs ε-Cu3Sn and η-Cu6Sn5 are not negligible and they are temperature independent.
Experimentally measured growth rates of ε-Cu3Sn and η-Cu6Sn5 IMCs in the Cu-Sn diffusion couple are achieved by employing the estimated solubility range.

Major publication
Guan, Y. and Moelans, N., Influence of the solubility range of intermetallic compounds on their growth behavior in hetero-junctions. Journal of Alloys and Compounds, 635 (2015): 289-299.

Development of a method to determine the solubility ranges of intermetallic compounds in metal-metal connections
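The first result above, the linear dependence of the growth-rate coefficient on the square root of solubility range times interdiffusion coefficient, can be written as k ≈ A·sqrt(ΔC·D). The sketch below evaluates this trend for made-up values; the proportionality constant, solubility ranges and diffusion coefficients are assumptions, not results from the thesis.

# Illustrative sketch of the reported trend (not thesis data): the growth-rate
# coefficient of an IMC scales as k ~ A * sqrt(dC * D), where dC is the solubility
# range and D the interdiffusion coefficient at the temperature of interest.
import math

A = 1.0                      # assumed proportionality constant [-]
cases = {                    # (solubility range [at. frac.], interdiffusion coeff. [m^2/s]) - assumed
    "IMC_1": (0.005, 1e-16),
    "IMC_2": (0.020, 1e-16),
    "IMC_3": (0.020, 4e-16),
}
for name, (dC, D) in cases.items():
    k = A * math.sqrt(dC * D)
    print(f"{name}: growth-rate coefficient ~ {k:.2e} (arbitrary units)")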

157

Thomas Wijnhoven

Department Electrical Engineering (ESAT)

PhD defence 30 November 2015

Supervisor Prof. dr. ir. Geert Deconinck

Funding Research Foundation - Flanders (FWO)

E-mail [email protected]

Introduction / Objective
In the future, there will be more and more Distributed Generation (DG) units in the grid and more of these DG units will be Converter Based Distributed Generation (CBDG) units. The goal of this dissertation is to evaluate the impact of these scenarios on the fault currents and voltages during balanced and unbalanced faults.

Research Methodology
To evaluate these effects in scenarios with a high share of CBDG, a simplified calculation framework is developed and validated in this dissertation: the Iterative Linear Network Equations Method (ILNEM) (calculation procedure).

Results & Conclusions
• Fault behaviour of CBDG units is a control design parameter and is important in scenarios with a high share of CBDG
• CBDG units can contribute to the short-circuit power
• Comparison of different fault contribution strategies during unbalanced faults in scenarios with a high share of CBDG and little conventional generation:

Only positive sequence voltage support (and negative sequence current blocking):
• reduction of fault currents
  o impact of fault on higher voltage levels
  o protection systems at lower voltage levels require redesign
  o fault currents depend on the load
• overvoltages (disconnections of CBDG units?)
  o limited by limited voltage support
• higher stress on remaining synchronous generators

Positive and negative sequence voltage support (with injection of negative sequence currents):
• no / much less reduction of fault currents
  o no significant change of impact
  o no impact on protection system
  o no dependency on the loads
• no overvoltages
• no additional stress for remaining synchronous generators

In summary, with the appropriate voltage support settings for CBDG units, the CBDG units adequately replace the conventional generation from a fault behaviour point of view.

Major publication
T. Neumann, T. Wijnhoven, G. Deconinck, and I. Erlich, "Enhanced Dynamic Voltage Control of Type 4 Wind Turbines during Unbalanced Grid Faults," accepted for publication in IEEE Transactions on Energy Conversion, 2015.

Evaluation of Fault Current Contribution Strategies by Converter Based Distributed Generation

158

Xing Gong
Department Materials Engineering (MTM)

PhD defence 30 November 2015

Supervisor Prof. Marc Seefeldt

Co-supervisor Prof. Bert Verlinden and Prof. Martine Wevers

Funding SCK•CEN

E-mail [email protected]; [email protected];

Introduction / Objective
Ferritic-martensitic T91 steel is a candidate material for constructing the proton beam window of the ADS/MYRRHA nuclear reactor, which is being developed at SCK•CEN, Belgium for transmuting long-lived nuclear waste. The aim of this PhD thesis is to investigate the effect of the LBE coolant on the low cycle fatigue properties of T91 steel, which is part of the MYRRHA materials assessment program, as well as to improve the understanding of liquid metal embrittlement (LME).

Research Methodology
Low cycle fatigue properties of T91 steel were tested in LBE under different conditions using the LIMETS3 system (Fig. 1). A mechanical extensometer was designed to allow for strain measurement at the gauge of a specimen immersed in LBE. Oxygen concentration in LBE was measured using solid electrolyte potentiometric oxygen sensors.

Major publication
X. Gong, P. Marmy, B. Verlinden, M. Wevers, M. Seefeldt, Low cycle fatigue behavior of a modified 9Cr-1Mo ferritic-martensitic steel in lead bismuth eutectic at 350°C - Effects of oxygen concentration in the liquid metal and strain rate, Corrosion Science, 2015, 94:377-391.

Liquid Metal Embrittlement of a 9Cr-1Mo Ferritic-martensitic Steel in Lead-bismuth Eutectic Environment under Low Cycle Fatigue

Fig. 1 LIMETS3 system


Results & Conclusions T91 is susceptible to LBE embrittlement, manifested by significant life reduction at high strain amplitudes (Fig. 2). There is no significant fatigue life reduction under a combination of low strain amplitudes, slow strain rate and high oxygen (Fig. 3). Temperature dependence of fatigue life in the presence of LBE shows a “trough” (Fig. 4). Crack tip plasticity is greatly reduced by LME, evidenced by the absence of grain refinement near the crack (Figs. 5 and 6).

Fig. 5: Tested in vacuum. Fig. 6: Tested in LBE.

159

Fábio Luis Marques dos Santos
Department Mechanical Engineering

PhD defence 02 December 2015

Supervisors Prof. dr. ir. Wim Desmet and Prof. dr. ir. Luiz Góes

Co-supervisor Dr. ir. Bart Peeters

Funding EC FP7 ITN Marie Curie

E-mail [email protected]

Introduction / Objective
Modal testing or experimental modal analysis (EMA) is a very well known and established procedure in both academia and industry. It is a common means of estimating or identifying the modal parameters of a system - mainly the mode shapes, natural frequencies, damping and modal scaling. The most common and established way of performing experimental modal analysis is to use acceleration based transducers that lead to the calculation of the displacement mode shapes. However, the use of strain measurements for experimental modal analysis has gained a lot of popularity in the last couple of years. This thesis has as its main focus of research the use of strain sensors for experimental modal analysis. In this sense, experimental methodologies and improvements on the current ways of carrying out strain modal analysis are presented, paying particular attention to the relationship between strain and displacement modes.

Research Methodology
The study of the association between displacement and strain mode shapes led to:
A novel procedure for strain modal scaling
Reciprocity in strain modal analysis
Application on beam and planar structures
Use of combined strain and acceleration modal analysis

Results & Conclusions
The combination of these theoretical and practical contributions to strain-based modal analysis leads to an advance in strain modal analysis theory, and a better understanding of when and how to properly use strain measurements in EMA, how to properly visualize and interpret the strain mode shapes, and how the boundary conditions of the system being analyzed influence the differences between displacement and strain mode shapes.
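The relationship between the two kinds of mode shapes can be illustrated for a simple bending beam, where the strain mode shape is proportional to the curvature (second spatial derivative) of the displacement mode shape. The beam, mode and section distance in the sketch below are assumed examples, not structures analysed in the thesis.

# Illustrative sketch (not the thesis method): for a bending beam, the strain mode
# shape is proportional to the curvature of the displacement mode shape,
# eps(x) ~ -z * d2w/dx2, estimated here by finite differences for an assumed mode.
import numpy as np

L, z = 1.0, 0.005                    # beam length [m], distance to neutral axis [m] (assumed)
x = np.linspace(0.0, L, 201)
w = np.sin(np.pi * x / L)            # first bending mode of a simply supported beam

curvature = np.gradient(np.gradient(w, x), x)
strain_mode = -z * curvature         # strain mode shape (arbitrary scaling)

i = np.argmax(np.abs(strain_mode))
print(f"max |strain mode| at x = {x[i]:.2f} m (mid-span for this mode)")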

Strain Reciprocity

Major publication
Fábio Luis Marques dos Santos, Bart Peeters, Ludo Gielen, Wim Desmet, Luiz Carlos Sandoval Góes, The Use of Fiber Bragg Grating Sensors for Strain Modal Analysis. Topics in Modal Analysis, Volume 10. Springer International Publishing, 2015, pp. 93-101.

STRAIN-BASED EXPERIMENTAL MODAL ANALYSIS: ADVANCES IN THEORY AND PRACTICE

Strain and displacement mode shapes of an F-16 aircraft. Helicopter blade strain mode shape.

160

161

Gijs Hilhorst
Department Mechanical Engineering

PhD defence 09 December 2015

Supervisor Prof. dr. ir. Goele Pipeleers

Co-supervisors Prof. dr. ir. Jan Swevers, Prof. dr. ir. Wim Michiels

Funding MBSE4Mechatronics, DYSCO

E-mail [email protected]

Introduction / Objective
The continuously increasing industrial demands drive research communities to push the limits in the design of accurate and high performance controllers for dynamical systems, such as autonomous vehicles, production machines, etc. Therefore, enhanced controller design procedures are indispensable in this evolution. Typically, first a mathematical model (P) describing the behavior of a dynamical system is derived. Then, a controller (K) using real-time measurements is designed for this model according to the desired performance specifications. For instance, a fast response should be guaranteed while limiting energy consumption. In a last step, the controller is validated in closed loop with the dynamical system.

Research Methodology
To meet the tightening performance and accuracy demands from industry, a versatile approach is presented to design high performance fixed-order multi-objective controllers for the general class of linear parameter-dependent systems, encompassing linear parameter-varying (LPV) and uncertain linear dynamics. For each of these subclasses, the effectiveness and practical viability of our approach is demonstrated by theoretical proofs of stability and performance, numerical comparisons with existing approaches, and experimental validations. In addition, a novel model order reduction technique is combined with our approach to design fixed-order controllers for continuous-time linear time-delay systems. Finally, a parametric programming approach is presented to design high performance feedback controllers for LTI systems, while simultaneously optimizing structural parameters affecting the system dynamics.

Results & Conclusions
The practical viability of our approach is demonstrated on a lab-scale overhead crane with varying cable length, by designing a high performance multi-objective fixed-order LPV controller. The figures on the right compare the simulated (dotted) and experimental (solid) response of a fixed-order controller (black) and a high-order controller (orange) to a reference step, respectively, a swing angle disturbance.

Major publication
G. Hilhorst, G. Pipeleers, W. Michiels and J. Swevers (2015). Sufficient LMI conditions for reduced-order multi-objective H-2 / H-infinity control of LTI systems. European Journal of Control, 23, 17-25.

Design of Fixed-Order Feedback Controllers for Mechatronic Systems

162


Harag Margossian
Department Electrical Engineering (ESAT)

PhD defence 09 December 2015

Supervisor Prof. dr. ir. Geert Deconinck

Co-supervisor Prof. dr. ir. Juergen Sachau

Funding National Research Fund,Luxembourg

E-mail [email protected]

Introduction / Objective
The evolution of the distribution network from a passive grid with unidirectional power flows to, in the presence of distributed generation (DGs), an active grid with bidirectional power flows can complicate the design of distribution network line protection. This dissertation looks at what can be done from the perspective of the DGs and from the perspective of the protection devices, in order to avoid any problems with the reliability, selectivity and speed of the protection system.

Research Methodology
First, the possibility to control the fault current levels by imposing different requirements on the DGs was studied. Different grid code parameters and their impacts on the fault current levels were analyzed. Having network-specific fault ride-through requirements that specify how long and for what voltages the DGs need to remain connected, and dynamic voltage support curves that regulate their reactive current output during faults, was proposed.

Second, enhancing the protection devices themselves so that they can deal with the varying fault current levels was considered. Here, adaptive protection that uses a modified state estimation to gather information about the grid was proposed. DG and switch status estimation tools, as well as a tool for short-circuit analysis that accurately considers inverter-based DGs, were developed in MATLAB for the adaptive protection.

Results & Conclusions
The choice of optimal grid code parameters depends on the characteristics of the distribution network as well as the location, size and type of the DG. Different distribution networks should thus be subject to different fault ride through and dynamic voltage support requirements to ensure the reliable operation of their protection devices. However, imposing additional requirements on inverters inevitably increases their complexity and consequently their cost.

Major publication
H. Margossian, G. Deconinck, and J. Sachau, "Distribution Network Protection Considering Grid Code Requirements for Distributed Generation", IET Generation, Transmission and Distribution, vol. 9, no. 12, pp. 1377-1381, 2015.

Distribution Network Line Protection in the Presence of Distributed Generation

The results from the adaptive protection studies showed how effective it is in minimizing the operation times of relays (in some cases leading to a 50% drop in the operation times) while maintaining the coordination between them. The scheme is particularly interesting for distribution networks with high DG integration levels and relatively weak connection points to the supply grid where protection problems are poised to arise.
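The state-estimation core of the adaptive protection (summarised in the flowchart placeholder below) is a weighted least-squares problem: find the states x that minimize J(x) = Σ wᵢ(zᵢ − hᵢ(x))². The sketch that follows sets this up for an assumed two-bus toy network with one line; the admittance, measurements and weights are invented numbers, and the thesis implementation additionally handles DG and switch status estimation.

# Illustrative sketch (assumed 2-bus toy example, not the thesis implementation) of the
# weighted least-squares state estimation step: minimize J(x) = sum_i w_i*(z_i - h_i(x))^2.
import numpy as np
from scipy.optimize import least_squares

def h(x):
    # states x = [V2, d]; d = theta1 - theta2 (angle across the line); slack bus: V1 = 1.0 pu.
    # Single line with assumed series admittance y = g + jb, shunt neglected.
    V2, d = x
    g, b = 2.0, -8.0
    P12 = 1.0 * 1.0 * g - 1.0 * V2 * (g * np.cos(d) + b * np.sin(d))   # active power flow 1->2
    Q12 = -1.0 * 1.0 * b - 1.0 * V2 * (g * np.sin(d) - b * np.cos(d))  # reactive power flow 1->2
    return np.array([P12, Q12, V2])

z = np.array([0.45, 0.18, 0.97])                    # measurements: P12, Q12, V2 (assumed)
w = np.array([1/0.01**2, 1/0.01**2, 1/0.005**2])    # weights = 1/R (measurement variances)

res = least_squares(lambda x: np.sqrt(w) * (z - h(x)), x0=[1.0, 0.0])
print("estimated state [V2, theta1-theta2]:", res.x)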

Figure: DG Status Estimation Flowchart. The modified weighted least-squares state estimation finds the states x (V and θ at every node, except the slack angle, plus PDG and QDG from the last run of the algorithm) that minimize J, using a set of nonlinear equations h(x) relating the measurements to the states, measurement weights (1/R), pseudo-measurements (load estimates, zero injection buses), actual P, Q and V measurements, and DG characteristics and control curves; the normalized residuals r are used to adjust the weights of the DG outputs and to adjust/add measurements.

Impact of the voltage threshold and the maximum reactive current on the fault current levels

163

Vanoost Dries
Department Electrical Engineering (ESAT)

PhD defence 11 December 2015

Supervisor Prof. dr. ir. De Gersem Herbert

Co-supervisors Prof. dr. ir. Pissoort Davy and Prof. dr. ir. Gielen Georges

Funding IWT

E-mail [email protected]

Introduction / Objective
Small electromechanical devices have a comparably low efficiency due to the relatively high stator Joule losses. A novel working principle for such devices avoids stator coils by combining permanent magnets with an anisotropic controllable ferromagnetic composite. This composite converts electrostatic energy into magnetic energy using both the piezoelectric effect and the Villari effect. The composite, inserted in the stator of a permanent magnet axial flux machine, acts as a variable reluctance and guides the magnetic flux to exert a rotating field on the rotor.

Research Methodology
The research includes the design, construction and implementation of the smart composite. Appropriate materials, geometries and fabrication processes have been selected. The smart composite has been implemented in an axial machine. A dedicated control strategy has been developed. The design is accompanied by extensive modelling of the smart composite and the axial machine. A multi-physics solver combining electrostatic and structural mechanical 2D Cartesian solvers with a 2D radially symmetric magnetoquasistatic solver, coupled through a multi-scale energy based material model, has been developed.

Results & Conclusions
The novel working principle is demonstrated using the multi-physics solver, confirming the initial assumptions and establishing the principle as a feasible alternative to traditional small electrical machines.

Fig. 1: Operation cycle passed through in clockwise direction.

Major publication
Vanoost, D., Steentjes, S., De Gersem, H., Peuteman, J., Gielen, G., Pissoort, D., Hameyer, K. (2015). Embedding a Magneto-Elastic Material Model in a Coupled Magneto-Mechanical Finite-Element Solver. IEEE Transactions on Magnetics, PP (99), 1-4.

An axial solid state motor based on an anisotropic controllable ferromagnetic composite

Fig. 2: Multiphysical model.

164

Milica Milutinovic

Department Computer Science

PhD defence 11 December 2015

Supervisor Prof. dr. ir. Bart De Decker

Funding iMinds

E-mail [email protected]

Introduction / Objective
Privacy in the general sense refers to individuals’ ability to protect information about themselves and selectively present it to other entities. This concept is strongly affected by everyday practices that assume personal data disclosure. This makes it difficult for an individual to control the outflow of her personal data and provides third parties with strong data collection possibilities. This thesis aims to address this issue by providing solutions that protect the privacy of individuals.

Research MethodologyTo enhance the protection of users’ privacy, this thesis focuses on two aspects of managing personal information:

Results & Conclusions
This thesis describes solutions that are commercially applicable and improve the balance between providers’ data gathering needs and users’ need to protect and manage their (partial) identities.

Figure 1: Prolog-based privacy-feedback framework.

Major publication
Milica Milutinovic, Italo Dacosta, Andreas Put and Bart De Decker. uCentive: An efficient, anonymous and unlinkable incentives scheme, 14th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom-15), Helsinki, Finland, 20-22 August, 2015.

Privacy-preserving identity management

Figure 2: Privacy-preserving eHealth system architecture.

• Privacy-preserving design and development of information systems
• Users’ ability to make informed decisions about information disclosures.

More concretely, the work presented in this thesis encompasses the development of:
• An anonymous and unlinkable incentives scheme, which limits the user’s data disclosure in services such as loyalty schemes or reputation systems.
• A privacy-preserving eHealth system, designed for settings where trust assumptions are limited (Figure 2).
• A privacy-preserving public transport ticketing system, which prevents tracking of travellers while enabling the necessary controls by providers.
• A logic-based framework for privacy evaluation and feedback, which allows users to track the dynamic knowledge of providers about them and accordingly evaluate their privacy level (Figure 1).

165

Marc Claesen
Department Electrical Engineering (ESAT)

PhD defence 14 December 2015

Supervisor Prof. dr. ir. Bart De Moor

Co-supervisor Prof. dr. ir. Frank De Smet

Funding IWT

E-mail [email protected]

Introduction / Objective
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia, which may cause serious harm to many of the body’s systems. Diabetes can be managed effectively when detected early, but this proves difficult as the time between onset and clinical diagnosis may span several years and about one third of diabetes patients in Belgium are undiagnosed. We built a population-wide screening tool for diabetes based on Belgian health expenditure data to speed up the diagnosis of patients, so treatment can be initiated before the disease has caused irrevocable damage.

Research Methodology
We used health expenditure data collected by the National Alliance of Christian Mutualities – the largest social health insurer in Belgium. Screening was formulated as a binary classification task in which diabetes patients represent the positive class. Due to the nature of the problem and limitations of health expenditure data, we were unable to identify a set of known negatives (patients without diabetes).

Some of the main challenges we tackled during this research project include:
• Building and evaluating models from positive and unlabeled data (a minimal sketch of this idea follows below)
• Evaluating binary classifiers using test sets without known negatives
• Automating the hyper-parameter optimization process via heuristic optimization
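The following sketch illustrates the first challenge with a generic positive-and-unlabeled (PU) bagging scheme: random subsamples of the unlabeled records are treated as tentative negatives, a base classifier is trained on each, and the scores are averaged. It is a generic illustration of learning without known negatives, not the exact ensemble developed in the thesis.

    # Generic PU-bagging sketch: subsample the unlabeled set as tentative negatives,
    # train a base classifier against the positives, and average the scores.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pu_bagging_scores(X_pos, X_unl, n_models=25, sample_size=None, seed=0):
        """Average score per unlabeled record; higher means more likely positive."""
        rng = np.random.default_rng(seed)
        sample_size = sample_size or len(X_pos)
        scores = np.zeros(len(X_unl))
        for _ in range(n_models):
            idx = rng.choice(len(X_unl), size=sample_size, replace=False)
            X = np.vstack([X_pos, X_unl[idx]])
            y = np.r_[np.ones(len(X_pos)), np.zeros(sample_size)]
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            scores += clf.predict_proba(X_unl)[:, 1]
        return scores / n_models

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X_pos = rng.normal(1.0, 1.0, size=(50, 5))     # toy positives
        X_unl = rng.normal(0.0, 1.0, size=(200, 5))    # toy unlabeled mix
        print(pu_bagging_scores(X_pos, X_unl)[:5])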

Results & Conclusions
Our screening method has performance competitive with existing state-of-the-art approaches, which is surprising given that health expenditure data omits most information about the typical risk factors used by other screening methods (BMI, lifestyle, genetic predisposition, …).

Our key medical and machine learning contributions include:
• Methods to build and evaluate models without known negatives
• Two open-source libraries, with over 1,000 downloads/month
• Mapping the survival of diabetes patients via pharmacotherapy
• A screening approach with suitable performance for case-finding (see Figures)

Major publication
Claesen, M., De Smet, F., Gillard, P., Mathieu, C., & De Moor, B. (2015). Building Classifiers to Predict the Start of Glucose-Lowering Pharmacotherapy Using Belgian Health Expenditure Data. arXiv preprint arXiv:1504.07389. Submitted to Journal of Machine Learning Research – Special Issue on Learning from Electronic Health Data.

Machine Learning on Belgian Health Expenditure Data
Data-Driven Screening for Type 2 Diabetes

166

Tina Mattheys
Department Materials Engineering (MTM)

PhD defence 15 December 2015

Supervisor Prof. dr. ir. Jef Vleugels

Co-supervisor Prof. dr. ir. Omer Van der Biest

Funding 6th framework programme (NMP3-CT-2006-026501) Meddelcoat project


Introduction / Objective
Although medical technology is already very successful, orthopaedic implant loosening (65%) and infections (7%) still occur. The resulting revision surgeries not only cause pain for the patients but also form a financial and social burden on our society. In this work, skeletal fixation and durability of the orthopaedic implant in the human body is established by integrating a biological and bioactive fixation within a porous Ti-based coating system.

Research Methodology
To enhance biological implant fixation, electrophoretic codeposition (EPD) of TiH2-stabilised emulsions and suspensions is used as a processing technique for the engineering of porous Ti coatings on Ti6Al4V substrates. For chemical modification, thin bioactive glass (BAG) coatings are applied by means of EPD or dip coating. Besides a full morphological and mechanical characterization of the obtained substrate-coating systems, the biocompatibility of the coatings was examined both in vitro and in vivo.

Results & Conclusions
• Co-deposition of TiH2-stabilised cyclohexane-water/ethanol emulsions and a structural ethanol-based suspension introduces a spherical macroporous Ti network.
• Applying thin BAG top coatings mainly resulted in filling up of the open surface pores.
• In vitro biocompatibility evaluation showed improved cell spreading and adhesion of human osteogenic cells in comparison with state-of-the-art Ti coatings, as well as a lower inflammatory potential.
• After 4 weeks of implantation, formation of trabecular bone is observed in the regenerative bone cavity as well as at the interface with the Ti6Al4V substrate, confirming the potential for mechanical interlocking of the macroporous mask.

Major publication
Mattheys, T., Braem, A., Neirinck, B., Van der Biest, O. and Vleugels, J. (2012) “Porous Ti coatings for implant fixation by electrophoretic deposition of TiH2 particle stabilized emulsions.” Advanced Engineering Materials 14 (6): 371-376.

Development of Multifunctional Biocompatible Coatings

Figure 1: EPD of Pickering emulsions

Figure 3: Histological cross-section at the bone-implant interface of a macroporous Ti coating after 4 weeks of implantation.

Figure 2: An acetabular cup after applying a macroporous Ti mask by means of EPD (left) and a SEM surface view (right)

167

Wouter Volkaerts
Department Electrical Engineering (ESAT)

PhD defence 15 December 2015

Supervisor Prof. dr. ir. ing. Patrick Reynaert

Co-supervisor Prof. dr. ir. Michiel Steyaert

Funding EU (ERC) + bilateral (NXP)

E-mail [email protected]

Introduction / Objective
The demand for higher data rates in communication links increases continuously. Higher data rates can be achieved by using more bandwidth. At millimeter wave frequencies (30-300GHz) large bandwidths are available and recently there is a lot of interest in circuits operating at these high frequencies. Thanks to technology improvements CMOS transistors now provide gain up to hundreds of GHz. In this research the feasibility of 120GHz circuits in recent CMOS technologies is investigated.

Research Methodology
During the research the whole IC design flow is executed, including circuit simulations, EM simulations, layout and measurements. Two subjects are investigated in this work:
• A voltage controlled oscillator is an essential building block in modulated communication systems. The frequency tuning range is one of the key specifications that must be optimized. Another problem that is investigated is LO pulling by a transmitter integrated on the same chip.
• A gigabit communication link is designed which consists of a 120GHz continuous-phase frequency shift keying transmitter and receiver chip, a plastic fiber, and couplers between the chips and the fiber. A plastic waveguide is a low-loss channel at high frequencies and enables millimeter wave communication over meters of distance.

Results & Conclusions
The feasibility of 120GHz CMOS circuits and communication links is proven by several test chips. The different implementations are:
• A 120GHz VCO in 65nm CMOS with 7.8% analog tuning range.
• A 120GHz QVCO in 45nm CMOS with 13.5% tuning range, resistant against LO pulling by an on-chip transmitter.
• A 120GHz plastic waveguide communication link in 40nm CMOS. Data rates up to 12.7Gbps were reached for a 1 meter link and 2.5Gbps for a link of 7 meters. The best link energy efficiency is 1.8pJ/b/m for a 4 meter link and 7.4Gbps data rate (see the worked figure below).
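The quoted energy efficiency and data rate can be related back to an implied power consumption with simple arithmetic; the roughly 53 mW figure below is derived from the quoted numbers, not stated in the summary itself.

    # Back-of-the-envelope check of the quoted link figures: energy per bit per metre
    # equals transceiver power divided by (data rate x link length). The implied
    # ~53 mW power draw is derived here, not stated in the summary.
    efficiency_pj_per_bit_per_m = 1.8          # pJ/b/m (quoted, 4 m link)
    data_rate_gbps = 7.4                       # Gb/s (quoted)
    link_length_m = 4.0                        # m (quoted)

    power_mw = efficiency_pj_per_bit_per_m * 1e-12 * data_rate_gbps * 1e9 * link_length_m * 1e3
    print(f"Implied transceiver power: {power_mw:.1f} mW")   # ~53 mW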

• (top) Die micrograph of the 120GHz VCO.
• (middle) Die micrograph of the 120GHz QVCO.
• (bottom) Photograph of the plastic waveguide communication link.

Major publication
W. Volkaerts, N. Van Thienen and P. Reynaert, “An FSK plastic waveguide communication link in 40nm CMOS”, IEEE International Solid-State Circuits Conference (ISSCC), pp. 178-180, San Francisco, USA, Feb. 2015

Millimeter Wave Oscillators and Transceivers in Nanoscale CMOS

168

Wei Huang
Department Mechanical Engineering

PhD defence 16 December 2015, 17h00

Supervisor Prof. dr. ir. C.M.J. Tampère

Co-supervisor Prof. ir. L.H. Immers

Funding China Scholarship Council, IWT-140433

E-mail [email protected]

Introduction / Objective
This thesis focuses on optimization of anticipatory traffic signal control in urban networks. While optimizing control variables, anticipatory control explicitly takes into account travelers’ route choice response, which is usually approximated by traffic assignment models. The general aim of this PhD research is to develop control methods that elevate the traffic system to its best achievable performance, taking account of the uncertainty inherent in the model accuracy.

Research Methodology
This thesis presents a repeated anticipatory traffic control policy through iterative learning. Given a predefined desired traffic state, a rule-based Iterative Learning Control (ILC) is applied to traffic signal setting. An optimization-based iterative learning approach is elaborated in the anticipatory control context, in which the desired state is no longer predefined but endogenously optimized. The iterative optimizing control methods perform learning on flow sensitivity to control changes, which is important to the solution optimality.
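A minimal sketch of the rule-based iterative learning idea is given below: after each iteration the green splits are corrected proportionally to the deviation of the observed flows from the desired state. The toy plant, learning gain and bounds are illustrative assumptions, not the controllers developed in the thesis.

    # Minimal iterative-learning sketch: repeatedly nudge the green splits towards
    # the settings that make observed flows track a desired traffic state.
    import numpy as np

    def ilc_signal_tuning(observe_flows, flows_desired, g0, gain=0.0005,
                          bounds=(0.2, 0.8), n_iters=30):
        """Iteratively adjust green splits g so that observed flows track flows_desired."""
        g = np.array(g0, dtype=float)
        for _ in range(n_iters):
            error = flows_desired - observe_flows(g)      # tracking error this iteration
            g = np.clip(g + gain * error, *bounds)        # learning update, kept within bounds
        return g

    if __name__ == "__main__":
        # Toy plant: flow on each approach grows with its green split (plus a bias).
        plant = lambda g: 1000.0 * g + 200.0
        print(ilc_signal_tuning(plant, flows_desired=np.array([700.0, 650.0]),
                                g0=[0.5, 0.5]))           # converges to ~[0.50, 0.45]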

Results & Conclusions
• Identification and development of a rule-based iterative learning control to track a desired traffic state.
• Development of optimization-based iterative learning methods to improve control performance in the context of inaccurate network flow modeling.
• Development of an integrated framework to achieve better control for reality as well as better model calibration.
• Development of a dual control method that allows for general applications in noisy networks.

Symbolic illustration of model-reality mismatch

Major publication
Huang, W., Viti, F., Tampère, C.M.J., 2015. Repeated anticipatory network traffic control using iterative optimization accounting for model bias correction. Transportation Research Part C, 2015 (under second round review).
Huang, W., Viti, F., Tampère, C.M.J., 2015. An iterative learning approach for signal control in urban traffic networks with inaccurate equilibrium models. Transportmetrica B, 2015 (under review).
Huang, W., Viti, F., Tampère, C.M.J., 2015. A dual control approach for anticipatory traffic control with estimation of network flow sensitivity. Journal of Advanced Transportation, 2015 (under review).
Smith, M., Huang, W., Viti, F., 2013. Equilibrium in capacitated network models with queueing delays, queue-storage, blocking back and control. Procedia – Social and Behavioral Sciences, Vol 80, pp. 860-879.

Optimization-based Iterative Learning for Anticipatory Traffic Signal Control

Signal setting that considers route choice response

Figure: Flow on link 2 (veh/h) versus signal green split, comparing the real optimum with the two-step approach, partial model bias correction and full model bias correction.

Reality-tracking control optimization via model bias correction

The key algorithmic implementation issue regarding the estimation of flow sensitivity from noisy measurements is addressed.

169

Dries Geebelen
Department Electrical Engineering (ESAT)

PhD defence 17 December 2015

Supervisor Prof. dr. ir. Joos Vandewalle

Co-supervisor Prof. dr. ir. Johan A.K. Suykens


Introduction / Objective
Biodiversity is the degree of variation of life. It is important because it increases the stability of ecosystem functions. We use kernels to develop indices that measure diversity and related properties. More specifically, the kernels are used to model the similarity between species. In a second application, we predict QoS values – such as response time and availability – of workflow compositions. A workflow composition is a composite service consisting of a number of individual services that are executed in series, parallel or according to other composition patterns.

Research Methodology
In our research, we combine different fields:
• Linear algebra: eigenvalues, eigenvectors and the Schur–Horn theorem are central in our biodiversity framework.
• Kernels: in both applications, all algorithms are kernel methods.
• Entropy: our proposed Shannon diversity is mathematically related to the Von Neumann entropy – the entropy used to measure quantum information.
• Optimization: our QoS prediction algorithm and certain proposed indices are the solution of an optimization problem.
• Ecology: before you can develop and use biodiversity indices, you need to understand what biodiversity is and in which contexts it is used.
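The connection between similarity-sensitive diversity and the Von Neumann entropy can be illustrated with the following sketch, which builds a density-matrix-like object from the species abundances and a positive semi-definite similarity kernel and takes its entropy. This particular construction is an illustration of the connection, not necessarily the index defined in the thesis.

    # Illustration: entropy of rho = diag(sqrt(p)) K diag(sqrt(p)), which reduces to the
    # Shannon entropy of the abundances p when the species are completely dissimilar (K = I).
    import numpy as np

    def von_neumann_diversity(p, K):
        """Von Neumann entropy of the abundance-weighted similarity kernel."""
        sp = np.sqrt(np.asarray(p, dtype=float))
        rho = sp[:, None] * K * sp[None, :]            # trace(rho) = sum_i p_i when K_ii = 1
        eig = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
        eig = eig[eig > 1e-12]
        return float(-(eig * np.log(eig)).sum())

    if __name__ == "__main__":
        p = np.array([0.5, 0.3, 0.2])
        K_distinct = np.eye(3)                         # completely dissimilar species
        K_similar = np.array([[1.0, 0.9, 0.1],
                              [0.9, 1.0, 0.1],
                              [0.1, 0.1, 1.0]])        # two species nearly identical
        print(von_neumann_diversity(p, K_distinct))    # = Shannon entropy of p (~1.03)
        print(von_neumann_diversity(p, K_similar))     # lower: similar species add less diversity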

Results & Conclusions
There is no general truth about how diversity and related properties should be measured. There are, however, properties that these indices should satisfy such that they behave as an ecologist would expect them to behave. We created a set of properties for each index and showed that our indices are the only ones that satisfy all of these properties. With respect to predicting QoS, our algorithm outperforms existing algorithms on both real-world and simulated data. It has a number of favorable properties. For example, it implicitly takes into account the dependencies between different services. These dependencies can occur, for instance, when web services run on the same server.

Performance of our QoS prediction algorithm on simulated data: the goal is to predict the value below which the response time of the workflow composition will be in 99% of the cases. Our algorithm (blue line) outperforms an existing algorithm (green line) because the existing algorithm doesn't take into account that the response times of the individual services are negatively correlated. The predicted values of the existing algorithm are much too high.

Kernel-based Methods for Measuring Biodiversity and Predicting Quality of Service

170

171

Ninah Koolen
Department Electrical Engineering (ESAT)

PhD defence 17 December 2015

Supervisor Prof. dr. ir. Sabine Van Huffel

Co-supervisor Prof. dr. Gunnar Naulaers, Prof. dr. ir. Maarten De Vos, Prof. dr. Sampsa Vanhatalo

Funding IWT, FWO, iMinds

E-mail [email protected]

Introduction / Objective
Around 10 percent of all human births are premature, which means that annually about 15 million babies are born before 37 completed weeks of gestation. In general, premature and immature babies have a high risk of neurological abnormalities due to maturation in extra-uterine life. Clinical experts visually assess evolving electroencephalography (EEG) characteristics over both short and long periods to evaluate the maturation of patients at risk. The aim of this PhD research was to develop supporting software for the automatic analysis of preterm EEG patterns.

Research Methodology
Quantification of the changing EEG pattern with maturation

Results & Conclusions
EEG features show a significant correlation with maturation (as a function of the postmenstrual age). The interburst interval (IBI) lengths become shorter. This development is also shown by the decrease of the data-driven histogram index. The brain connectivity increases, quantified by an increase of interhemispheric synchrony.
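A minimal sketch of line-length based burst detection, the approach reported in the major publication below, is given here; the window length and threshold are illustrative choices, not the validated settings of the published algorithm.

    # Sketch of line-length burst detection in one EEG channel: the line length of a
    # sliding window is the summed absolute sample-to-sample difference, and windows
    # exceeding a threshold are flagged as high-activity (burst) segments.
    import numpy as np

    def line_length_bursts(x, fs, win_s=1.0, threshold=None):
        """Return a boolean mask (one value per window) marking detected bursts."""
        win = int(win_s * fs)
        n_win = len(x) // win
        ll = np.array([np.abs(np.diff(x[i*win:(i+1)*win])).sum() for i in range(n_win)])
        if threshold is None:
            threshold = np.median(ll) * 2.0          # illustrative adaptive threshold
        return ll > threshold

    if __name__ == "__main__":
        fs = 256
        t = np.arange(0, 10, 1 / fs)
        eeg = 5.0 * np.random.randn(len(t))          # low-activity background (toy data)
        eeg[3*fs:5*fs] += 50.0 * np.random.randn(2 * fs)   # simulated high-amplitude burst
        print(line_length_bursts(eeg, fs).astype(int))     # burst flagged around seconds 3-5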

Major publication
N. Koolen, K. Jansen, J. Vervisch, V. Matic, M. De Vos, G. Naulaers, S. Van Huffel (2014). Line length as a robust method to detect high activity events: Automated burst detection in premature EEG recordings. Clinical Neurophysiology, 125 (10), 1985-1994.
N. Koolen*, A. Dereymaeker*, O. Räsänen, K. Jansen, J. Vervisch, V. Matic, M. De Vos, S. Van Huffel, G. Naulaers, S. Vanhatalo (2014). Interhemispheric synchrony in the neonatal EEG revisited: activation synchrony index as a promising classifier. Frontiers in Human Neuroscience, 8, art.nr. 1030. * joint first author

Automated quantification of preterm brain maturation using electroencephalography

In conclusion, the EEG patterns can be assessed over longer time intervals and patients at risk can be identified by automated means. The knowledge and expertise of medical experts is aggregated in the implemented algorithms, which adapt automatically at an individual patient level. A feature set of EEG indices is reported and is promising for the implementation of preterm developmental growth charts.

A. Burst detection algorithm. B. Change of the line length distribution as a function of age. More EEG activity is present at older age, resulting in a shift of the distribution towards longer line lengths.

C. Interhemispheric synchrony between two EEG channels is quantified by the Activation Synchrony Index (ASI).

172

Costanza Herrera
Department Electrical Engineering (ESAT)

PhD defence 17 December 2015

Supervisor Prof. dr. ir. Guy Vandenbosch

Funding Monesia - Erasmus Mundus External Cooperation Window

E-mail [email protected]

Introduction / Objective
We investigated the characteristics of metamaterials and periodic structures and focused on one particular possible application: absorption of electromagnetic waves. The unusual and sometimes extraordinary properties of artificial complex materials have been of interest to the scientific community for more than 10 years, and several possible applications have been proposed. In our work, numerical and experimental research and results are presented. The aim is to use these materials’ special characteristics in order to improve conventional absorbing materials and devices, find more advantageous applications, propose new structures and gain further knowledge about complex materials in general.

Research Methodology
An initial introductory investigation of the characteristics of ACMs is made, literature studies are carried out, and systematic analyses and parametric sensitivity studies are performed. In order to propose new structures, numerical analyses including simulations with antenna and electromagnetic software-based solvers are performed; for some structures the results of two different types of solvers are used. Subsequently, the final results are validated through experiments with built prototypes. In compliance with the available tools and resources, experiments are realized in the microwave range, with closed and open (free-space) measurement systems.

Results & Conclusions
Structures with wideband absorbing behaviour were presented, a sought-after property among absorbing materials that is challenging to obtain with periodic structures. Developing at least some of them with adequate materials could turn them into interesting commercial applications compared to existing conventional ones. Also, very low-profile, thin (one sixth of the wavelength) and small-scale structures could be fabricated within the required frequency range, yielding good filtering performance and excellent agreement with simulation results.
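For reference, wideband absorption is typically quantified from reflection (and transmission) data as A = 1 - |S11|^2 - |S21|^2, with S21 = 0 for a metal-backed structure; the sketch below applies this to made-up sample data to extract a fractional bandwidth. The frequency sweep and reflection curve are invented for illustration only.

    # Sketch of how wideband absorption is quantified from simulated or measured
    # S-parameters; the sample reflection curve below is made up for illustration.
    import numpy as np

    def absorptivity(s11, s21=None):
        """A = 1 - |S11|^2 - |S21|^2 (S21 = 0 for a metal-backed absorber)."""
        s21 = np.zeros_like(s11) if s21 is None else s21
        return 1.0 - np.abs(s11) ** 2 - np.abs(s21) ** 2

    def fractional_bandwidth(freq_ghz, A, level=0.9):
        """Fractional bandwidth of the band where absorptivity exceeds `level`."""
        band = freq_ghz[A >= level]
        if band.size == 0:
            return 0.0
        center = 0.5 * (band.max() + band.min())
        return (band.max() - band.min()) / center

    if __name__ == "__main__":
        f = np.linspace(8.0, 12.0, 201)                       # GHz, illustrative sweep
        s11 = 0.5 - 0.45 * np.exp(-((f - 10.0) / 1.2) ** 2)   # made-up reflection dip
        A = absorptivity(s11)
        print(f"Peak A = {A.max():.2f}, fractional BW(A>0.9) = {fractional_bandwidth(f, A):.2%}")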

Major publication
C. Herrera, G.A.E. Vandenbosch, “Systematic study of double-layered ultra-thin stacked patch absorbers”, Proc. EMC Europe 2013, Brugge, Belgium, Sep. 2013.

Periodic structures and metamaterials for absorption purposes

Figure: Double copper-dielectric layer structure (geometric parameters L, W, w1).

FACULTEIT INGENIEURSWETENSCHAPPEN

Kasteelpark Arenberg 1 bus 2200, 3001 LEUVEN, België

tel. + 32 16 32 13 50
fax + 32 16 32 19 82

[email protected]