
Multisensor Fusion and Integration: Theories, Applications, and its Perspectives

Ren C. Luo, Fellow, IEEE, Chih Chia Chang, and Chun Chi Lai

(Invited Paper)

Abstract—The decision-making processes in an autonomous mechatronic system rely on data coming from multiple sensors. An optimal fusion of information from distributed multiple sensors requires robust fusion approaches. The science of multisensor fusion and integration (MFI) was formed to treat these information-merging requirements. MFI aims to provide the system with a more accurate perception, enabling an optimal decision to be made. The wide application spectrum of MFI in mechatronic systems includes industrial automation, the development of intelligent robots, military applications, biomedical applications, and microelectromechanical systems (MEMS)/nanoelectromechanical systems (NEMS). This paper reviews the theories and approaches of MFI together with its applications. Furthermore, the most frequently used sensor fusion algorithms at different levels, namely estimation methods, classification methods, and inference methods, are surveyed. Future perspectives of MFI deployment are included in the concluding remarks.

Index Terms—Classification methods, estimation methods, inference methods, intelligent robotics, mechatronics, multisensor fusion and integration (MFI).

I. INTRODUCTION

MULTISENSOR fusion and integration is a technology that refers to the synergistic combination of sensory data from multiple sensors to achieve inferences that are not feasible from each individual sensor operating separately. Advances in sensor technology alone are not sufficient without the utilization of multisensor fusion techniques. Since sensors of different types that are integrated into the system have their own limitations and perceptive uncertainties, an adequate data fusion approach is expected to reduce the overall sensory uncertainty and thus serves to increase the accuracy of system performance. The main advantages of the implementation of multisensor fusion and integration (MFI) are that one can obtain not only enhanced but also complementary perceptions, and that more timely information is available via parallel processing of sensory data. MFI can help the system sense changes in the environment and monitor the system itself.

Manuscript received July 14, 2011; revised August 16, 2011; accepted August 19, 2011. Date of publication August 30, 2011; date of current version October 28, 2011. The associate editor coordinating the review of this paper and approving it for publication was Prof. Krikor Ozanyan.

R. C. Luo is with the Center for Intelligent Robotics and Automation Research, National Taiwan University, Taipei, 10617 Taiwan (e-mail: [email protected]).

C. C. Chang and C. C. Lai are with the Department of Electrical Engineering, National Chung Cheng University, and the Center for Intelligent Robotics and Automation Research, National Taiwan University, Taipei, 10617 Taiwan (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/JSEN.2011.2166383

Issues related to multisensor fusion include data association and management, sensor uncertainty, and dynamic system modeling. They arise from the inherent uncertainties in the sensory information, which are caused not only by device imprecision but also by noise sources within the system and the sensor itself. The strategies of multisensor fusion should be capable of dealing with these uncertainties and efficiently produce a consistent perception. Practically, MFI requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability, and statistics.

Different definitions of the architectures of MFI have been proposed in the literature. Luo and Kay [1]–[3] defined the functional roles of multisensor integration and multisensor fusion, and proposed a three-level fusion categorization with adequate fusion algorithms according to the processed data formats. Dasarathy [4] proposed an I/O pair-based fusion architecture. Hall [5], [6] provided an introduction to multisensor data fusion based on the architecture of the Joint Directors of Laboratories (JDL) data fusion model [7], which was originally developed for military applications. Elmenreich [8] reviewed and compared different models of MFI. Smith and Singh [9] gave an overview of contemporary multisensor fusion techniques relating to the different fusion levels of the JDL framework, and discussed the weaknesses and strengths of the approaches in different applications. Henderson and Shilcrat [10] proposed a framework of logical sensors which treats the multisource information in a multisensor system from the viewpoint of logical software programming.

According to the categorization of sensor fusion methods defined by Luo and Kay, the algorithms applied to multisensor fusion at different levels include estimation methods, classification methods, and inference methods. They are defined not by the fusion purpose, but by the process of information treatment. Broadly speaking, the estimation methods are used to preliminarily estimate the incoming signals at a low level, the classification methods are then applied to classify the extracted features at a medium level, and the inference methods are utilized for decision-making at a high level.

Fig. 1 shows an example of a multisensor-integrated mobile robot [11]. The robot, which is designed as an intelligent service robot, is equipped with a stereo camera, a SICK laser ranger, ultrasonic sensors, a fire sensor, and a wheel odometer. Through the integration of multiple sensors and the implementation of



Fig. 1. The structure of a multisensor-integrated intelligent service robot.

multisensor fusion, the mobile robot can autonomously estimate the environment structure and simultaneously detect meaningful symbols in the building it serves. An information-enriched map can be constructed accurately and conveniently by MFI, which is appropriate for advanced indoor service robot applications.

This paper provides a review of the state of the art of MFI by introducing the architecture and fusion algorithms at different fusion levels. Some application examples are also presented. This paper is organized as follows. Section II gives an overview of the MFI architecture. Section III summarizes sensor fusion methods at different levels, namely estimation methods, classification methods, and inference methods, covering the most frequently used algorithms in previous research with their advantages and limitations. Section IV introduces the implementation and applications of MFI. Section V presents future perspectives of MFI.

II. OVERVIEW OF THE MFI ARCHITECTURE

Traditionally, multisensor fusion is defined as a three-level hierarchy, namely data fusion, feature fusion, and decision fusion. Classification in the I/O pair-based architecture [4] builds on this three-level (data-feature-decision) hierarchy. This model separates MFI into five classes, namely data in-data out fusion, data in-feature out fusion, feature in-feature out fusion, feature in-decision out fusion, and decision in-decision out fusion. It is intuitively classified by the type of the input and output information.

The JDL model [5]–[7] divides the data fusion process into four levels, namely level 1 for object refinement, level 2 for situation assessment, level 3 for threat assessment, and level 4 for process refinement. Object refinement contains processes of data registration, data association, position attribute estimation, and identification. Situation assessment fuses the kinematic and temporal characteristics of the data to infer the situation of the environment. Threat assessment projects the current situation into the future to infer the threat posed by the enemy. In level 1, the parametric information is combined to achieve refined representations of individual objects. Levels 2 and 3 are often referred to as information fusion, while level 1 is data fusion. The process management in level 4 is an ongoing assessment of

Fig. 2. Architecture of MFI in an intelligent system.

other fusion stages to make sure that the data fusion processes are performing in an optimal way.

The concept of the logical sensor proposed by Tom Henderson and Esther Shilcrat [10], [12], [13] aims to aid the coherent synthesis of efficient and reliable multisensor systems. The authors considered the data processing of the sensor fusion problem in a multisensor kernel system (MKS) which is composed of low-level data organization, high-level modeling, and logical sensor specification. The logical sensor defined by the authors includes four parts, namely a logical sensor name, a characteristic output vector, a selector, and alternate subnets. A logical sensor can be viewed as a network composed of subnetworks which are themselves logical sensors. Communication within a network is controlled via the data flow among subnetworks. By means of logical sensors, the user can be insulated from the diversity of various physical sensors. The design of an MFI system then considers only the input data format without regard to the kind of sensor used. Such a logical sensor system is reconfigurable and is more tolerant of sensing device failures. Luo and Henderson [14] implemented the logical sensor specification in a servo-controlled robot hand with multiple sensors.

Fig. 2 shows the Luo and Kay model [1] which defines an architecture for multisensor fusion and integration. It explicitly defines the basic functions of MFI in a multisensor-integrated system. The fusion processes can be performed at different levels, including the low level (signal and pixel level), the medium level (feature level), and the high level (symbol level). This categorization is defined according to the type of the processed input data and the resultant information provided for the system. According to the data type and the fusion purpose, different fusion algorithms are utilized at the respective fusion levels. The measurements provided by multiple sensors can be fused at one or more of these levels.

Functions and the employed fusion methods at the differentlevels are summarized as follows.


Fig. 3. An implementation of MFI on a security robot for intruder surveillance.

A. Low-Level Fusion (Signal and Pixel Fusion)

The sensory data fused at the signal level can be single- or multidimensional signals, while the data being fused at the pixel level are multiple images. The sensory data under fusion at this level simply represent the output or the state of the plant. Signal-level fusion can be used in real-time applications and can be considered an additional step in the overall fusion process of the system. Due to the different sampling properties of multiple sensors, the multisensory data need to be synchronized and adapted before the fusion process. At this level, statistical estimation methods have been successfully used. They can be grouped into nonrecursive and recursive methods. Nonrecursive estimation methods, such as the weighted-average method and the least-squares method, are normally used only to merge redundant data. Since these methods execute only once to get an optimal estimation, they are time efficient compared with recursive algorithms. Recursive estimation methods such as the Kalman filter (KF) and the extended Kalman filter (EKF) can likewise be applied for fusion purposes.
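As a concrete illustration of the nonrecursive case, the following is a minimal sketch of inverse-variance weighted averaging for redundant scalar measurements, assuming independent Gaussian sensor noise; the sensor values are illustrative, not taken from the paper.

import numpy as np

def fuse_weighted_average(means, variances):
    """Fuse redundant scalar measurements by inverse-variance weighting.

    Under independent Gaussian noise, weighting each sensor by 1/variance
    gives the minimum-variance linear unbiased estimate.
    """
    means = np.asarray(means, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_variance = 1.0 / weights.sum()
    fused_mean = fused_variance * (weights * means).sum()
    return fused_mean, fused_variance

# Three range sensors observing the same distance (illustrative values).
mean, var = fuse_weighted_average([2.02, 1.97, 2.10], [0.01, 0.04, 0.09])
print(f"fused: {mean:.3f} m, variance: {var:.4f}")

Note that the fused variance is always smaller than the best individual sensor variance, which is the quantitative sense in which merging redundant data reduces uncertainty.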

Pixel-level fusion refers to image processing performed directly on the original pixel information from individual sensors to generate a new composite image with better quality and more features, providing a better interpretation of the scene [15]. Pixel-level fusion can improve the performance of pattern recognition algorithms. A popular application is signal processing on synthetic aperture radar (SAR) images obtained from satellites. Since radar interacts with ground features in ways different from optical radiation, special image processing algorithms have to be employed for the interpretation of radar images. Commonly used methods in pixel-level fusion include band-ratioing fusion, principal component analysis (PCA), wavelet transform fusion, and combinations of these methods [16], [17].

B. Medium-Level Fusion (Feature Fusion)

The data fused at the feature level are features extracted from signals and images. Data fusion at this level is realized by concatenating the feature points obtained from multiple sources to produce a feature with higher discrimination [18]. A general feature-level fusion procedure can be decomposed into three steps, namely feature set uniformization and normalization, feature reduction and concatenation, and feature matching.

Classification methods are suitable for feature fusion. Frequently used methods include support vector machines (SVM), cluster analysis, k-means clustering [19], [20], the Kohonen feature map, and learning vector quantization (LVQ).

C. High-Level Fusion (Symbol Fusion)

Unlike low-level fusion, which deals with raw sensory data in a central manner, the information treated at the high level is a symbolic representation of process parameters, much like human descriptions. Symbol fusion refers to combining symbols with an associated uncertainty measure to generate a composite decision. It is also referred to as decision fusion. In general, high-level fusion seeks to process local decisions from multiple sensors to achieve a joint decision. A discussion of the implementation approaches of high-level fusion and its comparison with low-level fusion can be found in [21]. Algorithms that are tolerant of imprecision, uncertainty, partial truth, and approximation are suitable for symbol-level fusion. Neural networks, genetic algorithms, evolutionary algorithms, and fuzzy logic are usually employed. Also, inference methods such as Bayesian inference and the Dempster–Shafer method can be successfully applied to symbol-level fusion.

Fig. 3 illustrates an implementation of multisensor fusion and integration in a security robot. The robot is equipped with range sensors and a vision sensor. By fusing multisourced data and the extracted features at different levels, a person is detected and recognized.

III. FUSION ALGORITHMS WITH RELATED FUSION LEVELS

The algorithms used in MFI mainly stem from probability theory, data classification methods, and artificial intelligence. As introduced in the previous section, the fusion algorithms with their corresponding fusion levels are classified in Table I. The fusion algorithms can be broadly classified as estimation methods, classification methods, and inference methods. Based on the surveyed literature, the most frequently used algorithms are introduced in detail herein. Their key concepts are interpreted, and implementation scenarios are illustrated by application paradigms.

A. Estimation Methods

1) Kalman Filters (KFs): The KF [22] is a predict-update type estimator and is popularly utilized in many engineering


TABLE I. CLASSIFICATION OF FUSION ALGORITHMS

Fig. 4. Procedure of KFs.

applications. Traditional KFs need an accurate linear model of both the system dynamics and the observation process to be optimal in a least-mean-squared-error sense. Fig. 4 shows the common procedure of the Kalman filter. Consider a linear dynamic system with multiple sensors represented by the following state-space model:

x_k = A x_{k-1} + B u_k + w_{k-1}   (1)

z_k = H x_k + v_k   (2)

where k represents the discrete-time index, x_k is the state vector, z_k is the measurement vector, A is the state transition model, B is the control input model, H represents the observation model which maps the true state space into the observed space, and w_k and v_k are zero-mean white Gaussian noises with covariance matrices Q and R, respectively. In the prediction phase, given the state vector \hat{x}_{k-1|k-1}, a predicted state \hat{x}_{k|k-1} at time step k is generated by (3), and the predicted estimate covariance is given by (4):

\hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_k   (3)

P_{k|k-1} = A P_{k-1|k-1} A^T + Q   (4)

In the estimation phase, when the actual measurement z_k at time step k is available, the projected prediction is corrected by (5), resulting in the estimated state vector \hat{x}_{k|k}, and the state estimate covariance matrix P_{k|k} is updated by (6), where K_k is the Kalman gain matrix that is a measure of the relative confidence between the past estimates and the newest observation. The Kalman gain is chosen to minimize the a posteriori state estimate covariance:

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H \hat{x}_{k|k-1})   (5)

P_{k|k} = (I - K_k H) P_{k|k-1}   (6)

K_k = P_{k|k-1} H^T (H P_{k|k-1} H^T + R)^{-1}   (7)

The main advantage of the KF is its computational efficiency, since reprocessing of the entire sequence of previous observations is not required when new observations arrive: the current state and its error covariance matrix summarize all previous observations. However, this comes with the restrictive assumptions that the system dynamics are linear and the uncertainties are Gaussian. Practitioners tackle this problem with the extended KF (EKF), which linearizes the system model using Taylor series expansions around a stable operating point [23]. The EKF implements a Kalman filter for a system whose dynamics result from the linearization, around the previous state estimate, of the original nonlinear filter dynamics. Julier et al. [24] identified some disadvantages of the EKF as follows: the linearization can produce highly unstable performance if the discrete time intervals are not sufficiently small; sufficiently small time intervals imply high computational overhead; and the derivation of the Jacobian matrices is nontrivial. Another method which deals with the nonlinear filtering problem is the unscented Kalman filter (UKF). The UKF relies on the relatively low complexity of approximating a known statistical distribution rather than approximating a nonlinear function. The UKF calculates the correlation of the error in the state estimate, innovations, and state estimates together without passing values through the linearized process or measurement models. The simplicity of implementing a complex process or measurement model is a definite advantage of the UKF over the EKF; however, this simplicity comes at the expense of computational time.
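For concreteness, the following is a minimal NumPy sketch of the predict-update cycle in (3)-(7); the 1-D constant-state model and the noise values are illustrative assumptions, not taken from the paper.

import numpy as np

def kf_predict(x, P, A, B, u, Q):
    # (3): project the state forward; (4): project the covariance.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    # (7): the Kalman gain weighs the prediction against the new measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # (5): correct the prediction; (6): reduce the covariance.
    x_est = x_pred + K @ (z - H @ x_pred)
    P_est = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_est, P_est

# Illustrative 1-D example: a nearly constant scalar state observed directly.
A = B = H = np.eye(1)
Q, R = np.array([[1e-4]]), np.array([[0.04]])
x, P = np.zeros(1), np.eye(1)
for z in [0.39, 0.50, 0.48]:                 # hypothetical measurements
    x, P = kf_predict(x, P, A, B, np.zeros(1), Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print(x, P)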

Another rigorous challenge of the KF addressed in the literature is its inappropriate use in distributed tracking fusion problems in which the measurements from different sensory sources may be inconsistent and dependent, violating the strict assumptions of KFs [25], [26]. The following covariance-based fusion algorithms have been proposed to overcome these problems.

2) Covariance-Based Fusion Algorithms: Ng and Yang [27] compared five fusion algorithms for the decentralized tracking problem. The simple convex combination is a linear combination of state vectors, where all weights are non-negative and sum


Fig. 5. (a) Covariance fusion result with different ω. (b) Applying CI on the landmark estimation [34].

up to 1. This method is easy to implement at low cost [28], [29].

In the covariance-based fusion algorithms, covariance intersection (CI) has attracted the most research interest. CI was first proposed by Julier and Uhlmann in 1997 [30] for the application of decentralized data fusion. It solves the fusion problem for which the KF is inappropriate, namely when the sensory data measured from multiple sensors are not independent [31], [32]. CI takes a convex combination of the means and covariances in the information space. The major advantage of CI is that it permits filtering and data fusion to be performed on probabilistically defined estimates without knowing the degree of correlation among those estimates. Consider two pieces of measurement a and b from different sources, with means and covariances \bar{a}, P_a and \bar{b}, P_b, and cross covariances P_{ab} = P_{ba}^T. Define the estimate c as a linear combination of a and b; it represents the previous estimate of the same target with certain measurement uncertainty. The CI approach is based on a geometric interpretation of the KF process. The general form of the KF can be written as

c = W_a a + W_b b   (8)

P_c = W_a P_a W_a^T + W_a P_{ab} W_b^T + W_b P_{ba} W_a^T + W_b P_b W_b^T   (9)

where the weights W_a and W_b are chosen to minimize P_c. This form reduces to the conventional KF if the estimates are independent (P_{ab} = 0). As shown in Fig. 5(a), the covariance ellipsoid of CI will enclose the intersection region and the estimate is consistent. CI does not need assumptions on the dependency of the two pieces of information when it fuses them. Given the consistent upper bounds P_a and P_b on the true error covariances, the covariance intersection estimate is defined as follows:

P_c^{-1} = \omega P_a^{-1} + (1 - \omega) P_b^{-1}   (10)

P_c^{-1} c = \omega P_a^{-1} a + (1 - \omega) P_b^{-1} b   (11)

where \omega \in [0, 1]. The parameters \omega and (1 - \omega) modify the relative weights assigned to a and b. Different choices of the weighting can be used to optimize the covariance estimate with respect to different performance criteria, such as minimizing the trace or the determinant of P_c. Any optimization strategy, such as the Newton–Raphson technique, can be used to find the optimal weighting. In this method, the minimal-determinant cost function of P_c is chosen. Let

J(\omega) = \det(P_c(\omega))   (12)

\omega^{*} = \arg\min_{0 \le \omega \le 1} J(\omega)   (13)

This theorem reveals the nature of the optimality of the best \omega in the CI algorithm. The CI algorithm also provides a convenient parameterization for the optimal solution in n-square dimensional space. The results can be extended to multiple variables and partial estimates as shown below:

P_c^{-1} = \sum_{i=1}^{n} \omega_i P_{a_i}^{-1}   (14)

P_c^{-1} c = \sum_{i=1}^{n} \omega_i P_{a_i}^{-1} a_i   (15)

where a_i refers to the ith measurement input and \sum_{i=1}^{n} \omega_i = 1.
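The following is a minimal sketch of CI per (10)-(13) for two estimates, choosing ω by a bounded scalar search over the determinant of P_c; the example estimates are illustrative numbers, not from the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a, Pa, b, Pb):
    """Fuse two estimates with unknown cross-correlation via CI (10)-(11).

    omega is chosen to minimize det(Pc), the criterion in (12)-(13).
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    def fuse(omega):
        Pc = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
        c = Pc @ (omega * Pa_inv @ a + (1.0 - omega) * Pb_inv @ b)
        return c, Pc

    res = minimize_scalar(lambda w: np.linalg.det(fuse(w)[1]),
                          bounds=(0.0, 1.0), method="bounded")
    return fuse(res.x)

# Two correlated 2-D estimates of the same target (illustrative numbers).
a, Pa = np.array([0.0, 0.0]), np.diag([1.0, 4.0])
b, Pb = np.array([1.0, 0.5]), np.diag([4.0, 1.0])
c, Pc = covariance_intersection(a, Pa, b, Pb)
print(c, np.linalg.det(Pc))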

Luo et al. [33] used CI to estimate the sensor nodes' locations for higher accuracy in a sensor network. They fused estimates which were obtained from the power propagation loss and the propagation time. The authors also implemented the CI algorithm on a mobile robot to improve docking accuracy and efficiency [34]. They fused intensity images and range data using the CI approach to locate the robot pose and detect the docking station while performing docking for recharging. The fusion process is depicted in Fig. 5(b). When the vision sensor and the laser range finder (LRF) detect the landmark concurrently, one can obtain reliable orientation information from the vision system, but only rough depth information. The LRF, in contrast to the vision sensor, can accurately measure the distance of an extracted object, but it can hardly detect and recognize a feature in the environment. The CI process can combine the individual advantages of the sensors to get a complementary measurement with lower uncertainty.

However, there may be two estimates (a_1, P_{a_1}) and (a_2, P_{a_2}), representing mean and covariance measurements of the same real-world object, whose means differ by much more than expected based on their respective error covariance estimates. For example, if we estimate two mean positions as differing by several meters, but their respective covariances indicate each mean is accurate within a few centimeters, something is wrong. On the other hand, if it is not possible to prune estimates, then the only alternative is to associate a


similar property within a description. The main question is how to achieve this coalescence in such a manner that the integrity of the information is maintained. A mechanism called covariance union (CU) can be applied in this situation. In contrast to CI, the CU algorithm gives a solution to the problem of information corruption [35], [36]. Instead of pruning estimates, CU replaces inconsistent estimates with a single estimate that is statistically consistent with all the given estimates. The estimate is guaranteed to be consistent as long as at least one input estimate is consistent, regardless of which estimate is spurious.

For example, given N estimates (a_1, P_{a_1}), ..., (a_N, P_{a_N}), CU produces an estimate (u, U) that is guaranteed to be consistent as long as one input estimate is consistent. This is achieved by guaranteeing that the estimate is consistent with respect to each of the N estimates. The CU constraint is

U \succeq P_{a_1} + (u - a_1)(u - a_1)^T
U \succeq P_{a_2} + (u - a_2)(u - a_2)^T
\vdots
U \succeq P_{a_N} + (u - a_N)(u - a_N)^T   (16)

The CU optimization has simple linear constraints that are compatible with any generic constrained optimization package. The constraint can be cast as a linear matrix inequality (LMI) for a minimum-volume ellipsoid containing the given ellipsoids. Each ellipsoid is described by a quadratic function as

\mathcal{E}_i = \{ x \mid x^T A_i x + 2 b_i^T x + c_i \le 0 \}   (17)

The minimum volume can be found by solving the following determinant maximization (maxdet) problem:

\min_{A, b, \tau_i} \ \log\det A^{-1} \quad \text{s.t.} \quad \tau_i \ge 0, \quad \begin{bmatrix} A^2 - \tau_i A_i & A b - \tau_i b_i & 0 \\ (A b - \tau_i b_i)^T & -1 - \tau_i c_i & (A b)^T \\ 0 & A b & -A^2 \end{bmatrix} \preceq 0   (18)

where the covering ellipsoid is parameterized as \mathcal{E} = \{ x \mid \| A x + b \| \le 1 \} [37]. The maxdet problem is a convex optimization problem, that is, one in which the objective and constraint functions satisfy the inequality f(\alpha x + \beta y) \le \alpha f(x) + \beta f(y) for \alpha + \beta = 1, \alpha, \beta \ge 0. Convex optimization can be considered a generalization of linear programming and can be solved efficiently by several algorithms, such as the one in [38]. Fig. 6 shows the optimal ellipsoid fusion results for 2D and 3D estimation. Fig. 7 shows an actual application of the CU methodology: with known stereo camera parameters, the SIFT (scale-invariant feature transform) features of a fire extinguisher in the database are matched against the pair of camera images in Fig. 7(a)-(c), and the depth of each SIFT feature is estimated in Fig. 7(d). Finally, the blue line shows the optimal ellipsoid fusion result projected on the x-y plane in Fig. 7(e).

Fig. 6. Covariance Union fusion result on (a) 2D and (b) 3D estimation [11].

B. Classification Methods

The classification approaches in MFI refer to processes grouping multisource data into classified data sets. A multidimensional feature space is first partitioned into distinct classes. The location of a newly arriving feature vector is compared with the preclassified locations in the feature space, such that one can identify to which data class the new feature belongs. The classification techniques can be grouped into parametric and nonparametric approaches [39].

Parametric classification methods include parametric templates [40] and clustering approaches. A parametric template space is defined as a region of linear template space that is uniquely represented by a parameter. Parametric templates are widely applied in image processing and computer vision. Clustering analysis seeks to partition a given data set into groups based on specified features via a training process, so that the data in each subset have common traits. It has been successfully applied in data mining, machine learning, pattern recognition, and bioinformatics [41].

In contrast to parametric methods, nonparametric classification algorithms are not constrained by prior assumptions on the distribution of the input data. Self-learning decision trees, artificial neural networks (ANN) [42], [43], and support vector machines (SVM) belong to this category.


Fig. 7. SIFT features of a fire extinguisher on stereo images: (a) template, (b) left CCD image, (c) right CCD image. (d) Depth uncertainty estimation for each stereo SIFT feature. (e) Fire extinguisher depth estimation by the CU method [11].

SVM was first proposed by Vapnik et al. in 1995 [44]. SVM is a kind of supervised learning method which analyzes data and recognizes patterns for classification and regression analysis. A standard SVM is a nonprobabilistic binary linear classifier. The key idea is to generate an optimized discriminant, called a hyperplane, to demarcate the training data into two classes. The optimal hyperplane in SVM is the one for which the classification possesses minimum errors and the maximum margin between these two classes [45], [46]. Consider a set of labeled points \{(x_i, y_i)\}_{i=1}^{N} as the training data, where x_i \in \mathbb{R}^n are feature vectors and y_i \in \{+1, -1\} are class labels. The goal is to construct an optimal rule to assign a new point to one of the classes correctly by the linear classifier function

f(x) = \operatorname{sign}(w \cdot x + b)   (19)

Fig. 8. Illustration of the concept of an optimal hyperplane for a set of linearly separable data.

where w and b are the weight and bias parameters, respectively, and w \cdot x is the inner product of these two vectors. The labeled points are classified into two classes by the following criteria:

w \cdot x_i + b \ge +1 \ \text{if } y_i = +1; \qquad w \cdot x_i + b \le -1 \ \text{if } y_i = -1   (20)

As shown in Fig. 8, the solid line with w \cdot x + b = 0 is the optimal hyperplane. The circled points on the parallel dashed lines, where the equality in (20) is satisfied, form the so-called "support vectors" that are the nearest points to the hyperplane in these two classes. In the figure, 1/\|w\| denotes the distance between the hyperplane and a support vector, and it is the difference between the normal distances of these lines from the origin. The classification margin 2/\|w\| is defined as the distance between these two support vectors. SVM aims to minimize \|w\|^2/2 subject to (20) to get a maximum margin. This is a quadratic optimization problem which can be solved by finding the saddle point of the Lagrange function

L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{N} \alpha_i [ y_i (w \cdot x_i + b) - 1 ]   (21)

where \alpha_i \ge 0 are the Lagrange multipliers. However, if the data are nonlinear, they should be projected into a higher dimensional Hilbert space through a mapping function \Phi(x), and then be linearly separated in this space using kernel functions. The boundary is described in the form

f(x) = \operatorname{sign}\Big( \sum_{i=1}^{N} \alpha_i y_i K(x_i, x) + b \Big)   (22)

where K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j) is the kernel function. Frequently used kernel functions include polynomials and radial basis functions (RBF).
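A minimal scikit-learn sketch of the classifier in (19)-(22), using an RBF kernel on synthetic two-class data; the clusters stand in for fused sensor features and are purely illustrative.

import numpy as np
from sklearn.svm import SVC

# Two synthetic feature clusters standing in for fused sensor features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.4, size=(50, 2)),
               rng.normal(2.0, 0.4, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# kernel="rbf" plays the role of K(x_i, x_j) in (22); C trades margin
# width against training errors (the soft-margin variant).
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict([[1.0, 1.0]]), clf.support_vectors_.shape)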

The maximum margin achieved by SVM is well suited to seeking safety in path planning, so SVM is frequently applied in mobile robots for simultaneous localization and mapping (SLAM), collision avoidance, path planning, navigation, and object recognition [47]–[49]. A noticeable application of SVM is research on the myoelectric control of prostheses that assist amputees [50], [51]. Human beings perform intended motions by sending motor commands from the brain to the muscles, and these commands can be observed and recorded as electromyographic (EMG) signals which can be measured on the skin surface. It has been shown that SVM is a promising approach to classify the extracted features of EMG signals. By training the amputee and classifying the measured EMG


Fig. 9. The operational concept of Bayesian inference.

signals, the muscle contraction and expansion motions can be classified into commands to steer the prosthesis naturally.

C. Inference Methods

Bayesian inference and Dempster–Shafer reasoning are two popular inference algorithms for MFI, each with its own advantages and limitations. When full probabilistic information is available, the Bayesian-based methods provide perhaps the best possible results. Bayesian inference can address most fusion problems efficiently, but Dempster–Shafer reasoning makes explicit any lack of information concerning a proposition's probability and can also address some problems that probability theory cannot [52].

1) Bayesian Inference: Bayesian filters provide probabilistic estimation methods with a recursive predict-update process. With the assumption that the system state x_k has Markov properties, the current state depends only on the previous state x_{k-1}, and the sensory measurement depends only on the system's current state. The likelihood of the system state is represented as p(x_k | z_{1:k}, u_{1:k}), which is a conditional posterior probability distribution function based on a set of received sensory data, where u_k is the system command and z_k is the sensory measurement. As shown in Fig. 9, the system model is used to predict the state forward. Then, the received measurements are processed to update the state by Bayes' rule. Define the state evolution function f and the measurement function h, which are given by

x_k = f(x_{k-1}, u_k, w_{k-1})   (23)

z_k = h(x_k, v_k)   (24)

where w_k, v_k are noises of the process and the measurement, respectively. Assume that the probability p(x_{k-1} | z_{1:k-1}) at time k-1 is available. The prediction step involves the system model (23) to obtain the prior probability of the state at time k using the prediction (25).

Predict:

p(x_k | z_{1:k-1}) = \int p(x_k | x_{k-1}, z_{1:k-1})\, p(x_{k-1} | z_{1:k-1})\, dx_{k-1}   (25)

Since the state transition function is a Markov process, x_k is independent of the measurements z_{1:k-1}, and the prediction (25) becomes

p(x_k | z_{1:k-1}) = \int p(x_k | x_{k-1})\, p(x_{k-1} | z_{1:k-1})\, dx_{k-1}   (26)

where p(x_k | x_{k-1}) is defined by the system model (23). When the measurement z_k at time step k is available, it will be incorporated to update the prior estimate by Bayes' rule.

Update:

p(x_k | z_{1:k}) = \frac{p(z_k | x_k)\, p(x_k | z_{1:k-1})}{p(z_k | z_{1:k-1})}   (27)

where

p(z_k | z_{1:k-1}) = \int p(z_k | x_k)\, p(x_k | z_{1:k-1})\, dx_k   (28)

and depends on the likelihood function p(z_k | x_k), which is defined by the measurement model (24).

Bayesian inference is an abstract concept providing only a probabilistic framework for recursive state estimation. The KF, EKF, grid-based filters, and particle filters are all Bayesian-type methods. They differ in how they represent the probability density over the state x_k. Grid-based filters overcome the linearity restriction imposed on the KF. They provide the optimal recursion of the filtered density p(x_k | z_{1:k}) by relying on discrete and piecewise constant representations of the belief. The grid-based methods replace the integration in (26) by a summation. Nevertheless, grid-based approaches have the drawback of the computational and space complexity required to keep the position grid in memory for every new observation.
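A minimal sketch of one grid-based predict-update cycle, replacing the integral in (26) with a summation over discrete cells; the 5-cell world, motion model, and likelihood values are illustrative assumptions.

import numpy as np

def grid_bayes_step(belief, transition, likelihood):
    """One predict-update cycle of a grid-based Bayes filter.

    belief:      p(x_{k-1} | z_{1:k-1}) over the discrete cells
    transition:  T[i, j] = p(x_k = j | x_{k-1} = i)
    likelihood:  p(z_k | x_k = j) evaluated at each cell
    """
    prior = belief @ transition            # prediction (26) as a summation
    posterior = likelihood * prior         # numerator of the update (27)
    return posterior / posterior.sum()     # normalization, cf. (28)

# 5-cell 1-D world: the robot moves one cell right with probability 0.8
# (wrapping at the boundary for simplicity).
belief = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
T = 0.2 * np.eye(5) + 0.8 * np.roll(np.eye(5), 1, axis=1)
likelihood = np.array([0.1, 0.1, 0.7, 0.1, 0.1])   # sensor favors cell 2
print(grid_bayes_step(belief, T, likelihood))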

Another frequently adopted Bayesian method is the particle filter (PF). Particle filters are a class of modern sequential Monte Carlo Bayesian methods based on a point-mass representation of the posterior probability density. The main advantage of PFs is the ability to represent arbitrary probability densities, even in non-Gaussian and nonlinear dynamic systems. PFs are more efficient than grid-based methods, since they concentrate resources on regions of the state space with high likelihood. The sequential importance sampling (SIS) algorithm is a Monte Carlo approach that forms the basis for most particle filters, such as sampling importance resampling (SIR), auxiliary SIR (ASIR), the regularized particle filter (RPF), and likelihood particle filters [53], [54].

The key idea of the PF is to represent the required posterior density function in (27) by a set of random samples with associated weights and to compute estimates based on these samples and weights. The PF contains two major operational stages, namely importance sampling and resampling.

At the importance sampling stage, the posterior p(x_k | z_{1:k}) is approximated by a set of samples \{x_k^i, w_k^i\}_{i=1}^{N}, where each x_k^i is a state sample with associated weight w_k^i. These weights are non-negative factors called importance weights and they are normalized such that \sum_i w_k^i = 1. Then, the posterior can be approximated as

p(x_k | z_{1:k}) \approx \sum_{i=1}^{N} w_k^i\, \delta(x_k - x_k^i)   (29)


where \delta(\cdot) is the Dirac delta function, and the weights are chosen by using the principle of importance sampling. The importance sampling method supposes \pi(x) is a probability density from which it is difficult to draw samples but for which \pi(x) can be evaluated. Let x^i, i = 1, ..., N, be samples generated from a proposal q(x), which is the importance density. Then, a weighted approximation is made as

\pi(x) \approx \sum_{i=1}^{N} w^i\, \delta(x - x^i), \qquad w^i \propto \frac{\pi(x^i)}{q(x^i)}   (30)

where w^i is the normalized weight of the ith particle. Now, consider the samples drawn from an importance density q(x_{0:k} | z_{1:k}); the weights in (29) are defined as

w_k^i \propto \frac{p(x_{0:k}^i | z_{1:k})}{q(x_{0:k}^i | z_{1:k})}   (31)

By introducing the Markov property and using Bayes' rule, the weights can be derived as

w_k^i \propto w_{k-1}^i\, \frac{p(z_k | x_k^i)\, p(x_k^i | x_{k-1}^i)}{q(x_k^i | x_{k-1}^i, z_k)}   (32)

This is the weight update equation. The choice of an appropriate importance density q decides the efficiency of the weighting procedure and affects the performance of the particle filter significantly.

After a few iterations, unavoidably, some particles will have insignificant weights. This implies that part of the computational effort is wasted on updating particles whose contributions are almost zero. So, the resampling process is necessary whenever an obvious degeneracy is observed before the next update. The concept of resampling is to eliminate particles having low importance weights and replicate particles having high importance weights. Many resampling methods have been proposed in the literature, including systematic resampling, multinomial resampling, stratified resampling, and residual resampling. They differ in the criterion identifying which particles are pruned away and which are replicated. To summarize, the PF uses a set of weighted samples to approximate the distribution density by (29), wherein the weights are selected via the importance density as in (32), and then updates the filtered density by the Bayesian method of (27) and (28). PFs are widely used in mobile robot applications, including robot localization [55], target tracking [56], and SLAM [57].
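The following is a minimal SIR sketch for a scalar state, using the transition prior as the importance density so that (32) reduces to w_k^i ∝ w_{k-1}^i p(z_k | x_k^i), followed by systematic resampling; the Gaussian noise levels and measurements are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, z, motion_std=0.1, meas_std=0.5):
    """One SIR cycle on a scalar state: propagate, reweight, resample."""
    n = len(particles)
    # Importance sampling: draw from the transition prior p(x_k | x_{k-1}^i).
    particles = particles + rng.normal(0.0, motion_std, size=n)
    # Weight update (32) with the transition prior as importance density.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling to counter weight degeneracy.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(0.0, 1.0, size=200)
weights = np.full(200, 1.0 / 200)
for z in [0.40, 0.50, 0.45]:               # hypothetical measurements
    particles, weights = pf_step(particles, weights, z)
print(particles.mean())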

C. Martin et al. [58] proposed a fusion method based on the Bayesian filter and the CI algorithm to help a mobile robot detect and track people. Fig. 10 shows the structure of the multisensor-integrated mobile robot. The multimodal sensors used in the system include a frontal camera, an omnidirectional camera, sonar sensors, and a laser range finder. Since each of them has its own advantages and limitations, the multisensor fusion technique is applied to complement the sensing range and enhance the tracking accuracy. The authors use a probabilistic aggregation scheme for people detection and tracking. Information from each sensor is first estimated by applying the Bayes

Fig. 10. Hardware structure of the companion robot [58].

Fig. 11. Fusion results of the detection and tracking of two individuals [58].

rule: posteriors of both the robot movement and the observation of the detected person are computed by predict-update filtering. Then, the Gaussian estimates are further fused by the CI algorithm to get the human position with higher accuracy. Fig. 11 shows the fusion result. The upper picture is the real scene from a bird's-eye view. Two people are standing in front of the robot. The four figures in the middle row show the current hypotheses generated by the fisheye camera, the frontal camera, the laser range finder, and the sonars. The black dot at the center of each plot represents the location of the robot. No sensor on its own can represent the situation correctly. The figure at the bottom shows the fused result with a correct and sharpened representation of the detected people.

2) Dempster–Shafer Evidence Theory: The Dempster–Shafer (D-S) evidence theory owes its name to the work by Dempster (1968) [59] and Shafer (1976) [60]. The D-S theory is based on two ideas: 1) obtaining degrees of belief for one question from subjective probabilities for a related question, and 2) using


Dempster's rule to combine such degrees of belief when they are based on independent items of evidence. The evidence theory operates on a frame of discernment \Theta which contains a finite number of mutually exclusive propositions called focal elements. The power set of \Theta, 2^{\Theta}, is the set of all subsets of \Theta, including the null set \emptyset and \Theta itself. For instance, if there are two subsets \{a\} and \{b\}, then the power set is

2^{\Theta} = \{ \emptyset, \{a\}, \{b\}, \{a, b\} \}   (33)

In D-S inference, there are three essential functions, named the mass function (m) reflecting the evidence's strength of support, the belief function (Bel), and the plausibility function (Pl). The mass function has the following properties:

m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1   (34)

The belief and plausibility functions are derived from the value of m, where the belief is the lower bound of the probability P(A) and the plausibility is the upper bound, that is, Bel(A) \le P(A) \le Pl(A). Considering two sensors with mass functions m_1 and m_2, the belief and plausibility functions are defined as follows:

Bel(A) = \sum_{B \subseteq A} m(B)   (35)

Pl(A) = \sum_{B \cap A \ne \emptyset} m(B)   (36)

Dempster's combination rule takes the orthogonal sum of each possible proposition. The combination of these two beliefs is done by the following equation:

m_{1,2}(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}   (37)
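A small sketch of Dempster's combination rule (37) over a two-element frame of discernment; the sensors, propositions, and mass values are illustrative.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule (37): orthogonal sum of two mass functions.

    Mass functions are dicts mapping frozenset propositions to masses.
    """
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mB * mC
        else:
            conflict += mB * mC      # mass falling on the empty set
    # Renormalize by 1 - K, the denominator of (37).
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two sensors judging whether a detected target is a person.
theta = frozenset({"person", "object"})              # frame of discernment
m_camera = {frozenset({"person"}): 0.7, theta: 0.3}
m_lrf = {frozenset({"person"}): 0.5, frozenset({"object"}): 0.2, theta: 0.3}
print(dempster_combine(m_camera, m_lrf))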

The D-S technique is usually implemented together with other algorithms to improve the accuracy of the decision. Denoeux [61] combined the D-S technique with a neural network for pattern recognition. Zhan et al. [62] combined D-S fusion theory with a genetic algorithm to improve the accuracy of gender and age recognition in a home service robot equipped with a microphone array. They used the genetic algorithm to simultaneously select the preprocessed features from four feature extraction methods and the classifier from five classifiers. After that, the D-S procedure was conducted to fuse the decisions. The experimental results showed that the accuracy of recognition was better than when each classifier was used alone. Fig. 12 shows the block diagram of the gender/age classification based on the D-S fusion technique.

D. Artificial Intelligence Methods

High-level fusion seeks to process local decisions from multiple sensors to achieve a joint decision. Artificial intelligence methods can be seen as advanced versions of the estimation, classification, and inference methods at the lower

Fig. 12. Block diagram of the gender/age classification system based on the D-S fusion technique [62].

fusion levels. They can be model-free and also have sufficient degrees of freedom to fit complex nonlinear relationships, with the necessary precautions. Artificial neural networks (ANNs) and fuzzy logic algorithms are appropriate methods for high-level inference.

An ANN consists of layers of processing elements, namely an input layer, hidden layers, and an output layer, which may be interconnected in a variety of ways. Neurons can be trained to represent sensory information, and, through associative recall, complex combinations of these neurons can be activated in response to different sensory stimuli. For multisensor fusion, the extracted features from multiple sensors are classified by the ANN and further merged to yield more accurate results. Not only does the synergistic use of multiple sensors yield higher accuracy, but the incorporation of different algorithms also improves system performance. Mitra et al. [63] proposed an inference engine, a neural network architecture using an SVM, for the classification of light detection and ranging data.

Fuzzy logic is basically a type of multivalued logic that allows the uncertainty in multisensor fusion to be directly represented in the inference process by assigning each proposition a real number from 0.0 to 1.0 to indicate its degree of truth. Stover et al. [64] gave a description of a general fuzzy logic architecture providing the capability for multisensor fusion to achieve high-level inferences. Implementations of fuzzy logic techniques for navigation and motion planning of mobile robots were proposed in [65]–[68].
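As a toy illustration of degrees of truth, the sketch below fuses two sensor readings with triangular membership functions and a single min (AND) rule, in the spirit of the fire-detection example that follows; all membership ranges and readings are hypothetical.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

smoke_ppm, temp_c = 140.0, 58.0            # hypothetical sensor readings
smoke_high = tri(smoke_ppm, 50.0, 200.0, 350.0)
temp_high = tri(temp_c, 30.0, 80.0, 130.0)

# Rule: IF smoke is high AND temperature is high THEN fire risk is high.
# min models the fuzzy AND; the output is a degree of truth, not a flag.
fire_risk = min(smoke_high, temp_high)
print(f"fire risk degree: {fire_risk:.2f}")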

Luo and Su [69], [70] implemented an adaptive fusion algorithm on a security robot for fire detection. In general, fire accidents are accompanied by dense smoke, a spreading fire source, and increasing temperature. The security robot was equipped with an ionization smoke sensor, a semiconductor temperature sensor, and an ultraviolet sensor to detect the dense smoke, the temperature variation, and the flame, respectively. To eliminate false alarms, the authors used an adaptive fusion algorithm to generate a more reliable decision. The accuracy and efficiency of fire detection were improved through the adaptive fusion method. The proposed adaptive multisensor


TABLE II. SUMMARY OF ADVANTAGES AND LIMITATIONS OF FUSION METHODS AND ALGORITHMS

fusion method was also used in the intelligent recharging system of the mobile robot [71].

As a summary of the discussed fusion methods at different levels, the most frequently used algorithms in the literature are listed in Table II with their advantages and limitations.

IV. IMPLEMENTATION AND APPLICATIONS OF MFI

The technology of multisensor fusion and integration has a wide spectrum of applications across most fields of engineering science. The implementation of MFI facilitates the automation of an intelligent mechatronic system. Here, we introduce the implementation approaches of MFI in different application fields with selected examples, including intelligent robots, military applications, MEMS/NEMS, biomedical applications, mechatronics, and computer vision.

A. Intelligent Robotics

The development of intelligent robotics mainly includes wheeled robots, industrial robots, and humanoid robots. In our Intelligent Robotics and Automation Laboratory at the National Taiwan University, we have developed a wheeled companion robot [72] which is designed to assist and provide human services in daily life. Fig. 13 shows the hardware structure of the robot.

The most important capability of a service robot is to track and follow the targeted person. Herein, a visual sensor and a laser range finder are synergistically used to detect the distance and orientation of the targeted person for tracking and following. To reduce the individual uncertainty of the sensors, the CI algorithm

Fig. 13. Hardware structure of the companion robot [72].

is used to fuse the sensory information, resulting in a complementary perception with higher accuracy. Fig. 14(a) shows the multisensor fusion process in the system, and Fig. 14(b) shows the fusion result with reduced uncertainty.

B. MFI for Military Applications

Military operations typically take place in large-scale, dynamic environments. It is difficult to detect, track, recognize, and respond to every entity within the volume of interest. Thus, MFI plays an important role in fusing the available multisource sensory information in the most efficient way. In both military and civilian applications, increasing interest is being shown in fusing visible and infrared images to monitor the surroundings. In military applications, for example, night vision sensors are normally used for helicopter navigation. Visible images provide clearer edge information based on reflected light, which is restricted at night. Thermal sensors, on the other


Fig. 14. (a) Multisensor fusion process of human detection. (b) Fusion result with complementary correlation [72].

Fig. 15. Procedure of the fusion of thermal and visible images [73].

hand, provide a blurred image of heat-emitting objects which gives clues about the terrain and the surrounding environment. By combining images from both sensors in real time, a helmet-mounted display or a panel mounted in a vehicle provides the driver a clear view even in bad weather. Khan and Khan [73] propose an image fusion procedure to fuse thermal and visible images exploiting the advantages of the discrete wavelet frame transform

Fig. 16. (a) Infrared image. (b) Visible image. (c) Fused image [73].

Fig. 17. Comparisons between the authors' method and the other two methods. (a) Plot of IQI values. (b) Plot of MI values [73].

(DWFT), kernel principal component analysis (K-PCA), and SVM. Fig. 15 shows the fusion procedure. The source images are first decomposed using the DWFT down to a chosen number of levels. Then, the activity level at each pixel location is computed using a coefficient-based activity method. This produces several subbands of details (clearer edge information) and one approximation subband which is a low-pass version of the original image. After that, SVM is applied to discriminate the details only, and the extraction of the approximation subband information is done by K-PCA. Finally, the fused image is reconstructed by applying the inverse DWFT (IDWFT) to the fused coefficients.

The fusion result is shown in Fig. 16. Comparisons of fusion quality between Khan's proposed scheme, FT-KPCA (K-PCA with DWT), and PCA alone are shown in Fig. 17. Two criteria, namely the image quality index (IQI) and the mutual information (MI), are used to evaluate the image fusion quality: the higher the values of both IQI and MI, the better the quality. The results demonstrate that Khan's method performs better.


C. MFI for MEMS/NEMS

With the advances in micromachining techniques, the development of MEMS/NEMS has grown rapidly in the last decade. Various micro/nanosensors with different operational principles have been fabricated and applied in mechatronic systems. A relatively mature application of MEMS sensors is in inertial measurement units, which have been applied massively in various automatic systems. MEMS sensors of this type include accelerometers, which measure linear acceleration and gravity; magnetometers, which measure the magnetic field for heading determination; gyroscopes, which measure angular velocity; and pressure sensors, which measure air pressure for altitude estimation. These sensors can be used alone for simple measurements such as freefall detection, tilt measurement, and screen rotation. However, more complex applications, such as motion gaming, robot balancing, human body tracking, and unmanned aerial vehicles, need the synergistic use of these MEMS sensors for better performance. The implementation of MFI undoubtedly plays a critical role in this cooperation. Noureldin et al. [74] improved the performance of vehicular navigation by MFI using an augmented EKF with a neuro-fuzzy algorithm. The navigation system was constructed from a MEMS inertial navigation system (INS) and a global positioning system (GPS) to form a GPS-aided INS vehicular navigation system. While a MEMS-based INS is inherently immune to the signal jamming, spoofing, and blockage vulnerabilities of GPS, the MEMS IMU is significantly affected by complex error characteristics. The authors improved the navigation performance by implementing a two-stage sensor fusion method, namely, improving the stochastic modeling of MEMS-based inertial errors by autoregressive processes on the raw data, and enhancing the positioning accuracy during GPS outages by nonlinear modeling of INS position errors using neuro-fuzzy modules, which are augmented in the EKF integration.

D. MFI in Computer Vision

Computer vision has become a fundamental technology in a wide range of applications, such as medical diagnosis, concealed weapon detection for security, recognition for intrusion detection, vision for intelligent robotics, and nondestructive testing. Computer vision is improved by image fusion, which is a process of combining the relevant information from a set of images into a single, more informative image. Krishnamoorthy and Soman [75] give a survey on the implementation of image fusion algorithms and a comparative study of them. The authors categorize image fusion algorithms into three classes, namely, simple image fusion, pyramid decomposition-based fusion, and discrete wavelet transform-based fusion. The comparison between these methods is verified by fusing a CT (computerized tomography) scan image and an MR (magnetic resonance) scan image using the different algorithms.

Concealed weapon detection (CWD) is an increasingly important topic in the general area of law enforcement, and it appears to be a critical technology for dealing with terrorism. Passengers with concealed objects may not be detected by the metal detectors currently employed by airport security personnel. The use of multiple sensing mechanisms, including in-

Fig. 18. Image fusion from the visual and infrared images [76].

frared (IR), acoustic, millimeter wave (MMW), and X-ray sen-sors can improve overall performance.

Liu et al. [76] propose an MFI method using image fusion from the visual and infrared images to improve concealed weapon detection and visualization. Fig. 18 shows the proposed procedure.

They first use unsupervised fuzzy c-means clustering to detect the concealed weapon in the IR image. The detected region is then used as a mask signal for the multiresolution image mosaic (MRIM) process. A steerable image pyramid is employed to decompose and reconstruct the two images. The synthesized image retains quality comparable to the visual image while the region of the concealed weapon is highlighted and enhanced. This work provides an efficient solution to operator-assisted weapon detection that avoids privacy offense at portal security checks at sensitive locations.
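A minimal sketch of the clustering stage is given below: plain fuzzy c-means on pixel intensities, written in Python with illustrative names. It is a generic implementation of the technique named in [76], not the authors' code.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D samples x (e.g., IR pixel intensities).
    Returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m                              # fuzzified memberships
        centers = (um.T @ x) / um.sum(axis=0)    # membership-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Pixels whose membership in the "hot" cluster exceeds a threshold would
# form the mask passed to the multiresolution mosaic stage.
```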

E. MFI in Biomedical Applications

Multisensor fusion and integration has an important influence on the quality of a physician's diagnosis, since it is the link between the patient and the medical measuring instrument. Medical instruments today store extensive measured data which need to be estimated, classified, and inferred in order to provide adequate diagnostic suggestions. Bracio et al. [77] discuss MFI algorithms in biomedical systems based on the Luo and Kay categorization. Ren and Kazanzides [78] propose a two-stage sensor fusion process for the attitude tracking of handheld surgical instruments. In laparoscopic surgery, a challenging problem is to track the surgical instruments inside the human body. Traditionally used methods include optical tracking, electromagnetic tracking, mechanical tracking, inertial tracking, or hybrid combinations thereof. The authors develop an inertial measurement unit (IMU) that includes a class of miniature integrated inertial sensing systems, namely, magnetic field, angular rate, and gravity (MARG). They aim to improve the attitude tracking of a wireless MARG unit with small disturbances in a confined intrabody surgical space. Fig. 19 shows the block diagram of the proposed estimation method. First, the gravity and magnetic field vectors are estimated by gyro-assisted KFs from the accelerometer, gyroscope, and magnetometer measurements. Then, the attitude of the surgical instrument is estimated by an EKF. The surgical motion pattern is generated from a da Vinci surgical robot system, and a handheld endoscopic instrument is operated in accordance with the pattern. The proposed multisensor-fused IMU is compared with an optical tracking (OPT) and an electromagnetic tracking (EMT) method. The experimental results in Fig. 20 show significant improvements in the attitude estimation.

Fig. 19. Block diagram of attitude estimation. Acc stands for accelerometer, Gyr for gyroscope, Mag for magnetometer, VecAtt for vectorized attitude, and VecQua for vectorized quaternion [78].
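The flavor of this prediction-correction blending can be conveyed by a much simpler stand-in: a complementary filter that mixes short-term gyro integration with long-term gravity sensing from the accelerometer. The Python sketch below is not the KF/EKF cascade of [78]; it only illustrates the same idea of propagating attitude with the angular rate and correcting it with a vector observation.

```python
import numpy as np

def complementary_attitude(gyro, acc, dt, alpha=0.98):
    """Blend gyro integration (accurate short-term) with accelerometer
    gravity sensing (drift-free long-term) for roll/pitch.
    gyro, acc: (N, 3) arrays in rad/s and m/s^2; dt: sample period."""
    roll = pitch = 0.0
    out = []
    for w, a in zip(gyro, acc):
        # Prediction: propagate attitude with the angular rate
        roll_g, pitch_g = roll + w[0] * dt, pitch + w[1] * dt
        # Correction: gravity direction observed by the accelerometer
        roll_a = np.arctan2(a[1], a[2])
        pitch_a = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll = alpha * roll_g + (1 - alpha) * roll_a
        pitch = alpha * pitch_g + (1 - alpha) * pitch_a
        out.append((roll, pitch))
    return np.asarray(out)
```

The KF/EKF stages in [78] perform this weighting optimally and extend it to yaw via the magnetometer, rather than with a fixed gain alpha.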

V. PERSPECTIVES OF MULTISENSOR FUSION AND INTEGRATION

The technology of multisensor fusion and integration has become an important part of the development of intelligent and automatic systems. Through corresponding algorithms at different fusion levels, MFI can eliminate redundancy and contradiction among multisensor information. It can reduce uncertainty by making use of complementary information and thus provide a more comprehensive and coherent perceptive description of the environment and the system itself. As discussed in Table II, most fusion methods have their own disadvantages and limitations. Some prospects for the development of multisensor fusion and integration are summarized as follows.

A. Fusion Accuracy, Computational Speed, and Cost

Undoubtedly, fusion accuracy holds the highest priority in multisensor fusion problems. But higher accuracy often comes with a tradeoff in computational speed or cost. This is because of the dynamic system model and the time-varying property of sensory measurements: data fusion typically needs a recursive predict-update process, and an artificial intelligence algorithm needs a database for its training and learning phases. Increasing fusion accuracy and speed while concurrently reducing the computational cost can be achieved by deriving innovative fusion algorithms or by combining multiple fusion methods. Moreover, the design of the system model and the method of integrating multiple sensors also affect fusion efficiency.

B. Robustness, Reliability, and Repeatability of MFI

Most MFI designs are task-oriented. The arrangement of the multisensor integration and the employed fusion algorithms is usually only suitable for the designed system under specific restrictions for the current task. That is, a successful implementation of MFI for one application may fail in another, similar one. The development of a robust, reliable, and repeatable multisensor fusion process supported by reconfigurable software/hardware modules is a goal for future researchers in this field.

Fig. 20. Comparison of attitude estimation (roll, pitch, and yaw angles) between the multisensor-fused IMU, OPT, and EMT. (a) Roll. (b) Pitch. (c) Yaw [78].

C. Scope From Macro to Micro/Nano Systems

Although more and more MEMS/NEMS sensors are used in multisensor integrated systems, the fusion process is still in the scope of the macro world. With the development of micro/nano components and devices integrated in complex LSI and VLSI systems, the demand for advancements in multisensor fusion and integration is urgent. Zhu et al. [79] propose a sensor fusion method to overcome the cross-axis problem of a MEMS thermal gas inertial sensor in a small chamber. When multiple thermal gas inertial sensors are integrated to form a compensation micro system, the measured acceleration and rotation undergo a cross-axis effect that results from the multidimensional coupling movement of the convection flow in the sensor chamber. The problem could be addressed by altering the chamber structure and the placement of the heater and thermistors; instead, the authors simply use least-squares estimation to decouple the cross-axis rotation without additional components in the micro system. Implementing MFI to overcome this kind of problem is simple, easy to realize, and free of structural complexity and cost. From the macro view, an intelligent MEMS/NEMS system should comprise microsensors, microactuators, micromanipulators, signal processing, and transmitting subsystems. Implementing MFI in such a micro system is a challenging task for the future.
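The least-squares decoupling idea can be sketched as follows: a coupling matrix is identified from calibration runs with known stimuli and then inverted at run time. The two-axis dimensions, noise levels, and names below are illustrative assumptions, not details from [79].

```python
import numpy as np

def calibrate_coupling(applied, measured):
    """Estimate the cross-axis coupling matrix C from calibration runs,
    assuming measured = applied @ C.T plus noise."""
    C_T, *_ = np.linalg.lstsq(applied, measured, rcond=None)
    return C_T.T

def decouple(C, reading):
    """Recover the true per-axis signal from a coupled sensor reading."""
    return np.linalg.solve(C, reading)

# Example: a known 5% x<->y coupling is estimated and removed.
rng = np.random.default_rng(1)
C_true = np.array([[1.0, 0.05], [0.05, 1.0]])
applied = rng.normal(size=(200, 2))                        # calibration stimuli
measured = applied @ C_true.T + 0.01 * rng.normal(size=(200, 2))
C = calibrate_coupling(applied, measured)
x_hat = decouple(C, measured[0])                           # per-sample decoupling
```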

VI. CONCLUSION

In this paper, we provide a review of MFI theories, approaches, applications, and perspectives. Advantages and limitations of the most frequently used fusion algorithms at their corresponding fusion levels are summarized. The benefits of implementing MFI are illustrated by application examples in various fields, including intelligent robotics, military applications, MEMS and NEMS, computer vision, and biomedical applications. Future perspectives of MFI deployment, including improvements in fusion performance and the spread of applications, are depicted. The implementation of multisensor fusion requires interdisciplinary knowledge including control theory, signal processing, artificial intelligence, probability, and statistics. The major benefit gained from MFI is that the system can be furnished with high-quality information concerning aspects of its environment which cannot be sensed directly by any individual sensor operating independently. Undoubtedly, MFI has become a fundamental technology for the development of intelligent mechatronic systems.

REFERENCES

[1] R. C. Luo and M. G. Kay, “A tutorial on multisensor integration and fusion,” in Proc. 16th Annu. Conf. IEEE Ind. Electron., 1990, vol. 1, pp. 707–722.

[2] R. C. Luo and M. G. Kay, “Multisensor fusion and integration in intelligent systems,” IEEE Trans. Syst., Man, Cybern., vol. 19, no. 5, pp. 901–931, Sep./Oct. 1989.

[3] R. C. Luo and M. G. Kay, Multisensor Integration and Fusion for Intelligent Machines and Systems. Norwood, MA: Ablex Publishing, 1995.

[4] B. V. Dasarathy, “Sensor fusion potential exploitation – Innovative architectures and illustrative applications,” Proc. IEEE, vol. 85, no. 1, pp. 24–38, Jan. 1997.

[5] D. L. Hall, Mathematical Techniques in Multisensor Data Fusion. Boston, MA: Artech House, 1992.

[6] D. L. Hall and J. Llinas, “An introduction to multisensor data fusion,” Proc. IEEE, vol. 85, no. 1, pp. 6–23, Jan. 1997.

[7] “Data Fusion Lexicon,” U.S. Dept. Defense, Data Fusion Subpanel of the Joint Directors of Laboratories, Technical Panel for C3, 1991.

[8] W. Elmenreich, “A review on system architectures for sensor fusion applications,” Int. Federation Inform. Process., pp. 547–559, 2007.

[9] D. Smith and S. Singh, “Approaches to multisensor data fusion in target tracking: A survey,” IEEE Trans. Knowl. Data Eng., vol. 18, no. 12, pp. 1696–1710, Dec. 2006.

[10] T. C. Henderson and E. Shilcrat, “Logical sensor systems,” J. Robot. Syst., vol. 1, no. 2, pp. 169–193, 1984.

[11] R. C. Luo, C. C. Lai, and C. C. Hsiao, “Enriched indoor environment map building using multi-sensor based fusion approach,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., Oct. 2011, pp. 2059–2064.

[12] C. Hansen, T. C. Henderson, E. Shilcrat, and W. S. Fai, “Logical sensor specification,” in Proc. SPIE Conf. Intell. Robots, Nov. 1983, pp. 578–583.

[13] T. C. Henderson and W. S. Fai, “MKS: A multisensor kernel system,” IEEE Trans. Syst., Man, Cybern., vol. 14, no. 5, pp. 784–791, Sep./Oct. 1984.

[14] R. C. Luo and T. C. Henderson, “A servo-controlled gripper with sensors and its logical specification,” J. Robot. Syst., vol. 3, no. 4, pp. 409–420, 1986.

[15] Y. Yang, C. Han, X. Kang, and D. Han, “An overview on pixel-level image fusion in remote sensing,” in Proc. IEEE Int. Conf. Autom. Logistics, Aug. 2007, pp. 2339–2344.

[16] J. Yue, R. Yang, and R. Huan, “Pixel level fusion for multiple SAR images using PCA and wavelet transform,” in Proc. Int. Conf. Radar, Oct. 2006, pp. 1–4.

[17] L. Zhang, J. Stiens, and H. Sahli, “Multispectral image fusion for active millimeter wave imaging application,” in Proc. Global Symp. Millimeter Waves, Apr. 2008, pp. 131–134.

[18] A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, “Feature level fusion of face and fingerprint biometrics,” in Proc. IEEE Int. Conf. Biometrics: Theory, Appl., Syst., Sep. 2007, pp. 1–6.

[19] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: Analysis and implementation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 881–892, Jul. 2002.

[20] K. Venkatalakshmi and S. M. Shalinie, “Classification of multispectral images using support vector machines based on PSO and k-means clustering,” in Proc. Int. Conf. Intell. Sensing, Inform. Process., Jan. 2005, pp. 127–133.

[21] N. Floudas, A. Polychronopoulos, O. Aycard, J. Burlet, and M. Ahrholdt, “High level sensor data fusion approaches for object recognition in road environment,” in Proc. IEEE Intell. Veh. Symp., Jun. 2007, pp. 136–141.

[22] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Trans. ASME, J. Basic Eng., pp. 35–45, Mar. 1960.

[23] D. Fox, J. Hightower, L. Liao, D. Schulz, and G. Borriello, “Bayesian filtering for location estimation,” IEEE Pervasive Comput., vol. 2, no. 3, pp. 24–33, Jul. 2003.

[24] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, “A new method for the nonlinear transformation of means and covariances in filters and estimators,” IEEE Trans. Autom. Control, vol. 45, no. 3, pp. 477–482, Mar. 2000.

[25] S. Matzka and R. Altendorfer, “A comparison of track-to-track fusion algorithms for automotive sensor fusion,” in Proc. IEEE Int. Conf. Multisensor Fusion Integr. Intell. Syst., Aug. 2008, pp. 189–194.

[26] B. L. Scala and A. Farina, “Effects of cross-covariance and resolution on track association,” in Proc. Int. Conf. Inform. Fusion, Jul. 2000, vol. 2, pp. WeD1/10–WeD1/16.

[27] G. W. Ng and R. Yang, “Comparison of decentralized tracking algorithms,” in Proc. 6th Int. Conf. Inform. Fusion, 2003, vol. 1, pp. 107–113.

[28] G. W. Ng, C. H. Tan, and T. P. Ng, “Tracking ground targets using state vector fusion,” in Proc. 7th Int. Conf. Inform. Fusion, 2005, pp. 297–302.

[29] Y. Ooi, L. H. T. Chang, Y. P. Wong, and A. R. M. Piah, “A choice of weights for convex combination methods in estimating partial derivatives,” in Proc. Int. Conf. Computer Graphics, Imaging, and Visualization, 2004, pp. 233–236.

[30] S. J. Julier and J. K. Uhlmann, “A non-divergent estimation algorithm in the presence of unknown correlations,” in Proc. Amer. Control Conf., 1997, vol. 4, pp. 2369–2373.

[31] L. Chen, P. O. Arambel, and R. K. Mehra, “Estimation under unknown correlation: Covariance intersection revisited,” IEEE Trans. Autom. Control, vol. 47, no. 11, pp. 1879–1882, Nov. 2002.

[32] S. B. Lazarus, I. Ashokaraj, A. Tsourdos, R. Zbikowski, P. M. G. Silson, N. Aouf, and B. A. White, “Vehicle localization using sensor data fusion via integration of covariance intersection and interval analysis,” IEEE Sensors J., vol. 7, no. 9, pp. 1302–1314, Sep. 2007.

[33] R. C. Luo, O. Chen, and L. C. Tu, “Nodes localization through data fusion in sensor network,” in Proc. 19th Int. Conf. Adv. Inform. Netw. Appl., AINA’05, 2005, pp. 337–342.

[34] R. C. Luo, C. T. Liao, and S. C. Lin, “Multi-sensor fusion for reduced uncertainty in autonomous mobile robot docking and recharging,” in Proc. Int. Conf. Intell. Robot. Syst., Oct. 11–15, 2009, pp. 2203–2208.

[35] O. Bochardt, R. Calhoun, J. K. Uhlmann, and S. J. Julier, “Generalized information representation and compression using covariance union,” in Proc. 9th Int. Conf. Inform. Fusion, Jul. 2006, pp. 1–7.

[36] R. C. Luo, Y. C. Chou, and O. Chen, “Multisensor fusion and integration: Algorithms, applications, and future research directions,” in Proc. IEEE Int. Conf. Mechatronics Autom., Aug. 2007, pp. 1986–1991.

[37] Wu, Vandenberghe, et al., “Maxdet: Software for Determinant Maximization Problems. User’s Guide,” Alpha ed., 1996.

[38] S. Boyd, L. El-Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA: SIAM, 1994, vol. 15, Studies in Applied Mathematics.

[39] B. Waske and J. A. Benediktsson, “Fusion of support vector machines for classification of multisensor data,” IEEE Trans. Geosci. Remote Sensing, vol. 45, no. 12, pp. 3858–3866, Dec. 2007.

[40] K. Tanaka, M. Sano, S. Ohara, and M. Okudaira, “A parametric template method and its application to robust matching,” in Proc. IEEE Conf. Comput. Vision Pattern Recogn., Jun. 2000, vol. 1, pp. 620–627.


[41] D. Jiang, C. Tang, and A. Zhang, “Cluster analysis for gene expression data: A survey,” IEEE Trans. Knowl. Data Eng., vol. 16, no. 11, pp. 1370–1386, Nov. 2004.

[42] H. Lin, “Identification of spinal deformity classification with total curvature analysis and artificial neural network,” IEEE Trans. Biomed. Eng., vol. 55, no. 1, pp. 376–382, Jan. 2008.

[43] J. Alirezaie, M. E. Jernigan, and C. Nahmias, “Neural network-based segmentation of magnetic resonance images of brain,” IEEE Trans. Nucl. Sci., vol. 44, no. 2, pp. 194–198, Apr. 1997.

[44] C. Cortes and V. Vapnik, “Support vector networks,” Mach. Learn., vol. 20, no. 3, pp. 273–297, Sep. 1995.

[45] C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” J. Data Mining Knowl. Discovery, vol. 2, no. 2, pp. 121–167, Jun. 1998.

[46] S. Avidan, “Support vector tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1064–1072, Aug. 2004.

[47] K. Tanaka, K. Yamano, E. Kondo, and Y. Kimuro, “A vision system for detecting mobile robots in office environments,” in Proc. IEEE Int. Conf. Robot. Autom., Apr. 2004, pp. 2279–2284.

[48] J. Tian, M. Gao, and E. Lu, “Dynamic collision avoidance path planning for mobile robot based on multi-sensor fusion by support vector machine,” in Proc. IEEE Int. Conf. Mech. Autom., Aug. 2007, pp. 2779–2783.

[49] J. Shen and H. Hu, “SVM based SLAM algorithm for autonomous mobile robots,” in Proc. IEEE Int. Conf. Mech. Autom., Aug. 2007, pp. 337–342.

[50] S. Bitzer and P. van der Smagt, “Learning EMG control of a robotic hand: Towards active prostheses,” in Proc. Int. Conf. Robot. Autom., May 2006, pp. 2819–2823.

[51] M. Yoshikawa, M. Mikawa, and K. Tanaka, “A myoelectric interface for robotic hand control using support vector machine,” in Proc. Int. Conf. Intell. Robot. Syst., Oct. 29–Nov. 2, 2007, pp. 2723–2728.

[52] D. M. Buede and P. Girardi, “A target identification comparison of Bayesian and Dempster-Shafer multisensor fusion,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 27, no. 5, pp. 569–577, Sep. 1998.

[53] Y. Liu, B. Wang, W. He, J. Zhao, and Z. Ding, “Fundamental principles and applications of particle filters,” in Proc. 6th World Congr. Intell. Control Autom., Jun. 2006, pp. 5327–5331.

[54] A. Gning, F. Abdallah, and P. Bonnifait, “A new estimation method for multisensor fusion by using interval analysis and particle filtering,” in Proc. IEEE Int. Conf. Robot. Autom., Apr. 2007, pp. 3844–3849.

[55] J. Wolf, W. Burgard, and H. Burkhardt, “Robust vision-based localization by combining an image-retrieval system with Monte Carlo localization,” IEEE Trans. Robot., vol. 21, no. 2, pp. 208–216, Apr. 2005.

[56] P. Vadakkepat and L. Jing, “Improved particle filter in sensor fusion for tracking randomly moving object,” IEEE Trans. Instrum. Meas., vol. 55, no. 5, pp. 1823–1832, Oct. 2006.

[57] H. J. Chang, C. S. G. Lee, Y. H. Lu, and Y. C. Hu, “P-SLAM: Simultaneous localization and mapping with environmental-structure prediction,” IEEE Trans. Robot., vol. 23, no. 2, pp. 281–293, Apr. 2007.

[58] A. Scheidig, S. Mueller, C. Martin, and H.-M. Gross, “Generating persons movement trajectories on a mobile robot,” in Proc. 15th IEEE Int. Symp. Robot Human Interactive Commun., Sep. 2006, pp. 747–752.

[59] A. P. Dempster, “A generalization of Bayesian inference,” J. Royal Statist. Soc. B, vol. 30, no. 2, pp. 205–247, 1968.

[60] G. Shafer, A Mathematical Theory of Evidence. Princeton, NJ: Princeton Univ. Press, 1976.

[61] T. Denoeux, “A neural network classifier based on Dempster-Shafer theory,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 30, no. 2, pp. 131–150, Mar. 2000.

[62] Y. Zhan, H. Leung, K. C. Kwak, and H. Yoon, “Automated speaker recognition for home service robots using genetic algorithm and Dempster-Shafer fusion technique,” IEEE Trans. Instrum. Meas., vol. 58, no. 9, pp. 3058–3068, Sep. 2009.

[63] V. Mitra, C. J. Wang, and S. Banerjee, “Lidar detection of underwater objects using a neuro-SVM-based architecture,” IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 717–731, May 2006.

[64] J. A. Stover, D. L. Hall, and R. E. Gibson, “A fuzzy-logic architecture for autonomous multisensor data fusion,” IEEE Trans. Ind. Electron., vol. 43, no. 3, pp. 403–410, Jun. 1996.

[65] K. C. Ng and M. M. Trivedi, “A neuro-fuzzy controller for mobile robot navigation and multirobot convoying,” IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 28, no. 6, pp. 829–840, Dec. 1998.

[66] H. A. Hagras, “A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots,” IEEE Trans. Fuzzy Syst., vol. 12, no. 4, pp. 524–539, Aug. 2004.

[67] X. Yang, M. Moallem, and R. V. Patel, “A layered goal-oriented fuzzy motion planning strategy for mobile robot navigation,” IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 35, no. 6, pp. 1214–1224, Dec. 2005.

[68] T. Das and I. N. Kar, “Design and implementation of an adaptive fuzzy logic-based controller for wheeled mobile robots,” IEEE Trans. Control Syst. Technol., vol. 14, no. 3, pp. 501–510, May 2006.

[69] R. C. Luo and K. L. Su, “A review of high-level multisensor fusion: Approaches and applications,” in Proc. IEEE Int. Conf. Multisensor Fusion Integr. Intell. Syst., Aug. 1999, pp. 25–31.

[70] R. C. Luo and K. L. Su, “Autonomous fire-detection system using adaptive sensor fusion for intelligent security robot,” IEEE/ASME Trans. Mechatronics, vol. 12, no. 3, pp. 274–281, Jun. 2007.

[71] R. C. Luo and K. L. Su, “Multilevel multisensor-based intelligent recharging system for mobile robot,” IEEE Trans. Ind. Electron., vol. 55, no. 1, pp. 270–279, Jan. 2008.

[72] R. C. Luo, N. W. Chang, S. C. Lin, and S. C. Wu, “Human tracking and following using sensor fusion approach for mobile assistive companion robot,” in Proc. 5th Annu. Conf. IEEE Ind. Electron. Soc., Jul. 2009, pp. 2235–2240.

[73] A. M. Khan and A. Khan, “Fusion of visible and thermal images using support vector machines,” in Proc. IEEE Multitopic Conf., INMIC’06, 2006, pp. 146–151.

[74] A. Noureldin, T. B. Karamat, M. D. Eberts, and A. El-Shafie, “Performance enhancement of MEMS-based INS/GPS integration for low-cost navigation applications,” IEEE Trans. Veh. Technol., vol. 58, no. 3, pp. 1077–1096, Mar. 2009.

[75] S. Krishnamoorthy and K. P. Soman, “Implementation and comparative study of image fusion algorithms,” Int. J. Comput. Appl., vol. 9, no. 2, pp. 25–35, Nov. 2010.

[76] Z. Liu, Z. Xue, R. S. Blum, and R. Laganière, “Concealed weapon detection and visualization in a synthesized image,” Pattern Anal. Appl., vol. 8, no. 4, pp. 375–389, 2006.

[77] B. R. Bracio, W. Horn, and D. P. F. Moller, “Sensor fusion in biomedical systems,” in Proc. 19th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Oct. 1997, pp. 1387–1390.

[78] H. Ren and P. Kazanzides, “Investigation of attitude tracking using an integrated inertial and magnetic navigation system for hand-held surgical instruments,” IEEE/ASME Trans. Mechatronics, pp. 1–8, 2010.

[79] R. Zhu, H. Ding, Y. Yang, and Y. Su, “Sensor fusion methodology to overcome cross-axis problem for micromachined thermal gas inertial sensor,” IEEE Sensors J., vol. 9, no. 6, pp. 707–712, Jun. 2009.

Ren C. Luo (M’83–SM’88–F’92) received the Ph.D. degree in electrical engineering from the Technische Universitaet Berlin, Berlin, Germany.

He is currently an Irvin T. Ho Chair and Distinguished Professor with the Department of Electrical Engineering, National Taiwan University, Taipei, and President of the Robotics Society of Taiwan. He also served two terms as President of the National Chung Cheng University, Taiwan. He was a Full Professor with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, and Toshiba Chair Professor with the University of Tokyo, Japan. His research interests include sensor-based intelligent robotic systems, multisensor fusion and integration, and computer vision. He has authored more than 400 papers on these topics, which have been published in refereed technical journals and conference proceedings. He also holds several patents.

Dr. Luo received the IEEE Eugene Mittelmann Outstanding Research Achievement Award; the IEEE IROS Harashima Innovative Technologies Award; the ALCOA Outstanding Engineering Faculty Research Award, NCSU, USA; the National Science Council Outstanding Research Awards, 1998–1999, 2000–2001, and 2002–2005; the National Science Council Distinguished Research Awards, 2006–2008; and the TECO Outstanding Science and Technology Research Achievement Award. He was Editor-in-Chief of the IEEE/ASME TRANSACTIONS ON MECHATRONICS (2003–2007). He served as President of the IEEE Industrial Electronics Society (2000–2001). He also served as President of the Chinese Institute of Automation Engineers; Program Director of the Automation Technology Division, National Science Council; Adviser of the Ministry of Economic Affairs; and Technical Adviser of the Prime Minister’s Office in Taiwan. He contributes regularly to IEEE sponsored international conferences by serving as Conference General Chair (IEEE IROS 1992, MFI 1994, IECON 1996, MFI 1999, ICRA 2003, IECON 2007, IROS 2010) and through other committee activities.


Chih-Chia Chang received the B.S. degree in control engineering from National Chiao Tung University, Hsinchu, Taiwan, and the M.S. degree in automation and control from National Taiwan University of Science and Technology, Taipei. Currently, he is working towards the Ph.D. degree in electrical engineering at the National Chung Cheng University, Chia-Yi, Taiwan.

His research interests include multisensor fusion and robotics.

Chun Chi Lai received the B.S. degree in electrical engineering from the National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, and the M.S. degree in electrical engineering in 2002 from the National Chung Cheng University, Chia-Yi, Taiwan, where he is currently working towards the Ph.D. degree.

He is also a Research Assistant with the Center for Intelligent Robotics and Automation Research, National Taiwan University. His research interests include multisensor fusion and intelligent robotics.