Journal of Mechanical Science and Technology
[SS:ISR2013] Improving UGV Navigation by Exploiting Virtual Sensors and Multisensor Fusion
--Manuscript Draft--
Manuscript Number: MEST-D-14-00034R3
Full Title: [SS:ISR2013] Improving UGV Navigation by Exploiting Virtual Sensors and Multisensor Fusion
Article Type: Original Paper
Keywords: Multi-sensor fusion; Unmanned ground vehicles (UGV); Non-holonomic constraints (NHC); Zero-velocity update (ZVU); Centralized Extended Kalman filter (CEKF).
Corresponding Author: MUHAMMAD ILYAS
KOREA, REPUBLIC OF
Corresponding Author Secondary Information:
Corresponding Author's Institution:
Corresponding Author's Secondary Institution:
First Author: MUHAMMAD ILYAS
First Author Secondary Information:
Order of Authors: MUHAMMAD ILYAS
Sangdeok Park, PhD
Seung-Ho Baeg, M.D
Order of Authors Secondary Information:
Abstract: This paper deals with techniques to improve navigation accuracy using multiple sensors mounted on a mobile platform, exploiting a ground vehicle's inherent characteristic of having no motion along the cross-track and off-the-ground directions; these are often called non-holonomic constraints for car-like vehicles, assuming no slip or skid. The forward velocity of the vehicle is obtained using a wheel encoder, and by using the non-holonomic constraints, the 3D velocity vector becomes observable during the normal moving state of the vehicle. This makes one kind of virtual sensor. The other virtual sensor is the zero-velocity update condition of the vehicle: when it is true, the 3D velocity vector (which is zero) becomes observable. These observables were employed in an Extended Kalman filter update to bound the growth of inertial navigation system error. We designed an Extended Kalman filter for the fusion of data from IMU, GPS and motion constraints, i.e., non-holonomic constraints and zero-velocity updates, and we analyzed the effects of utilizing these constraints to improve the navigation accuracy in the stationary as well as the dynamic case. Our proposed navigation suite provides reliable accuracy for unmanned ground vehicle applications in GPS-denied environments, such as forest canopies and urban canyons.
Response to Reviewers: Dear Editorial Office,
We have prepared this revised version of our paper per the JMST template. We have done our best to fit the proper format. If there are any mistakes or omissions, please inform us.
Paper Number: MEST-D-14-00034R2
Paper Title: [SS:ISR2013] Improving UGV Navigation by Exploiting Virtual Sensors and Multisensor Fusion
Authors: MUHAMMAD ILYAS; Sangdeok Park, PhD; Seung-Ho Baeg, M.D
Regards,
Muhammad Ilyas
Journal of Mechanical Science and Technology 00 (2013) 0000~0000
www.springerlink.com/content/1738-494x
Improving UGV navigation by exploiting virtual sensors and multisensor fusion†
Muhammad Ilyas1, Sangdeok Park2 and Seung-Ho Baeg2,*
1Korea University of Science and Technology (UST), Daejeon, 305-333, Korea
2Korea Institute of Industrial Technology (KITECH), Ansan, 426-791, Korea
(Manuscript Received 000 0, 2013; Revised 000 0, 2013; Accepted 000 0, 2013)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Abstract
This paper deals with techniques to improve navigation accuracy using multiple sensors mounted on a mobile platform, exploiting a ground vehicle's inherent characteristic of having no motion along the cross-track and off-the-ground directions; these are often called non-holonomic constraints for car-like vehicles, assuming no slip or skid. The forward velocity of the vehicle is obtained using a wheel encoder, and by using the non-holonomic constraints, the 3D velocity vector becomes observable during the normal moving state of the vehicle. This makes one kind of virtual sensor. The other virtual sensor is the zero-velocity update condition of the vehicle: when it is true, the 3D velocity vector (which is zero) becomes observable. These observables were employed in an Extended Kalman filter update to bound the growth of inertial navigation system error. We designed an Extended Kalman filter for the fusion of data from IMU, GPS and motion constraints, i.e., non-holonomic constraints and zero-velocity updates, and we analyzed the effects of utilizing these constraints to improve the navigation accuracy in the stationary as well as the dynamic case. Our proposed navigation suite provides reliable accuracy for unmanned ground vehicle applications in GPS-denied environments, such as forest canopies and urban canyons.

Keywords: Multi-sensor fusion; Unmanned ground vehicles (UGV); Non-holonomic constraints (NHC); Zero-velocity update (ZVU); Centralized Extended Kalman filter (CEKF).
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Introduction
Autonomous navigation or localization is a stringent re-
quirement for operating any kind of manned/unmanned vehi-
cle in a given environment. A very basic method to achieve
this is the process of dead reckoning. That method involves
determining a platform’s position without using any a priori
external information except for what the platform senses using
only onboard sensors. That is, dead reckoning is the determi-
nation of a platform’s position, velocity and attitude without
the use of predefined maps of the environment, or infrastruc-
ture developed for navigation purposes, such as terrain aided
navigation systems or Global Navigation Satellite System
(GNSS). In practice, unaided navigation is prone to diver-
gence with time due to the inherent growth in errors produced
by dead reckoning. Fully autonomous navigation is therefore
usually obtained by blending information from different but
complementary navigation sensors. Depending on the applica-
tion and accuracy needs, this data integration process can be
achieved by different means. For autonomous air vehicles, the
most commonly used navigation sensors are inertial meas-
urement units (IMU) and GNSS. In an inertial navigation sys-
tem (INS), the IMU is used to deliver navigation quantities at
a high data rate with excellent short term accuracy using
mechanization equations, and the GNSS’ absolute navigation
solution is utilized to bound the INS error growth over long
time periods. Other sensors such as an air-pressure sensor and
air-velocity sensor may also aid INS to enhance reliability and
accuracy. For air vehicles, GNSS is normally a reliable source
of INS support.
However for unmanned ground and underwater vehicles,
GNSS signals are either not reliable or entirely unviable. In
such applications, other navigation aiding sensors must be
relied on. One easy alternate solution for UGV is to use wheel
encoders to accurately sense movement by counting the wheel
*Corresponding author. Tel.: +82-31-8040-6272, Fax. +82-31-8040-6270
E-mail address: [email protected] † Recommended by Editor 000 000-please leave blank
© KSME & Springer 2013
turns. This gives position and velocity information in the body
frame which can be transformed in the navigation frame using
gyro output or steering wheel encoder readings.
In this work, we have used IMU as the prime navigation
sensor along with Global Positioning System (GPS) and
wheel encoders to investigate the exploitation of so called
“virtual sensors” to improve navigation accuracy. Ground
vehicles, contrary to air or underwater vehicles, have the in-
herent characteristic of moving on the ground and having
direct contact with it. This gives them a unique feature, espe-
cially for car-like vehicles, i.e. they cannot move perpendicu-
lar to the forward axis during normal operation. We can ex-
ploit this UGV characteristic to improve its navigation accura-
cy.
Forward velocity obtained from the wheel encoder can also
be a good indicator of vehicle motion status. This velocity is
obtained in the body frame and transformed using a three-axis
gyro to the navigation frame along with cross-track and verti-
cal velocities (which are essentially zero) to make a velocity
vector observation in the navigation frame. Furthermore,
when a vehicle stops, forward velocity given by the wheel
encoders also becomes zero, which produces a zero-velocity
update to the Extended Kalman Filter (EKF).
It should also be noted that the non-holonomic constraints
Kalman filter (NHCF) designed in this work is applied as a
total state filter, i.e. as a direct Kalman filter. Hence non-
holonomic constraints (NHC) and zero velocity updates
(ZVUs) can be applied at a rate as high as the IMU update rate,
100Hz in our case, assuming constant velocity during the up-
date time interval, which is a valid assumption for slow dy-
namics ground vehicles. Application of these constraints con-
siderably bounds growing INS errors, especially in the posi-
tion and velocity axes. Finally, we have designed a centralized
EKF and investigated the effects of virtual sensors on naviga-
tion accuracy in the case of multisensor fusion.
Many researchers have investigated the benefits of NHC for ground vehicles. Among them, L. Flavio et al. [1] used these constraints for trajectory planning. A well-known investigation of NHC is given by S. Sukkarieh [2], who designed a constraint EKF in direct form and compared the error reduction achieved by using those constraints through INS/NHC and constraints/GPS integration.
We have further included ZVU as a virtual sensor and uti-
lized all sensor data in a multisensor data fusion format to
show the effectiveness of NHC and ZVU on estimator per-
formance. Our designed algorithm for constraint EKF behaves
as a ZVU filter when the vehicle is stationary. This stationary
condition may be found using wheel odometry data or IMU
data.
In [3] a direct form of EKF was developed for INS/GPS in-
tegration. We have adopted this derivation for designing the
constraint EKF. A constraint estimator was designed by J.
Pusa [4] as an indirect form (i.e., as an error state) which is an
overoptimistic approach from the viewpoint of a virtual sensor.
Virtual sensor data can be sampled at as high a rate as possible,
i.e., at the rate of an IMU. Applying NHC at the IMU rate has
a far more profound effect on reducing navigation error than
applying them at a lower rate. Therefore there is no reason to choose a lower sampling rate for virtual sensors except out of consideration of the computation burden.
The main contribution of this paper is to improve naviga-
tion accuracy, and position and velocity accuracy in particular,
by exploiting NHC and ZVU in dynamic and stationary states,
respectively, in the framework of multisensor fusion.
Further organization of this paper is as follows: In Section 2
we will develop the necessary theoretical and mathematical
elements of the underlying problem. In Section 3 we design
the non-holonomic constraint EKF, and Section 4 describes
the multisensor fusion technique. In Section 5 we show the
experimental setup, and discussion of the experimental results
is provided in Section 6. Section 7 concludes this paper with
some comments on future research work.
2. Theoretical and mathematical development
2.1 Reference frames
In order to acquire and manipulate Inertial Measurement
Unit (IMU) data, the definition of coordinate frames to be
used in the application is very important. Without proper frame definitions, navigation data will not produce meaningful results, and we will not be able to integrate data from different sensors or to provide outputs in a common frame. Here we introduce the
frame definitions of the navigation system being used in
KITECH.
Sensor frame: IMU axis direction is defined as: the x-axis is
directed forward, the y-axis to the left and the z-axis is up. The
origin of this frame is the center of the IMU.
Vehicle Frame (b-frame): Vehicle frame is defined per the
aerospace definition convention, i.e. the x-axis is directed
forward, y-axis to the right and the z-axis is down. IMU data
must first be transformed from sensor frame to vehicle frame.
Its origin is assumed to be the same as that of the IMU.
Navigation frame (n-frame): The geographic frame is typi-
cally chosen as the navigation frame when large areas of travel
are covered. However if the application is traversing within a
small defined area, as is typical for UGVs, then an Earth-fixed
local-tangent frame (n-frame) can be used as the navigation
frame. The origin is defined on a local reference point with its
x-axis pointing north, y-axis pointing east and the z-axis point-
ing down, making the frame a local tangent [2, 3]. As for the
metric representation, a coordinate system called the Universal
Transverse Mercator (UTM) is widely used, which projects
the geodetic position to a flat metric map. With this represen-
tation, the observations of the landmarks from the on-board
sensor can easily be represented in Cartesian coordinates. GPS longitude/latitude data is output, by default, in the Earth-Centered Earth-Fixed (ECEF) frame. This navigation data is first transformed into the UTM frame and then fused in the Kalman filter with the INS data output by our algorithm.
2.2 IMU data calibration and initial alignment
Calibration is the process of determining the errors in the
IMU sensor, and alignment is the process of determining the
initial attitude for strapdown navigation. Initial roll and pitch
angles are obtained by leveling process, and initial heading is
obtained by gyro-compassing in this work. Details of these
processes can be found in [5, 6].
2.2.1 Turn-on bias removal on accelerometer
Besides fixed bias, scale factor and axis misalignment er-
rors, which are subtracted from IMU measurements, there is a
bias term which varies randomly each time we turn on our
sensor. This factor has to be determined and compensated to
ensure navigation accuracy. Before using IMU measurements,
it is essential to remove this bias from the IMU raw data [7,
10]. Averaged accelerometer data over a stationary time peri-
od is collected as:
ˆ b b ba F F B (1)
where bF is true acceleration (gravity), ˆ bF is accelerometer
measurement and baB is accelerometer bias vector. (Note
that vectors/matrices are represented by bold faced letters in
this paper.) True acceleration is obtained by transforming the
gravity vector in n-frame to b-frame i.e.
[0 0 - ]b b Tn nF C g (2)
Then, turn-on bias is obtained as:
ˆb b ba B = F F (3)
2.2.2 Turn-on bias removal on Gyroscope
When the vehicle is stationary, the gyro senses only the earth's rotation rate. Thus its output is the sum of the earth rotation rate and turn-on bias vectors, i.e.

$$\boldsymbol{\omega}_{ib}^b = \boldsymbol{\Omega}_{ie}^b + \mathbf{B}_g^b \qquad (4)$$

where $\boldsymbol{\omega}_{ib}^b$ is the gyro output, $\boldsymbol{\Omega}_{ie}^b$ is the earth rotation rate transformed into the b-frame and $\mathbf{B}_g^b$ is the gyro bias vector. The earth rotation rate is obtained in the b-frame as:

$$\boldsymbol{\Omega}_{ie}^b = \mathbf{C}_n^b\,\boldsymbol{\Omega}_{ie}^n \qquad (5)$$

$$\mathbf{B}_g^b = \boldsymbol{\omega}_{ib}^b - \boldsymbol{\Omega}_{ie}^b \qquad (6)$$

The earth rate is ignored for low-grade (MEMS) sensors, and then the averaged gyro measurement itself becomes the turn-on bias, i.e.

$$\mathbf{B}_g^b = \bar{\boldsymbol{\omega}}_{ib}^b \qquad (7)$$

The effect of removing the turn-on bias from the raw IMU data on the estimation of the roll, pitch and yaw angles is shown in Fig. 1. Without this removal, the roll, pitch and yaw angles show a diverging trend.
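The calibration steps of Eqs. (1)-(3) and (7) can be sketched as below. This is a minimal NumPy sketch, not the authors' implementation; the function names and the ZYX Euler convention are illustrative assumptions.

```python
import numpy as np

def euler_to_dcm(roll, pitch, yaw):
    """C_b^n: body-to-navigation DCM for a yaw-pitch-roll (ZYX) sequence (assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cp * cy, sr * sp * cy - cr * sy, cr * sp * cy + sr * sy],
        [cp * sy, sr * sp * sy + cr * cy, cr * sp * sy - sr * cy],
        [-sp,     sr * cp,               cr * cp],
    ])

def turn_on_biases(accel_stationary, gyro_stationary, roll, pitch, yaw, g=9.81):
    """Eqs. (1)-(3) and (7): average stationary IMU data, subtract the known
    true signal (gravity for the accelerometer); the earth rate is ignored
    for a MEMS-grade gyro, so the averaged gyro output is the bias (Eq. (7))."""
    F_hat = accel_stationary.mean(axis=0)      # averaged accel measurement
    C_nb = euler_to_dcm(roll, pitch, yaw).T    # C_n^b = (C_b^n)^T
    F_true = C_nb @ np.array([0.0, 0.0, -g])   # Eq. (2), NED n-frame
    B_a = F_hat - F_true                       # Eq. (3)
    B_g = gyro_stationary.mean(axis=0)         # Eq. (7)
    return B_a, B_g
```

With a level attitude, only the z-accelerometer sees gravity, so any residual after subtracting $[0\ 0\ {-g}]^T$ is the turn-on bias.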
Fig. 1. Effect of turn-on bias removal from raw IMU readings.
2.3 Free-inertial navigation
Inertial navigation mechanization equations are required to
get position, velocity and attitude from input IMU data. Here
we will first describe the attitude calculation and propagation
equations. From the propagated attitude, the roll, pitch and
yaw angles are obtained which will be used to form the trans-
formation matrix from the body to navigation frames, and vice
versa. This transformation matrix is used to transform the
body-sensed acceleration to the navigation frame. After com-
pensating for the proper gravity vector, we can mathematically
integrate this acceleration to get velocity, and further integra-
tion gives position in the navigation frame [5]. This is known
as free-inertial or stand-alone navigation. Navigation equa-
tions are presented in a local tangent frame fixed to the earth
and are expressed as a UTM frame. All navigation data from
GPS, INS and the reference navigation system (Span-SE) has
been transformed in the UTM frame for fusion and compari-
son.
2.3.1 Attitude propagation
There are three commonly used methods for expressing the
orientation of a coordinate system (3D frame) with respect to
another 3D frame. These three methods are:
1) Euler angles
2) Direction cosines
3) Quaternion
Among them the Euler angle method is most frequently
used. The common designations of the Euler angles are roll
(φ), pitch (θ), and yaw (ψ). Its strengths lie in a relatively sim-
ple mechanization and easily interpretable physical signifi-
cance. Its negative attribute is the mathematical singularity
that exists when the pitch angle θ approaches 90 deg. But for
most of the practical platforms in use, this poses no real threat,
especially for land vehicles. That is why we have adopted
Euler angles based attitude computation in our work. This
method is explained here in detail.
The Euler angle can represent the attitude of a platform
from three successive rotations of the navigation frame to
body frame. However the sequence of rotations is unique to
achieve the given final attitude, so the order of rotation should
be defined first [3]. In this work, the navigation frame is rotat-
ed and fitted into the body frame in the sequences of yaw (ψ),
pitch (θ), and roll (φ). The transformation of gyro sensed an-
gular velocity to the Euler angular velocity can be achieved by
the following equation:

$$\dot{\boldsymbol{\Psi}}_k = \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix}_k = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix} \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} \qquad (8)$$

$$\dot{\boldsymbol{\Psi}}_k = \mathbf{E}_b^n\,\boldsymbol{\omega}_{ib}^b \qquad (9)$$

where $\boldsymbol{\omega}_{ib}^b = [\omega_x \ \omega_y \ \omega_z]^T$ is the gyro output. Eq. (9) relates the body angular rates to the rates of the Euler angles in a nonlinear form. Note that the solution requires integration of this first-order nonlinear equation. The solution in discrete time can be obtained by numerical integration, i.e.,

$$\boldsymbol{\Psi}_{k+1} = \boldsymbol{\Psi}_k + \int_{t_k}^{t_{k+1}} \dot{\boldsymbol{\Psi}}\,dt \qquad (10)$$

$$\boldsymbol{\Psi}_{k+1} = \boldsymbol{\Psi}_k + \dot{\boldsymbol{\Psi}}_k\,\Delta t \qquad (11)$$

Here $\Delta t$ is the sampling time. This provides the attitude update in
terms of the Euler angles, i.e. roll, pitch, and yaw respectively.
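The attitude propagation of Eqs. (8)-(11) amounts to a first-order numerical integration of the Euler angle rates; a minimal sketch (function names are illustrative, NumPy assumed):

```python
import numpy as np

def euler_rates_matrix(roll, pitch):
    """E_b^n of Eq. (8): maps body angular rates to Euler angle rates.
    Singular at pitch = +/-90 deg, which is not reached by a land vehicle."""
    sr, cr = np.sin(roll), np.cos(roll)
    tp, cp = np.tan(pitch), np.cos(pitch)
    return np.array([
        [1.0, sr * tp, cr * tp],
        [0.0, cr,      -sr],
        [0.0, sr / cp, cr / cp],
    ])

def propagate_attitude(psi, omega_ib, dt):
    """Eqs. (9)-(11): one Euler-integration step of the attitude vector
    psi = [roll, pitch, yaw] using the gyro output omega_ib."""
    psi_dot = euler_rates_matrix(psi[0], psi[1]) @ omega_ib  # Eq. (9)
    return psi + psi_dot * dt                                # Eq. (11)
```

At the 100 Hz IMU rate used in this work, `dt` would be 0.01 s.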
2.3.2 Position and velocity propagation
In navigation along the earth-fixed navigation frame, the
platform’s position and velocity are referenced with respect to
those axes. The general equation describing the motion of a
vehicle in a navigation frame is given by Eq. (12). Detailed
derivations are given in [7, 8].
$$\ddot{\mathbf{P}}^n = \mathbf{f}^n - 2[\boldsymbol{\omega}_{in}^n \times]\,\dot{\mathbf{P}}^n + \{\mathbf{g}^n - [\boldsymbol{\omega}_{in}^n \times][\boldsymbol{\omega}_{in}^n \times]\,\mathbf{P}^n\} \qquad (12)$$
where the left hand side gives the acceleration of the vehicle in
the navigation frame and the right hand side of the equation
comprises a number of terms. The first term is the specific
force expressed in the navigation frame; the second term is the
Coriolis effect due to the rotational motion of the frame. The
Coriolis effect can be significant for fast moving platforms but
it may be ignored for slow moving vehicles such as the UGV
used in this work. The last term is the effective local gravita-
tional acceleration, which is also known as the plumb-bob
gravity. Writing $\dot{\mathbf{P}}^n = \mathbf{v}^n$ and using the effective local gravity, Eq. (12) becomes:

$$\dot{\mathbf{v}}^n = \mathbf{C}_b^n\,\mathbf{f}^b - 2[\boldsymbol{\omega}_{in}^n \times]\,\mathbf{v}^n + \mathbf{g}_l^n \qquad (13)$$
For local-level earth fixed frame and slow moving vehicles,
the second term, i.e. Coriolis acceleration is often ignored,
$$\dot{\mathbf{v}}^n = \mathbf{f}^n + \mathbf{g}_l^n = \mathbf{C}_b^n\,\mathbf{f}^b + \mathbf{g}_l^n \qquad (14)$$
where the effective local gravity term $\mathbf{g}_l^n$ is a function of latitude and height above the ground. Position and velocity updates can be written in discrete form, assuming constant acceleration, as follows:

$$\mathbf{v}_{k+1}^n = \mathbf{v}_k^n + \mathbf{f}^n\,\Delta t \qquad (15)$$

$$\mathbf{P}_{k+1}^n = \mathbf{P}_k^n + \mathbf{v}_k^n\,\Delta t + 0.5\,\mathbf{f}^n\,\Delta t^2 \qquad (16)$$
where $\mathbf{f}^n$ is the gravity-compensated accelerometer reading in the navigation frame. This provides the updated velocity and position in the navigation frame. In a nutshell, the discrete-time inertial navigation system model can be written as Eq. (17) by combining Eq. (11) and Eqs. (15)-(16), which give the propagation of attitude, velocity and position respectively, from time step k to k+1, where $\mathbf{u}_k$ is the system input (IMU data) corrupted with the white Gaussian noise vector $\mathbf{w}_k$.

$$\mathbf{x}_{k+1} = \mathbf{f}(\mathbf{x}_k, \mathbf{u}_k, \mathbf{w}_k) \qquad (17)$$
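The velocity and position propagation of Eqs. (14)-(16) can be sketched as follows (an illustrative NumPy sketch with NED axes assumed, so the local gravity vector points down; Coriolis terms are dropped as in Eq. (14)):

```python
import numpy as np

def propagate_vel_pos(p, v, f_b, C_bn, dt, g_l=np.array([0.0, 0.0, 9.81])):
    """Eqs. (14)-(16): rotate the specific force to the n-frame, compensate
    gravity, then integrate assuming constant acceleration over dt."""
    f_n = C_bn @ f_b + g_l                     # Eq. (14), gravity-compensated
    v_next = v + f_n * dt                      # Eq. (15)
    p_next = p + v * dt + 0.5 * f_n * dt**2    # Eq. (16)
    return p_next, v_next
```

For a level, stationary platform the accelerometer reads $[0\ 0\ {-9.81}]^T$, the gravity compensation cancels it, and velocity and position stay constant apart from sensor errors.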
3. Non-holonomic constraint Extended Kalman filter
(NHCF) design
Ground vehicles cannot move in a plane perpendicular to
the forward direction [4]. This is the well-known NHC for car-like
vehicles. This inherent property of ground vehicles can be
exploited to improve the quality of their navigation. For this
purpose we have designed a NHC filter which is a total state
(direct-form) Kalman filter because NHC is effectively a vir-
tual sensor and hence can be used at as high a rate as the IMU
update rate (100Hz).
3.1 Process model
The process model of the NHC filter consists of INS mech-
anization equations. We choose position, velocity and attitude
as state vectors for the NHC filter and write the discrete form
of the process model as given in Eq. (17). This is a nonlinear
dynamic model. We have to linearize it for use in a Kalman
filter format [3]. The Jacobian matrices after linearization of the above nonlinear process model are calculated as:

$$\boldsymbol{\Phi}_{k} = \left.\frac{\partial \mathbf{f}(\cdot)}{\partial \mathbf{X}}\right|_{\hat{\mathbf{x}}_{k}} = \begin{bmatrix} \mathbf{I}_3 & \mathbf{I}_3\,\Delta t & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{I}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{I}_3 + \dfrac{\partial(\mathbf{E}_b^n\,\boldsymbol{\omega}_{ib}^b)}{\partial \boldsymbol{\Psi}}\,\Delta t \end{bmatrix} \qquad (18)$$

with

$$\frac{\partial(\mathbf{E}_b^n\,\boldsymbol{\omega}_{ib}^b)}{\partial \boldsymbol{\Psi}} = \begin{bmatrix} (c_\phi\,\omega_y - s_\phi\,\omega_z)\,t_\theta & (s_\phi\,\omega_y + c_\phi\,\omega_z)/c_\theta^2 & 0 \\ -s_\phi\,\omega_y - c_\phi\,\omega_z & 0 & 0 \\ (c_\phi\,\omega_y - s_\phi\,\omega_z)/c_\theta & (s_\phi\,\omega_y + c_\phi\,\omega_z)\,s_\theta/c_\theta^2 & 0 \end{bmatrix}$$

where $s_{(\cdot)} = \sin(\cdot)$, $c_{(\cdot)} = \cos(\cdot)$, $t_{(\cdot)} = \tan(\cdot)$, and

$$\mathbf{M}_{k} = \left.\frac{\partial \mathbf{f}(\cdot)}{\partial \mathbf{w}}\right|_{\hat{\mathbf{x}}_{k}} = \begin{bmatrix} 0.5\,\mathbf{C}_b^n\,\Delta t^2 & \mathbf{0}_3 \\ \mathbf{C}_b^n\,\Delta t & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{E}_b^n\,\Delta t \end{bmatrix} \qquad (19)$$

Matrix $\mathbf{E}_b^n$ is defined in Eq. (8). These Jacobian matrices are used in the propagation stage of the constraint EKF.
3.2 Measurement model of NHCF
We get the forward velocity $v_x$ from the wheel encoder in the b-frame. Since the cross-track velocity $v_y$ and off-the-ground velocity $v_z$ are zero under the NHC, this information can be applied to update the NHC filter. At each sampling time of the
inertial unit, a constraint observation is made, assuming the
velocity of the vehicle remains constant during IMU sampling
time, which is a valid assumption due to the slow dynamics of
the vehicle. Hence this velocity observation can occur at the
sampling rate of the IMU. To transform this velocity vector to the n-frame, we multiply the body-frame velocities by the transformation matrix $\mathbf{C}_b^n$, i.e.

$$\mathbf{z}^n = \mathbf{C}_b^n\,\mathbf{z}^b = \mathbf{C}_b^n \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = \mathbf{C}_b^n \begin{bmatrix} v_x \\ 0 \\ 0 \end{bmatrix} = v_x \begin{bmatrix} \cos\theta\cos\psi \\ \cos\theta\sin\psi \\ -\sin\theta \end{bmatrix} \qquad (20)$$
The measurement noise covariance matrix should also be transformed from the b-frame to the n-frame:

$$\mathbf{R}_k^n = \mathbf{C}_b^n\,\mathbf{R}_k^b\,(\mathbf{C}_b^n)^T \qquad (21)$$

The corresponding measurement matrix $\mathbf{C}_k$ is:

$$\mathbf{C}_k = [\,\mathbf{0}_3 \ \ \mathbf{I}_3 \ \ \mathbf{0}_3\,] \qquad (22)$$
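Forming the NHC observation of Eqs. (20)-(21) from the encoder speed can be sketched as below (an illustrative NumPy sketch; the function name is not from the paper):

```python
import numpy as np

def nhc_observation(v_x, C_bn, R_b):
    """Eqs. (20)-(21): lift the encoder forward speed to an n-frame velocity
    observation under the non-holonomic constraints v_y = v_z = 0, and rotate
    the b-frame measurement noise covariance into the n-frame."""
    z_b = np.array([v_x, 0.0, 0.0])   # NHC: no cross-track / vertical motion
    z_n = C_bn @ z_b                  # Eq. (20)
    R_n = C_bn @ R_b @ C_bn.T         # Eq. (21)
    return z_n, R_n
```

When the encoder reports `v_x = 0`, the same call returns the zero observation used as a zero-velocity update.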
One thing to emphasize is that when the vehicle stops, i.e., $v_x = 0$, the measurement vector $\mathbf{z}^n$ becomes zero as well and is applied as a zero-velocity update to the NHCF. Hence this single filter works both in the stationary and moving states of the vehicle at a high update rate. We analyzed the effects on navigation improvement of including these constraints. A brief statement of the NHCF algorithm is given below [9]:
i) Initialize the filter as follows:

$$\hat{\mathbf{x}}_0^+ = E(\mathbf{x}_0) \qquad (23)$$

$$\mathbf{P}_0^+ = E[(\mathbf{x}_0 - \hat{\mathbf{x}}_0^+)(\mathbf{x}_0 - \hat{\mathbf{x}}_0^+)^T] \qquad (24)$$

For k = 1, 2, ..., N, perform the following:

ii) Compute the partial derivative matrices:

$$\boldsymbol{\Phi}_{k-1} = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k-1}^+} \qquad (25)$$

$$\mathbf{M}_{k-1} = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{w}}\right|_{\hat{\mathbf{x}}_{k-1}^+} \qquad (26)$$

iii) Perform the propagation step:

$$\hat{\mathbf{x}}_k^- = \mathbf{f}(\hat{\mathbf{x}}_{k-1}^+, \mathbf{u}_{k-1}, 0) \qquad (27)$$

$$\mathbf{P}_k^- = \boldsymbol{\Phi}_{k-1}\mathbf{P}_{k-1}^+\boldsymbol{\Phi}_{k-1}^T + \mathbf{M}_{k-1}\mathbf{Q}_{k-1}\mathbf{M}_{k-1}^T \qquad (28)$$

iv) Perform the measurement update step:

$$\mathbf{K}_k = \mathbf{P}_k^-\mathbf{C}_k^T\,(\mathbf{C}_k\mathbf{P}_k^-\mathbf{C}_k^T + \mathbf{R}_k)^{-1} \qquad (29)$$

$$\hat{\mathbf{x}}_k^+ = \hat{\mathbf{x}}_k^- + \mathbf{K}_k\,[\mathbf{z}_k^n - \mathbf{h}(\hat{\mathbf{x}}_k^-, 0)] \qquad (30)$$

$$\mathbf{P}_k^+ = (\mathbf{I} - \mathbf{K}_k\mathbf{C}_k)\,\mathbf{P}_k^- \qquad (31)$$
Note that the Jacobian matrices $\boldsymbol{\Phi}_k, \mathbf{M}_k$ are computed in each iteration. This causes a computation burden at high rates due to the trigonometric quantities involved in the Jacobian matrices.
One way to alleviate this burden is to reduce the size of the
state vector from nine to six, i.e. we only estimate position and
velocity components in the constraint EKF and the rest of the
quantities of interest are estimated in a centralized EKF, de-
signed in the next section.
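One NHCF iteration, steps (25)-(31) above, can be sketched as follows. This is a hedged sketch, not the authors' code: the mechanization `f` of Eq. (17) and the Jacobian functions of Eqs. (18)-(19) are passed in as callables, and the measurement model is taken as linear, h(x) = Cx, with C from Eq. (22).

```python
import numpy as np

def nhcf_step(x, P, u, dt, Q, z_n, R_n, f, jac_x, jac_w, C=None):
    """One constraint-EKF iteration over a 9-state [P, v, Psi] vector."""
    if C is None:  # Eq. (22): select the velocity components
        C = np.hstack([np.zeros((3, 3)), np.eye(3), np.zeros((3, 3))])
    Phi = jac_x(x, u, dt)                       # Eq. (25)
    M = jac_w(x, u, dt)                         # Eq. (26)
    x_pred = f(x, u, dt)                        # Eq. (27)
    P_pred = Phi @ P @ Phi.T + M @ Q @ M.T      # Eq. (28)
    S = C @ P_pred @ C.T + R_n
    K = P_pred @ C.T @ np.linalg.inv(S)         # Eq. (29)
    x_new = x_pred + K @ (z_n - C @ x_pred)     # Eq. (30) with h(x) = Cx
    P_new = (np.eye(len(x)) - K @ C) @ P_pred   # Eq. (31)
    return x_new, P_new
```

The same step serves as the ZVU filter when `z_n` is the zero vector.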
4. Multisensor data fusion: Centralized Extended
Kalman filter design
It is intuitively known that no single sensor can sense all states/parameters of a given system. We need multisensor inputs to obtain full, uninterrupted state information about a platform and/or its environment. The Kalman filter is a well-known multisensor data fusion algorithm which produces optimal estimates of the desired states/parameters under certain assumptions, e.g., a linear dynamic model and corrupting noises being white Gaussian. In our application, we linearize the dynamic model of INS errors and use the INS/GPS reported errors in states as observations to the centralized EKF [3, 10].
Fig. 2. Algorithm flow chart for the multisensor integrated navigation system.
We have designed the CEKF with and without exploiting NHC to show the effect on overall navigation accuracy. In the CEKF, all sensor measurements are fed to a single estimation algorithm which processes them to produce the
desired state estimates. Note that we have imposed a valida-
tion gate which passes only those measurements qualifying a
certain accuracy level. For this we have used a normalized
innovation covariance, defined as [11]:

$$\lambda_k = \delta\mathbf{z}_k^T\,\mathbf{S}_k^{-1}\,\delta\mathbf{z}_k\,; \qquad \mathbf{S}_k = \mathbf{H}_k\mathbf{P}_k^-\mathbf{H}_k^T + \mathbf{R}_k$$

where $\mathbf{H}_k$ is the observation matrix and $\mathbf{R}_k$ is the noise covariance matrix. If $\lambda_k$ is below a threshold, then this particular sensor measurement is allowed to update the CEKF. The complete
schematic of the CEKF is shown in Fig. 3. The state vector of the filter comprises 15 INS error states: position errors, velocity errors, attitude errors and bias errors of the inertial sensors.
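The normalized-innovation validation gate can be sketched as below. The chi-square 95% threshold for a 3-DOF measurement (7.81) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def gate_measurement(dz, H, P_pred, R, threshold=7.81):
    """Normalized innovation test [11]: accept a measurement only if
    lambda_k = dz^T S^-1 dz < threshold, with S = H P^- H^T + R."""
    S = H @ P_pred @ H.T + R
    lam = float(dz @ np.linalg.inv(S) @ dz)
    return lam < threshold, lam
```

Measurements that fail the gate (e.g. multipath-corrupted GPS fixes near tall buildings) are simply skipped for that cycle.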
4.1 Inertial navigation system error dynamics model
To design a Kalman filter for an integrated navigation sys-
tem, the INS error model is first derived, and position, velocity
and attitude error states are defined. Observations to KF in this
integration scheme are the difference between posi-
tion/velocity provided by INS mechanization equations and
the aiding sensor. If we include bias errors of the IMU, mod-
eled as a first order Gauss-Markov process, in the error state
model, the overall continuous time error-state model will be
given as follows:

$$\delta\dot{\mathbf{P}} = \delta\mathbf{V} \qquad (32)$$

$$\delta\dot{\mathbf{V}} = [\mathbf{C}_b^n\mathbf{f}^b \times]\,\delta\boldsymbol{\psi} + \mathbf{C}_b^n\,\mathbf{b}_a + \mathbf{C}_b^n\,\delta\mathbf{f}^b \qquad (33)$$

$$\delta\dot{\boldsymbol{\psi}} = \mathbf{C}_b^n\,\mathbf{b}_g + \mathbf{C}_b^n\,\delta\boldsymbol{\omega}_{ib}^b \qquad (34)$$

$$\dot{\mathbf{b}}_a = -\beta_a\,\mathbf{b}_a + \boldsymbol{\omega}_a \qquad (35)$$

$$\dot{\mathbf{b}}_g = -\beta_g\,\mathbf{b}_g + \boldsymbol{\omega}_g \qquad (36)$$

The discrete-time counterpart becomes:

$$\delta\mathbf{x}_k = \mathbf{F}_{k-1}\,\delta\mathbf{x}_{k-1} + \mathbf{G}_{k-1}\,\boldsymbol{\eta}_{k-1} \qquad (37)$$

where $\boldsymbol{\eta}_k$ is the process noise with noise strength matrix $\mathbf{Q}_k$ and $\mathbf{F}_k, \mathbf{G}_k$ are the discrete counterparts of the continuous-time state transition matrices, obtained as follows:

$$\mathbf{F}_k = \mathbf{I} + \mathbf{F}_c\,\Delta t \qquad (38)$$

$$\mathbf{G}_k = \mathbf{G}_c\,\Delta t \qquad (39)$$
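The first-order discretization of Eqs. (38)-(39) is a one-liner; a minimal sketch (illustrative names):

```python
import numpy as np

def discretize(F_c, G_c, dt):
    """Eqs. (38)-(39): first-order discretization of the continuous-time
    error-state model, F_k = I + F_c*dt, G_k = G_c*dt."""
    F_k = np.eye(F_c.shape[0]) + F_c * dt
    G_k = G_c * dt
    return F_k, G_k
```

This approximation is adequate because the error dynamics are slow relative to the filter update interval.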
4.2 Error observation model
In the complementary filter structure, the filter input is de-
fined as the observed error, or the observation difference,
$\delta\mathbf{z}_k$, which is related to the error state vector as:

$$\delta\mathbf{z}_k = \mathbf{H}_k\,\delta\mathbf{x}_k + \mathbf{v}_k \qquad (40)$$
where, kH is the observation matrix and kv is the observa-
tion noise with noise strength matrix kR . Observation error
kδz is formed by subtracting the position and velocity of the
aiding sensor from the INS predicted position and velocity.
From the error state model defined in Eq. (37) and the obser-
vation model in Eq. (40), the filter’s estimation process will
proceed by performing a prediction and update cycle, as fol-
lows.
Fig. 3. Integrated INS/GPS/NHCF system block diagram, where $\mathbf{C}_b^n$ is the transformation matrix, $\boldsymbol{\Omega}_{ie}^e$ is the earth rotation rate, $\mathbf{g}$ the gravity vector, $\mathbf{f}^b, \boldsymbol{\omega}_{ib}^b$ are the accelerometer and gyro data, and P, V, A are the position, velocity and attitude vectors respectively.
4.3 Prediction
The error state vector and its covariance are propagated us-
ing the state transition model and the process noise matrix:

$$\delta\mathbf{X}_k^- = \mathbf{F}_{k-1}\,\delta\mathbf{X}_{k-1}^+ = \mathbf{0} \qquad (41)$$

$$\mathbf{P}_k^- = \mathbf{F}_{k-1}\mathbf{P}_{k-1}^+\mathbf{F}_{k-1}^T + \mathbf{G}_{k-1}\mathbf{Q}_{k-1}\mathbf{G}_{k-1}^T \qquad (42)$$

The predicted error estimate is zero given the assumption of zero-mean Gaussian process noise [3]. The estimate uncertainty is provided by the covariance matrix propagation $\mathbf{P}_k^-$.
4.4 Update
When an observation occurs, the error state vector and its
covariance are updated according to:

$$\delta\mathbf{X}_k^+ = \mathbf{K}_k\,\delta\mathbf{z}_k \qquad (43)$$

where $\mathbf{K}_k$ is the Kalman gain and $\delta\mathbf{z}_k$ is given in Eq. (40) above. The Kalman gain is calculated as:

$$\mathbf{K}_k = \mathbf{P}_k^-\mathbf{H}_k^T\,[\mathbf{H}_k\mathbf{P}_k^-\mathbf{H}_k^T + \mathbf{R}_k]^{-1} \qquad (44)$$

The error covariance (i.e. the uncertainty in the estimates) is updated as follows:

$$\mathbf{P}_k^+ = [\mathbf{I} - \mathbf{K}_k\mathbf{H}_k]\,\mathbf{P}_k^-\,[\mathbf{I} - \mathbf{K}_k\mathbf{H}_k]^T + \mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^T \qquad (45)$$
4.5 Feedback error correction
Once the observation update has been processed successful-
ly, the estimated errors of the INS are sent to correct the exter-
nal INS loop [3]. The corrected position and velocity ($\mathbf{P}_k^{n,c}, \mathbf{v}_k^{n,c}$) are obtained as:

$$\mathbf{P}_k^{n,c} = \mathbf{P}_k^n - \delta\mathbf{P}_k^n \qquad (46)$$

$$\mathbf{v}_k^{n,c} = \mathbf{v}_k^n - \delta\mathbf{v}_k^n \qquad (47)$$

whereas correction of the direction cosine matrix (DCM) is a little tricky and is obtained by the following equation:

$$\mathbf{C}_{b,k}^{n,c} = (\mathbf{I} - [\delta\boldsymbol{\psi}_k \times])\,\mathbf{C}_{b,k}^n \qquad (48)$$

Here $[\delta\boldsymbol{\psi}_k \times]$ is the skew-symmetric matrix of the estimated attitude error vector. From the corrected direction cosine matrix $\mathbf{C}_{b,k}^{n,c}$, the attitude angles may easily be derived.
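The feedback correction of Eqs. (46)-(48) can be sketched as below (an illustrative NumPy sketch; names are not from the paper):

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a x], so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def correct_ins(p, v, C_bn, dp, dv, dpsi):
    """Eqs. (46)-(48): feed the estimated errors back into the INS loop."""
    p_c = p - dp                                 # Eq. (46)
    v_c = v - dv                                 # Eq. (47)
    C_c = (np.eye(3) - skew(dpsi)) @ C_bn        # Eq. (48), small-angle DCM fix
    return p_c, v_c, C_c
```

The small-angle form of Eq. (48) leaves the DCM only approximately orthonormal, so in practice the corrected matrix is typically re-orthonormalized periodically.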
5. Experimental setup and methodology
For the experimental setup, we have used the UGV shown
in Fig. 4. It is a KYMCO MXU 500i, an off-road rover modi-
fied to be remotely controlled as well as to have autonomous
mode capabilities. On top of it, a navigation system is installed.
The integrated navigation system block diagram and corre-
sponding algorithm flow chart are depicted in Fig. 2 and Fig. 3
respectively. The navigation system consists of two pro-
cessing units, named signal processing units (SPU). SPU-A
consists of an IMU HG1700 (100Hz), Garmin GPS (10Hz)
and wheel encoder (50Hz). SPU-B consists of an IMU iMAR-
FSAS (200Hz), NovAtel Span-SE™ (20Hz), and NovAtel
GPS (5Hz). SPU-B serves as the reference navigation system.
The wheel encoder is installed on the right rear wheel of the
rover to provide the longitudinal distance and forward velocity
of the vehicle. In total, we are using three sensors (IMU, GPS,
wheel encoder) and two motion constraints (NHC and ZVU)
as virtual sensors in this work. Tests conducted for evaluation
of the proposed algorithm were divided into three groups:
Test#1: Stationary for about 8 minutes.
Test#2: Clockwise (CW) run, three loops, for about 350 m.
Test#3: Counter-clockwise (CCW) run, two loops, for about
235 m.
Fig. 4. Off-road rover modified as a UGV, carrying the HG1700 and iMAR IMUs, Garmin and NovAtel GPS receivers, SPAN-SE, radio Tx/Rx and the navigation processor.
The vehicle was run on a closed route, about 25 m x 45 m,
once clockwise and then counter-clockwise to test the
stationary condition was also performed in order to check the
performance of NHCF as a zero velocity updater while the
vehicle was stationary for about eight minutes. The test area is
surrounded by tall buildings so GPS validity was checked first
before its measurements were incorporated into the CEKF. In
moving tests, the vehicle is kept stationary initially for 10 - 20
seconds in order to calculate its initial attitude through a
leveling process and gyro compassing, as mentioned in
Section 2.2. The ATV also takes short (~5sec) stops while
running in order to apply zero velocity updates in NHCF.
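Such stop intervals can be detected automatically by thresholding the raw IMU output over a short window. A hypothetical sketch follows; the threshold values are illustrative, not taken from the paper:

```python
import numpy as np

def is_stationary(accel_win, gyro_win,
                  acc_std_thresh=0.05, gyro_norm_thresh=0.01):
    """Declare the vehicle stationary when both the accelerometer
    variation and the gyro magnitude over a window fall below thresholds.

    accel_win, gyro_win: (N, 3) arrays of specific force (m/s^2) and
    angular rate (rad/s) samples. Thresholds are illustrative only.
    """
    acc_quiet = np.linalg.norm(np.std(accel_win, axis=0)) < acc_std_thresh
    gyro_quiet = np.mean(np.linalg.norm(gyro_win, axis=1)) < gyro_norm_thresh
    return acc_quiet and gyro_quiet
```

When this flag is true, the ZVU observation (zero 3D velocity) can be fed to the filter.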
6. Results and discussion
To evaluate the performance of our proposed algorithm, we
have opted for the root mean square error (RMSE) as our met-
ric. RMSE is a popular method for scoring an algorithm be-
cause this metric tends to penalize larger errors more severely
than other metrics [12]. Formally, RMSE is defined as:
RMSE = sqrt( (1/N) Σ_{i=1}^{N} error_i² );   error_i = true_i − estimate_i
This expression is similar to the standard deviation widely used in uncertainty measures. The biggest advantage of RMSE is that it provides a quadratic loss function; it is also a measure of the uncertainty in estimation and is sensitive to outliers [13].
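For illustration, the metric can be computed as follows; the helper function and the sample numbers in the usage note are ours, not from the tests:

```python
import numpy as np

def rmse(true_vals, est_vals):
    """Root mean square error between true and estimated sequences."""
    err = np.asarray(true_vals, dtype=float) - np.asarray(est_vals, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))
```

For example, with true values [0, 0] and estimates [3, 4], the RMSE is sqrt(12.5) ≈ 3.54.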
First of all we calibrated the IMU raw data with the algo-
rithm as explained in Section 2.2. For calibration we used
about 20 seconds of stationary data and calculated the turn-on
bias of the accelerometer and gyro. The effect of removing
this bias is shown in Fig. 1, for roll, pitch and yaw calculations,
for about 15 minutes of stationary data. It is evident that sub-
traction of this bias data is very important to achieve accurate
navigation.
Next we have shown the improvement in navigation accu-
racy when using NHC and ZVU, as given in Tables I and II.
We have reported the average RMS error on position, velocity
and attitude for all three types of tests, i.e., the stationary con-
dition (Test#1), CW (Test#2) and CCW (Test#3) in these
tables. In Table I, we compare navigation quantities estimated
using IMU only, and quantities using IMU aided with NHC
and ZVU. The percentage improvement by exploiting these
constraints is evident in the last three columns. Note that we are using a very accurate gyro (tactical-grade RLG, with drift on the order of <2°/hr); hence these constraints have no significant effect on attitude during this short testing period. We expect to see their positive effect on attitude estimation in longer-term testing, for example, of one hour.
Table II shows the percentage improvement in navigation
accuracy using IMU aided with GPS plus the proposed con-
straints (i.e. NHC and ZVU). Again there is no improvement
in attitude under these constraints, as also confirmed by the
cumulative error distribution function (CDFs) plots in Fig. 15.
In the stationary test, only the ZVU constraint is applied be-
cause NHCF acts as a zero-velocity updater in the stationary
case. Similarly, when the vehicle stops during motion in the
other two tests, the ZVU constraint is applied as well. Thus,
implementing NHCF serves two purposes: during motion it applies the NHC, and while stationary it applies ZVUs.
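This dual role can be sketched as a simple measurement selector. In the hypothetical illustration below, v_wheel denotes the encoder-derived forward speed, and the returned vector is the body-frame velocity observation fed to the EKF update:

```python
import numpy as np

def nhcf_measurement(v_wheel, stationary):
    """Return the body-frame velocity observation for the EKF update.

    Moving: NHC fixes the lateral and vertical velocities to zero and the
    wheel encoder supplies the forward component -> [v_wheel, 0, 0].
    Stationary: the full 3D velocity is observed as zero (ZVU).
    """
    if stationary:
        return np.zeros(3)                    # ZVU: zero 3D velocity
    return np.array([v_wheel, 0.0, 0.0])      # NHC + encoder forward speed
```

Either branch makes the full 3D velocity vector observable, which is what bounds the INS error growth.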
Note that in Table II, although we are using a very accurate
GPS to aid IMU, due to experimental conditions, namely, a
test site near tall buildings, the GPS accuracy degrades very
quickly. The horizontal dilution of precision (HDOP) of the GPS in this experiment was very poor because the test was conducted near tall buildings to mimic a GPS-denied area. The use of other aiding sensors becomes inevitable in this kind of situation. Readily available measurement updates are the vehicle's inherent constraints, i.e., NHC and ZVUs, which have been exploited in this work. An HDOP value of 5 is acceptable for aiding the IMU; above this threshold, GPS data is no longer acceptable [14]. Hence, due to poor HDOP values, the GPS-aided IMU performed poorly compared to using these constraints.
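Such HDOP gating reduces to a simple validity check before each GPS update. A sketch follows, with a hypothetical fix representation of our own choosing:

```python
def filter_gps_fixes(fixes, hdop_limit=5.0):
    """Keep only GPS fixes whose HDOP is acceptable for aiding the IMU.

    fixes: iterable of (position, hdop) tuples; the representation is
    hypothetical, for illustration. Fixes with HDOP above the limit are
    discarded rather than incorporated into the filter update.
    """
    return [pos for pos, hdop in fixes if hdop <= hdop_limit]
```

In the actual system the same check would run per epoch, skipping the CEKF's GPS measurement update whenever the gate fails.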
Table I. RMS error comparison of IMU only and IMU aided with NHCF.

        IMU only                    IMU+NHC+ZVU                 Improvement (%)
Test    **Pe(m)  Ve(m/s)  Ae(deg)   Pe(m)  Ve(m/s)  Ae(deg)     Pimp  Vimp  Aimp
#1      125.9    1.1      2.1       0.2    0.1      2.1         99    99    Nil
#2      66.9     1.2      3.2       2.2    0.1      3.2         96    88    Nil
#3      29.3     0.7      3.7       1.5    0.2      3.7         94    78    Nil
Table II. RMS error comparison of integrated IMU/GPS and IMU/GPS aided with NHCF.

        IMU+GPS                     IMU+GPS+NHC+ZVU             Improvement (%)
Test    Pe(m)  Ve(m/s)  Ae(deg)     Pe(m)  Ve(m/s)  Ae(deg)     Pimp  Vimp  Aimp
#1      0.2    0.1      2.1         0.2    0.1      2.1         14    50    Nil
#2      16.8   0.5      1.4         0.9    0.1      1.4         94    83    Nil
#3      6.3    0.4      1.5         1.2    0.1      1.5         80    68    Nil

**Pe, Ve, Ae: average RMS position, velocity and attitude errors, respectively. Pimp, Vimp, Aimp: % improvement in position, velocity and attitude, respectively.
Figs. 5-8 show the 2D trajectory plots for all three tests and compare the multisensor integration visually. In Fig. 5 we show
the results of the stationary data (Test#1). The IMU-only tra-
jectory diverges quickly but when aided with NHCF, this di-
vergence is reduced drastically, and becomes even more accu-
rate than the other two plots, i.e., IMU/GPS, IMU/GPS/NHC.
In the stationary state, ZVU is applied continuously, which
drastically bounds the growing errors.
Fig. 6 is the 2D plot of the first loop of Test#2, in which we can see that initially all trajectories are good. However, the IMU-only trajectory starts diverging at the end of the first loop, which is expected given the IMU's inherent errors. But when NHCF is applied, the trajectory remains close to the reference. In the last (third) loop, given in Fig. 7, the IMU-only and GPS-aided IMU trajectories diverge, but the IMU aided with NHC and ZVU remains good even at the end of the third loop. This is the benefit we get from applying these motion constraints. Fig. 8 is the first-loop trajectory of Test#3. The IMU-only and IMU/GPS trajectories start diverging at the end of the first loop; the second loop showed even more divergence, for the reasons already explained.
Fig. 5. Stationary Test#1: 2D trajectory in UTM frame.
Fig. 6. CW Run Test#2: 2D trajectory in UTM frame (first loop).
Fig. 7. CW Run Test#2: 2D trajectory in UTM frame (third loop).
Fig. 8. CCW Run Test#3: 2D trajectory in UTM frame (first loop).
Fig. 9. Cumulative distribution function of x-position error in Test#1 (curves: imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 10. Cumulative distribution function of y-position error in Test#1 (curves: imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 11. Cumulative distribution function of x-position error in Test#2 (curves: imu only, imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 12. Cumulative distribution function of y-position error in Test#2 (curves: imu only, imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 13. Cumulative distribution function of x-position error in Test#3 (curves: imu only, imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 14. Cumulative distribution function of y-position error in Test#3 (curves: imu only, imu+NHC, imu+GPS, imu+GPS+NHC).
Fig. 15. Cumulative distribution function of attitude error in Test#2.
Figs. 9-14 show the cumulative error distribution functions for all three tests. For Test#1, the x-position and y-position error CDFs are given in Fig. 9 and Fig. 10. The IMU-only error is too large to display on these graphs. It is evident that by applying NHC and ZVU, the error remains below 0.35 m in the x-axis and 0.15 m in the y-axis 90% of the time.
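Percentile figures of this kind can be read directly off the empirical CDF of the error samples. A sketch with made-up data follows; the function name is ours:

```python
import numpy as np

def error_percentile(errors, p=90.0):
    """Return the error magnitude not exceeded p% of the time,
    i.e. the abscissa at which the empirical CDF reaches p/100."""
    return float(np.percentile(np.abs(errors), p))
```

Applied to the logged position-error time series, this gives the "90% of the time the error is below X" statements quoted above.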
Fig. 11 and Fig. 12 display the CDFs for Test#2. Again, we can see from these plots that 90% of the time the errors in x-position and y-position are less than 2 m and 1.5 m respectively when employing NHC and ZVU. The divergence of the IMU-only and IMU/GPS solutions is clear from these graphs.
In Fig. 13 and Fig. 14, we have drawn CDFs of x-position
error and y-position error for Test#3. Errors in x-position and
in y-position are 3m and 2m respectively for IMU when aided
with NHC and ZVU.
Hence, when we use NHC along with ZVUs, there is an overall improvement in navigation accuracy in both the static and dynamic cases, owing to the high-update-rate velocity observation vector obtained through NHC and ZVU.
7. Conclusion and future work
We have shown through experimental results that exploita-
tion of NHC and ZVU drastically enhances the navigation
performance of ground vehicles in both stationary as well as
dynamic states. Since no single sensor can reliably provide all the information necessary for navigating in a hazardous environment, multisensor data fusion is of the utmost importance for autonomous vehicles. Sensor fusion can compensate for inaccurate data from one sensor with good data from another, available at the time of need. We analyzed
the effects of a UGV’s inherent motion constraints on a multi-
sensor integrator and showed that utilizing these constraints
enhanced the performance of the estimator. By plotting CDF
for navigation errors and tabulating comparisons of RMS error
with and without the use of NHC and ZVUs, we showed that
the integrated solution has less error drift, especially in posi-
tion and velocity axes. Hence, for the design of a multisensor
integrator for UGV application, use of these inherent motion
constraints will achieve better accuracy both in stationary as
well as dynamic states of the vehicle. However, these results
may not be guaranteed for very slippery conditions and fast
moving vehicles.
As future research work, we will focus on integrating low
cost MEMS IMU and a visual odometry technique using a
camera and 3D lidar, while exploiting these motion constraints,
and conduct tests in a slippery environment. We intend to
achieve seamless, robust and accurate navigation for UGVs in
any kind of environment by utilizing multisensor data fusion
algorithms and exploiting the vehicle’s inherent motion con-
straints, such as NHC and ZVUs.
Nomenclature
E_b^n : Transformation between body rates and Euler angle rates
Ψ_k  : Euler angle vector
P^n  : Position in navigation frame n
V^n  : Velocity in navigation frame
V_x  : Forward velocity of the vehicle
δP   : Position error vector
δV   : Velocity error vector
δψ   : Attitude error vector
b_a  : Accelerometer bias error vector
b_g  : Gyro bias error vector
References
[1] L. Flavio D. M. and J. Fernando M. J., Trajectory planning for non-holonomic mobile robot using extended Kalman filter, Mathematical Problems in Engineering, 2010, Article ID 979205, 22 pages, doi:10.1155/2010/979205.
[2] S. Sukkarieh, Low Cost, High Integrity, Aided Inertial Navigation Systems for Autonomous Land Vehicles, Ph.D. Thesis, ACFR, University of Sydney, Australia, March 2000.
[3] J. Kim, Autonomous Navigation for Airborne Application, Ph.D. Thesis, ACFR, University of Sydney, Australia.
[4] J. Pusa, Strapdown inertial navigation system aiding with
non-holonomic constraints using indirect Kalman Filtering,
Master of Science Thesis 2009, Tampere University of
Technology.
[5] D. Titterton and J. Weston, Strapdown Inertial Navigation
Technology, 2nd Edition, IEE Radar, Sonar, Navigation and
Avionics Series 17, AIAA Reston VA USA.
[6] B. S. Cho, W. S. Moon, W. J. Seo and K. R. Baek, A dead
reckoning localization system for mobile robots using iner-
tial sensors and wheel revolution encoding, Journal of Me-
chanical Science and Technology 25(11)(2011)2907~2917.
[7] X. Kong, Inertial Navigation System Algorithms for low
Cost IMU, PhD thesis, 2000, Department of Mechanical and
Mechatronics Engineering, ACFR, The University of Syd-
ney Australia.
[8] E. North, J. Georgy, U. Iqbal, M. Tarbochi and A. Noureldin,
Improved Inertial/Odometry/GPS Positioning of Wheeled
Robots Even in GPS-Denied Environment, Global Naviga-
tion Satellites Systems-Signal ,Theory and Applications
Chapter 11, Intechopen.
[9] D. Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches, Wiley-Interscience, 2006.
[10] E.-H. Shin, Accuracy Improvement of Low Cost INS/GPS for Land Applications, Thesis, 2001, Department of Geomatics Engineering, University of Calgary.
[11] H. Durrant-Whyte, Multi-Sensor Data Fusion, Technical
Report 2001, ACFR University of Sydney, Australia.
[12] A. Gunawardana, G. Shani, A survey of accuracy evalua-
tion metrics of recommendation tasks, Journal of Machine
Learning Research 10(2009)2935-2962.
[13] S. Makridakis, M. Hibon, Evaluating accuracy (or errors)
measures,
http://www.insead.edu/facultyresearch/research/doc.cfm?did
=46875.
[14] Wikipedia, Dilution of precision (GPS),
http://en.wikipedia.org/wiki/Dilution_of_precision_(GPS).
Muhammad Ilyas received his B.S. degree in electrical engineering in 2000 and an M.S. degree in electrical engineering and computer science in 2008, and has been working towards a Ph.D. degree in the department of intelligent robot engineering at the University of Science and Technology (UST), Daejeon, Korea, since September 2013. His research interests include robot navigation, IMU/wheel encoder/Sun sensor/Lidar integration for planetary rovers, multisensor fusion, Kalman filtering and nonlinear filtering.
Seung-Ho Baeg received his B.S. and M.S. degrees from the department of material engineering at Korea University in 1991 and 1993, respectively. He is currently a principal researcher and chief of the robotic solutions for extreme field agency at KITECH, Korea. His research interests include laser range finders (LRF), 3D LADAR systems, and robot system architecture design.
Sangdeok Park received his B.S. degree
in the department of mechanical design
at Yeungnam University in 1988 and
M.S. and Ph.D. degrees from the de-
partment of mechanical engineering at
Pohang University of Science and Tech-
nology (POSTECH) in 1990 and 2000,
respectively. Since 2004, he has been
with KITECH, Ansan, Korea, where he is currently a principal
researcher and a chief officer of Robotics R&BD group of
KITECH. His research interests include the design and control
of quadruped walking robots, wearable robots and hydrau-
lically driven robots.
Proofreading Confirmation
Registration Number: CR-14-2077
Date of Registration: 2014. 08. 05
Title: Improving UGV Navigation by Exploiting Virtual Sensors and Multisensor Fusion
Proofread Language: English
Author: Ilyas Meo
Proofreader: Stephen Nett
This paper was proofread by the KAIST Language Center, Korea Advanced Institute of Science and Technology (KAIST), Aug. 18, 2014.