
Lecture Notes in Electrical Engineering 501

Sukhan Lee · Hanseok Ko
Songhwai Oh   Editors

Multisensor Fusion
and Integration in the
Wake of Big Data, Deep
Learning and Cyber
Physical System
An Edition of the Selected Papers from
the 2017 IEEE International Conference
on Multisensor Fusion and Integration
for Intelligent Systems (MFI 2017)


Lecture Notes in Electrical Engineering
Volume 501

Board of Series editors
Leopoldo Angrisani, Napoli, Italy
Marco Arteaga, Coyoacán, México
Bijaya Ketan Panigrahi, New Delhi, India
Samarjit Chakraborty, München, Germany
Jiming Chen, Hangzhou, P.R. China
Shanben Chen, Shanghai, China
Tan Kay Chen, Singapore, Singapore
Rüdiger Dillmann, Karlsruhe, Germany
Haibin Duan, Beijing, China
Gianluigi Ferrari, Parma, Italy
Manuel Ferre, Madrid, Spain


Sandra Hirche, München, Germany
Faryar Jabbari, Irvine, USA
Limin Jia, Beijing, China
Janusz Kacprzyk, Warsaw, Poland
Alaa Khamis, New Cairo City, Egypt
Torsten Kroeger, Stanford, USA
Qilian Liang, Arlington, USA
Tan Cher Ming, Singapore, Singapore
Wolfgang Minker, Ulm, Germany
Pradeep Misra, Dayton, USA
Sebastian Möller, Berlin, Germany
Subhas Mukhopadhyay, Palmerston North, New Zealand
Cun-Zheng Ning, Tempe, USA
Toyoaki Nishida, Kyoto, Japan
Federica Pascucci, Roma, Italy
Yong Qin, Beijing, China
Gan Woon Seng, Singapore, Singapore
Germano Veiga, Porto, Portugal
Haitao Wu, Beijing, China
Junjie James Zhang, Charlotte, USA


** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex,
SCOPUS, MetaPress, Springerlink **
Lecture Notes in Electrical Engineering (LNEE) is a book series which reports the latest research
and developments in Electrical Engineering, namely:

- Communication, Networks, and Information Theory
- Computer Engineering
- Signal, Image, Speech and Information Processing
- Circuits and Systems
- Bioengineering
- Engineering

The audience for the books in LNEE consists of advanced level students, researchers, and industry
professionals working at the forefront of their fields. Much like Springer’s other Lecture Notes
series, LNEE will be distributed through Springer’s print and electronic publishing channels.
For general information about this series, comments or suggestions, please use the contact
address under “service for this series”.
To submit a proposal or request further information, please contact the appropriate Springer
Publishing Editor:

Asia:
China: Jessie Guo, Assistant Editor (Engineering)
India: Swati Meherishi, Senior Editor (Engineering)
Japan: Takeyuki Yonezawa, Editorial Director (Physical Sciences & Engineering)
South Korea: Smith (Ahram) Chae, Associate Editor (Physical Sciences & Engineering)
Southeast Asia: Ramesh Premnath, Editor (Electrical Engineering)
South Asia: Aninda Bose, Editor (Electrical Engineering)

Europe:
Leontina Di Cecco, Editor (Applied Sciences and Engineering; Bio-Inspired Robotics, Medical Robotics, Bioengineering; Computational Methods & Models in Science, Medicine and Technology; Soft Computing; Philosophy of Modern Science and Technologies; Mechanical Engineering; Ocean and Naval Engineering; Water Management & Technology)
(Heat and Mass Transfer, Signal Processing and Telecommunications, and Solid and Fluid Mechanics, and Engineering Materials)

North America:
Michael Luby, Editor (Mechanics; Materials)

More information about this series is available on the Springer website.

Sukhan Lee · Hanseok Ko · Songhwai Oh

Editors

Multisensor Fusion
and Integration in the Wake
of Big Data, Deep Learning
and Cyber Physical System
An Edition of the Selected Papers
from the 2017 IEEE International Conference
on Multisensor Fusion and Integration
for Intelligent Systems (MFI 2017)



Editors

Sukhan Lee
Intelligent Systems Research Institute
Sungkyunkwan University
Suwon
Korea (Republic of)

Songhwai Oh
Department of Electrical and Computer
Engineering
Seoul National University
Seoul
Korea (Republic of)

Hanseok Ko
School of Electrical Engineering
Korea University
Seoul
Korea (Republic of)

ISSN 1876-1100
ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-3-319-90508-2
ISBN 978-3-319-90509-9 (eBook)
Library of Congress Control Number: 2018940915
© Springer International Publishing AG, part of Springer Nature 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland


Preface

Multisensor fusion and integration is playing a critical role in harnessing smart
technologies as we ride the big wave of the 4th Industrial Revolution. The deployment
of the Internet of Things, Cyber-Physical Systems and robotics in distributed
environments is rising rapidly as our society seeks to transition from being ambient
to being smart and, at the same time, to enable humans to curate information and
knowledge between ubiquitous and collective computing environments. Surrounding us
are networks of sensors and actuators that monitor our environment, health, security
and safety, as well as service robots, intelligent vehicles and autonomous systems of
ever-heightened autonomy and dependability with integrated heterogeneous sensors and
actuators. Developing fundamental theories and advancing implementation tools to
address the emerging key issues in multisensor fusion and integration in the wake of
big data and deep learning would make the above transition smooth and rewarding.

This volume is an edition of the papers selected from the 13th IEEE International
Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE MFI
2017, held in Daegu, Korea, 16–22 November 2017. Only 17 of the 112 papers accepted
for IEEE MFI 2017 were chosen, and their authors were requested to revise and extend
them for inclusion in this volume. The 17 contributions to this volume are organized
into two chapters: Chapter 1 is dedicated to theories of data and information fusion
in distributed environments, and Chapter 2 to multisensor fusion in robotics. To help
readers understand better, a chapter summary is included in each chapter as an
introduction.

It is the wish of the editors that readers find this volume informative and
enjoyable. We would also like to thank Springer-Verlag for undertaking the
publication of this volume.
Sukhan Lee
Hanseok Ko
Songhwai Oh



Contents

Multi-sensor Fusion: Theory and Practice

Covariance Projection as a General Framework of Data Fusion and Outlier Removal . . . 5
Sukhan Lee and Muhammad Abu Bakr

State Estimation in Networked Control Systems with Delayed and Lossy Acknowledgments . . . 22
Florian Rosenthal, Benjamin Noack, and Uwe D. Hanebeck

Performance of State Estimation and Fusion with Elliptical Motion Constraints . . . 39
Qiang Liu and Nageswara S. V. Rao

Relevance and Redundancy as Selection Techniques for Human-Autonomy Sensor Fusion . . . 52
Justin D. Brody, Anna M. R. Dixon, Daniel Donavanik, Ryan M. Robinson, and William D. Nothwang

Classification of Reactor Facility Operational State Using SPRT Methods with Radiation Sensor Networks . . . 76
Camila Ramirez and Nageswara S. V. Rao

Improving Ego-Lane Detection by Incorporating Source Reliability . . . 98
Tran Tuan Nguyen, Jens Spehr, Jonas Sitzmann, Marcus Baum, Sebastian Zug, and Rudolf Kruse

Applying Knowledge-Based Reasoning for Information Fusion in Intelligence, Surveillance, and Reconnaissance . . . 119
Achim Kuwertz, Dirk Mühlenberg, Jennifer Sander, and Wilmuth Müller

Multiple Classifier Fusion Based on Testing Sample Pairs . . . 140
Gaochao Feng, Deqiang Han, Yi Yang, and Jiankun Ding

Multi-sensor Fusion Applications in Robotics

Bayesian Estimator Based Target Localization in Ship Monitoring System Using Multiple Compact High Frequency Surface Wave Radars . . . 157
Sangwook Park, Chul Jin Cho, Younglo Lee, Andrew Da Costa, SangHo Lee, and Hanseok Ko

SLAM-Based Return to Take-Off Point for UAS . . . 168
Daniel Bender, Wolfgang Koch, and Daniel Cremers

Underwater Terrain Navigation During Realistic Scenarios . . . 186
Mårten Lager, Elin A. Topp, and Jacek Malec

Supervised Calibration Method for Improving Contrast and Intensity of LIDAR Laser Beams . . . 210
Mohammad Aldibaja, Naoki Suganuma, Keisuke Yoneda, Ryo Yanase, and Akisue Kuramoto

Multi-object Tracking Based on a Multi-layer Particle Filter for Unclustered Spatially Extended Measurements . . . 219
Johannes Buyer, Martin Vollert, Mihai Kocsis, Nico Sußmann, and Raoul Zöllner

Ensemble Kalman Filter Variants for Multi-Object Tracking with False and Missing Measurements . . . 239
Fabian Sigges and Marcus Baum

Fall Detection with Unobtrusive Infrared Array Sensors . . . 253
Xiuyi Fan, Huiguo Zhang, Cyril Leung, and Zhiqi Shen

Subtle Hand Action Recognition in Factory Based on Inertial Sensors . . . 268
Yanyan Bao, Fuchun Sun, Xinfeng Hua, Bin Wang, and Jianqin Yin

Kinematics, Dynamics and Control of an Upper Limb Rehabilitation Exoskeleton . . . 284
Qingcong Wu and Ziyan Shao

Author Index . . . 299




Multi-sensor Fusion: Theory and Practice
Sukhan Lee and Hanseok Ko
Multisensor fusion and integration in a distributed environment is becoming of utmost
importance, especially in the wake of the growing deployment of the Internet of Things
(IoT) as well as Cyber-Physical Systems (CPS). Although the fundamental theory
behind multisensor fusion and integration has been well established through several
decades of investigation, in practice there still remain a number of technical challenges
to overcome, in particular for dealing with multisensor fusion and integration in
a distributed environment. Specifically, multisensor fusion with known
cross-correlations among multiple data sources can be handled ideally, for instance, by
the Bar-Shalom–Campo and Generalized Millman's formulas. However, in a distributed
environment, a number of critical issues arise that are yet to be addressed and solved,
including (1) the difficulty of estimating exact cross-correlations among multiple data
sources, due to the physical relationships possibly existing among their observations as
well as possible double counting through shared prior information or data sources;
(2) the presence of inconsistency or outliers among data sources; (3) the existence of
transmission delays as well as data losses; and (4) the incorporation into fusion of
various constraints that may be available among states and observations. The papers
collected for this chapter address some of the critical issues described above from
a theoretical and/or practical point of view, as follows:
The paper, entitled “Covariance Projection as General Framework of Data Fusion
and Outlier Removal,” by Sukhan Lee and Muhammad Abu Bakr proposes a general
framework of distributed data fusion for distributed sensor networks of arbitrary
redundancies, where inconsistent data are identified simultaneously within the framework. The paper, entitled “State Estimation in Networked Control Systems with
Delayed and Lossy Acknowledgments,” by Florian Rosenthal, Benjamin Noack and
Uwe D. Hanebeck deals with the state estimation in networked control systems where
the control inputs and measurements transmitted via networks as well as the
acknowledgements packets sent by the actuator upon reception of control inputs are
subject to data losses and random transmission delays. The paper, entitled “Performance of State Estimation and Fusion with Elliptical Motion Constraints,” by
Qiang Liu and Nageswara Rao investigates target tracking in the presence of elliptical
nonlinear constraints on its motion dynamics, where the state estimates generated by
sensors are considered to be sent over long-haul lossy links to a remote fusion center.
The paper, entitled “Relevance and Redundancy as Selection Techniques for
Human-Autonomy Sensor Fusion,” by Justin David Brody, Anna Marie Rogers Dixon,
Daniel Donavanik, Ryan M. Robinson and William D. Nothwang addresses the
problem of sensor fusion in a human-autonomy system where the dynamic nature of
sensors makes it difficult to model their variability. The paper examines the application
of information theoretic entities, such as the relevance between sensors and target
classes and the redundancy among the selected sensors, as the criteria for evaluating the
importance for fusion. The paper, entitled “Classification of Reactor Facility



Operational State Using SPRT Methods with Radiation Sensor Networks,” by
Nageswara Rao and Camila Ramirez deals with the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network
where sensor measurements are inherently random with the parameters determined by

the intensity at the sensor locations. The paper, entitled “Applying Knowledge-Based
Reasoning for Information Fusion in Intelligence, Surveillance, and Reconnaissance,”
by Wilmuth Müller, Achim Kuwertz, Dirk Mühlenberg and Jennifer Sander presents a
method of high-level data fusion combining probabilistic information processing with
logical and probabilistic reasoning. This is to support human operators in their
situational awareness, improving their capabilities of making efficient and effective
decisions. The paper, entitled “Multiple Classifier Fusion Based on Testing Sample
Pairs,” by Gaochao Feng, Deqiang Han, Yi Yang, and Jiankun Ding presents a multiple
classifier system operated under the classification based on testing sample pairs, where
fuzzy evidential reasoning is used to implement multiclass classification fusion. The
paper, entitled “Improving Ego-Lane Detection by Incorporating Source Reliability,”
by Tran Tuan Nguyen, Jens Spehr, Jian Xiong, Marcus Baum, Sebastian Zug and
Rudolf Kruse proposes an efficient and sensor-independent metric which provides an
objective and intuitive self-assessment for the entire road estimation process at multiple
levels, including individual detectors, lane estimation and the target applications.


Covariance Projection as a General
Framework of Data Fusion
and Outlier Removal
Sukhan Lee (✉) and Muhammad Abu Bakr
Intelligent Systems Research Institute, Sungkyunkwan University,
Gyeonggi-do, Suwon 440-746, South Korea
{lsh1,abubakr}@skku.edu

Abstract. A fundamental issue in sensor fusion is to detect and remove outliers
as sensors often produce inconsistent measurements that are difficult to predict
and model. The detection and removal of spurious data is paramount to the
quality of sensor fusion by avoiding their inclusion in the fusion pool. In this
paper, a general framework of data fusion is presented for distributed sensor
networks of arbitrary redundancies, where inconsistent data are identified simultaneously within the framework. By the general framework, we mean that it
is able to fuse multiple correlated data sources and incorporate linear constraints
directly, while detecting and removing outliers without any prior information.
The proposed method, referred to here as Covariance Projection (CP) Method,
aggregates all the state vectors into a single vector in an extended space. The
method then projects the mean and covariance of the aggregated state vectors
onto the constraint manifold representing the constraints among state vectors that
must be satisfied, including the equality constraint. Based on the distance from
the manifold, the proposed method identifies the relative disparity among data
sources and assigns confidence measures. The method provides an unbiased and
optimal solution in the sense of Minimum Mean Square Error (MMSE) for
distributed fusion architectures and is able to deal with correlations and uncertainties among local estimates and/or sensor observations across time. Simulation
results are provided to show the effectiveness of the proposed method in identifying and removing inconsistency in distributed sensor systems.
Keywords: Covariance projection method · Constraint manifold · Data fusion · Distributed sensor network · Inconsistent data

1 Introduction
Multisensor data fusion aims to obtain a more meaningful and precise estimate of a state
by combining data from multiple sources. One of the inherent issues in multisensor
data fusion is that of uncertainty in sensor measurements. The sensor uncertainties may
come from impreciseness and noise in the measurements, as well as from ambiguities
and inconsistencies present in the environment. Fusion methodologies should be
able to model such uncertainties and combine data to provide a consistent and accurate
fused solution.
© Springer International Publishing AG, part of Springer Nature 2018
S. Lee et al. (Eds.): MFI 2017, LNEE 501, pp. 5–21, 2018.


Recently, distributed data fusion [1, 2] has been widely explored in diverse fields of
engineering and control due to its superior performance over centralized fusion in
terms of flexibility, robustness to failure and cost-effectiveness in infrastructure and
communication. However, the distributed architecture needs to address statistical
dependency among the local estimates received from multiple nodes for fusion. This is
due to the fact that local state estimates at individual nodes can be subject to the same
process noise [3] and to double counting, i.e., the sharing of the same data sources among them
[4]. Ignoring such statistical dependency or cross-correlation among multiple nodes
leads to inconsistent results, causing divergence in data fusion [5].
Fusion methodologies commonly assume that the sensor measurements are affected by
Gaussian noise only, and thus that the covariance of the estimate provides a good
approximation of all the disturbances affecting the sensor measurements. In
real applications, however, the sensor measurements may be affected not only by noise but also
by unexpected events such as short-duration spike faults, sensor glitches, permanent
failures or slowly developing failures of sensor elements [6]. Since these types of
uncertainties are not attributable to the inherent noise, they are difficult to model. Due
to these uncertainties, the estimates provided by sensor nodes in a distributed network
may be spurious and inconsistent. Fusing these imprecise estimates with correct estimates
can lead to severely inaccurate results [7]. Hence, a data validation scheme is
required to identify and eliminate the outliers from the fusion pool.
Detection of inconsistency needs either a priori information, often in the form of
specific failure model(s), or data redundancy [1]. Model-based approaches [1, 8]
use the residuals generated between model outputs and actual measurements to
detect and remove faults. For instance, in [9], a Nadaraya–Watson estimator and a priori
observations are used to validate sensor measurements. Similarly, a priori system-model
information is used as a reference to detect failures in filtered estimates [10]. Researchers
have also used fuzzy-logic [11] and neural-network [12] based approaches for sensor
validation. However, model-based methods either require an explicit mathematical
model or need tuning and training for data validation. This restricts the usage of these
methods in cases where prior information is not available or an unmodeled failure
occurs. A method to detect spurious data based on a Bayesian framework is proposed in
[13]. The method adds a term to the Bayesian formulation which has the effect of
increasing the posterior distribution when the measurement from one of the sensors is
inconsistent with respect to the others. However, the method assumes independence of
the sensor estimates in its analysis and may lead to incorrect rejection of true estimates or
incorrect retention of false estimates.
This paper presents a general data fusion framework, referred to as Covariance
Projection (CP) Method, to find an optimal and consistent fusion solution for multiple
correlated data sources. The proposed method provides a framework for identifying and
removing outliers in a distributed sensor network where only the sensor estimates may
be available at the fusion center.
1.1 Problem Statement

In a distributed architecture [1, 2], the sensors are often equipped with a tracking
system to provide local estimates of some quantity of interest in the form of mean and



covariance. Assume that each local system predicts the underlying states using the following equation:

x_k = A x_{k-1} + B u_{k-1} + w_{k-1}

where A is the system matrix, B the input matrix, u_{k-1} the input vector and x_{k-1} the state vector. The system process is affected by zero-mean Gaussian noise w_{k-1} with covariance matrix Q. The sensor measurements are modeled as

z_k^i = H_i x_k + v_k^i + e_k^i,    i = 1, ..., n

where v_k^i is Gaussian noise with covariance matrix R_i, i = 1, 2, ..., n. The sensor measurements are also affected by unmodeled faults e_k^i. The state prediction of each local system is updated by its own sensor measurement to compute the local state estimate (\hat{x}_k, P_k). The local estimates are then communicated among sensor nodes or sent to a central node for obtaining a global estimate. However, the local estimates may be correlated due to common process noise [3] or double counting [4]. Furthermore, the estimates provided by local systems may be spurious and inconsistent due to unmodeled sensor faults. As stated in the introduction, the majority of prior work needs a priori information in the form of particular failure model(s) to detect sensor faults [9, 10], while in a distributed architecture the fusion node may have access only to the estimated mean and covariance of the data sources. Moreover, the cross-correlation among data sources is overlooked in traditional data validation schemes, and outlier removal is mostly based on heuristics [13].
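As a concrete illustration of the model above, the sketch below simulates local sensor readings with an injected spike fault; the scalar constant-state dynamics, the noise levels and the fault magnitude are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scalar model: x_k = A x_{k-1} + B u_{k-1} + w_{k-1} (u = 0 here)
A, Q = 1.0, 0.01           # constant-state dynamics, process-noise variance
H = [1.0, 1.0, 1.0]        # three sensors observing the state directly
R = [0.04, 0.04, 0.04]     # measurement-noise variances

x = 0.0
measurements = []
for k in range(50):
    x = A * x + rng.normal(0.0, np.sqrt(Q))                       # process noise w
    z = [H[i] * x + rng.normal(0.0, np.sqrt(R[i])) for i in range(3)]
    if k == 25:
        z[2] += 5.0        # unmodeled spike fault e_k^i on the third sensor
    measurements.append(z)

# The faulty sample of sensor 3 stands far outside the nominal noise band.
print(abs(measurements[25][2] - measurements[25][0]) > 3.0)
```

A fault like this is exactly what the covariance of the Gaussian noise model cannot capture, which motivates the validation scheme developed in the rest of the paper.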
This paper presents a general framework to validate and fuse correlated and uncertain data from multiple sources without any prior information. The proposed method assigns a confidence measure to each data source based on its distance from the constraint manifold, and then statistically removes inconsistent sensor estimates of arbitrary dimensions and correlations.

2 Proposed Approach
Consider unbiased estimates \hat{x}_1 and \hat{x}_2 of the true state x, with covariances P_1, P_2 and cross-covariance matrix P_{12}. The statistical distribution, that is, the mean and covariance from the individual sensors in R^N, is aggregated such that it is transformed to an extended space R^{2N}, together with the equality constraint between the two data sources:

\hat{x} = \begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \end{bmatrix}, \quad P = \begin{bmatrix} P_1 & P_{12} \\ P_{12}^T & P_2 \end{bmatrix}, \quad \hat{x}_1 = \hat{x}_2    (1)

Figure 1(a) shows the extended-space representation as a 2D ellipsoid of two individual 1D Gaussian estimates, along with the constraint manifold. The constraint manifold is a manifestation of the relationship between the data from the two sensors. The subspace of the equality constraint can be written as M = [1, 1]^T. The Whitening Transform W is a linear



Fig. 1. (a) Extended space representation of two data sources and constraint manifold
(b) Whitening transform and projection, as a generalization of covariance extension method [14].

transformation defined as W = D^{-1/2} E^T, where D and E are the respective eigenvalue and eigenvector matrices of P. Applying the Whitening Transform, we get

\hat{x}_W = W \hat{x}, \qquad P_W = W P W^T = I, \qquad M_W = W M

Figure 1(b) shows the transformation of the ellipsoid into a unit circle after W. The mean and covariance are then projected onto the constraint manifold M_W to get a fused result in the transformed space, as shown in Fig. 1(b). The inverse Whitening Transform is applied to obtain the optimal fused mean and covariance in the original space:

\tilde{x} = W^{-1} P_r W \hat{x}    (2)

\tilde{P} = W^{-1} P_r P_r^T W^{-T}    (3)

where P_r is the projection matrix, computed as P_r = M_W (M_W^T M_W)^{-1} M_W^T. It should be noted that the framework of the CP method can incorporate any linear constraints among data sources without additional processing. Using the definitions of the various components in (2) and (3), we get the closed-form simplification of the fused mean and covariance for the CP method:

\tilde{x} = (M^T P^{-1} M)^{-1} M^T P^{-1} \hat{x}    (4)

\tilde{P} = (M^T P^{-1} M)^{-1}    (5)



The details of the simplification are provided in Appendix 1. Using the values of M, \hat{x}, and P from (1) in (4) and (5), we get the CP fused mean and covariance of two sensor estimates as

\tilde{x} = (P_2 - P_{21})(P_1 + P_2 - P_{12} - P_{21})^{-1} \hat{x}_1 + (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1} \hat{x}_2    (6)

\tilde{P} = P_1 - (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1} (P_1 - P_{21})    (7)

Given n sensor estimates (\hat{x}_1, P_1), (\hat{x}_2, P_2), ..., (\hat{x}_n, P_n) of a true state x \in R^N with known cross-covariances P_{ij}, i, j = 1, ..., n, (4) and (5) can be used to provide the optimal fused mean and covariance with M = [I_N, I_N, ..., I_N]^T, where I_N is the N × N identity matrix and N the dimension of each individual data source. The proposed CP method provides an unbiased and optimal fused solution in the sense of Minimum Mean Square Error (MMSE) for a multisensor system of arbitrary redundancies.
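To make the closed forms concrete, here is a small numerical sketch (all numbers assumed for illustration) that evaluates (4)–(5) for two correlated 1-D estimates and cross-checks the result against the two-sensor closed form (6)–(7), which is the Bar-Shalom–Campo form.

```python
import numpy as np

def cp_fuse(x_hats, P_joint, N):
    """Covariance Projection fusion, Eqs. (4)-(5): M stacks identity blocks."""
    n = len(x_hats)
    M = np.vstack([np.eye(N)] * n)             # equality-constraint matrix
    x_stack = np.concatenate(x_hats)
    P_inv = np.linalg.inv(P_joint)
    P_fused = np.linalg.inv(M.T @ P_inv @ M)   # Eq. (5)
    x_fused = P_fused @ M.T @ P_inv @ x_stack  # Eq. (4)
    return x_fused, P_fused

# Two correlated 1-D estimates (illustrative numbers).
x1, x2 = np.array([1.0]), np.array([1.4])
P1, P2, P12 = np.array([[0.2]]), np.array([[0.4]]), np.array([[0.05]])
P_joint = np.block([[P1, P12], [P12.T, P2]])

x_f, P_f = cp_fuse([x1, x2], P_joint, N=1)

# Two-sensor closed form, Eqs. (6)-(7).
S = np.linalg.inv(P1 + P2 - P12 - P12.T)
x_cf = (P2 - P12.T) @ S @ x1 + (P1 - P12) @ S @ x2
P_cf = P1 - (P1 - P12) @ S @ (P1 - P12.T)

print(np.allclose(x_f, x_cf), np.allclose(P_f, P_cf))  # → True True
```

With these numbers the fused variance comes out at 0.155, below both P_1 = 0.2 and P_2 = 0.4, in line with Theorem 2 below.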
Theorem 1: The fused estimate \tilde{x} given by the CP method in Eq. (2) is an unbiased estimator of x, that is, E(\tilde{x}) = E(x).

Proof: Using (2), we can write

x - \tilde{x} = W^{-1} P_r W (x - \hat{x})

Taking expectations on both sides, we get

E(x - \tilde{x}) = W^{-1} P_r W E(x - \hat{x}) = 0

so E(x) = E(\tilde{x}), where the assumption of unbiasedness, E(x) = E(\hat{x}), is used. This concludes that the fused state estimate \tilde{x} is an unbiased estimate of x.
Theorem 2: The fused covariance \tilde{P} of the CP method is smaller than the individual covariances, that is, \tilde{P} \le P_i, i = 1, 2, ..., n.

Proof: From Eq. (5), we can write

\tilde{P} = (M^T P^{-1} M)^{-1}

Let M_i = [I_N, 0, ..., 0]^T denote the constraint matrix that selects P_i (shown here for i = 1), so that M_i^T M = I_N and M_i^T P M_i = P_i. By the Schwarz matrix inequality,

\tilde{P} = \left[ (P^{-1/2} M)^T (P^{1/2} M_i) \right]^T \left[ (P^{-1/2} M)^T (P^{-1/2} M) \right]^{-1} \left[ (P^{-1/2} M)^T (P^{1/2} M_i) \right] \le (P^{1/2} M_i)^T (P^{1/2} M_i) = P_i

where M is the constraint among the data sources. The equality holds for P_i = P_{ij}, that is, \tilde{P} = P_i when P_i = P_{ij}, j = 1, 2, ..., n.
Since the estimates of the state provided by the sensors in a distributed architecture are correlated, computation of the cross-covariance P_{ij} is needed to compute the fused mean (4) and covariance (5). The cross-covariance between the sensor estimates can be computed recursively as [15]

P_{ij}^k = [I - K_i H_i] \left( A P_{ij}^{k-1} A^T + B Q B^T \right) [I - K_j H_j]^T    (8)

where K_i and K_j are the Kalman gains of sensors i and j, respectively, for i, j = 1, ..., n, and P_{ij}^{k-1} represents the cross-covariance between sensors i and j from the previous cycle.
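The recursion in (8) can be sketched as follows; the scalar dynamics, fixed Kalman gains and zero initial cross-covariance are illustrative assumptions. It shows how a shared process noise Q builds up a nonzero cross-covariance even when the local filters start uncorrelated.

```python
import numpy as np

def cross_covariance_step(P_ij_prev, A, B, Q, K_i, H_i, K_j, H_j):
    """One step of Eq. (8): propagate the cross-covariance between the
    local Kalman estimates of sensors i and j."""
    I = np.eye(A.shape[0])
    predicted = A @ P_ij_prev @ A.T + B @ Q @ B.T
    return (I - K_i @ H_i) @ predicted @ (I - K_j @ H_j).T

# Scalar example with assumed gains.
A = np.array([[1.0]]); B = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); K_i = np.array([[0.5]]); K_j = np.array([[0.4]])

P_ij = np.zeros((1, 1))          # filters start uncorrelated
for _ in range(20):
    P_ij = cross_covariance_step(P_ij, A, B, Q, K_i, H, K_j, H)

print(P_ij[0, 0] > 0.0)          # common process noise induces correlation
```

For these values the recursion converges to a small positive fixed point; ignoring it, as the introduction notes, is exactly what makes naive distributed fusion inconsistent.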

3 Confidence Measure of Data Sources
The working of fusion algorithms is based on the assumption that the input sensor estimates are consistent; consequently, they fail in the case of inconsistent estimates. Hence, a data validation scheme is required to identify and eliminate outliers before fusion. The proposed approach identifies the relative disparity and confidence measure of the multi-sensory data by utilizing the relationship among the data sources. Assuming that the data sources can be represented jointly as a multivariate normal distribution, the confidence of the data sources can be measured by calculating the distance from the constraint manifold, as depicted in Fig. 2. Suppose that we have n Gaussian data sources in R^N with corresponding joint mean and covariance matrices

\hat{x} = \begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \\ \vdots \\ \hat{x}_n \end{bmatrix}, \qquad P = \begin{bmatrix} P_1 & P_{12} & \cdots & P_{1n} \\ P_{12}^T & P_2 & \cdots & P_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ P_{1n}^T & P_{2n}^T & \cdots & P_n \end{bmatrix}

Then the distance d from the manifold, representing the confidence measure, can be computed as

d = (\hat{x} - \tilde{x})^T P^{-1} (\hat{x} - \tilde{x})    (9)

where \tilde{x} is the point on the manifold, which can be obtained using (4). For instance, given two independent data sources with means \hat{x}_1, \hat{x}_2 \in R^N and respective covariance matrices P_1, P_2 \in R^{N \times N}, the distance d can be obtained as

d = \begin{bmatrix} \hat{x}_1 - \tilde{x} \\ \hat{x}_2 - \tilde{x} \end{bmatrix}^T \begin{bmatrix} P_1 & 0 \\ 0 & P_2 \end{bmatrix}^{-1} \begin{bmatrix} \hat{x}_1 - \tilde{x} \\ \hat{x}_2 - \tilde{x} \end{bmatrix}



Fig. 2. Distance of the multi-variate distribution from the constraint manifold.

The point on the manifold is given as

\tilde{x} = P_2 (P_1 + P_2)^{-1} \hat{x}_1 + P_1 (P_1 + P_2)^{-1} \hat{x}_2

Simplifying, we get

d = (\hat{x}_1 - \hat{x}_2)^T (P_1 + P_2)^{-1} (\hat{x}_1 - \hat{x}_2)    (10)

The details of the simplification are provided in Appendix 2. From (10), it can be observed that d is a weighted distance between the two data sources and provides a measure of how near or far the two data sources are from each other. A large value of d implies a large separation, while a small d signifies closeness of the data sources. In other words, the distance from the manifold provides an indication of the relative disparity among the data sources.
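Equation (10) is straightforward to evaluate; a tiny sketch with assumed numbers shows how d separates nearby and distant estimates (for independent sources it is the familiar Mahalanobis-type distance between the two estimates).

```python
import numpy as np

def disparity(x1, x2, P1, P2):
    """Eq. (10): d = (x1 - x2)^T (P1 + P2)^{-1} (x1 - x2)."""
    diff = x1 - x2
    return float(diff.T @ np.linalg.inv(P1 + P2) @ diff)

P1, P2 = np.array([[0.2]]), np.array([[0.3]])
d_close = disparity(np.array([1.0]), np.array([1.1]), P1, P2)  # 0.1^2 / 0.5 ≈ 0.02
d_far = disparity(np.array([1.0]), np.array([4.0]), P1, P2)    # 3.0^2 / 0.5 = 18.0
print(d_close, d_far)
```

The distant pair produces a d two to three orders of magnitude larger, which is what the chi-squared test of the next theorem thresholds.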
Theorem 3: For N dimension of n data sources, the d distance (9) follow a chi-squared
distribution with nN degrees of freedom (DOF), that is, d $ v2 ðNnÞ.
Proof: From (9) we can write
d ¼ ð^x À ~xÞT PÀ1 ð^x À ~xÞ

ð11Þ

Applying Whitening Transformation, we get,
ð^x À ~xÞT PÀ1 ð^x À ~xÞ ¼ ð^xW À ~xW Þ ð^xW À ~xW Þ
T

) ðW ð^x À ~xÞÞT ðW ð^x À ~xÞÞ ¼ yT y


ð12Þ


12

S. Lee and M. A. Bakr

where y ¼ W ð^x À ~xÞ $ N ð0; 1Þ is an independent standard normal distribution. For N
PN 2
dimensions of state vector, the right-hand side of (12) is
i¼1 yi , thus distance d
follows a chi-square distribution with N DOF, that is, d $ v2 ðN Þ. For n data sources
with N states,
d $ v2 ðnN Þ
Since $d$ is chi-square distributed with $nN$ DOF, for any significance level $\alpha \in (0, 1)$ the critical value $\chi^2_\alpha(nN)$ is defined such that

$$P\left\{ d \geq \chi^2_\alpha(nN) \right\} = \alpha$$

Hence, to have a confidence of $100(1 - \alpha)$ percent, $d$ should be less than the respective critical value. A value of $\alpha = 0.05$ is assumed in this paper unless otherwise specified. A chi-square table [16] can be used to obtain the critical value for the confidence distance with a particular significance level and DOF.
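In place of a chi-square table lookup, the critical value can be computed with the inverse CDF `scipy.stats.chi2.ppf`; the wrapper name below is ours:

```python
from scipy.stats import chi2

def critical_value(alpha, n, N):
    """Chi-square critical value chi2_alpha(nN): P{d >= value} = alpha,
    for n data sources of dimension N."""
    return float(chi2.ppf(1.0 - alpha, df=n * N))
```

For example, with $\alpha = 0.05$ and a single scalar source ($n = N = 1$) this reproduces the familiar table value of about 3.841.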
3.1 Inconsistency Detection and Exclusion

To obtain reliable and consistent fusion results, it is important that inconsistent estimates in a multisensor distributed system be identified and excluded before fusion. For this reason, at each time step, when the fusion center receives the computed estimates from the sensor nodes, the distance $d$ is calculated. A computed distance $d$ less than the critical value means that we are confident about the closeness of the sensor estimates and that they can be fused together to provide a better estimate of the underlying states. On the other hand, a distance $d$ greater than or equal to the critical value indicates spuriousness: at least one sensor estimate differs significantly from the others. To exclude the outliers, a distance from the manifold is computed for every estimate and compared with the respective critical value. For $n$ sensor estimates, the hypotheses and decision rule are summarized as follows.
Hypotheses:

$$H_0: \hat{x}_1 = \hat{x}_2 = \cdots = \hat{x}_n \qquad H_1: \hat{x}_1 \neq \hat{x}_2 \neq \cdots \neq \hat{x}_n$$

Decision rule: accept $H_0$ if $d < \chi^2_\alpha(nN)$; reject $H_0$ if $d \geq \chi^2_\alpha(nN)$.

If the hypothesis $H_0$ is accepted, the estimates are optimally fused using (4) and (5). Rejection of the null hypothesis, on the other hand, means that at least one sensor estimate differs significantly from the others. The next step is then to identify the inconsistent sensor estimates. A distance from the manifold is computed for each estimate as

$$d_i = (\hat{x}_i - \tilde{x})^T P_i^{-1} (\hat{x}_i - \tilde{x}), \quad i = 1, 2, \ldots, n$$




The outliers are identified and eliminated based on the respective critical value: if $d_i \geq \chi^2_\alpha(N)$, where $N$ is the dimension of the individual data source, the estimate is rejected.
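The detection-and-exclusion step can be sketched as follows. Note that the manifold point here is approximated by the information-weighted mean, a stand-in for the paper's Eq. (4) that is exact only for uncorrelated sources; the function name and example values are ours:

```python
import numpy as np
from scipy.stats import chi2

def exclude_outliers(estimates, covariances, alpha=0.05):
    """Per-source screening: distance d_i of each estimate from the
    fused manifold point, compared with the chi-square critical value."""
    infos = [np.linalg.inv(P) for P in covariances]
    # Manifold point approximated by the information-weighted mean
    # (a stand-in for Eq. (4); assumes uncorrelated sources).
    P_f = np.linalg.inv(sum(infos))
    x_t = P_f @ sum(I @ x for I, x in zip(estimates and estimates, estimates) if False) if False else \
          P_f @ sum(I @ x for I, x in zip(infos, estimates))
    N = estimates[0].shape[0]
    crit = chi2.ppf(1.0 - alpha, df=N)
    return [i for i, (x, I) in enumerate(zip(estimates, infos))
            if (x - x_t) @ I @ (x - x_t) < crit]

kept = exclude_outliers(
    [np.array([10.0]), np.array([10.1]), np.array([50.0])],
    [np.eye(1), np.eye(1), 100.0 * np.eye(1)])
assert kept == [0, 1]   # the third, spurious source is excluded
```

Sources whose index is returned in `kept` would then be passed on to the fusion step (4)-(5).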
3.2 Effect of Correlation on d Distance

Since the estimates provided by sensor nodes in a distributed fusion architecture are correlated, it is important to consider the effect of cross-correlation in the calculation of the confidence distance. The $d$ distance for a pair of Gaussian estimates $(\hat{x}_1, \sigma_1^2)$ and $(\hat{x}_2, \sigma_2^2)$, with cross-correlation $\sigma_{12}^2$, can be written as

$$d = \frac{(\hat{x}_1 - \hat{x}_2)^2}{\sigma_1^2 + \sigma_2^2 - \sigma_{12}^2 - \sigma_{21}^2} \quad (13)$$

It is apparent that the distance between the mean values is affected by the correlation between the data sources. Figure 3 illustrates the dependency of the confidence distance $d$ on the correlation coefficient. Figure 3(a) shows a scenario in which a data source (with changing mean and constant variance) moves away from another data source (with constant mean and constant variance); the distance $d$ is plotted for various values of the correlation coefficient, and the y-axis shows the percentage of rejections of the null hypothesis $H_0$. Figure 3(b) shows the distance $d$ as the correlation coefficient varies from −1 to 1. It can be noted that ignoring the cross-correlation in the distance $d$ results in underestimated or overestimated confidence and may lead to incorrect rejection of a true null hypothesis (Type I error) or incorrect retention of a false null hypothesis (Type II error). The proposed framework inherently takes care of any cross-correlation among multiple data sources in the computation of the distance $d$.

Fig. 3. Effect of correlation on the d distance. (a) Percentage of rejections of the null hypothesis $H_0$ for different correlation values. (b) d distance with correlation $\rho \in [-1, 1]$.
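A minimal sketch of (13) for a scalar pair, writing $\sigma_{12}^2 = \sigma_{21}^2$ as a single `cov12` argument (the function name is ours), shows how positive correlation shrinks the denominator and enlarges $d$:

```python
def d_correlated(x1, x2, var1, var2, cov12):
    """Scalar form of Eq. (13); cov12 stands for sigma_12^2 = sigma_21^2."""
    return (x1 - x2) ** 2 / (var1 + var2 - 2.0 * cov12)

# Positively correlated sources: ignoring cov12 understates d
assert d_correlated(0.0, 2.0, 1.0, 1.0, 0.0) == 2.0
assert d_correlated(0.0, 2.0, 1.0, 1.0, 0.5) == 4.0
```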


14

S. Lee and M. A. Bakr

Example: Consider a numerical simulation with the constant state $x_k = 10$. Three sensors are used to estimate the state $x_k$, where the measurements of the sensors are corrupted with respective variances $R_1$, $R_2$ and $R_3$. The parameter values assumed in the simulation are

$$Q = 2, \quad R_1 = 0.5, \quad R_2 = 1, \quad R_3 = 0.9$$
The sensor measurements are assumed to be cross-correlated. It is also assumed that the measurements of sensor 1, sensor 2 and sensor 3 are independently affected by unmodeled random noise and produce inconsistent data 33%, 33% and 34% of the time, respectively. The sensors compute local estimates of the state and send them to the fusion center. Three strategies for fusing the local sensor estimates are compared: (1) CP, which fuses the three sensor estimates using (4) and (5) without removing outliers; (2) CP WO-d, in which outliers are identified and rejected based on (13) with $\sigma_{12}^2 = 0$ before fusion, that is, the correlation in the computation of $d$ is ignored; and (3) CP WO-dC, which rejects outliers based on (13) taking the cross-correlation into account.
Figure 4 shows the fused solution of the three sensors when the estimate provided by sensor 2 is in disagreement with sensors 1 and 3. It can be observed from Fig. 4 that neglecting the cross-correlation in CP WO-d results in a Type II error, that is, all three estimates are fused despite the fact that estimate 2 is inconsistent. CP WO-dC correctly identifies and eliminates the spurious estimate before the fusion process.

Fig. 4. Three-sensor fusion when the estimate of sensor 2 is inconsistent. Neglecting the cross-correlation results in a Type II error.

Fig. 5. Estimated state after three-sensor fusion in the presence of inconsistent estimates.

Figure 5 shows the estimated state after fusion of the three sensor estimates for 100 samples. It can be seen that the presence of outliers greatly affects the outcome of multisensor data fusion; as depicted in Fig. 5, eliminating outliers before fusion improves the estimation performance, and the fused samples of CP WO-d and CP WO-dC on average lie closer to the actual state. Figure 5 also shows the fusion performance when outliers are identified with and without cross-correlation. It can be noted that ignoring the correlation degrades the estimation quality because of Type I and Type II errors.
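The Type II failure mode described above can be reproduced with a small numerical sketch. The estimate and covariance values below are assumed for illustration, not taken from the paper's simulation:

```python
from scipy.stats import chi2

crit = chi2.ppf(0.95, df=1)          # critical value for one scalar pair, N = 1

x1, x2 = 10.0, 12.5                  # hypothetical sensor estimates
var1, var2, cov12 = 2.0, 2.0, 1.5    # strongly positively correlated (assumed values)

d_ignore = (x1 - x2) ** 2 / (var1 + var2)                # cross-correlation ignored
d_account = (x1 - x2) ** 2 / (var1 + var2 - 2 * cov12)   # Eq. (13)

# Without the correlation term, d falls below the critical value and the
# inconsistent pair is wrongly accepted (Type II error); with it, the
# inconsistency is flagged.
assert d_ignore < crit < d_account
```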

4 Simulation Results

In this section, simulation results are provided to demonstrate the effectiveness of the proposed method for fusion in the presence of spurious data. The performance is assessed by the root mean square error (RMSE) over the simulation time, computed as
$$S_{RMSE} = \frac{1}{V} \sum_{i=1}^{V} \sqrt{\frac{\sum_{k=1}^{L} \left( x_{Actual,k}(i) - x_{Estimated,k}(i) \right)^2}{L}}$$

where $L$ is the length of the simulation, $k$ indexes the time steps, and $V$ is the number of Monte Carlo runs.
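A direct transcription of this metric, assuming `actual` and `estimated` hold the $V$ Monte Carlo runs row-wise with $L$ time steps per row (the helper name is ours):

```python
import numpy as np

def avg_rmse(actual, estimated):
    """Average RMSE over V Monte Carlo runs; inputs have shape (V, L)."""
    err = np.asarray(actual) - np.asarray(estimated)
    per_run = np.sqrt(np.mean(err ** 2, axis=1))   # RMSE of each run
    return float(per_run.mean())                   # average over the V runs

# Constant error of 1 in every run gives an average RMSE of exactly 1
assert avg_rmse(np.zeros((2, 4)), np.ones((2, 4))) == 1.0
```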
Consider a target tracking scenario characterized by the following dynamic system
model,



$$x_k = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x_{k-1} + \begin{bmatrix} T^2/2 \\ T \end{bmatrix} u_{k-1} + w_{k-1} \quad (14)$$

with the state vector $x_{k-1} = [s \;\; v]^T$, where $s$ and $v$ are the position and velocity of the target at time $t$, respectively. $T$ is the sampling period, assumed to be 3 s. The system process is affected by zero-mean Gaussian noise $w_{k-1}$ with covariance matrix $Q$. Three sensors are employed to track the movement of the target, where the sensor measurements are approximated by the following equation:
$$z_k^i = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x_k + v_k^i + e_k^i, \quad i = 1, 2, 3 \quad (15)$$

The measurements of the sensors are corrupted by noise $v_k^i$ with respective covariances $R_i$, $i = 1, \ldots, 3$. The covariance of the process noise is assumed to be $Q = 10$ and the sensor measurement noise covariances are

$$R_1 = \mathrm{diag}(50, 30), \quad R_2 = \mathrm{diag}(70, 20), \quad R_3 = \mathrm{diag}(10, 60)$$

The control input is $u_{k-1} = 1$ if $v < 30$; otherwise it is changed to $-1$ until $v < 5$. It is assumed that the measurements of sensor 1, sensor 2 and sensor 3 are independently affected by unmodeled random noise $e_k^i$ 33%, 33% and 34% of the time, respectively, and thus the estimates provided by the sensors are sometimes spurious.
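The motion and measurement models (14)-(15) with the stated parameters can be simulated as below. The shaping of the process noise through the input vector is an assumption of this sketch (the paper only specifies the covariance $Q$), and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3.0                                   # sampling period (s)
F = np.array([[1.0, T], [0.0, 1.0]])      # state transition of Eq. (14)
G = np.array([T ** 2 / 2.0, T])           # control-input vector of Eq. (14)
H = np.eye(2)                             # measurement matrix of Eq. (15)
Q = 10.0                                  # process-noise variance
R = [np.diag([50.0, 30.0]), np.diag([70.0, 20.0]), np.diag([10.0, 60.0])]

x = np.array([0.0, 0.0])                  # state [position, velocity]
u = 1.0
traj, meas = [], []
for k in range(50):
    # bang-bang control from the paper: u = 1 while v < 30, then -1 until v < 5
    if x[1] >= 30.0:
        u = -1.0
    elif x[1] < 5.0:
        u = 1.0
    w = G * rng.normal(0.0, np.sqrt(Q))   # process noise shaped by G (assumption)
    x = F @ x + G * u + w
    traj.append(x.copy())
    # three noisy sensor measurements of [position, velocity]
    meas.append([H @ x + rng.multivariate_normal(np.zeros(2), Ri) for Ri in R])
```

Each sensor would run a local Kalman filter on its own `meas` stream before sending estimates to the fusion center, as described next.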
Starting from an initial value, at each time step each sensor uses the local state prediction (14) to predict the state of the target and then updates the prediction with its own measurements obtained through (15). The local estimates are assumed to be correlated, and (8) is used to calculate the track-to-track cross-correlation. The estimated states and covariances of each sensor are sent to the fusion center, where they are fused by the CP method, which takes care of the cross-correlation among the estimates. The three fusion strategies CP (fusion without outlier removal), CP WO-d (outlier removal without considering cross-correlation) and CP WO-dC (outlier removal taking correlation into account) are compared based on the RMSE between the actual state value and the fused estimate of the state for 1000 Monte Carlo runs. In the simulation setup, inconsistency is detected with significance level $\alpha = 0.05$. Figure 6(a) and (b) illustrate the RMSE of the target position and velocity, respectively, versus time. Table 1 summarizes the average RMSE over 1000 Monte Carlo runs.
Figure 6 and Table 1 show the efficacy of the proposed method in identifying and removing outliers. It can be observed that the presence of outliers deteriorates the performance of multisensor data fusion, and that eliminating the outliers before fusion greatly improves the estimation quality. Figure 6 and Table 1 also show the fusion performance when outliers are identified with and without consideration of cross-correlation in the distance $d$. It can be noted that ignoring the correlation degrades the estimation quality because of Type I and Type II errors.



Fig. 6. Illustration of distributed multisensor data fusion in the presence of inconsistent estimates. (a) Position RMSE. (b) Velocity RMSE.


Table 1. Average RMSE for 1000 Monte Carlo runs

Average RMSE     CP        CP WO-d   CP WO-dC
Position (m)     88.3793   50.7565   47.0373
Velocity (m/s)   29.5435   26.9081   25.0586

5 Conclusion

Sensors often produce inconsistent and spurious data, and detecting and removing such inconsistencies before fusion is essential for accurate state estimation. In this paper, we propose a general approach to the fusion of correlated and uncertain data sources. The proposed method provides an unbiased and optimal fusion rule for arbitrary sensors in a distributed sensor architecture, and it automatically detects and removes inconsistent estimates from multiple data sources by assigning a statistical confidence measure. Simulation results verified the effectiveness of the proposed method in identifying spuriousness in distributed sensor data. It was shown that the proposed method improves the estimation quality by effectively identifying and removing incorrect sensor data. It was also observed that consideration of cross-correlation by the proposed method in the detection of outliers results in lower RMSE due to the avoidance of Type I and Type II errors.

Acknowledgments. The original idea of the proposed approach is due to Sukhan Lee. This research was supported, in part, by the "Space Initiative Program" of the National Research Foundation (NRF) of Korea (NRF-2013M1A3A3A02042335), sponsored by the Korean Ministry of Science, ICT and Planning (MSIP); in part, by the "3D Visual Recognition Project" of the Korea Evaluation Institute of Industrial Technology (KEIT) (2015-10060160); and in part, by the "Robot Industry Fusion Core Technology Development Project" of KEIT (R0004590).

Appendix 1

The fused mean and covariance of the Covariance Projection (CP) method are given as

$$\tilde{x} = W^{-1} P_r W \hat{x} \quad (A1)$$

$$\tilde{P} = W^{-1} P_r W P W^T P_r^T W^{-T} \quad (A2)$$

Putting $W = D^{-1/2} E^T$, $P_r = M_W \left( M_W^T M_W \right)^{-1} M_W^T$ and $M_W = WM$ in (A2), we get

$$\tilde{P} = W^{-1} \left[ WM \left( M^T W^T W M \right)^{-1} M^T W^T \right] W P W^T \left[ WM \left( M^T W^T W M \right)^{-1} M^T W^T \right]^T W^{-T}$$

Let $a = M^T W^T W M$; then,