
Towards Automotive Embedded Systems with Self-X Properties

Nowadays there are three major vehicle network systems (cp. Figure 6): The most common
network technology used in vehicles is the Controller Area Network (CAN) bus (Robert Bosch
GmbH, 1991). CAN is a multi-master broadcast bus for connecting ECUs without central
control, providing real-time capable data transmission. FlexRay (FlexRay Consortium, 2005)
is a fast, deterministic and fault-tolerant automotive network technology. It is designed to be
faster and more reliable than CAN. Therefore, it is used in the field of safety-critical
applications (e.g. active and passive safety systems). The Media Oriented Systems Transport (MOST) bus (MOST Cooperation, 2008) is used for interconnecting multimedia and infotainment components, providing high data rates and synchronous channels for the transmission of audio and video data.

Fig. 6. In-vehicle network topology of a BMW 7-series (Source: BMW AG, 2005)
The vehicle features range from infotainment functions without real-time requirements, through features with soft real-time requirements in the comfort domain, up to safety-critical features with hard real-time requirements in the chassis or power train domain. Therefore, various requirements and very diverse system objectives have to be satisfied during runtime.
By using a multi-layered control architecture it is possible to manage the complexity and heterogeneity of modern vehicle electronics and to enable adaptivity and self-x properties. To achieve a high degree of dependability and a quick reaction to changes, we use different criteria for partitioning the automotive embedded system into clusters (see Figure 7):
[Figure 7 shows the cluster hierarchy: the Vehicle Cluster on the top layer; Safety Clusters (SIL 0-4) on the next layer; Network Clusters (PT-CAN, K-CAN, FlexRay, MOST) below them; Feature Clusters (e.g. Engine Control, ESP, Keyless Entry, AuxIn, Parking Assistant) on the following layer; and Service Clusters with their individual Functions on the lowest layer.]

Fig. 7. Example of a hierarchical multi-layered architecture for today’s automotive
embedded systems
In a first step, the whole system (Vehicle Cluster on the top layer) is divided into the five
Safety Integrity Levels (SIL 0-4) (International Electrotechnical Commission (IEC), 1998),
because features with the same requirements on functional safety can be managed using the
same algorithms and reconfiguration mechanisms. Nowadays, this classification is more
appropriate than the traditional division into different automotive software domains
because most new driver-assistance features do not fit into this domain-separated
classification anymore.
In a second partitioning, the system is divided according to the physical location of the vehicle's features, i.e. according to the network bus each feature is designed for. This layer is added so that

all features with the same or similar communication requirements (e.g. required bandwidth)
and real-time requirements can be controlled in the same way.
On the next layer, each Network Cluster is divided into the different features which are
communicating using this vehicle network bus. Hence, each feature is controlled by its own
control loop, managing its individual requirements and system objectives.
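
To make the partitioning concrete, the following minimal Python sketch (all class, field and cluster names here are illustrative assumptions, not part of the original architecture specification) represents the layered clustering as a tree in which every cluster carries the criterion of its layer and is managed by its own control loop:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """One node of the multi-layered control hierarchy; each cluster
    is managed by its own control loop (names are illustrative)."""
    name: str
    criterion: str                       # partitioning criterion of its layer
    children: list = field(default_factory=list)

vehicle = Cluster("Vehicle", "whole system", [
    Cluster("SIL 3", "safety integrity level", [
        Cluster("FlexRay", "network bus", [
            Cluster("ESP", "feature"),   # feature with an individual loop
        ]),
    ]),
])
```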
Most features within the automotive domain are composed of several software components
as well as sensors and actuators. One example is the Adaptive Cruise Control (ACC) feature
which can automatically adjust the car’s speed to maintain a safe distance to the vehicle in
front. This is achieved through a radar headway sensor to detect the position and the speed
of the leading vehicle, a digital signal processor and a longitudinal controller for calculating

References

AUTOSAR: https://www.autosar.org.
Cai, L. & Gajski, D. (2003). Transaction level modeling: an overview, Proceedings of the 1st IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS '03), pp. 19–24.
CAR 2 CAR Communication Consortium (2010).
Cattrysse, D. & Van Wassenhove, L. (1990). A survey of algorithms for the generalized
assignment problem, Erasmus University, Econometric Institute.
Cimatti, A., Pecheur, C. & Cavada, R. (2003). Formal verification of diagnosability via symbolic model checking, Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), pp. 363–369.
Cuenot, P., Frey, P., Johansson, R., Lönn, H., Reiser, M., Servat, D., Koligari, R. & Chen, D. (2008). Developing Automotive Products Using the EAST-ADL2, an AUTOSAR Compliant Architecture Description Language, Embedded Real-Time Software Conference, Toulouse, France.
Czarnecki, K. & Eisenecker, U. (2000). Generative programming: methods, tools, and applications,
Addison-Wesley.
Dinkel, M. (2008). A Novel IT-Architecture for Self-Management in Distributed Embedded
Systems, PhD thesis, TU Munich.
Dinkel, M. & Baumgarten, U. (2007). Self-configuration of vehicle systems - algorithms and
simulation, WIT ’07: Proceedings of the 4th International Workshop on Intelligent
Transportation, pp. 85–91.
EAST-ADL2 (2010). Profile Specification 2.1 RC3 (Specification_2010-06-02.pdf).
FlexRay Consortium (2005). The FlexRay Communications System Specifications Version
2.1.
Fürst, S. (2010). Challenges in the design of automotive software, Proceedings of Design,
Automation, and Test in Europe (DATE 2010).
Geihs, K. (2008). Selbst-adaptive Software, Informatik Spektrum 31(2): 133–145.
Hardung, B., Kölzow, T. & Krüger, A. (2004). Reuse of software in distributed embedded automotive systems, Proceedings of the 4th ACM International Conference on Embedded Software, pp. 203–210.
Hofmann, P. & Leboch, S. (2005). Evolutionäre Elektronikarchitektur für Kraftfahrzeuge
(Evolutionary Electronic Systems for Automobiles), it-Information Technology
47(4/2005): 212–219.
Hofmeister, C. (1993). Dynamic reconfiguration of distributed applications, PhD thesis,
University of Maryland, Computer Science Department.
Horn, P. (2001). Autonomic computing: IBM’s perspective on the state of information
technology, IBM Corporation 15.
IEEE (2005). IEEE Standard 1666-2005 - SystemC Language Reference Manual.
International Electrotechnical Commission (IEC) (1998). IEC 61508: Functional safety of Electrical/Electronic/Programmable Electronic (E/E/PE) safety related systems.
Kephart, J. O. & Chess, D. M. (2003). The vision of autonomic computing, Computer 36(1): 41–50.
McKinley, P. K., Sadjadi, S. M., Kasten, E. P. & Cheng, B. H. (2004). Composing adaptive
software, IEEE Computer 37(7): 56–64.
Mogul, J. (2005). Emergent (Mis)behavior vs. Complex Software Systems, Technical report,
HP Laboratories Palo Alto.
MOST Cooperation (2008). MOST Specification Rev. 3.0. http://www.mostcooperation.com/.
Mühl, G., Werner, M., Jaeger, M., Herrmann, K. & Parzyjegla, H. (2007). On the definitions
of self-managing and self-organizing systems, KiVS 2007 Workshop:
Selbstorganisierende, Adaptive, Kontextsensitive verteilte Systeme (SAKS 2007).
Müller-Schloer, C. (2004). Organic computing: on the feasibility of controlled emergence,
CODES+ISSS ’04: Proceedings of the 2nd IEEE/ACM/IFIP international conference on
Hardware/software codesign and system synthesis, ACM, pp. 2–5.
Open SystemC Initiative (OSCI) (2010). SystemC.
OSEK VDX Portal (n.d.).
Pretschner, A., Broy, M., Kruger, I. & Stauner, T. (2007). Software engineering for automotive
systems: A roadmap, Future of Software Engineering (FOSE ’07) pp. 55–71.
Robert Bosch GmbH (1991). CAN Specification Version 2.0. http://www.semiconductors.bosch.de/pdf/can2spec.pdf.
Robertson, P., Laddaga, R. & Shrobe, H. (2001). Self-adaptive software, Proceedings of the 1st
international workshop on self-adaptive software, Springer, pp. 1–10.
Schmeck, H. (2005). Organic computing - a new vision for distributed embedded systems,
ISORC ’05: Proceedings of the Eighth IEEE International Symposium on Object-Oriented
Real-Time Distributed Computing, IEEE Computer Society, pp. 201–203.
Serugendo, G., Foukia, N., Hassas, S., Karageorgos, A., Mostéfaoui, S., Rana, O., Ulieru, M.,
Valckenaers, P. & Aart, C. (2004). Self-organisation: Paradigms and Applications,
Engineering Self-Organising Systems pp. 1–19.

Teich, J., Haubelt, C., Koch, D. & Streichert, T. (2006). Concepts for self-adaptive automotive control architectures, Friday Workshop Future Trends in Automotive Electronics and Tool Integration (DATE '06).
Trumler, W., Helbig, M., Pietzowski, A., Satzger, B. & Ungerer, T. (2007). Self-configuration and self-healing in AUTOSAR, 14th Asia Pacific Automotive Engineering Conference (APAC-14).
Urmson, C. & Whittaker, W. R. (2008). Self-driving cars and the urban challenge, IEEE
Intelligent Systems 23: 66–68.
Weiss, G., Zeller, M., Eilers, D. & Knorr, R. (2009). Towards self-organization in automotive
embedded systems, ATC ’09: Proceedings of the 6th International Conference on
Autonomic and Trusted Computing, Springer-Verlag, Berlin, Heidelberg, pp. 32–46.
Williams, B. C., Nayak, P. P. & Nayak, U. (1996). A model-based approach to reactive self-configuring systems, Proceedings of AAAI-96, pp. 971–978.
Wolf, T. D. & Holvoet, T. (2004). Emergence and self-organisation: a statement of similarities
and differences, Lecture Notes in Artificial Intelligence, Springer, pp. 96–110.
Zadeh, L. (1963). On the definition of adaptivity, Proceedings of the IEEE 51(3): 469–470.
Zeller, M., Weiss, G., Eilers, D. & Knorr, R. (2009). A multi-layered control architecture for
self-management in adaptive automotive systems, ICAIS ’09: Proceedings of the 2009
International Conference on Adaptive and Intelligent Systems, IEEE Computer Society,
Washington, DC, USA, pp. 63–68.
4D Ground Plane Estimation Algorithm for Advanced Driver Assistance Systems

Faisal Mufti¹, Robert Mahony¹ and Jochen Heinzmann²
¹Australian National University, ²Seeing Machines Ltd., Australia
1. Introduction
Over the last two decades there has been a significant improvement in automotive design, technology and comfort standards, along with safety regulations and requirements. At the same time, growth in population and a steady increase in the number of road users have resulted in a rise in the number of accidents involving both automotive users and pedestrians. According to the World Health Organization, road traffic accidents, including auto accidents and personal injury collisions, account for the deaths of an estimated 1.2 million people worldwide each year, with 50 million or more suffering injuries (Organization, 2009). These figures are expected to grow by 20% within the next 20 years (Peden et al., 2004). In the European Union alone the imperative need for Advanced Driver Assistance Systems (ADAS) sensors can be gauged from the fact that every day the total number of people killed on Europe's roads is almost the same as the number of people killed in a single medium-haul plane crash (Commission, 2001), with third-party road users (pedestrians, cyclists, etc.) comprising the bulk of these fatalities (see Figure 1 for the proportion of road injuries) (Sethi, 2008). This translates into a direct and indirect cost on society, including physical and psychological damage to families and victims, with an economic cost of 160 billion euros
annually (Commission, 2008). These statistics provide a strong motivation to improve the
ADAS ability of automobiles for the safety of both passengers and pedestrians.
The techniques to develop vision-based ADAS depend heavily on the imaging device technology that provides continuous updates of the surroundings of the vehicle and aids drivers in safe driving.

[Figure 1: pie chart of road traffic injury deaths - Pedestrians 32%, Cars 47%, Motorcycles/Cycles 16%, Others 5%.]
Fig. 1. Proportion of road traffic injury deaths in Europe (2002-2004).
In general these sensors are either spatial devices like monocular CCD cameras and stereo cameras, or other sensor devices such as infrared, laser and time-of-flight sensors. The fusion of multiple sensor modalities has also been actively pursued in the automotive domain (Gern et al., 2000). A recent autonomous vehicle navigation competition, the DARPA (US Defense Advanced Research Projects Agency) Urban Challenge (Baker & Dolan, 2008), has demonstrated a significant surge in efforts by major automotive companies and research centres in their ability to produce ADAS that are capable of driving autonomously in urban terrain.
Range image devices based on the principle of time-of-flight (TOF) (Xu et al., 1998) are robust against shadow, brightness and poor visibility, making them ideal for use in automotive applications. Unlike laser scanners (such as LIDAR or LADAR) that traditionally require multiple scans, 3D TOF cameras are suitable for video data gathering and processing systems, especially in the automotive field, which often requires 3D data at video frame rate. 3D TOF cameras are becoming popular for automotive applications such as parking assistance (Scheunert et al., 2007), collision avoidance (Vacek et al., 2007), obstacle detection (Bostelman et al., 2005) as well as the key task of ground plane estimation for on-road obstacle and obstruction avoidance algorithms (Meier & Ade, 1998; Fardi et al., 2006).
The task of obstacle avoidance has normally been approached by either (a) directly detecting obstacles (or vehicles) and pedestrians or (b) estimating the ground plane and locating obstacles from the road geometry. Ground plane estimation has been tackled using methods such as least squares (Meier & Ade, 1998), partial weighted eigen methods (Wang et al., 2001), Hough transforms (Kim & Medioni, 2007), and Expectation Maximization (Liu et al., 2001), amongst others. Computationally expensive semantic or scene-constraint approaches (Cantzler et al., 2002; Nüchter et al., 2003) have also been used for segmenting planar features. However, these methods work well for dense 3D point clouds and are appropriate for laser range data. A statistical framework of RANdom SAmple Consensus (RANSAC) for segmentation and robust model fitting using range data is also discussed in the literature (Bolles & Fischler, 1981). Existing work applying RANSAC to 3D data for plane fitting uses a single frame of data (Bartoli, 2001; Hongsheng & Negahdaripour, 2004) or tracking of data points (Yang et al., 2006), and does not exploit the temporal aspect of 3D video data.
In this work, we have formulated a spatio-temporal RANSAC algorithm for ground plane estimation using 3D video data. The TOF camera/sensor provides 3D spatial data at video frame rate and is recorded as a video stream. We model a planar 3D feature comprising two spatial directions and one temporal direction in 4D. We consider a linear motion model for the camera. In order that the resulting feature is planar in the full spatio-temporal representation, we require that the camera rotation axis lies along the normal to the ground plane, an assumption that is naturally satisfied for the automotive application considered. A minimal set of data consisting of four points is chosen randomly amongst the spatio-temporal data points. From these points, three independent vector directions lying in the spatio-temporal planar feature are computed. A model for the 3D planar feature is obtained by computing the 4D cross product of the vector directions. The resulting model is scored in the standard manner of the RANSAC algorithm and the best model is used to identify inlier and outlier points. The final planar model is obtained as a maximum likelihood (ML) estimate derived from inlier data, where the noise is assumed to be Gaussian. By utilizing data from a sequence of temporally separated image frames, the algorithm robustly identifies the ground plane even when the ground plane is mostly obscured by passing pedestrians or cars and in the presence of walls (hazardous planar surfaces) and other obstructions.
[Figure 2 schematic: an IR source illuminates the 3D scene with a modulated signal; the reflected signal is correlated in a CMOS sensor matrix sampled at 0°, 90°, 180° and 270°; signal generator/modulator and signal processing sit within the same housing unit; 3D data is displayed at up to 25 frames/sec; the round-trip delay is t_d = 2r/c.]
Fig. 2. Basic principle of TOF 3D imaging system.
The fast segmentation of the obstacles is achieved using the statistical distribution of the feature and then employing a statistical threshold. The proposed algorithm is simple, as no spatio-temporal tracking of data points is required. It is computationally inexpensive, without the need for image/feature selection, calibration or scene constraints, and is easy to implement in the fewest possible steps.
This chapter is organized as follows: Section 2 describes the time-of-flight camera/sensor technology, Section 3 presents the structure and motion model constraints for a planar feature, Section 4 describes the formulation of the spatio-temporal RANSAC algorithm, Section 5 describes an application of the framework and Section 6 presents experimental results and discussion, followed by the conclusion in Section 7.
2. Time-of-flight camera
Time-of-Flight (TOF) sensors estimate distance to a target using the time of flight of a modulated infrared (IR) wave between the sender and the receiver (see Fig. 2). The sensor illuminates the scene with a modulated infrared waveform that is reflected back by the objects, and a CMOS (complementary metal-oxide-semiconductor) based lock-in CCD (charge-coupled device) sensor samples four times per period. With precise knowledge of the speed of light $c$, each of these ($64 \times 48$) smart pixels, known as Photonic Mixer Devices (PMD) (Xu et al., 1998), measures four samples $a_0, a_1, a_2, a_3$ at quarter-wavelength intervals. The phase $\varphi$ of the reflected wave is computed by (Spirig et al., 1995)

$$ \varphi = \arctan\left( \frac{a_0 - a_2}{a_1 - a_3} \right). $$
The amplitude $A$ (of reflected IR light) and the intensity $B$ representing the gray scale image returned by the sensor are respectively given by

$$ A = \frac{\sqrt{(a_0 - a_2)^2 + (a_1 - a_3)^2}}{2}, \qquad B = \frac{a_0 + a_1 + a_2 + a_3}{4}. $$
With the measured phase $\varphi$, known modulation frequency $f_{mod}$ and precise knowledge of the speed of light $c$, it is possible to measure the unambiguous distance $r$ from the camera,

$$ r = \frac{c\,\varphi}{4\pi f_{mod}}. \qquad (1) $$
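
As a hedged illustration of the sample equations above and (1), the following minimal Python sketch (the array names and the use of arctan2 for quadrant-correct phase are implementation assumptions) converts the four PMD samples into phase, amplitude, intensity and range:

```python
import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 20e6   # modulation frequency of the PMD 3k-S (Hz)

def tof_measurements(a0, a1, a2, a3):
    """Phase, amplitude, intensity and range from the four
    quarter-period samples of each PMD pixel."""
    phi = np.mod(np.arctan2(a0 - a2, a1 - a3), 2 * np.pi)  # phase of reflection
    A = np.sqrt((a0 - a2) ** 2 + (a1 - a3) ** 2) / 2.0     # reflected amplitude
    B = (a0 + a1 + a2 + a3) / 4.0                          # gray-scale intensity
    r = C * phi / (4 * np.pi * F_MOD)                      # range, Eq. (1)
    return phi, A, B, r

# maximum unambiguous range: r_max = c / (2 f_mod) = 7.5 m
r_max = C / (2 * F_MOD)
```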
[Figure 3 schematic: a 3D point is observed by the PMD sensor at pixel coordinates (x, y) with range r and focal length f; camera frame axes (X_i, Y_i, Z_i), world frame axes (X_w, Y_w, Z_w) and the image projection are indicated.]
Fig. 3. Time-of-Flight sensor geometry
With a modulation wavelength of $\lambda_{mod}$, this leads to a maximum possible unambiguous range of $\lambda_{mod}/2$. For a typical camera such as the PMD 3k-S (PMD, 2002), $f_{mod} = 20\,$MHz, and with the speed of light $c$ given by $3 \times 10^8\,$m/s, the non-ambiguous range $r_{max}$ of the TOF camera is given as

$$ r_{max} = \frac{c}{2 f_{mod}} = \frac{3 \times 10^8}{2 \cdot 20 \times 10^6} = 7.5\ \text{meters}. $$
The sensor returns a range value $r$ for each pixel as a function of the pixel coordinates $(x, y)$, as shown in Fig. 3. The range values are used to compute the 3D position $X(X, Y, Z)$ of the point:

$$ Z = \frac{r(x, y)\, f}{\sqrt{f^2 + x^2 + y^2}}; \quad X = Z\,\frac{x}{f}; \quad Y = Z\,\frac{y}{f}, \qquad (2) $$

where $f$ is the focal length of the camera.
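
A hedged sketch of the back-projection in (2); the centred pixel grid and the convention that pixel coordinates share the units of the focal length $f$ are assumptions:

```python
import numpy as np

def range_to_3d(r, f):
    """Back-project a (H x W) range image into 3D camera coordinates, Eq. (2).
    Pixel coordinates are assumed centred on the optical axis and expressed
    in the same units as the focal length f."""
    H, W = r.shape
    x, y = np.meshgrid(np.arange(W) - W / 2.0, np.arange(H) - H / 2.0)
    Z = r * f / np.sqrt(f ** 2 + x ** 2 + y ** 2)
    X = Z * x / f
    Y = Z * y / f
    return np.stack([X, Y, Z], axis=-1)   # (H, W, 3) point cloud
```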
3. Structure and motion constraints
In this section we discuss the motion model and the planar feature parameters essential to derive the spatio-temporal RANSAC formulation for a planar feature.
3.1 Motion model
Consider a TOF camera moving in space. Let $\{i\}$ denote the frame of reference at time stamp $i$, $1 \le i \le n$, attached to the camera. Let $\{W\}$ denote the fixed world reference frame. The rigid body transformation

$$ {}^{W}_{i}M : \mathbb{R}^3 \to \mathbb{R}^3; \quad X_i \mapsto X_W := {}^{W}_{i}R\,X_i + {}^{W}T_i \qquad (3) $$

is defined as the coordinate mapping from frame $\{i\}$ to the world frame $\{W\}$ with rotation ${}^{W}_{i}R$ and translation ${}^{W}T_i$ respectively. Let $\bar{X} \in \mathbb{R}^4$ denote the homogeneous coordinates of $X \in \mathbb{R}^3$; then the transformation (3) in matrix form is given by

$$ {}^{W}_{i}\bar{M} : \mathbb{R}^4 \to \mathbb{R}^4; \qquad (4) $$

$$ \bar{X}_W = \begin{pmatrix} X_W \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{W}_{i}R & {}^{W}T_i \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_i \\ 1 \end{pmatrix} = {}^{W}_{i}\bar{M}\,\bar{X}_i. \qquad (5) $$
Let ${}^{i}_{j}\bar{M}$ be the rigid body mapping from frame $\{j\}$ to frame $\{i\}$; then

$$ {}^{i}_{j}\bar{M} = {}^{i}_{W}\bar{M}\; {}^{W}_{j}\bar{M} = ({}^{W}_{i}\bar{M})^{-1}\, {}^{W}_{j}\bar{M}. $$

Hence

$$ {}^{i}_{j}\bar{M} = \begin{pmatrix} ({}^{W}_{i}R)^{\top}({}^{W}_{j}R) & ({}^{W}_{i}R)^{\top}({}^{W}T_j - {}^{W}T_i) \\ 0 & 1 \end{pmatrix}. \qquad (6) $$
3.2 Equation of planar feature with linear motion
Let $P$ be a 2D planar feature that is stationary during the video sequence considered. Let $\eta_i \in \{i\}$ be the normal vector to $P$ in frame $\{i\}$; then $\eta_i$ is a direction that transforms between frames of reference as

$$ \eta_i = ({}^{j}_{i}R)^{\top} \eta_j = {}^{i}_{j}R\, \eta_j. \qquad (7) $$

The homogeneous coordinates of a direction (free vector) such as $\eta_i$ are given by $\bar{\eta}_i = \begin{pmatrix} \eta_i \\ 0 \end{pmatrix} \in \mathbb{R}^4$, and

$$ {}^{j}_{i}\bar{M}\, \bar{\eta}_i = {}^{j}_{i}\bar{M} \begin{pmatrix} \eta_i \\ 0 \end{pmatrix} = \begin{pmatrix} {}^{j}_{i}R\, \eta_i \\ 0 \end{pmatrix} = \bar{\eta}_j. \qquad (8) $$
Let $X_i, X_j \in P$ be different elements of the planar feature $P$ observed in different frames $\{i\}$ and $\{j\}$. Note that $X_i \neq {}^{i}_{j}M X_j$ in general, as the points do not correspond to the same physical point in the plane; however, $(X_i, {}^{i}_{j}M X_j)$ must both lie in $P$ in $\{i\}$. Since $\eta_i$ is a normal to $P$ in $\{i\}$, one has

$$ \left\langle (\bar{X}_i - {}^{i}_{j}\bar{M}\, \bar{X}_j),\ \bar{\eta}_i \right\rangle = 0. \qquad (9) $$
Thus

$$ \left\langle \begin{pmatrix} X_i \\ 1 \end{pmatrix} - \begin{pmatrix} ({}^{W}_{i}R)^{\top}({}^{W}_{j}R)X_j + ({}^{W}_{i}R)^{\top}({}^{W}T_j - {}^{W}T_i) \\ 1 \end{pmatrix},\ \begin{pmatrix} \eta_i \\ 0 \end{pmatrix} \right\rangle = 0 $$

$$ \left\langle X_i - ({}^{W}_{i}R)^{\top}({}^{W}_{j}R)X_j - ({}^{W}_{i}R)^{\top}({}^{W}T_j - {}^{W}T_i),\ \eta_i \right\rangle = 0 $$

$$ \left\langle X_i - ({}^{W}_{i}R)^{\top}({}^{W}_{j}R)X_j,\ \eta_i \right\rangle - \left\langle ({}^{W}_{i}R)^{\top}({}^{W}T_j - {}^{W}T_i),\ \eta_i \right\rangle = 0 $$

$$ \left\langle X_i,\ \eta_i \right\rangle - \left\langle X_j,\ ({}^{W}_{j}R)^{\top}({}^{W}_{i}R)\,\eta_i \right\rangle - \left\langle {}^{W}T_j - {}^{W}T_i,\ ({}^{W}_{i}R)\,\eta_i \right\rangle = 0. \qquad (10) $$
Let $V \in \{W\}$ denote the linear velocity; then the rigid body dynamics for a moving body (an automobile) is modelled by

$$ \dot{T} = V; \quad T(0) = T_1, \qquad \dot{R} = \widehat{\omega} R; \quad R(0) = R_1, \qquad (11) $$

where $\omega \in \{W\}$ is the angular velocity and $\widehat{\omega} \in \mathbb{R}^{3 \times 3}$ denotes the skew-symmetric matrix that corresponds to the vector cross product operation in 3D.

Assumption: We assume that the angular velocity $\omega$ of the camera is parallel to $\eta \in \{W\}$, the normal to the ground plane, at all times, and that the translational velocity $V$ in the direction normal to the ground plane is constant, such that

$$ \eta \times \omega = 0 \quad \text{and} \quad \langle V, \eta \rangle = \text{constant}, \qquad (12) $$

where $\times$ represents the cross product between two vectors. For normal motion of a vehicle, roll and pitch rotations are negligible compared to the yaw motion associated with the angular velocity of the turning vehicle (Gracia et al., 2006), and this corresponds to the common ground-plane constraint (GPC) (Sullivan, 1994) (see Figure 4).

Fig. 4. Vehicle with roll, pitch and dominant yaw motion.

In real environments, for motion captured at nearly video frame rate, the piecewise linear velocity along the normal direction can be assumed constant, as evident from the experiments
in Section 4. This is to be expected in the case where the camera is attached to a vehicle that
moves on a plane P, precisely the case for the automotive example considered. In practice, this
degree of motion is important to model situations where the car suspension is active and is
also used to identify non-ground features that the vehicle may be approaching with constant
velocity.
As a consequence of (12),

$$ \omega = s(t)\,\eta \in \{W\}; \quad s : \mathbb{R} \to \mathbb{R} \text{ in time } t. \qquad (13) $$

Following (13) one can re-write (11) as

$$ \dot{R} = s(t)\,\widehat{\eta}\,R; \quad R(0) = R_1. $$

Therefore the continuous rotation motion $R(t) : \mathbb{R} \to SO(3)$ for the automobile trajectory is expressed as

$$ R(t) = \exp(\theta(t_i)\,\widehat{\eta}\,)\,R_1; \qquad \theta(t_i) = \int_0^{t_i} s(\tau)\,d\tau, \qquad (14) $$

where $t_i$ is the time at frame $\{i\}$ and ${}^{W}_{i}R = R(t_i)$.

By definition
W
i

i
= η and hence,
η
i
=
W
i
R

η
= R

1
exp(θ(t
i
)

η
)

η
= R

1
η = η
1

(15)
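
As a quick numerical sanity check of (14) and (15) (a sketch, assuming an arbitrary initial attitude $R_1$ and using SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Skew-symmetric matrix with hat(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

eta = np.array([0.0, 0.0, 1.0])              # assumed world ground-plane normal
R1 = expm(hat(np.array([0.1, -0.2, 0.3])))   # arbitrary initial attitude

theta = 0.7                                  # integrated yaw angle, Eq. (14)
R_t = expm(theta * hat(eta)) @ R1            # R(t_i) = exp(theta * hat(eta)) R1

# Eq. (15): the plane normal expressed in the camera frame is constant
assert np.allclose(R_t.T @ eta, R1.T @ eta)
```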
Using (15), we can re-write (10) as

$$ \left\langle X_i - X_j,\ \eta_1 \right\rangle - \left\langle {}^{W}T_j - {}^{W}T_i,\ \eta \right\rangle = 0. \qquad (16) $$
We assume the frames are taken at a constant time interval $\delta t$ and hence $t_i = \delta t\,(i - 1) + t_1$. Since $\langle V, \eta \rangle$ is constant and $t_1 = 0$, the linear translational motion ${}^{W}T_i$ satisfies

$$ \left\langle {}^{W}T_i,\ \eta \right\rangle = \langle V, \eta \rangle\,\delta t\,(i - 1) + \left\langle {}^{W}T_1,\ \eta \right\rangle. \qquad (17) $$

Using assumption (12), define $\alpha \in \mathbb{R}$ to be

$$ \alpha = \langle V, \eta \rangle\,\delta t = \text{constant}. \qquad (18) $$
Thus, from (16) and (17), the structure and motion constraint that $X_i, X_j$ lie in the plane $P$ can be expressed as

$$ \left\langle X_i - X_j,\ \eta_1 \right\rangle - \alpha\,(j - i) = 0. \qquad (19) $$

This is an equation for a plane $P$ parameterized by $\eta_1 \in S^2$ ($\|\eta_1\| = 1$) and motion parameter $\alpha \in \mathbb{R}$. An additional parameter, the distance $h \in \mathbb{R}$ of the plane $P$ from the origin in frame $\{1\}$ in the direction $\eta_1$, completes the structure and motion constraints of the planar feature. Note that $\alpha$ is the component of translational camera velocity in the direction normal to the planar feature $P$. The component $\alpha$ will be the defining parameter for the temporal component of the 3D planar feature that is identified in the RANSAC algorithm (see Section 4).

Let $\bar{\bar{X}}_i$ be a 4D spatio-temporal coordinate that incorporates both the spatial coordinates $X_i$ and a reference to the frame index or time coordinate $i$:

$$ \bar{\bar{X}}_i = \begin{pmatrix} X_i \\ i \end{pmatrix}. \qquad (20) $$

Associated with this we define a normal vector that incorporates the spatial normal direction $\eta_1$ and the motion parameter $\alpha$:

$$ \bar{\bar{\eta}} = \begin{pmatrix} \eta_1 \\ \alpha \end{pmatrix}. \qquad (21) $$

Using these definitions, (19) may be re-written as

$$ \left\langle \bar{\bar{X}}_i - \bar{\bar{X}}_j,\ \bar{\bar{\eta}} \right\rangle = 0. \qquad (22) $$
4. Spatio-temporal RANSAC algorithm
In this section we present the spatio-temporal RANSAC algorithm and compute a 3D
spatio-temporal planar hypothesis based on the structure and motion model derived in
Section 3.2 and a minimal data set.
4.1 Computing a spatio-temporal planar hypothesis
Equation (19) provides a constraint that $(\bar{\bar{X}}_i - \bar{\bar{X}}_j) \in \mathbb{R}^4$ lies in the 3D spatio-temporal planar feature $P$ in $\mathbb{R}^4$ with parameters $\eta_1 \in S^2$, $\alpha \in \mathbb{R}$ and $h \in \mathbb{R}$. Given a sample of four points $\{\bar{\bar{X}}_{i_1}, \bar{\bar{X}}_{i_2}, \bar{\bar{X}}_{i_3}, \bar{\bar{X}}_{i_4}\}$, one can construct a normal vector $\bar{\bar{\eta}}$ to $P$ by taking the 4D cross product (see Appendix A)

$$ \bar{\bar{\eta}}_o = \mathrm{cross}_4\!\left( \bar{\bar{X}}_{i_1} - \bar{\bar{X}}_{i_2},\ \bar{\bar{X}}_{i_1} - \bar{\bar{X}}_{i_3},\ \bar{\bar{X}}_{i_1} - \bar{\bar{X}}_{i_4} \right) \in \mathbb{R}^4, \qquad (23) $$

where $\bar{\bar{X}}_i \in \{\{1\}, \ldots, \{n\}\}$. To apply the constraint $\eta_1 \in S^2$ we normalize $\bar{\bar{\eta}}_o = (\bar{\bar{\eta}}_o^x, \bar{\bar{\eta}}_o^y, \bar{\bar{\eta}}_o^z, \bar{\bar{\eta}}_o^t)$ by

$$ \bar{\bar{\eta}} = \frac{1}{\beta}\,\bar{\bar{\eta}}_o; \qquad \beta = \sqrt{(\bar{\bar{\eta}}_o^x)^2 + (\bar{\bar{\eta}}_o^y)^2 + (\bar{\bar{\eta}}_o^z)^2}. \qquad (24) $$

The resulting estimate $\bar{\bar{\eta}} = (\eta_1, \alpha)$ is an estimate of the normal $\eta_1 \in S^2$ and of $\alpha$, the normal-vector component of translation velocity (18).

Note that the depth parameter $h$ can be determined by

$$ h = \left\langle X_i,\ \eta_1 \right\rangle - \alpha\,(i - 1). \qquad (25) $$

However, the parameter $h$ is not required for the robust estimation phase of the RANSAC algorithm and is evaluated in the second phase where a refined model is estimated.
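
The hypothesis step (23)-(24) can be sketched as follows; the cofactor-determinant form of cross_4 matches the Levi-Civita definition in Appendix A, and the function names are our own:

```python
import numpy as np

def cross4(a, b, c):
    """4D cross product (Appendix A): the direction orthogonal to a, b, c,
    via cofactor expansion of det([e; a; b; c]) along the basis row."""
    M = np.array([a, b, c], dtype=float)              # 3 x 4
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(4)])

def plane_hypothesis(X1, X2, X3, X4):
    """Normal hypothesis (eta_1, alpha) from four 4D spatio-temporal
    points (x, y, z, frame index), Eqs. (23)-(24)."""
    eta_o = cross4(X1 - X2, X1 - X3, X1 - X4)
    beta = np.linalg.norm(eta_o[:3])   # normalize so the spatial part is on S^2
    return eta_o / beta
```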
[Figure 5: histogram of distance error (x-axis, −0.6 to 0.6) against frequency (y-axis).]
Fig. 5. Statistical distribution of planar feature data points derived from experimental data documented in Section 6.

4.2 Statistical distribution of 4D data points
The spatio-temporal data points that have a probability $p$ of lying in the planar feature are defined as inliers. Due to Gaussian noise in the range measurements of the TOF camera, the distances of these inliers from the model (planar feature) have a Gaussian distribution $N(0, \sigma)$, as shown in Fig. 5.

As a consequence, the point square distance $a^2$,

$$ a^2 = \left\langle (\bar{\bar{X}} - \bar{\bar{X}}_{i_1}),\ \bar{\bar{\eta}} \right\rangle^2; \qquad \bar{\bar{X}} \in \text{all spatio-temporal data points}, $$

of the inliers (Hartley & Zisserman, 2003) from the planar feature associated with the data point $\bar{\bar{X}}_i$ has a chi-squared distribution $\chi^2$. Since we consider a spatio-temporal planar feature, there are three degrees of freedom in the chi-squared distribution. Let $F_{\chi^2_3}$ denote the cumulative distribution function of the chi-squared distribution with three degrees of freedom $\chi^2_3$; then one can define the threshold coefficient $q^2$ by

$$ q^2 = F^{-1}_{\chi^2_3}(p)\,\sigma^2. \qquad (26) $$

Thus, the statistical test for inliers is defined by

$$ \begin{cases} \text{inliers} & a^2 < q^2 \\ \text{outliers} & a^2 \ge q^2. \end{cases} \qquad (27) $$
In the experiments documented in Section 6, we use a value of $p = 0.95$. In this case the threshold is $q^2 = 7.81\sigma^2$, where $\sigma$ is determined empirically. Spatial ground plane estimation algorithms using single 3D images (Cantzler et al., 2002; Bartoli, 2001; Hongsheng & Negahdaripour, 2004) are associated with a chi-squared distribution with two degrees of freedom, since they lack the temporal dimension. As a result the same analysis leads to a threshold of $q^2 = 5.99\sigma^2$ (for $p = 0.95$). The additional threshold margin for the proposed spatio-temporal algorithm quantifies the added robustness that comes from incorporating the temporal dimension, along with the additional data made available by incorporating multiple images from the video stream. This leads to a significant improvement in robustness and performance of the proposed algorithm over single-image techniques. The resulting spatio-temporal RANSAC algorithm is outlined in Algorithm 1.
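
The quoted threshold coefficients can be reproduced with SciPy's chi-squared quantile function; a minimal sketch (the value of σ below is a placeholder, since σ is determined empirically):

```python
from scipy.stats import chi2

sigma = 0.05  # empirical range-noise standard deviation (placeholder value)
p = 0.95      # inlier probability

q2_4d = chi2.ppf(p, df=3) * sigma ** 2   # spatio-temporal: 7.81 * sigma^2
q2_3d = chi2.ppf(p, df=2) * sigma ** 2   # single-image:    5.99 * sigma^2
```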
5. Application
The planar feature estimation algorithm in 4D is an approach that can be utilized in multiple scenarios within the automotive domain. Since the dominating planar feature for a vehicle is the road, we have presented an application of the proposed algorithm for robust ground plane estimation and detection.

A constant normal velocity component $\alpha$ (18) helps to detect the ground plane, due to the fact that the piecewise linear velocity in the normal direction of the vehicle motion is small and constant over the number of frames recorded at frame rate. Detection of the ground plane in the spatio-temporal domain provides an added advantage for cases where there is occlusion and single-frame detection is not possible. Section 6 presents a number of examples of ground plane estimation.
Algorithm 1: Pseudo-code for the spatio-temporal RANSAC algorithm

Initialization: Choose a probability $p$ of inliers. Initialize a sample count $m = 0$ and the trial process $N = \infty$.

repeat
a. Select at random 4 spatio-temporal points $(\bar{\bar{X}}_{i_1}, \bar{\bar{X}}_{i_2}, \bar{\bar{X}}_{i_3}, \bar{\bar{X}}_{i_4})$.
b. Compute the temporal normal vector $\bar{\bar{\eta}}$ according to (23) and (24).
c. Evaluate the spatio-temporal constraint (22) to develop a consensus set $C_m$ consisting of all data points classified as inliers according to (27).
d. Update $N$ to estimate the number of trials required to have a probability $p$ that the selected random sample of 4 points is free from outliers (Fischler & Bolles, 1981),
$$ N = \log(1 - p) \Big/ \log\!\left( 1 - \left( \tfrac{\text{number of inliers}}{\text{number of points}} \right)^{4} \right). $$
until at least $N$ trials are complete

Select the consensus set $C^{*}_{m}$ that has the most inliers.
Optimize the solution by re-estimating from all spatio-temporal data points in $C^{*}_{m}$ by maximizing the likelihood of the function $\phi$

$$ \phi(\bar{\bar{\eta}}, h) = \sum_{\bar{\bar{X}} \in C^{*}_{m}} \left( \langle \bar{\bar{\eta}}, \bar{\bar{X}} \rangle - h \right)^{2} \qquad (28) $$

$$ L(\phi) = \prod_{\bar{\bar{X}} \in C^{*}_{m}} \phi(\bar{\bar{X}} \,|\, \bar{\bar{\eta}}, h); \qquad (\hat{\bar{\bar{\eta}}}, \hat{h}) = \operatorname*{arg\,max}_{\bar{\bar{\eta}}, h}(L), $$

where we assume a normal distribution in observed depth.
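
A compact, hedged sketch of Algorithm 1, reusing plane_hypothesis from the earlier snippet; the SVD-based refinement stands in for the ML step under the Gaussian depth-noise assumption:

```python
import numpy as np

def st_ransac(points, q2, p=0.95, max_trials=1000, rng=None):
    """Spatio-temporal RANSAC over 4D points (x, y, z, frame index).
    points: (n, 4) array; q2: inlier threshold from Eq. (26).
    Returns the refined 4D normal (eta_1, alpha) and plane offset h."""
    rng = rng or np.random.default_rng()
    n = len(points)
    best_inliers, N, trials = None, np.inf, 0
    while trials < N and trials < max_trials:
        trials += 1
        sample = points[rng.choice(n, 4, replace=False)]
        eta = plane_hypothesis(*sample)              # Eqs. (23)-(24)
        if not np.all(np.isfinite(eta)):
            continue                                 # degenerate sample
        # squared distances, measured from one sample point, Eq. (22)
        a2 = ((points - sample[0]) @ eta) ** 2
        inliers = points[a2 < q2]                    # Eq. (27)
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
        w = len(inliers) / n                         # inlier ratio
        denom = np.log(1 - min(w ** 4, 1 - 1e-12))   # trial-count update
        if denom < 0:
            N = np.log(1 - p) / denom
    # refinement: fit <eta, X> = h to the consensus set in the total
    # least-squares sense (ML under the Gaussian depth-noise assumption)
    X = best_inliers
    centroid = X.mean(axis=0)
    eta = np.linalg.svd(X - centroid)[2][-1]         # smallest singular vector
    eta /= np.linalg.norm(eta[:3])                   # keep eta_1 on S^2
    return eta, eta @ centroid
```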
An obstacle detection algorithm can be applied once a robust estimate of the planar ground surface is available. In the proposed framework, the algorithm evaluates each spatio-temporal data point and categorizes traversable and non-traversable objects or obstacles. Traversable objects are the points that can be comfortably driven over in a vehicle. We are inspired by a similar method proposed in (Fornland, 1995). The estimated Euclidean distance $\hat{d}$ to the plane for an arbitrary data point $\bar{\bar{X}}$ is defined as

$$ \hat{d} = \left\langle \bar{\bar{X}},\ \hat{\bar{\bar{\eta}}} \right\rangle - \hat{h}. \qquad (29) $$

Objects (in each frame) are segmented from the ground plane by a threshold $\tau_o$ as

$$ \bar{\bar{X}} = \begin{cases} \text{Obstacle} & |\hat{d}| \ge \tau_o \\ \text{Traversable object} & |\hat{d}| < \tau_o, \end{cases} \qquad (30) $$

where $\tau_o$ is set by the user for the application under consideration. This threshold segmentation helps in the reliable segregation of potential obstacles. The allowance of a larger threshold on inliers for plane estimation makes the obstacle detection phase robust for various applications, especially for on-road obstacle detection.
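
The segmentation rule (29)-(30) reduces to a few lines; the function name is ours, and the 0.1 m default mirrors the experiments in Section 6:

```python
import numpy as np

def segment_obstacles(points, eta_hat, h_hat, tau_o=0.1):
    """Label each 4D data point as obstacle (True) or traversable (False),
    Eqs. (29)-(30); tau_o = 0.1 m flags anything ~10 cm above the plane."""
    d = points @ eta_hat - h_hat      # signed distance to the plane, Eq. (29)
    return np.abs(d) >= tau_o         # threshold test, Eq. (30)
```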
6. Experimental results and discussions
Experiments were performed using real video data recorded from a PMD 3k-S TOF camera mounted on a vehicle at an angle varying between 2° and 20° to the ground. The camera records at approximately 20 fps and provides both gray scale and range images in real time. The sensor has a field of view of 33.4° × 43.6°. The video sequences depict scenarios in an undercover car park. In particular, we consider cases with pedestrians, close-by vehicles, obstacles, curbs/footpaths, walls etc. Five experimental scenarios are presented to evaluate the robustness of the algorithm against real objects, and the algorithm is also compared with the standard 3D RANSAC algorithm. The gray scale images shown represent the first and the last frame of the video data. It is not possible to have a 4D visualization environment; therefore a 3D multi-frame representation (each data frame represented in a different color) provides the spatio-temporal video range data. The estimated spatio-temporal planar feature is represented in frame {1}. The final solution is rotated for better visualisation.
In the first set of experiments, shown in Fig. 6 and Table 1 (sequences 1-4), four different scenarios are presented. The first scenario shows multiple walls at varying levels of depth and a ground plane. The algorithm correctly picks the ground plane, rejecting the other planar features. In the next scenario, a truck in close vicinity is obstructing the clear view, but the ground plane has been identified by exploiting the full video sequence of the data. A number of obstacles including cars, a wall and a person are visible while the car is manoeuvring a turn in the third scenario. The algorithm clearly estimates the actual ground plane. In the fourth scenario the result is not perturbed by passing pedestrians and the algorithm robustly identifies the ground plane. In a typical sequence, 8-10 frames of data are enough to resolve the ground plane even in the presence of some kind of occlusion.
In another experiment, shown in Fig. 7a (sequence 5 with single-frame data), the standard RANSAC algorithm is applied using single-frame data for comparison. The obvious failure of the standard RANSAC algorithm is due to the bias of planar data points towards the wall. On the other hand, the proposed algorithm has correctly identified the ground surface in Fig. 7b by simply incorporating more frames (10 frames and |α| = 0.0018), due to the availability of temporal data and without imposing any scene constraint.
Fig. 6. Experimental data shown in a three column format. First two columns show gray
scale image of first and last video frame and third column shows spatio-temporal fit on 4D
data. Each frame of 3D data is represented by a different color. (a-b) Gray scale images of a
double wall and ground plane at turning (c) Spatio-temporal ground plane fitting of 8 frames
at t=1. (d-e) A truck in close vicinity (f) Corresponding spatio-temporal ground plane fit of 10
frames. (g-h) Cars, wall and a person as obstacles at turning. (i) Corresponding
spatio-temporal ground plane fit. (j-k) Pedestrians. (l) Ground plane fit.
Fig. 7. Using data from sequence 5, (a) Standard RANSAC plane fitting algorithm picks the
wall with a single frame data. (b) Spatio-temporal RANSAC algorithm picks the correct
ground plane (10 frames).
The obstacle detection algorithm is applied after robust estimation of the ground plane. In the experiment shown in Fig. 8, pedestrians are segmented with τ_o = 0.1 by the obstacle detection algorithm after correct identification of the ground plane. This threshold implies that objects with a height greater than 10 cm (shown in red) are considered obstacles, whereas data points close to the ground plane are ignored as traversable objects.
The experimental results are straightforward and show excellent performance. The proposed 4D spatio-temporal RANSAC algorithm's computation cost is associated with picking the normal vector to the 3D planar feature by random sampling (note that this is the only computation cost associated with the 4D spatio-temporal RANSAC algorithm). This eliminates any computation cost associated with pre-processing images, unlike conventional algorithms. The experiments were performed on a PC with an Intel Core 2 Duo 3 GHz processor and 2 GB RAM. The algorithm is implemented in MATLAB. The computation cost varies with the number of inliers and the planar surface occlusion in the range data, as shown in Fig. 9.
7. Conclusion
Many vision-based applications use some kind of segmentation and planar surface detection as a preliminary step. In this paper we have presented a robust spatio-temporal RANSAC framework for ground plane detection for use in the ADAS of the automotive industry.

Sequence no   Sequence           No of frames used   |α| (m/frame)
1             Double wall        8                   0.0016
2             Moving truck       10                  0.0017
3             Multiple objects   10                  0.0021
4             Pedestrian         10                  0.0020
5             Front wall         10                  0.0018

Table 1. Experimental data for ground plane estimation
[Figure 8b: histogram of distance error d̂ (x-axis) against frequency (y-axis), with the obstacle region beyond the threshold τ_o.]
Fig. 8. (a) Potential obstacles and pedestrians are shown in red color. (b) Histogram of ground plane and obstacles.
Experimental results validate the structure and motion model of a 3D spatio-temporal planar feature in 4D. Since the algorithm does not involve any tracking or feature selection, it is highly robust, simple and practical to implement. The algorithm is suitable not only for the automotive industry but also for general computer vision applications that satisfy the particular motion constraint (η × ω = 0). This constraint ensures that a spatial planar feature generates a planar feature in the spatio-temporal domain. The spatio-temporal constraints increase reliability in planar surface estimation, which is otherwise susceptible to noisy data in any algorithm operating on single-frame data. Further improvement in computation cost can be achieved through dedicated hardware implementation.
[Figure 9: computation time in seconds (0-1 s) against number of frames (5-10), one curve per sequence (Seq 1-5).]
Fig. 9. Performance plots for Spatio-temporal RANSAC for all the sequences.
8. Appendix A
Given an orthonormal basis $\{e_1, \ldots, e_n\} \in \mathbb{R}^n$ the 'Levi-Civita' ($\varepsilon$) antisymmetric tensor is defined as (Shaw, 1987)

$$ \varepsilon_{i,j,\ldots,n} = \begin{cases} +1 & \text{if } (i, j, \ldots, n) \text{ is an even permutation of } (1, 2, \ldots, n) \\ -1 & \text{if } (i, j, \ldots, n) \text{ is an odd permutation of } (1, 2, \ldots, n) \\ 0 & \text{if } (i, j, \ldots, n) \text{ is not a permutation of } (1, 2, \ldots, n). \end{cases} $$

The cross product of three vectors $a, b, c \in \mathbb{R}^4$ is defined as

$$ \mathrm{cross}_4(a, b, c) = (a \times b \times c) = \sum_{i,j,k,l=1}^{n=4} \varepsilon_{ijkl}\, a_j b_k c_l\, e_i. \qquad (31) $$

The vector cross product of the three vectors in $\mathbb{R}^4$ has the following properties (amongst others):
1. Trilinearity: for $\alpha, \beta, \gamma \in \mathbb{R}$, $\alpha a \times \beta b \times \gamma c = \alpha\beta\gamma\,(a \times b \times c)$.
2. Linear dependence: $\mathrm{cross}_4(a, b, c) = 0$ iff $a, b, c$ are linearly dependent.
3. Orthogonality: let $d = a \times b \times c \Rightarrow \langle d, a \rangle = \langle d, b \rangle = \langle d, c \rangle = 0$.
9. References

Baker, C. & Dolan, J. (2008). Traffic interaction in the urban challenge: Putting boss on its best
behavior, Proc. International Conference on Intelligent Robots and Systems (IROS 2008),
pp. 1752–1758.
Bartoli, A. (2001). Piecewise planar segmentation for automatic scene modeling, Proc. IEEE
Int. Conf. Computer Vision and Pattern Recognition (CVPR ’01).
Bolles, R. C. & Fischler, M. A. (1981). A RANSAC-based approach to model fitting and its
application to finding cylinders in range data, Proc. Seventh Int. Joint Conf. Artificial
Intelligence, pp. 637–643.
446
New Trends and Developments in Automotive System Engineering
4D Ground Plane Estimation Algorithm for Advanced Driver Assistance Systems 15
Bostelman, R., Hong, T. & Madhavan, R. (2005). Towards AGV safety and navigation
advancement obstacle detection using a TOF range camera, Proc. 12th Int. Conf.
Advanced Robotics (ICAR ’05).
Cantzler, H., Fisher, R. B. & Devy, M. (2002). Improving architectural 3D reconstruction by
plane and edge constraining, Proc. British Machine Vision Conf. (BMVC ’02), pp. 43–52.
Commission, E. (2001). European transport policy for 2010: time to decide (white paper), Technical Report COM(2001) 370 final, Commission of the European Communities, Brussels.
Commission, E. (2008). CARS 21 mid-term review high level conference conclusions and
report, Technical report, European Commission, Enterprise and Industry.
Fardi, B., Dousa, J., Wanielik, G., Elias, B. & Barke, A. (2006). Obstacle detection and pedestrian
recognition using a 3D PMD camera, Proc. IEEE Intell. Vehicles Symp., pp. 225–230.
Fischler, M. A. & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM 24(6): 381–395.
Fornland, P. (1995). Direct obstacle detection and motion from spatio-temporal derivatives,
Proc. 6th Intl. Conf. Comp. Anal. of Images and Patterns, pp. 874–879.
Gern, A., Franke, U. & Levi, P. (2000). Advanced lane recognition-fusing vision and radar,
Proc. IEEE Intelligent Vehicles Symposium IV 2000, pp. 45–51.

Gracia, G. A., Jimenez, F., Paez, J. & Narvaez, A. (2006). Theoretical and experimental analysis
to determine the influence of the ageing process of the shock-absorber on safety, Int.
J. Vehicle Design 40(1/2/3): 15–35.
Hartley, R. & Zisserman, A. (2003). Multiple View Geometry in Computer Vision, Cambridge University Press, chapter Estimation - 2D Projective Transformations, p. 118.
Hongsheng, Z. & Negahdaripour, S. (2004). Improved temporal correspondences in stereo-vision by RANSAC, Proc. 17th Int. Conf. Pattern Recognition (ICPR '04), Vol. 4, pp. 52–55.
Kim, E., Medioni, G. & Lee, S. (2007). Planar patch based 3D environment modeling with stereo camera, Proc. 16th IEEE Int. Symp. Robot and Human Interactive Communication, Jeju Island, Korea.
Liu, Y., Emery, R., Chakrabarti, D., Burgard, W. & Thrun, S. (2001). Using EM to learn 3D models of indoor environments with mobile robots, Proc. 18th Int. Conf. Machine Learning, pp. 329–336.
Meier, E. B. & Ade, F. (1998). Object detection and tracking in range image sequences by
separation of image features, Proc. IEEE Int. Conf. Intelligent Vehicles.
Nüchter, A., Surmann, H. & Hertzberg, J. (2003). Automatic model refinement for 3D reconstruction with mobile robots, Proc. Fourth Int. Conf. 3-D Digital Imaging and Modeling (3DIM '03), pp. 394–401.
Organization, W. H. (2009). Global status report on road safety: Time for action, Technical
report, World Health Organization.
Peden, M., Scurfield, R., Sleet, D., Mohan, D., Hyder, A. A., Jarawan, E. & Mathers, C. (2004).
World Report on Road Traffic Injury Prevention, World Health Organization(WHO).
PMD (2002). PMD Tech.
Scheunert, U., Fardi, B., Mattern, N., Wanielik, G. & Keppeler, N. (2007). Free space
determination for parking slots using a 3D PMD sensor, Proc. IEEE Intelligent Vehicles
Symposium, pp. 154–159.
Sethi, D. (2008). Road traffic injuries among vulnerable road users.
Shaw, R. (1987). Vector cross products in n dimensions, Int. J. Math. Educ. Sci. Technol.
18(6): 803–816.
Spirig, T., Seitz, P., Vietze, O. & Heitger, F. (1995). The lock-in CCD - two-dimensional synchronous detection of light, IEEE J. Quantum Electron. 31: 1705–1708.
Sullivan, G. D. (1994). Real-time Computer Vision, Cambridge University Press, Cambridge,
chapter Model-based vision for traffic scenes using the ground plane constraint,
pp. 93–115.
Vacek, S., Schamm, T., Schroder, J. & Dillmann, R. (2007). Collision avoidance for cognitive
automobiles using a 3D PMD camera, Proc. 6th IFAC Symp. on Intell. Autonomous
Vehicles Symp.
Wang, C., Tanahashi, H., Hirayu, H., Niwa, Y. & Yamamoto, K. (2001). Comparison of local
plane fitting methods for range data, Proc. IEEE Conf. Computer Vision and Pattern
Recognition (CVPR ’01).
Xu, Z., Schwarte, R., Heinol, H., Buxbaum, B. & Ringbeck, T. (1998). Smart pixel - photonic
mixer device (PMD), new system concept of a 3D-imaging camera-on-a-chip, 5th Int.
Conf. Mechatronics and Machine Vision in Practice, Nanjing, pp. 259–264.
Yang, A., Rao, S. & Ma, Y. (2006). Robust statistical estimation and segmentation of multiple subspaces, Proc. Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '06), pp. 99–99.
Part 5
Infotainment and Navigation Systems

23
The Car Entertainment System
Niels Koch
Altran GmbH & Co. KG, Munich, Germany
1. Introduction
In recent years, we spent more and more time in our cars. So it became obvious to
implement Car Entertainment Systems into the car for comfort and driver information. We
can differ between driver information devices and passenger entertainment devices.
Car entertainment began with AM-reception. FM-tuners followed soon, with stereo sound,
cassette players and CD-players to entertain passengers. Today we know a number of
different analog- and digital broadcasting systems, such as DAB, DMB, DRM, DVB and
player standards like MP3, MP4, DVD, BlueRay and many more, which are integrated into
the car console. With the variety of different entertainment sources in the car, each
passenger in the vehicle may wish for their own program. For this, rear-seat entertainment
systems are implemented.
For driver information, modern navigation systems not only help to find the most
efficient route, but also give an overview of traffic situation. New concepts of user
interaction shall provide comfort and intuitive usage of the devices. So the man-machine-
interface is an important marketing feature, on which car manufacture philosophy is
mirrored. Touch-screens provide an intuitive operation as information and buttons merge
to one device.
This chapter is structured into different subsections, explaining the tool chain of vehicular entertainment systems from the reception of radio waves, through demodulation and distribution of the signal, to the audio and visual end.

We begin with a brief historical overview of car entertainment and show modern installations in contrast. In order to understand the evolution better, we give a list of existing broadcasting standards and explain tuner concepts and diversity reception to combat fading effects for analog and digital broadcasting systems.

When the signal is demodulated, it needs to be distributed to the audio subsystem. The audio system consists of amplifier, equalizer and loudspeakers. We unveil the secrets of high-quality sound.

As passengers wish to have their own entertainment source, the distribution of these sources in a rear-seat entertainment system is described.

For driver information, navigation systems offer intelligent route guidance, road status and helpful information on and along the way. In recent years the development of navigation systems has been advancing rapidly, so the pros and cons of portable navigators in contrast to integrated systems are discussed.
