
the sending and the receiving node, and a is an environment parameter
(typically in the range from 2 to 4). If the received energy is below a user-
defined threshold, then no reception will take place.
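As a rough sketch in MATLAB, the reception test described above amounts to the following; the variable names and all numeric values are illustrative assumptions, not TrueTime API calls or defaults:

% Sketch of the exponential path-loss reception test described above.
P_send = 0.02;              % transmit power (W), illustrative value
d = 15;                     % distance between sender and receiver (m)
a = 3.5;                    % environment parameter, typically 2 to 4
threshold = 1e-7;           % user-defined reception threshold
P_recv = P_send / d^a;      % received energy decays as 1/d^a
if P_recv < threshold
    disp('No reception: received energy below threshold');
end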
A node that wants to transmit a message will proceed as follows: The
node first checks whether the medium is idle. If that has been the case for
50 μs, then the transmission may proceed. If not, the node will wait for a ran-
dom back-off time before the next attempt. The signal-to-interference ratio in
the receiving node is calculated by treating all simultaneous transmissions
as an additive noise. This information is used to determine a probabilistic
measure of the number of bit errors in the received message. If the number
of errors is below a configurable bit-error threshold, then the packet is
successfully received.
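A minimal MATLAB sketch of this receive decision follows; the BPSK-style mapping from SINR to bit-error probability and all numeric values are assumptions made for illustration, not TrueTime internals:

% Sketch of the probabilistic receive decision described above.
P_recv  = 2e-7;                 % power of the wanted signal (W)
P_intf  = [3e-8 1e-8];          % simultaneous transmissions, treated as noise
P_noise = 1e-9;                 % background noise floor
msgBits = 8*128;                % message length in bits
bitErrorThreshold = 1;          % configurable threshold
SINR = P_recv / (sum(P_intf) + P_noise);
Pb = 0.5*erfc(sqrt(SINR));      % assumed BPSK-style bit-error probability
nErrors = sum(rand(1, msgBits) < Pb);  % draw the bit errors
if nErrors < bitErrorThreshold
    disp('Packet successfully received');
end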
6.5 Example: Constant Bandwidth Server
The constant bandwidth server (CBS) [1] is a scheduling server for aperiodic
and soft tasks that executes on top of an EDF scheduler. A CBS is characterized by two parameters: a server period T_s and a utilization factor U_s. The server ensures that the task(s) executing within the server can never occupy more than a fraction U_s of the total CPU bandwidth.
Associated with the server are two dynamic attributes: the server budget c_s and the server deadline d_s. Jobs that arrive at the server are placed in a queue and are served on a first-come, first-served basis. The first job in the queue is always eligible for execution, using the current server deadline d_s. The server is initialized with c_s := U_s T_s and d_s := T_s. The rules for updating the server are as follows:
1. During the execution of a job, the budget c_s is decreased at unit rate.
2. Whenever c_s = 0, the budget is recharged to c_s := U_s T_s, and the deadline is postponed by one server period: d_s := d_s + T_s.
3. If a job arrives at an empty server at time r and c_s ≥ (d_s − r) U_s, then the budget is recharged to c_s := U_s T_s, and the deadline is set to d_s := r + T_s.
The first and second rules limit the bandwidth of the task(s) executing in the
server. The third rule is used to “reset” the server after a sufficiently long idle
period.
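The update rules can be collected in a small MATLAB function; this is a sketch of the rules above for reference, not part of the TrueTime API (rule 1, the unit-rate budget decrease during execution, is assumed to be handled by the kernel):

function [c_s, d_s] = cbs_update(event, c_s, d_s, U_s, T_s, r)
% Sketch of CBS rules 2 and 3; c_s and d_s are the dynamic server state.
switch event
    case 'budget_exhausted'            % rule 2: recharge and postpone
        c_s = U_s*T_s;
        d_s = d_s + T_s;
    case 'arrival_at_empty_server'     % rule 3: job arrives at time r
        if c_s >= (d_s - r)*U_s
            c_s = U_s*T_s;
            d_s = r + T_s;
        end
end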
6.5.1 Implementation of CBS in TrueTime
TrueTime provides a basic mechanism for execution-time monitoring and
budgets. The initial value of the budget is called the WCET of the task.
By default, the WCET is equal to the period (for periodic tasks) or the
relative deadline (for aperiodic tasks). The WCET value of a task can be
changed by calling ttSetWCET(value,task). The WCET corresponds to the maximum server budget, U_s T_s, in the CBS. The CBS period is specified
by setting the relative deadline of the task. This attribute can be changed by
calling ttSetDeadline(value,task).
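Put together, a CBS with period T_s and utilization U_s would be configured through these two primitives; a minimal sketch, assuming a task named 'aper_task':

T_s = 2; U_s = 0.5;                 % illustrative server parameters
ttSetWCET(U_s*T_s, 'aper_task');    % maximum server budget U_s*T_s
ttSetDeadline(T_s, 'aper_task');    % CBS period = relative deadline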
When a task executes, the budget is decreased at unit rate. The
remaining budget can be checked at any time using the primitive
ttGetBudget(task). By default, nothing happens when the budget
reaches zero. In order to simulate that the task executes inside a CBS, an exe-
cution overrun handler must be attached to the task. A sample initialization
script is given below:
function node_init
% Initialize kernel, specifying EDF scheduling
ttInitKernel(0,0,'prioEDF');
% Specify CBS rules for initial deadlines and initial budgets
ttSetKernelParameter('cbsrules');
% Specify CBS server period and utilization factor
T_s = 2;
U_s = 0.5;
% Create an aperiodic task
ttCreateTask('aper_task',T_s,1,'codeFcn');
ttSetWCET(T_s*U_s,'aper_task');
% Attach a WCET overrun handler
ttAttachWCETHandler('aper_task','cbs_handler');
The execution overrun handler can then be implemented as follows:
function [exectime,data] = cbs_handler(seg,data)
% Get the task that caused the overrun
t = ttInvokingTask;
% Recharge the budget
ttSetBudget(ttGetWCET(t),t);
% Postpone the deadline
ttSetAbsDeadline(ttGetAbsDeadline(t)+ttGetDeadline(t),t);
exectime = -1;
If many tasks are to execute inside CBS servers, the same code function can
be reused for all the execution overrun handlers.
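For instance, a loop of the following form could create several served tasks that all share cbs_handler; the task names are illustrative:

% Sketch: several aperiodic tasks sharing one CBS overrun handler.
names = {'aper_task1','aper_task2','aper_task3'};
for k = 1:length(names)
    ttCreateTask(names{k}, T_s, 1, 'codeFcn');
    ttSetWCET(T_s*U_s, names{k});
    ttAttachWCETHandler(names{k}, 'cbs_handler');
end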
6.5.2 Experiments
The CBS can be used to safely mix hard, periodic tasks with soft, aperiodic
tasks in the same kernel. This is illustrated in the following example, where
a ball and beam controller should execute in parallel with an aperiodically
triggered task. The Simulink model is shown in Figure 6.6.
FIGURE 6.6
TrueTime model of a ball and beam being controlled by a multitasking real-
time kernel. The Poisson arrivals trigger an aperiodic computation task.
The ball and beam process is modeled as a triple integrator disturbed by
white noise and is connected to the TrueTime kernel block via the A/D and
D/A ports. A linear-quadratic Gaussian (LQG) controller for the ball and
beam has been designed and is implemented as a periodic task with a sam-
pling period of 10 ms. The computation time of the controller is 5 ms (2 ms for calculating the output and 3 ms for updating the controller state). A Pois-
son source with an intensity of 100/s is connected to the interrupt input of
the kernel, triggering an aperiodic task for each arrival. The relative dead-
line of the task is 10 ms, while the execution time of the task is exponentially
distributed with a mean of 3 ms.
The average CPU utilization of the system is 80% (50% from the periodic controller and 100/s × 3 ms = 30% from the aperiodic task). However, the aperiodic task has a very uneven processor demand and can easily overload the CPU during some intervals. The control performance in the first experiment, using
plain EDF scheduling, is shown in Figure 6.7. A close-up of the correspond-
ing CPU schedule is shown in Figure 6.8. It is seen that the aperiodic task
sometimes blocks the controller for several sampling periods. The resulting
execution jitter leads to very poor regulation performance.
Next, a CBS is added to the aperiodic task. The server period is set to T_s = 10 ms and the utilization to U_s = 0.49, implying a maximum budget (WCET) of 4.9 ms. With this configuration, the CPU will never be more than 99% loaded (50% from the controller plus at most 49% from the server). A new simulation, using the same random number sequences as before, is shown in Figure 6.9.
FIGURE 6.7
Control performance under plain EDF scheduling. (Plots of the process output and control input over 0–10 s.)
FIGURE 6.8
Close-up of CPU schedule under plain EDF scheduling. (Activity traces of the aperiodic task and the controller over 0–0.5 s.)
The regulation performance is much better—this is especially evident in the smaller control input required. The close-up of the schedule in Figure 6.10 shows that the controller is now able to execute its 5 ms within each 10 ms period, and the jitter is much smaller.

FIGURE 6.9
Control performance under CBS scheduling. (Plots of the process output and control input over 0–10 s.)

FIGURE 6.10
Close-up of CPU schedule under CBS scheduling. (Activity traces of the aperiodic task and the controller over 0–0.5 s.)
6.6 Example: Mobile Robots in Sensor Networks
In the EU/IST FP6 integrated project RUNES (reconfigurable ubiquitous net-
worked embedded systems, [32]) a disaster-relief road-tunnel scenario was
used as a motivating example [5].
In this scenario, mobile robots were used as mobile radio gateways that ensure the connectivity of a sensor network
located in a road tunnel in which an accident has occurred. A number of
software components were developed for the scenario. A localization com-
ponent based on ultrasound was used for localizing the mobile robots and
a collision-avoidance component ensured that the robots did not collide (see
[2]). A network reconfiguration component [30] and a power control com-
ponent [37] were responsible for deciding the best position for the mobile
robot in order to maximize radio connectivity, and to adjust the radio power
transmit level.
In parallel with the physical implementation of this scenario, a TrueTime
simulation model was developed. The focus of the simulation was the timing
aspects of the scenario. It should be possible to simultaneously simulate the
computations that take place within the nodes, the wireless communication
between the nodes, the power devices (batteries) in the nodes, the sensor
and actuator dynamics, and the dynamics of the mobile robots. In order to model the limited resources correctly, the simulation model must be quite
realistic. For example, it should be possible to simulate the timing effects of
interrupt handling in the microcontrollers implementing the control logic of
the nodes. It should also be possible to simulate the effects of collisions and
contention in the wireless communication. Because of simulation time and
size constraints, it is at the same time important that the simulation model is
not too detailed. For example, simulating the computations on a source-code
level, instruction for instruction, would be overly costly. The same applies to
simulation of the wireless communication at the radio-interface level or on
the bit-transmission level.
6.6.1 Physical Scenario Hardware
The physical scenario consists of a number of hardware and software com-
ponents. The hardware consists of the stationary wireless communication
nodes and the mobile robots. The wireless communication nodes are imple-
mented by Tmote Sky sensor network motes executing the Contiki operat-
ing system [14]. In addition to the ordinary sensors for temperature, light,
and humidity, an ultrasound receiver has been added to each mote (see
Figure 6.11).
The two robots, RBbots, are shown in Figure 6.12. Both robots are
equipped with an ultrasound transmitter board (at the top). The robot to the
left has the obstacle-detection sensors mounted. These consist of an IR proximity sensor, mounted on an RC servo that sweeps a circle segment in front of the robot, and a touch sensor bar.
The RBbots internally consist of one Tmote Sky, one ATMEL AVR
Mega128, and three ATMEL AVR Mega16 microprocessors. The nodes communicate internally over an I2C bus, with the Tmote Sky, which also handles the radio communication, acting as the bus master. Two of the ATMEL AVR Mega16 processors are used as interfaces to the wheel motors and the wheel encoders measuring the wheel angular velocities. The third ATMEL AVR Mega16 is used as the
interface to the ultrasound transmitter and to the obstacle-detection sensors.
The AVR Mega128 is used as a compute engine for the software-component
code that does not fit the limited memory of the TMote Sky. The structure is
shown in Figure 6.13.
6.6.2 Scenario Hardware Models
FIGURE 6.11
Stationary sensor network nodes with ultrasound receiver circuit. The node
is packaged in a plastic box to reduce wear.
FIGURE 6.12
The two Lund RBbots.
FIGURE 6.13
RBbot hardware architecture. (Block diagram connecting the TMote Sky, the ATMEL AVR Mega128, and three ATMEL AVR Mega16 processors to the left and right wheel motors and encoders, the ultrasound transmitter, and the obstacle-detection sensors.)
The basic programming model used for the TI MSP430 processor in the Tmote Sky systems is event-driven programming, with interrupt handlers for timer interrupts, bus interrupts, etc. In TrueTime, the same architecture can be used. However, the Contiki OS also supports
protothreads [15], lightweight stackless threads designed for severely
memory-constrained systems. Protothreads provide linear code execution
for event-driven systems implemented in C. Protothreads can be used to
provide blocking event-handlers. They provide a sequential flow of control
without complex-state machines or full multithreading. In TrueTime, pro-
tothreads are modeled as ordinary tasks. The ATMEL AVR processors are
modeled as event-driven systems. A single nonterminating task acts as the
main program and the event handling is performed in interrupt handlers.
The software executing in the TrueTime processors is written in C++. The
names of the files containing the code are input parameters of the network
blocks. The localization component consists of two parts. The distance sensor
part of the component is implemented as a (proto-)thread in each stationary
sensor node. An extended Kalman filter–based data fusion is implemented
in the Tmote Sky processor on board each robot. The localization method
makes use of the ultrasound network and the radio network. The collision-
avoidance component code is implemented in the ATMEL AVR Mega128
processor using events and interrupts. It interacts over the I2C bus with the localization component and with the robot-position controller, both located in the Tmote Sky processor.
6.6.3 TrueTime Modeling of Bus Communication
The I2C bus within the RBbots is modeled in TrueTime by a network block.
The TrueTime network model assumes the presence of a network interface
card or a bus controller implemented either in the hardware or the software
(i.e., as drivers). The Contiki interface to the I2C bus is software-based and
corresponds well to the TrueTime model. In the ATMEL AVRs, however, it is
normally the responsibility of the application programmer to manage all bus
access and synchronization directly in the application code. In the TrueTime
model, this low-level bus access is not modeled. Instead, it is assumed that
there exists a hardware or a software bus interface that implements this.
Although I2C is a multimaster bus that uses arbitration to resolve conflicts, this is not how it is modeled in TrueTime. On the Tmote Sky, the radio chip and the I2C bus share connection pins. Because of this, it is only possible to have one master on the I2C bus, and this master must be the Tmote Sky. All communication must be initiated by the master, which eliminates bus access conflicts. Therefore, the I2C bus is modeled as a CAN bus with the transmission rate set to match that of the I2C bus.
6.6.4 TrueTime Modeling of Radio Communication
The radio communication used by the Tmote Sky is the IEEE 802.15.4
MAC protocol (the so-called Zigbee MAC protocol) and the correspond-
ing TrueTime wireless network protocol was used. The requirements on
the simulation environment from the network reconfiguration and radio power-control components are that it should be possible to change the transmit power of the nodes and to measure the received signal strength, that is, the so-called received signal strength indicator (RSSI). The former is possible through the TrueTime command ttSetNetworkParameter('transmitpower', value). The RSSI is obtained as an optional return value of the TrueTime function ttGetMsg.
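A hypothetical fragment of node code using these hooks is sketched below; treating the RSSI as a second output of ttGetMsg, as well as the power values and the reaction rule, are this example's assumptions (the exact calling conventions are given in the reference manual [26]):

ttSetNetworkParameter('transmitpower', 0.01);      % set the transmit power
[msg, rssi] = ttGetMsg;                            % message and its RSSI
if rssi < -80                                      % hypothetical threshold
    ttSetNetworkParameter('transmitpower', 0.02);  % increase power
end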
In order to model the ultrasound, a special block was developed. The
block is a special version of the wireless network block that models the ultra-
sound propagation of a transmitted ultrasound pulse. The main difference
between the wireless network block and the ultrasound block is that in the
ultrasound block it is the propagation delay that is important, whereas in
the ordinary wireless block it is the medium access delay and the transmis-
sion delay that are modeled. The ultrasound is modeled as a single sound pulse. When it arrives at a stationary sensor node, an interrupt is generated. This also differs from the physical scenario, in which the ultrasound signal is connected via an A/D converter to the Tmote Sky.
The network routing is implemented using a TrueTime model of the ad hoc on-demand distance vector (AODV) routing protocol (see [31]), commonly used
in sensor network and mobile robot applications. AODV uses three basic types of control messages in order to build and invalidate routes: route
request (RREQ), route reply (RREP), and route error (RERR) messages. These
control messages contain source and destination sequence numbers, which
are used to ensure fresh and loop-free routes. A node that requires a route
to a destination node initiates route discovery by broadcasting an RREQ
message to its neighbors. A node receiving an RREQ starts by updating its
routing information backward toward the source. If the same RREQ has not
been received before, the node then checks its routing table for a route to the
destination. If a route exists with a sequence number greater than or equal to
that contained in the RREQ, an RREP message is sent back toward the source.
Otherwise, the node rebroadcasts the RREQ. When an RREP has propagated
back to the original source node, the established route may be used to send
data. Periodic hello messages are used to maintain local connectivity infor-
mation between neighboring nodes. A node that detects a link break will
check its routing table to find all routes that use the broken link as the next
hop. In order to propagate the information about the broken link, an RERR
message is then sent to each node that constitutes a previous hop on any of
these routes.
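The per-node RREQ handling just described amounts to the following decision logic, sketched here in MATLAB form with hypothetical helper functions; it is not the TrueTime implementation itself:

function handle_rreq(node, rreq)
% Sketch of RREQ handling at a node (helper functions are hypothetical).
update_reverse_route(node, rreq.src);        % route back toward the source
if ~seen_before(node, rreq.id)
    route = lookup_route(node, rreq.dst);
    if ~isempty(route) && route.seqNo >= rreq.dstSeqNo
        send_rrep(node, rreq.src);           % fresh-enough route is known
    else
        rebroadcast(node, rreq);             % keep flooding the request
    end
end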
Two TrueTime tasks are created in each node to handle the AODV send
and receive actions, respectively. The AODV send task is activated from the application code when a data message should be sent to another node in the network. The AODV receive task handles incoming AODV control messages and the forwarding of data messages. Communication between the application
layer and the AODV layer is handled using TrueTime mailboxes. Each node
also contains a periodic task, responsible for broadcasting hello messages
and determining local connectivity based on hello messages received from
neighboring nodes. Finally, each node has a task to handle the timer expiry of route entries.
The TrueTime implementation of AODV stores, at the source node, messages for destinations to which no valid route currently exists. This means that when network connectivity is eventually restored through the mobile radio gateways, the communication traffic resumes automatically.
6.6.5 Complete Model
In addition to the above, the complete model for the scenario also contains models of the sensors, motors, and robot dynamics, and a world model that keeps track of the positions of the robots and the fixed obstacles within the tunnel.
The wheel motors are modeled as first-order linear systems plus integra-
tors with the angular velocities and positions as the outputs. From the motor
velocities, the corresponding wheel velocities are calculated. The wheel
positions are controlled by two PI-controllers residing in the ATMEL AVR
processors acting as interfaces to the wheel motors.
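A minimal sketch of such a motor model, using Control System Toolbox transfer functions with an illustrative gain and time constant (the actual parameter values are not given in the text):

K = 1; tau = 0.1;                % illustrative motor gain and time constant
motor = tf(K, [tau 1]);          % first-order system: input to angular velocity
wheel = motor * tf(1, [1 0]);    % added integrator: velocity to wheel position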
The Lund RBbot is a dual-drive unicycle robot. It is modeled as a third-
order system
\[
\begin{aligned}
\dot{p}_x &= \tfrac{1}{2}\,(R_1\omega_1 + R_2\omega_2)\cos(\theta)\\
\dot{p}_y &= \tfrac{1}{2}\,(R_1\omega_1 + R_2\omega_2)\sin(\theta)\\
\dot{\theta} &= \tfrac{1}{D}\,(R_2\omega_2 - R_1\omega_1)
\end{aligned}
\tag{6.1}
\]
where the state consists of the x- and y-positions and the heading θ. Inputs to the system are the angular velocities, ω_1 and ω_2, of the two wheels. The parameters R_1 and R_2 are the radii of the two wheels, and D is the distance between the wheels.
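As a sketch, model (6.1) can be written as an ODE right-hand side and simulated with ode45; the function and the parameter values in the usage comment are illustrative:

function dx = rbbot_dynamics(~, x, w1, w2, R1, R2, D)
% Unicycle model (6.1); state x = [p_x; p_y; theta].
v = 0.5*(R1*w1 + R2*w2);         % forward speed of the robot
dx = [v*cos(x(3));               % p_x dot
      v*sin(x(3));               % p_y dot
      (R2*w2 - R1*w1)/D];        % heading rate

% Usage, e.g.:
% [t,x] = ode45(@(t,x) rbbot_dynamics(t,x,1,1.2,0.05,0.05,0.3), [0 10], zeros(3,1));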
The top-level TrueTime model diagram is shown in Figure 6.14. The
stationary sensor nodes are implemented as Simulink subsystems that
internally contain a TrueTime kernel modeling the Tmote Sky mote, and
connections to the radio network and the ultrasound communication blocks.
In order to reduce the wiring, From and To blocks hidden inside the corresponding subsystems are used for the connections. The block handling the
dynamic animation is not shown in Figure 6.14.
The subsystem for the mobile robots is shown in Figure 6.15. The robot
dynamics block contains the motor models and the robot dynamics model.
The position of the robots and the status of the stationary sensor nodes
(i.e., whether or not they are operational) are shown in a separate animation
workspace (see Figure 6.16). The workspace shows one tunnel segment with sensor nodes (some of which are non-operational) along the walls. Two robots are inside the tunnel together with two obstacles that the robots must
avoid.

FIGURE 6.14
The TrueTime model diagram. In order to reduce the use of wires, From and To blocks hidden inside the corresponding subsystems are used to connect the stationary sensor nodes to the radio and ultrasound networks.
FIGURE 6.15
The Simulink model of the mobile robots. For the sake of clarity, the obstacle-detection sensors have been omitted. These should be connected to AVR Mega16-1. (Block diagram: the Tmote Sky, AVR Mega128, and three AVR Mega16 kernel blocks connected by the I2C bus network block, with links to the radio and ultrasound networks and to the robot dynamics block.)
6.6.6 Evaluation
The implemented TrueTime model contains several simplifications. For
example, interrupt latencies are not simulated, only context switch over-
heads. All execution times are chosen based on experience from the hard-
ware implementation. Also, it is important to stress that the simulated code is only a model of the actual code that executes in the sensor nodes and in the robots. However, since C is the programming language used in both, the translation is, in most cases, quite straightforward.
FIGURE 6.16
Animation workspace. (Legend: stationary sensor node; stationary sensor node out of operation; mobile robot; obstacle.)
In spite of the above, it is our experience that the TrueTime simulation
approach gives results that are close to the real case. The TrueTime approach
has also been validated by others. In [7], a TrueTime-based model is com-
pared with a hardware-in-the-loop (HIL) model of a distributed CAN-based
control system. The TrueTime simulation results matched the HIL results very well.
An aspect of the model that is extremely difficult, if not impossible, to val-
idate is the wireless communication. Simulation of wireless MANET systems
is notoriously difficult (e.g., see [3]). The effects of multipath propagation,
fading, and external disturbances are very difficult to model accurately. The
approach adopted here is to first start with an idealized exponential-decay radio model and then, when this works properly, gradually add more and more nondeterminism. This can be done either by setting a high probability that a packet is lost, or by providing a user-defined radio model using Rayleigh fading.
The total code size for the model was 3700 lines of C code. Parts of the
algorithmic code (e.g., the extended Kalman filter code) were exactly the same as in the real robots. The model contained five kernel blocks and one network block per robot, one kernel block per sensor node (six in total), one wireless network block for the radio traffic, and one ultrasound block modeling the ultrasound propagation. The simulation rate was slightly faster than
real time, executing on an ordinary dual-core MS Windows laptop.
6.7 Example: Network Interface Blocks
The last example illustrates how the stand-alone network interface blocks
can be used to simulate time-triggered or event-triggered networked control
loops. In this case, because there are no kernel blocks, no initialization scripts or code functions need to be written.
The networked control system in this example consists of a plant (an inte-
grator), a network, and two nodes: an I/O device (handling AD and DA con-
version) and a controller node. At the I/O node, the process is sampled by
a ttSendMsg network interface block, which transmits the value to the con-
troller node. There, the packet is received by a ttGetMsg network interface
block. The control signal is computed and the control is transmitted back to
the I/O node by another ttSendMsg block. Finally, the signal is received by a ttGetMsg block at the I/O node and actuated onto the process.
Two versions of the control loop will be studied. In Figure 6.17, both
ttSendMsg blocks are time triggered. The process output is sampled every
0.1 s, and a new control signal is computed with the same interval but
with a phase shift of 0.05 s. The resulting control performance and network
schedule are shown in Figure 6.18.
FIGURE 6.17
Time-triggered networked control system using stand-alone network
interface blocks. The ttSendMsg blocks are driven by periodic pulse
generators.
FIGURE 6.18
Plant output and network schedule for the time-triggered control system. (Plots of the process output and the network schedule over 0–10 s.)
The process output is kept close to zero despite the process noise. The schedule shows that the network load is quite high.
In the second version of the control loop, the ttSendMsg blocks are event
triggered instead (see Figure 6.19). A sample is generated whenever the
magnitude of the process output passes 0.25. The arrival of a measurement
sample at the controller node triggers—after a delay—the computation and
sending of the control signal back to the I/O node. The resulting control performance and network schedule are shown in Figure 6.20. It can be seen that the process is still stabilized, although far fewer network messages are sent.
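The triggering rule amounts to the following threshold-crossing test; in the model it is realized with Simulink blocks driving the ttSendMsg block, so this code, including the node number and message size, is purely illustrative:

% Sketch: send a sample when |y| crosses the 0.25 threshold.
if abs(y) >= 0.25 && abs(y_prev) < 0.25
    ttSendMsg(2, y, 80);   % hypothetical: node 2 = controller, 80-bit message
end
y_prev = y;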
6.8 Limitations and Extensions
Although TrueTime is quite powerful, it has some limitations. Some of them could be removed by extending TrueTime in different directions, as discussed in this section.
6.8.1 Single-Core Assumption
Multicore architectures are increasingly common in embedded systems. The
TrueTime kernel, however, is single core. Modifying the kernel to instead support a globally scheduled shared-memory multicore platform with a single ready queue is probably relatively straightforward.
FIGURE 6.19
Event-triggered networked control system using stand-alone network interface blocks. The process output is sampled by the ttSendMsg block when the magnitude exceeds a certain threshold.
However, to sup-
port a partitioned system with separate ready queues, separate caches, and
task migration overheads is significantly more complicated.
6.8.2 Execution Times
In TrueTime, it is the user's responsibility to assign the execution times of the different code segments. These should correspond to the time it would take to execute the code on the particular target machine. For small microcontrollers, it is possible to make these assessments fairly accurately. However, for normal-size platforms, it is difficult to get good estimates. The problem is comparable to that of performing WCET analysis.
The idea behind the TrueTime approach is that the execution times
should be viewed as design parameters.
FIGURE 6.20
Plant output and network schedule for the event-triggered control system. (Plots of the process output and the network schedule over 0–10 s.)
By increasing or decreasing them, different processor speeds can be simulated. By adding a random element to them, variations in execution times because of code branches and data-
dependent execution time statements can be accounted for. However, in a
real system, the execution time of a piece of code can be divided into two
parts. The first part is the execution of the different instructions in the code.
This is fairly straightforward to estimate. The second part is the time caused
by the hardware platform. This includes the time caused by cache misses,
pipeline breaks, memory access latencies, etc. This time is more difficult to
obtain good estimates for. A possible approach is to have this part of the
execution time added to the user-provided times automatically by the ker-
nel block based on different parameterized assumptions about the hardware
platform.
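In a TrueTime code function, such a random element can be added directly to the returned execution times; a sketch following the segment convention used earlier in this chapter (the nominal 2 ms and 3 ms match the ball-and-beam controller, while the jitter terms are illustrative assumptions):

function [exectime, data] = ctrl_code(seg, data)
switch seg
    case 1
        % calculate the controller output here
        exectime = 0.002 + 0.0002*rand;   % nominal 2 ms plus random jitter
    case 2
        % update the controller state here
        exectime = 0.003 + 0.0003*rand;   % nominal 3 ms plus random jitter
    case 3
        exectime = -1;                    % finished
end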
6.8.3 Single-Thread Execution
Since Simulink simulation is performed by a single-thread execution, the
multitasking in the kernel block has to be emulated. One consequence of this
is that it is the user's responsibility to ensure that the context of each task is saved and restored correctly. This is done by passing the context as an
argument to the code functions. Another partly related consequence of this
is the segmentation that has to be applied to every task. The latter is the main
reason why it is not possible to use the production C code in TrueTime sim-
ulations. In addition, a code function may not call other code functions, that
is, abstractions on the code function level are not supported.
Preliminary investigations indicate that it should be possible to map the TrueTime tasks onto POSIX threads (i.e., to use multiple threads inside each
kernel S-function). Using this approach, the problem with the task context
and segments would be solved automatically.
6.8.4 Simulation Platform
TrueTime is based on Simulink. This is both an advantage and a disadvan-
tage. It is good since it makes it easy for existing MATLAB/Simulink users to start using it. However, MATLAB/Simulink is still not widespread in the computer science community. The threshold for a non-Simulink user to start
using TrueTime is therefore fairly high. An advantage with building upon
MATLAB is the vast availability of other toolboxes that can be combined
with TrueTime.
However, it is possible to port TrueTime to other platforms. In [19], a fea-
sibility study is presented where the kernel block of TrueTime is ported to
Scilab/Scicos (see [33]). Also, in the new European ITEA 2 project EUROSYS-
LIB, the TrueTime network blocks are being ported to the Modelica language
(see [24]) and the Dymola simulator (see [16]).
6.8.5 Higher-Layer Protocols
The network blocks only support link-layer protocols. In most cases this
suffices, since most real-time networks are local-area networks without any
routing or transport layers. However, if higher-layer protocols are needed,
these are not directly supported by TrueTime. The examples contain a
TCP transport protocol example and an AODV routing protocol example,
but these are implemented as application code. It would be interesting to provide built-in support for some of the most popular higher-layer protocols. It would also be useful to have a plug-and-play facil-
ity that would make it easy for the user to add new protocols to the net-
work blocks. Currently, this involves modifications of the C++ network block
source code.
6.9 Summary
This chapter has presented TrueTime, a freeware extension to Simulink that
allows multithreaded real-time kernels and communication networks to be
simulated in parallel with the dynamics of the process under control. Having
been developed over almost 10 years, TrueTime has several more features
than those mentioned in this chapter. For a complete description, please see
the latest version of the reference manual (e.g., [26]). In particular, many fea-
tures related to real-time scheduling are detailed in [26].

References
1. L. Abeni and G. Buttazzo. Integrating multimedia applications in hard
real-time systems. In Proceedings of the 19th IEEE Real-Time Systems Sym-
posium, Madrid, Spain, 1998.
2. P. Alriksson, J. Nordh, K.-E. Årzén, A. Bicchi, A. Danesi, R. Schiavi, and
L. Pallottino. A component-based approach to localization and collision
avoidance for mobile multi-agent systems. In Proceedings of the European
Control Conference (ECC), Kos, Greece, 2007.
3. T.R. Andel and A. Yasinsac. On the credibility of MANET simulations. IEEE Computer, 39(7), 48–54, July 2006.
4. M. Andersson, D. Henriksson, A. Cervin, and K.-E. Årzén. Simulation
of wireless networked control systems. In Proceedings of the 44th IEEE
Conference on Decision and Control and European Control Conference ECC
2005, Seville, Spain, December 2005.
5. K.-E. Årzén, A. Bicchi, G. Dini, S. Hailes, K.H. Johansson, J. Lygeros, and
A. Tzes. A component-based approach to the design of networked con-
trol systems. In Proceedings of the European Control Conference (ECC),Kos,
Greece, 2007.
6. N. Audsley, A. Burns, M. Richardson, and A. Wellings. STRESS—A sim-
ulator for hard real-time systems. Software—Practice and Experience, 24(6),
543–564, June 1994.
7. D. Ayavoo, M.J. Pont, and S. Parker. Using simulation to support the
design of distributed embedded control systems: A case study. In Pro-
ceedings of the First U.K. Embedded Forum, Birmingham, U.K., 2004.
8. P. Baldwin, S. Kohli, E.A. Lee, X. Liu, and Y. Zhao. Modeling of sensor
nets in Ptolemy II. In IPSN’04: Proceedings of the Third International Sym-
posium on Information Processing in Sensor Networks, pp. 359–368. ACM
Press, 2004.

9. M. Branicky, V. Liberatore, and S.M. Phillips. Networked control sys-
tems co-simulation for co-design. In Proceedings of the American Control
Conference, Denver, CO, 2003.
10. A. Casile, G. Buttazzo, G. Lamastra, and G. Lipari. Simulation and trac-
ing of hybrid task sets on distributed systems. In Proceedings of the Fifth
International Conference on Real-Time Computing Systems and Applications,
Hiroshima, Japan, 1998.
11. A. Cervin, D. Henriksson, B. Lincoln, J. Eker, and K.-E. Årzén. How does
control timing affect performance? IEEE Control Systems Magazine, 23(3),
16–30, June 2003.
12. M.I. Clune, P.J. Mosterman, and C.G. Cassandras. Discrete event and
hybrid system simulation with simEvents. In Proceedings of the Eighth
International Workshop on Discrete Event Systems, Ann Arbor, MI, 2006.
13. J.-M. Dricot and P. De Doncker. High-accuracy physical layer model for
wireless network simulations in NS-2. In Proceedings of the International
Workshop on Wireless Ad-Hoc Networks (IWWAN), Oulu, Finland, 2004.
14. A. Dunkels, B. Grönvall, and T. Voigt. Contiki — A lightweight and flex-
ible operating system for tiny networked sensors. In Proceedings of the
First IEEE Workshop on Embedded Networked Sensors (Emnets-I), Tampa,
FL, November 2004.
15. A. Dunkels, O. Schmidt, T. Voigt, and M. Ali. Protothreads: Simplifying
event-driven programming of memory-constrained embedded systems.
In Proceedings of the Fourth ACM Conference on Embedded Networked Sensor
Systems (SenSys 2006), Boulder, CO, November 2006.
16. Dymola. Homepage: . Visited 2008-09-30.
17. J. Eker and A. Cervin. A Matlab toolbox for real-time and control systems
co-design. In Proceedings of the Sixth International Conference on Real-Time
Computing Systems and Applications, Hong Kong, P.R. China, December 1999. Best student paper award.
18. J. El-Khoury and M. Törngren. Towards a toolset for architectural design
of distributed real-time control systems. In Proceedings of the 22nd IEEE
Real-Time Systems Symposium, London, U.K., December 2001.
19. D. Kusnadi. TrueTime in Scicos. Master’s thesis ISRN LUTFD2/TFRT–
5799–SE, Department of Automatic Control, Lund University, Sweden,
June 2007.
20. P. Levis, N. Lee, M. Welsh, and D. Culler. TOSSIM: Accurate and scalable
simulation of entire TinyOS applications. In Proceedings of the First Inter-
national Conference on Embedded Networked Sensor Systems, pp. 126–137,
Los Angeles, CA, 2003.
21. C.L. Liu and J.W. Layland. Scheduling algorithms for multiprogramming
in a hard-real-time environment. Journal of the ACM, 20(1), 40–61, 1973.
22. P.S. Magnusson. Simulation of parallel hardware. In Proceedings of the
International Workshop on Modeling Analysis and Simulation of Computer and
Telecommunication Systems (MASCOTS), San Diego, CA, 1993.
23. MATLAB. Homepage: . Visited 2008-
09-30.
24. Modelica. Homepage: . Visited 2008-09-30.
25. ns-2. Homepage: Visited 2008-09-30.
26. M. Ohlin, D. Henriksson, and A. Cervin. TrueTime 1.5—Reference Manual, January 2007.
27. OMNeT++. Homepage: . Visited 2008-09-30.
28. F. Österlind. A sensor network simulator for the Contiki OS. Technical
report T2006-05, SICS – Swedish Institute of Computer Science, February
2006.
29. L. Palopoli, L. Abeni, and G. Buttazzo. Real-time control system analysis:
An integrated approach. In Proceedings of the 21st IEEE Real-Time Systems Symposium, Orlando, FL, December 2000.
30. A. Panousopoulou and A. Tzes. Utilization of mobile agents for Voronoi-
based heterogeneous wireless sensor network reconfiguration. In Pro-
ceedings of the European Control Conference (ECC), Kos, Greece, 2007.
31. C.E. Perkins and E.M. Royer. Ad-hoc on-demand distance vector
(AODV) routing. In Proceedings of the Second IEEE Workshop on Mobile
Computing Systems and Applications, New Orleans, LA, 1999.
32. RUNES—Reconfigurable Ubiquitous Networked Embedded Systems.
Homepage: . Visited 2008-09-30.
33. Scilab. Homepage: . Visited 2008-09-30.
34. F. Singhoff, J. Legrand, L. Nana, and L. Marcé. Cheddar: A flexible real
time scheduling framework. ACM SIGAda Ada Letters, 24(4), 1–8, 2004.
35. M.F. Storch and J.W.-S. Liu. DRTSS: A simulation framework for
complex real-time systems. In Proceedings of the Second IEEE Real-Time
Technology and Applications Symposium, Boston, MA, 1996.
36. H.-Y. Tyan. Design, realization and evaluation of a component-based
compositional software architecture for network simulation. PhD thesis,
Ohio State University, 2002.
37. B. Zurita Ares, C. Fischione, A. Speranzon, and K.H. Johansson. On
power control for wireless sensor networks: Radio model, software
implementation and experimental evaluation. In Proceedings of the Euro-
pean Control Conference (ECC), Kos, Greece, 2007.
Part II
Design Tools and Methodology for Multiprocessor System-on-Chip

7
MPSoC Platform Mapping Tools for
Data-Dominated Applications
Pierre G. Paulin, Olivier Benny, Michel Langevin, Youcef Bouchebaba,
Chuck Pilkington, Bruno Lavigueur, David Lo, Vincent Gagne, and
Michel Metzger
CONTENTS
7.1 Introduction
7.1.1 Platform Programming Models
7.1.1.1 Explicit Capture of Parallelism
7.1.2 Characteristics of Parallel Multiprocessor SoC Platforms
7.2 MultiFlex Platform Mapping Technology Overview
7.2.1 Iterative Mapping Flow
7.2.2 Streaming Programming Model
7.3 MultiFlex Streaming Mapping Flow
7.3.1 Abstraction Levels
7.3.2 Application Functional Capture
7.3.3 Application Constraints
7.3.4 The High-Level Platform Specification
7.3.5 Intermediate Format
7.3.6 Model Assumptions and Distinctive Features
7.4 MultiFlex Streaming Mapping Tools
7.4.1 Task Assignment Tool
7.4.2 Task Refinement and Communication Generation Tools
7.4.3 Component Back-End Compilation
7.4.4 Runtime Support Components
7.5 Experimental Results
7.5.1 3G Application Mapping Experiments
7.5.2 Refinement and Simulation
7.6 Conclusions
7.6.1 Outlook
References
7.1 Introduction
The current deep submicron technology era—as it applies to low-cost, high-
volume consumer digital convergence products—presents two opposing
challenges: rising system-on-chip (SoC) platform development costs and
179
shorter product market windows. Compounding the problem is the rate of
change due to evolving specifications and the appearance of multiple stan-
dards that need to be incorporated into a single platform.
There are three main causes to the rising SoC platform development costs.
The first is the continued rise in gate and memory count. Today’s SoCs can
have over 100 million transistors—enough to theoretically place the logic of
over one thousand 32 bit RISC processors on a single die. Leveraging these
capabilities is a major challenge.
The second cause is the increased complexity of dealing with deep submi-
cron effects. These include electro-migration, voltage-drop, and on-chip vari-
ations. These effects are having a dampening impact on design productivity.
Also, rising mask set costs—currently over one million dollars—compound
the problem, and present a nearly insurmountable financial market entry
barrier for smaller companies.
The third cause is the rising embedded software development cost in
current generation SoCs, driven by an accelerated rate of new feature intro-
duction. This is partly because of the convergence of computing, consumer,
and communications domains that implies supporting a broader range of
functionalities and standards for a wide set of geographic markets. While
the growth of hardware complexity in SoCs has tracked Moore’s law, with
a resulting growth of 56% in transistor count per year, industry studies [22] show that the complexity of embedded S/W is rising at a staggering 140% per
year. This software now represents over 50% of development costs in most
SoCs and over 75% in emerging multiprocessor SoC (MP-SoC) platforms.
As a result, the significant investment needed to develop the platform—typically between $10M and $100M for today's 65 nm platforms—requires maximizing the time-in-market for a given platform. On the other hand, the consumer-led
product cycles imply increasingly shorter time-to-market for the applications
supported by the platform.
Finally, customers of a given SoC platform increasingly ask to add their own value-added features as a market differentiator. These features
are not just superficial additions, such as human-interface and top-level
control code. For example, a SoC platform customer may have proprietary
multimedia-oriented enhancements that they want to include in the platform
(e.g., image noise reduction, face recognition, etc.).
All of these factors lead to the need for a domain-specific flexible plat-
form that can be reused across a wide range of application variants. In
addition, time-to-market considerations mean that the platform must come
with high-level application-to-platform mapping tools that increase devel-
oper productivity. Both of these requirements point in the direction of highly
S/W programmable platform solutions. A wide range of general-purpose
and domain-specific cores exist and they come with powerful compilation,
debug, and analysis tools. This makes them a key component of the flexible
SoC of the future.
