
4 Scheduling Schemes for Handling Overload

4.1 Scheduling Techniques in Overload Conditions
This chapter presents several techniques to solve the problem of scheduling real-time
tasks in overload conditions. In such situations, the computation time of the task set
exceeds the time available on the processor and deadlines can be missed. Even
when the application and the real-time system have been properly designed, lateness can
occur for different reasons, such as missing a task activation signal due to a fault of
a device, or the extension of the computation time of some tasks due to concurrent
use of shared resources. Simultaneous arrivals of aperiodic tasks in response to some
exceptions raised by the system can overload the processor too. If the system is not
designed to handle overloads, the effects can be catastrophic and some paramount
tasks of the application can miss their deadlines. Basic algorithms such as EDF and
RM exhibit poor performance during overload situations and it is not possible to control
the set of late tasks. Moreover, with these two algorithms, one missed deadline can
cause other tasks to miss their deadlines: this phenomenon is called the domino effect.
Several techniques deal with overload to provide tolerance of missed deadlines. The
first algorithms deal with periodic task sets and allow the system to handle variable
computation times which cannot always be bounded. The other algorithms deal with
hybrid task sets where tasks are characterized by an importance value. All these
policies handle task models which allow recovery from a missed deadline, so that the
results of a late task can still be used.
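As a minimal illustration of the overload condition described above (not a technique from this chapter), the following sketch computes the processor utilization of a periodic task set and flags the case where the demand exceeds the available processor time. The task parameters are hypothetical.

```python
# Minimal illustration: flag a potential overload by checking whether the
# total processor utilization of a periodic task set exceeds 1.

def utilization(tasks):
    """tasks: list of (C, T) pairs -- worst-case execution time and period."""
    return sum(c / t for c, t in tasks)

def is_overloaded(tasks):
    # U > 1 means the demand exceeds the available processor time,
    # so some deadlines will necessarily be missed.
    return utilization(tasks) > 1.0

if __name__ == "__main__":
    nominal = [(2, 10), (3, 15), (5, 30)]              # U = 0.2 + 0.2 + 0.167 = 0.567
    overloaded = [(2, 10), (3, 15), (5, 30), (6, 12)]  # adds 0.5 -> U = 1.067
    print(is_overloaded(nominal))      # False
    print(is_overloaded(overloaded))   # True
```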
4.2 Handling Real-Time Tasks with Varying Timing Parameters
A real-time system typically manages many tasks and relies on its scheduler to decide
when and which task has to be executed. The scheduler, in turn, relies on knowledge
about each task’s computational time, dependency relationships and deadline supplied
by the designer to make the scheduling decisions. This works quite well as long as the
execution time of each task is fixed (as in Chapters 2 and 3). Such a rigid framework is
a reasonable assumption for most real-time control systems, but it can be too restrictive
for other applications. The schedule based on fixed parameters may not work if the
environment is dynamic. In order to handle a dynamic environment, the scheduling of
a real-time system must be flexible.
For example, in multimedia systems, timing constraints can be more flexible and
dynamic than control theory usually permits. Activities such as voice or image processing
(sampling, acquisition, compression, etc.) are performed periodically, but their
execution rates or execution times are not as strict as in control applications. If a task
manages compressed frames, the time for coding or decoding each frame can vary
significantly depending on the size or the complexity of the image. Therefore, the
worst-case execution time of a task can be much greater than its mean execution time.
Since hard real-time tasks are guaranteed based on their worst-case execution times,
multimedia activities can cause a waste of processor resources if treated as rigid hard
real-time tasks.
Another example is a radar system in which the number of objects to be monitored
may vary from time to time. The processor load may therefore change, because the
execution time of a task grows with the number of objects. Sometimes
it can be advantageous for a real-time computation not to pursue the highest possible
precision so that the time and resources saved can be used by other tasks.
In order to provide theoretical support for applications, much work has been done to
deal with tasks with variable computation times. We can distinguish three main ways
to address this problem:
• a specific task model able to integrate variations of task parameters, such as execution time, period or deadline;
• an on-line adaptive model, which calculates the largest possible timing parameters for a task at any time;
• a fault-tolerant mechanism based on minimum software, for a given task, which ensures compliance with specified timing requirements in all circumstances.
4.2.1 Specific models for variable execution task applications
In the context of specific models for tasks with variable execution times, two approaches
have been proposed: statistical rate monotonic scheduling (Atlas and Bestavros, 1998)
and the multiframe model for real-time tasks (Mok and Chen, 1997).
The first model, called statistical rate monotonic scheduling, is a generalization of the
classical rate monotonic results (see Chapter 2). This approach handles periodic tasks
with highly variable execution times. For each task, a quality of service is defined as
the probability that, in an arbitrarily long execution history, a randomly selected instance
of this task will meet its deadline. Statistical rate monotonic scheduling consists
of two parts: a job admission controller and a scheduler. The job admission controller
manages the quality of service delivered to the various tasks through admit/reject and
priority assignment decisions. In particular, it wastes no resources on task instances that
would miss their deadlines due to overload conditions resulting from excessive variability
in execution times. The scheduler is a simple, preemptive, fixed-priority scheduler.
This statistical rate monotonic model fits quite well with multimedia applications.
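To make the admit/reject idea concrete, here is a deliberately simplified sketch. It is not the SRMS test of Atlas and Bestavros (1998); it only illustrates the idea of refusing an instance that cannot meet its deadline, so that no processor time is wasted on it. All names and parameters are assumptions.

```python
# Illustrative admit/reject test (hypothetical, not the SRMS algorithm):
# reject a newly released instance if the time remaining before its deadline,
# minus the work already committed at higher priority, cannot cover its
# execution time. Rejected instances consume no processor time at all.

def admit(release, deadline, exec_time, committed_higher_priority_work):
    """Return True if the instance can still meet its deadline."""
    available = (deadline - release) - committed_higher_priority_work
    return exec_time <= available

# Example: an instance released at t=0 with deadline 10 and execution time 4,
# while 7 units are already committed to higher-priority work -> reject.
print(admit(0, 10, 4, 7))   # False
print(admit(0, 10, 4, 5))   # True
```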
Figure 4.1 Execution sequence of an application integrating two tasks: one classical task τ1 (0, 1, 5, 5) and one multiframe task τ2 (0, (3, 1), 3, 3)
The second model, called the multiframe model, allows the execution time of a task
to vary from one instance to another. In this model, the execution times of successive
instances of a task are specified by a finite array of integer numbers rather than a single
number which is the worst-case execution time commonly assumed in the classical
model. The peak utilization bound is then derived, step by step, for a preemptive
fixed-priority scheduling policy under the assumption that successive instances execute
according to this array. This model significantly improves the processor utilization bound.
Consider, for example, a set of two tasks with the following four parameters (ri, Ci, Di, Ti):
a classical task τ1 (0, 1, 5, 5) and a multiframe task τ2 (0, (3, 1), 3, 3). The two execution
times of the latter task mean that the duration of this task is alternately 3 and 1. The
two durations of task τ2 can simulate a program with two different paths which are
executed alternately. Figure 4.1 illustrates the execution sequence obtained with this
multiframe model and an RM priority assignment.
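The following sketch simulates the example above: a preemptive fixed-priority (RM) schedule in which τ2 draws its execution time from the cyclic array (3, 1). It is an illustrative simulator, not the analysis of Mok and Chen (1997); over the hyperperiod of 15 time units it reports no missed deadline, consistent with the sequence of Figure 4.1.

```python
# Sketch: discrete-time RM simulation with a multiframe task whose execution
# time cycles through a finite array. Deadlines are equal to periods.

def simulate(tasks, horizon):
    """tasks: ordered by priority (highest first); each has 'name', 'period',
    and a cyclic list 'exec_times'. Returns the list of missed deadlines."""
    state = [{"remaining": 0, "deadline": 0, "count": 0} for _ in tasks]
    missed = []
    for t in range(horizon):
        # Release a new instance of each task at multiples of its period.
        for task, st in zip(tasks, state):
            if t % task["period"] == 0:
                st["remaining"] = task["exec_times"][st["count"] % len(task["exec_times"])]
                st["deadline"] = t + task["period"]
                st["count"] += 1
        # Run the highest-priority task with pending work for one time unit.
        for st in state:
            if st["remaining"] > 0:
                st["remaining"] -= 1
                break
        # Check deadlines reached at the end of this time unit.
        for task, st in zip(tasks, state):
            if st["remaining"] > 0 and st["deadline"] == t + 1:
                missed.append((task["name"], t + 1))
                st["remaining"] = 0        # abandon the late instance

    return missed

# RM priorities: tau2 (period 3) above tau1 (period 5).
tasks = [
    {"name": "tau2", "period": 3, "exec_times": [3, 1]},
    {"name": "tau1", "period": 5, "exec_times": [1]},
]
print(simulate(tasks, 15))   # [] -- no deadline missed over the hyperperiod
```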
4.2.2 On-line adaptive model
In the context of the on-line adaptive model, two approaches have been proposed: the
elastic task model (Buttazzo et al., 1998) and the scheduling adaptive task model (Wang
and Lin, 1994). In the elastic task model, the periods of tasks are treated as springs, with
given elastic parameters: minimum length, maximum length and a rigidity coefficient.
Under this framework, periodic tasks can intentionally change their execution rate
to provide different quality of service, and the other tasks can automatically adapt
their period to keep the system underloaded. This model can also handle overload
conditions. It is extremely useful for handling applications such as multimedia in which
the execution rates of some computational activities have to be dynamically tuned as
a function of the current system state (e.g. oversampling). Consider, for example, a
set of three tasks with the following four parameters (ri, Ci, Di, Ti): τ1 (0, 10, 20, 20),
τ2 (0, 10, 40, 40) and τ3 (0, 15, 70, 70). With these periods, the task set is schedulable
by EDF since (see Chapter 2):
U = 10/20 + 10/40 + 15/70 = 0.964 < 1
If task τ3 increases its rate by reducing its period to 50, no feasible schedule exists,
since the processor load would be greater than 1:

U = 10/20 + 10/40 + 15/50 = 1.05 > 1
Figure 4.2 Comparison between (a) a classical task model and (b) an adaptive task model
However, the system can accept the higher rate of task τ3 by slightly decreasing the
execution rates of the two other tasks. For instance, if we give a period of 22 to task τ1
and 45 to task τ2, we get a processor load lower than 1:

U = 10/22 + 10/45 + 15/50 = 0.977 < 1
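The short sketch below recomputes the three utilizations of this example and illustrates one naive way of enlarging periods until the load drops below 1. It is not the elastic compression algorithm of Buttazzo et al. (1998); the maximum periods are assumed for illustration, and the adjustment it finds need not coincide with the 22/45 choice above.

```python
# Sketch only: verify the utilizations from the elastic-task example and
# illustrate a naive period adjustment (NOT the elastic compression algorithm).

def utilization(tasks):
    return sum(c / t for c, t in tasks)            # tasks: list of (C, T)

print(round(utilization([(10, 20), (10, 40), (15, 70)]), 3))   # 0.964 < 1
print(round(utilization([(10, 20), (10, 40), (15, 50)]), 3))   # 1.05  > 1
print(round(utilization([(10, 22), (10, 45), (15, 50)]), 3))   # 0.977 < 1

def stretch_periods(tasks, max_periods, step=1):
    """Naively enlarge the periods of tasks that still have slack (up to their
    assumed maximum period) one unit at a time until U <= 1, or give up."""
    costs = [c for c, _ in tasks]
    periods = [t for _, t in tasks]
    while utilization(list(zip(costs, periods))) > 1.0:
        stretched = False
        for i, (p, pmax) in enumerate(zip(periods, max_periods)):
            if p + step <= pmax:
                periods[i] += step
                stretched = True
                if utilization(list(zip(costs, periods))) <= 1.0:
                    break
        if not stretched:
            return None                            # no feasible adjustment
    return list(zip(costs, periods))

# tau3 keeps its requested period of 50; tau1 may stretch up to 25 and tau2
# up to 50 (assumed bounds). One feasible result: [(10, 22), (10, 41), (15, 50)].
print(stretch_periods([(10, 20), (10, 40), (15, 50)], [25, 50, 50]))
```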

The scheduling adaptive model considers that the deadline of an adaptive task is set to
one period interval after the completion of the previous task instance and the release
time can be set anywhere before the deadline. The time domain must be divided
into frames of equal length. The main goal of this model is to obtain constant time
spacing between adjacent task instances. The execution jitter is greatly reduced with
this model, whereas it can vary from zero to twice the period with classical periodic
task scheduling. Figure 4.2 shows a comparison between a classical task model and an
adaptive task model. The fundamental difference between the two models is in selecting
the release times, which can be set anywhere before the deadline depending on the
individual requirements of the task. So the deadline is defined as one period from the
previous task instance completion.
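As an illustration of this deadline and release rule, the helper below (hypothetical, not taken from Wang and Lin, 1994) places the next instance's deadline one period after the completion of the previous instance, and chooses a release that keeps task starts one period apart whenever that still leaves room to execute before the deadline.

```python
# Sketch of the adaptive deadline/release rule described above.

def next_instance(completion_time, start_time, period, exec_time):
    """Return (release, deadline) for the next instance of an adaptive task."""
    deadline = completion_time + period                 # one period after completion
    desired_start = start_time + period                 # constant spacing of starts
    # The release may be anywhere before the deadline, but it must still leave
    # room for exec_time, and it cannot precede the previous completion.
    release = max(completion_time, min(desired_start, deadline - exec_time))
    return release, deadline

# Instance j started at t=12, completed at t=15, period 20, execution time 3:
print(next_instance(15, 12, 20, 3))   # (32, 35)
```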
4.2.3 Fault-tolerant mechanism
The basic idea of the fault-tolerant mechanism, based on an imprecise computation
model, relies on making available results that are of poorer, but acceptable, quality
on a timely basis when results of the desired quality cannot be produced in time.
In this context, two approaches have been proposed: the deadline mechanism model
(Campbell et al., 1979; Chetto and Chetto, 1991) and the imprecise computation model
(Chung et al., 1990). These models are detailed in the next two subsections.
Deadline mechanism model
The deadline mechanism model requires each task τi to have a primary program τi^p and
an alternate one τi^a. The primary program provides a good quality of service, which is
in some sense more desirable, but takes an unknown length of time. The alternate program
produces an acceptable result, which may be less desirable, in a known and deterministic
length of time. In a control system that uses the deadline mechanism, the scheduling
algorithm ensures that all deadlines are met, either by the primary programs or by the
alternate ones, with preference given to the primary code whenever possible.
To illustrate the use of this model, let us consider an avionics application concerned
with the position of a plane during flight. The most accurate method is to use satellite
communication, i.e. the GPS technique. But the corresponding program has an unknown
execution time, due to the multiple accesses to the satellite service by many users. On
the other hand, quite a good estimate of the plane's position can be obtained from its
previous position, its speed and its direction over a fixed time step. The first positioning
technique, with a non-deterministic execution time, corresponds to the primary code of
this task, and the second, less precise, method is the alternate code for this task. Of
course, the precise positioning must still be executed from time to time in order to
maintain the quality of this crucial function. To achieve the goal of this deadline
mechanism, two strategies can be applied:
• The first-chance technique schedules the alternate programs first and the primary
codes are then scheduled in the remaining times after their associated alternate
programs have completed. If the primary program ends before its deadline, its
results are used in preference to those of the alternate program.
• The last-chance technique schedules the alternate programs in reserved time intervals at the latest possible time, and the primary codes are scheduled in the remaining time before their associated alternate programs. With this strategy, the scheduler preempts a running primary program to execute the corresponding alternate program at its latest start time, so that the deadline is still met. If a primary program completes successfully, the execution of the associated alternate program is no longer necessary (a sketch of this latest-start computation is given after this list).
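Here is a minimal sketch of the latest-start computation used by the last-chance technique; the function names are illustrative. The alternate program of an instance must start no later than its deadline minus the alternate's worst-case execution time, at which point a still-running primary program is preempted.

```python
# Sketch only: in the last-chance technique the alternate program must start
# no later than
#   latest_start = deadline - alternate_wcet
# so that it can still complete by the deadline if the primary is preempted.

def latest_alternate_start(deadline, alternate_wcet):
    return deadline - alternate_wcet

def run_instance(now, deadline, alternate_wcet, primary_finished):
    """Decide whether the primary may keep running or must be preempted."""
    if primary_finished:
        return "skip alternate"                  # primary result is available
    if now >= latest_alternate_start(deadline, alternate_wcet):
        return "preempt primary, run alternate"  # last chance to meet the deadline
    return "keep running primary"

print(run_instance(now=5, deadline=8, alternate_wcet=2, primary_finished=False))
# -> 'keep running primary'
print(run_instance(now=6, deadline=8, alternate_wcet=2, primary_finished=False))
# -> 'preempt primary, run alternate'
```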
To illustrate the first-chance technique, we consider a set of three tasks: two classical
tasks τ1 (0, 2, 16, 16) and τ2 (0, 6, 32, 32), and a task τ3 with primary and alternate
programs. The alternate code τ3^a is defined by the classical fixed parameters (0, 2, 8, 8).
The primary program τ3^p has a different computation time at each instance; assume
that, for the first four instances, the execution times of τ3^p are successively 4, 4, 6
and 6. The scheduling is based on an RM algorithm for the three tasks τ1, τ2 and
the alternate code τ3^a. The primary programs τ3^p are scheduled with the lowest priority
or during the idle time of the processor. Figure 4.3 shows the result of the simulated
sequence. We can notice that, globally, the primary program completes successfully for
50% of the instances. More precisely, we have the following executions (a simulation
sketch reproducing this sequence is given after the list):
• Instance 1: no free time for primary program execution;
• Instance 2: primary program completed;
• Instance 3: not enough free time for primary program execution;
• Instance 4: primary program completed.
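The following sketch reproduces this behaviour under the stated parameters: RM scheduling of τ1, τ2 and τ3^a, with the primary τ3^p executed only in processor idle time and per-instance durations 4, 4, 6 and 6. It is an illustrative simulation, not the exact construction of Figure 4.3, but its output matches the four outcomes listed above.

```python
# Sketch of the first-chance example: tau1(0, 2, 16, 16), tau2(0, 6, 32, 32),
# alternate tau3^a(0, 2, 8, 8) scheduled by RM, and primary tau3^p executed
# only in idle time with per-instance durations 4, 4, 6, 6.

def simulate(horizon=32):
    # RM tasks, highest priority first: (name, period, wcet).
    rm_tasks = [("tau3a", 8, 2), ("tau1", 16, 2), ("tau2", 32, 6)]
    rm_state = [0] * len(rm_tasks)                 # remaining work per RM task
    primary_durations = [4, 4, 6, 6]               # execution time of tau3^p per instance
    primary_remaining = 0
    primary_results = []                           # "completed" / "missed" per instance
    instance = -1
    for t in range(horizon):
        # Releases of the periodic RM tasks.
        for i, (_, period, wcet) in enumerate(rm_tasks):
            if t % period == 0:
                rm_state[i] = wcet
        # Release of a new primary instance every 8 time units.
        if t % 8 == 0:
            if instance >= 0:
                primary_results.append("completed" if primary_remaining == 0 else "missed")
            instance += 1
            primary_remaining = primary_durations[instance]
        # Run the highest-priority pending RM task, else the primary in idle time.
        for i in range(len(rm_tasks)):
            if rm_state[i] > 0:
                rm_state[i] -= 1
                break
        else:
            if primary_remaining > 0:
                primary_remaining -= 1
    primary_results.append("completed" if primary_remaining == 0 else "missed")
    return primary_results

print(simulate())   # ['missed', 'completed', 'missed', 'completed']
```

The output corresponds to the four outcomes above: no free time for instance 1, completion for instance 2, insufficient free time for instance 3 and completion for instance 4.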
In order to illustrate the last-chance technique, we consider a set of three tasks: two
classical tasks τ1 (0, 4, 16, 16) and τ2 (0, 6, 32, 32), and task τ3 with primary and
alternate programs.
