
MINISTRY OF EDUCATION & TRAINING
THE UNIVERSITY OF DANANG

----------

LE THANH LONG

AUTOMATIC TESTING OF
INTERACTIVE MULTIMODAL
APPLICATIONS

ENGINEERING DOCTORAL THESIS

Da Nang, 11/2017


MINISTRY OF EDUCATION & TRAINING
THE UNIVERSITY OF DANANG

----------

LE THANH LONG

AUTOMATIC TESTING OF
INTERACTIVE MULTIMODAL
APPLICATIONS
Major: Computer science
Code of Major: 62 48 01 01

ENGINEERING DOCTORAL THESIS
Supervisors:
1. Prof. Dr. Ioannis Parissis
2. Assoc. Prof. Dr. Nguyễn Thanh Bình

Da Nang, 11/2017


DECLARATION
I hereby certify that this thesis is my own work, carried out under the guidance of
Prof. Dr. Ioannis Parissis and Assoc. Prof. Dr. Nguyễn Thanh Bình.
I certify that the research results presented in the thesis are true and have not
been copied from any other documents. The origins of all quotations are recorded
clearly and completely.
PhD. Student

Lê Thanh Long



TABLE OF CONTENTS
TABLE OF CONTENTS .............................................................................................................. 1
INTRODUCTION .......................................................................................................................... 9
Chapter 1. INTERACTIVE MULTIMODAL APPLICATIONS ........................................ 12
1.1. Multimodality .................................................................................................................. 12
1.2. Features of Multimodal Interaction ................................................................................ 13

1.3. Multimodal Fusion .......................................................................................................... 14
1.4. Design Spaces and Theoretical Frameworks ................................................................. 14
1.4.1. The TYCOON Theoretical Framework .................................................................... 14
1.4.2. CASE Design Space ................................................................................................... 15
1.4.3. The CARE Properties ................................................................................................. 16
1.5. Introduction to Software Testing ..................................................................... 18
1.5.1. Model-Based Testing ................................................................................................. 18
1.5.2. Operational Profile-Based Testing ............................................................................ 19
1.5.3. Requirement-Based Testing ........................................................................ 20
1.6. Testing Interactive Multimodal Applications ................................................................ 21
1.6.1. ICO Method ................................................................................................................ 21
1.6.2. Event B Method .......................................................................................................... 26
1.6.3. Synchronous Approach .............................................................................................. 30
1.7. Conclusion ....................................................................................................................... 34
Chapter 2. BACKGROUND OF A NEW TEST MODELING LANGUAGE ................... 35
2.1. Task Trees ........................................................................................................................ 35
2.2. The Interactive Multimodal Application Memo ............................................................ 38
2.3. Operational Profiles ......................................................................................................... 42
2.4. Probabilistic Finite State Machines ................................................................................ 44
2.5. Setting a Level of Abstraction for Testing ..................................................................... 45
2.6. Generation of Tests at Dialog Controller Level ............................................................ 46
2.7. Taking into Account Conditional Probabilities ............................................................. 49
2.8. Evaluation of the Results of Extending the CTT with Conditional Probabilities ....... 55
2.9. Conclusion ....................................................................................................................... 57



Chapter 3. TTT: A NEW TEST MODELING LANGUAGE FOR TESTING
INTERACTIVE MULTIMODAL APPLICATIONS ............................................................ 59

3.1. Introduction ...................................................................................................................... 59
3.2. The User Actions Traces ................................................................................................. 61
3.3. Definition of the TTT Language .................................................................................... 62
3.4. Basic Structure of a TTT Model ..................................................................................... 62
3.5. Supporting Conditional Probability Specifications for all the CTT Operators ........... 65
3.6. Storing the Traces of the User Actions .......................................................................... 73
3.7. Transformation Rules from CTT to Test Model by Using the TTT Language ........... 76
3.8. Taking into Account Multimodality ............................................................................... 77
3.8.1. Generating Tests for Multimodal Events .................................................................. 77
3.8.2. Checking the Validity of CARE Properties .............................................................. 80
3.8.2.1. Equivalence Property .............................................................................................. 81
3.8.2.2. Redundancy-Equivalence Property ........................................................................ 82
3.8.2.3. Complementarity Property...................................................................................... 84
3.9. Modeling the Interactive Multimodal Application Memo by the TTT Language ...... 86
3.10. Advantages and disadvantages of the TTT Language ............................................... 91
3.11. Conclusion ..................................................................................................................... 91
Chapter 4. TTTEST: THE SUPPORT TOOL FOR TESTING INTERACTIVE
MULTIMODAL APPLICATIONS .......................................................................................... 93
4.1. Introduction ...................................................................................................................... 93
4.2. Test Execution Environment .......................................................................................... 93
4.3. The TTTEST Tool ........................................................................................................... 94
4.4. Translating TESTCTT Model into C Program .............................................. 95
4.4.1. Translation Problems.................................................................................................. 95
4.4.2. Automatic Translation Solution................................................................................. 98
4.5. Experimentation ............................................................................................................. 100
4.5.1. Modeling the NotePad Application by the TTT Language ................................... 100
4.5.2. Testing the Memo Application ................................................................................ 103
4.5.3. Testing the Map Navigator Application.................................................................. 105
4.6. Evaluation of the Resulted Test Cases ......................................................................... 112
CONCLUSIONS AND FUTURE WORKS ........................................................................... 116



PUBLICATIONS ....................................................................................................................... 118
REFERENCES ........................................................................................................................... 119



ACRONYMS

No.  Acronym   Meaning
1    API       Application Programming Interface
2    AUT       Application Under Test
3    CARE      Complementarity, Assignment, Redundancy and Equivalence
4    CASE      Concurrent, Alternate, Synergistic and Exclusive
5    CTT       ConcurTaskTrees
6    DFA       Deterministic Finite State Automaton
7    FSMs      Finite State Machines
8    HMD       Head Mounted Display
9    IMA       Interactive Multimodal Application
10   ICO       Interactive Cooperative Objects
11   MBT       Model-Based Testing
12   NFA       Nondeterministic Finite State Automaton
13   ObCS      Object Control Structure
14   OPBT      Operational Profile-Based Testing
15   PFSM      Probabilistic Finite State Machine
16   RBT       Requirement-Based Testing
17   SQL       Structured Query Language
18   TW        Temporal Window
19   UML       Unified Modeling Language
20   TTT       Task Tree-based Test
21   TTTEST    Testing IMA by means of the TTT language



LIST OF FIGURES
Figure 1.1. The TYCOON Theoretical Framework for studying multimodality. .... 15
Figure 1.2. The CASE design space.......................................................................... 16
Figure 1.3. A window of Tuple editor. ..................................................................... 21
Figure 1.4. The class Editor. ..................................................................................... 23

Figure 1.5. The ObCS of the class Editor. ................................................................ 24
Figure 2.1. The interactive multimodal application "Memo". .................................. 38
Figure 2.2. The Memo application structure [4] ......................................................... 40
Figure 2.3. The CTT for the Memo application. ....................................................... 42
Figure 2.4. The CTT with unconditional probabilities for the Memo application. .. 43
Figure 2.5. A multimodal application organized along the PAC-Amodeus model . 45
Figure 2.6. FSM Example for the Memo application [20]........................................ 48
Figure 2.7. The behavior of the choice operator. ...................................................... 51
Figure 2.8. The behavior of the concurrency operator. ............................................ 51
Figure 2.9. The behavior of the deactivation operator. ............................................. 52
Figure 2.10. The behavior of the option operator. .................................................... 52
Figure 2.11. The behavior of the suspend-resume operator. ..................................... 53
Figure 2.12. The extended CTT with conditional probabilities for the Memo
application. ................................................................................................................ 55
Figure 4.1. The TTTEST Testing Environment. ....................................................... 93
Figure 4.2. The TTTEST tool interface. ................................................................... 94
Figure 4.3. Transformation diagrams from TESTCTT model into C program. ....... 98
Figure 4.4. Translating TESTCTT to C program with Lex/Yacc. ............................ 99
Figure 4.5. The extended CTT for application NotePad. ........................................101
Figure 4.6. Multimodal interaction with a map. .....................................................105
Figure 4.7. The extended CTT for the Map navigator application. ........................ 106



LIST OF TABLES
Table 2.1. The CTT Operators [27] .......................................................................... 36
Table 2.2. Test data generated with unconditional probabilities and conditional
probabilities ............................................................................................................... 57
Table 2.3. Extensions of CTT operators with conditional probabilities ................... 57

Table 3.1. The TTT Syntax ....................................................................................... 62
Table 3.2. A basic structure of a TESTCTT ............................................................. 63
Table 3.3. A structure of a function .......................................................................... 65
Table 3.4. The CTT Syntax ....................................................................................... 65
Table 3.5. The behavior of choice operator .............................................................. 67
Table 3.6. The behavior of Concurrency operator .................................................... 68
Table 3.7. The behavior of Deactivation operator .................................................... 70
Table 3.8. The behavior of Suspend-resume operator .............................................. 71
Table 3.9. The behavior of Option operator.............................................................. 72
Table 3.10. The SQL-like syntax .............................................................................. 74
Table 3.11. The conditional constructs syntax .......................................................... 74
Table 3.12. Transformation rules from augmented CTT to Test Model .................. 76
Table 3.13. The semantics of modalities operator .................................................... 77
Table 3.14. Events are generated by Modalities operator ........................................ 80
Table 3.15. The behavior of TestEquivalence operator ........................................... 81
Table 3.16. The result of TestEquivalence operator ................................................. 82
Table 3.17. The behavior of TestRedundant_EquivalenceEarly operator ................ 83
Table 3.18. The result of TestRedundant_EquivalenceEarly operator ..................... 84
Table 3.19. The behavior of TestcomplementaryEarly operator .............................. 85
Table 3.20. The result of TestComplementaryEarly operator ................................... 86
Table 3.21. TESTCTT model for the Memo application .......................................... 87
Table 4.1. Lexical substitutions ................................................................................ 95
Table 4.2. Syntactic transformations ......................................................................... 96
Table 4.3. Transformation of choice operator........................................................... 97


Table 4.4. Transformations from create table statement in the TTT language into the
C language ................................................................................................................. 97
Table 4.5. High-level NotePad model .....................................................................102

Table 4.6. The result of TestEquivalence operator .................................................103
Table 4.7. The result of TestRedundantEquivalenceEarly operator ....................... 103
Table 4.8. The result of TestComplementaryEarly operator ...................................104
Table 4.9. The TESTCTT model for the Map Navigator application ....................107
Table 4.10. Multimodal Events are generated for the Map Navigator ...................112
Table 4.11. Results of the Experiment 1 .................................................................113
Table 4.12. Results of the Experiment 2 .................................................................113
Table 4.13. Results of the Experiment 3 .................................................................113



INTRODUCTION
1. Context and Motivation
Interactive Multimodal Applications (IMAs) support communication with
the user through different modalities such as voice and gesture. They can greatly
improve human-computer interaction, because they can be more intuitive, natural,
efficient, and robust. Multimodality brings an intuitive, natural affinity between the
machine and the user, as in virtual reality mobile applications. Flexibility is
obtained when the user can use equivalent modalities for the same tasks, while
robustness can result from the integration of redundant or complementary
inputs [35].
The CARE properties (Complementarity, Assignment, Redundancy and
Equivalence) can be used as a measure to assess the usability of the multimodal
interaction. Equivalence and assignment represent the availability and, respectively,
the absence of choice between multiple modalities for performing a task while
complementarity and redundancy express relationships between modalities. The
flexibility and robustness of interactive multimodal applications result in an
increasing complexity of the design, development and testing. Therefore, ensuring
their correctness requires thorough validation [34].

Approaches based on formal specifications automating the development and
the validation activities have been proposed to deal with this complexity. In [21],
Laya Madani et al. present a technique of test case generation for testing CARE
properties by means of a synchronous approach. According to the proposed
approach, CARE properties are translated into an enhanced version of the Lustre
synchronous language. An improved method presented in [22] uses task trees and a
fusion model to perform test data generation for IMAs.
The approach presented above uses several notations, inspired from existing
modeling languages, to build test models: a model of the application behavior, a
model of the interactive tasks, operational profiles (annotations on CTT) and
modality specifications. The variety of notations makes the modeling process hard.
So, as an additional improvement to this previous research work, the objective of
this thesis, “Automatic Testing of Interactive Multimodal Applications”, is to
define a single specification and modeling language, called TTT (Task Tree based
Test), making it possible to express, in a single and consistent syntax:
 Scenarios and conditional operational profiles for IMAs,
 Test oracles,
 Expected properties of the IMAs.
We built an automatic test generation approach based on this test modeling
language. The approach allows specifying multimodal events of interactive
multimodal applications and CARE properties as well as checking the validity of
CARE properties. We also develop a tool that automates the test of such interactive
multimodal applications.
2. Main Contributions of the Thesis
The thesis has the following main contributions:
(1) On the basis of analyzing the characteristics of ConcurTaskTrees (CTT), a
well-known notation for specifying interactive applications, extensions of the CTT
operators with conditional probabilities are proposed in order to take IMAs into
account. More precisely, the user behavior on an IMA is often influenced
by conditions on the application and its environment.
(2) A new test modeling language is defined for interactive multimodal
applications based on task trees. This language defines all the CTT operators,
supports state definition, multimodality and conditional probabilities for
IMA.
(3) The transformation rules from CTT into a test model in the TTT language are
formally developed.
(4) A solution to generate test data for interactive applications is proposed.



(5) A specification of multimodal interactions and CARE properties is integrated
into the TTT language. The specification consists of two different issues
when testing IMA: generating tests for multimodal events and checking the
validity of the CARE properties.
(6) The TTTEST tool is developed for automating the test of such interactive
multimodal applications.

3. Structure of the Thesis
Chapter 1 presents the background of interactive multimodal applications
which includes the definition of multimodality and key features of multimodal
interaction. Testing methods for interactive multimodal applications are also
summarized in this chapter.
In chapter 2, we present in more detail the testing approach that we
focus on and the related background, consisting of task trees, finite state machines,
multimodal interaction, CARE properties, operational profiles, and conditional
probabilities.
In chapter 3, we propose a test modeling language for testing interactive
multimodal applications, called TTT, which makes it possible to express scenarios
and conditional operational profiles. The transformation rules from CTT
specifications into TTT test models are also developed.
Chapter 4 presents the TTTEST tool for testing interactive multimodal
applications which includes the test execution environment, and the underlying
implementation of this tool. We also introduce two interactive multimodal
applications and describe how to test them.



Chapter 1. INTERACTIVE MULTIMODAL APPLICATIONS
Interactive Multimodal Applications (IMAs) [35] provide access to various
commercial services and are increasingly involved in many critical domains such as
flight or industrial process control. Designing an IMA is a complex and error-prone
activity, because of the importance of the human-computer interaction aspect. This
is especially true when these interactions use multiple modalities (voice, gesture, etc.).
They have the potential to greatly improve human-computer interaction,
because they can be more intuitive, natural, efficient, and robust. Flexibility is
obtained when the user can use equivalent modalities for the same tasks while
robustness can result from the integration of redundant or complementary inputs. As
a result, thoroughly testing such applications is particularly important and requires
more effort than for traditional interactive applications.
In this chapter, we present the background of interactive multimodal
application which includes the definition of multimodality and key features of
multimodal interaction. Testing methods for interactive multimodal applications are
also summarized in this chapter.


1.1. Multimodality
A modality is a channel or path of communication between the human and
the computer. It is one of different senses through which the human can perceive the
output of the computer (audition, vision, touch, smell, and taste); modalities will
also cover the input devices and sensors allowing the computer to receive
information from the human such as speech, pen, touch, manual gestures, gaze and
head and body movements [7]. So, a modality is defined as the representational
application, and the physical I/O device used to convey expressions of the
representational application [34].
Multimodal interaction processes two or more combined user input modes in
a coordinated manner with multimedia application output. Multimodal interaction
aims at recognizing naturally occurring forms of human language and
behavior [35]. Multimodal interaction can come either from the user action or from
the environment. Multimodal data can be analyzed concurrently, offline as well as
online, in order to get richer and more robust information about the context and the
participants.
The advantage of multimodal interaction is increased usability: the
weaknesses of one modality are offset by the strengths of another. On a mobile
device with a small visual interface and keypad, a word may be quite difficult to
type but very easy to say. A multimodal application can process two or more
combined user input modes and consider these same input modes in a
non-combined, exclusive or equivalent manner.

1.2. Features of Multimodal Interaction
Multimodal interaction can be thought of as a sub-branch of human-computer
interaction using a set of modalities to achieve communication between users and
machines. Features of multimodal interaction include the following [1]:

 Permitting the flexible use of input modes, including alternation and
integrated use.
 Supporting improved efficiency, especially when manipulating graphical
information.
 Supporting shorter and simpler speech utterances than a speech-only
interface.
 Giving users alternatives in their interaction techniques.
Evaluations [2] have shown that users made 36% fewer errors while using a
multimodal interface in place of a unimodal interface. Furthermore, between 95%
and 100% of all users of these evaluations confirmed their preference for the
multimodal interface over other types of human-machine interfaces. Moreover,
multimodal interfaces, and in particular pen/speech interfaces, have shown greater
expressive power and greater potential precision in visual-spatial tasks.



1.3. Multimodal Fusion
The process of integrating information from various input modalities and
combining them into a complete command is referred to as multimodal fusion.
Early fusion consists in merging the outcomes of each modal recognizer by using
integration mechanisms such as statistical integration techniques, agent theory,
hidden Markov models, and artificial neural networks. Late fusion merges the
extracted semantic information by using specific dialogue-driven fusion procedures
to yield the complete interpretation, using structures such as melting pots [28] and
semantic frames [44].

1.4. Design Spaces and Theoretical Frameworks
This section presents a theoretical framework, as well as two design spaces
specifically fashioned for the analysis of multimodal interaction. The theoretical
framework presented is the TYCOON framework, created by Martin [25].
As for the design spaces, the CASE model [25] classifies interactive multimodal
applications and the CARE properties assess the usability of multimodal interaction.

1.4.1. The TYCOON Theoretical Framework
TYCOON (Types and goals of Cooperation between modalities) is a two-dimensional
framework (Figure 1.1), with five basic types of cooperation between
modalities forming the type axis, and a number of goals defining the second
dimension of the framework.
The five types of cooperation are of particular interest, as follows: Transfer,
when a chunk of information produced by a modality is used by another modality;
Equivalence, when a chunk of information may be processed, as an alternative, by
either of two modalities; Specialization, when a specific kind of information is always
processed by the same modality; Redundancy, when the same information is
processed by a number of modalities; and Complementarity, when different chunks
of information are processed by each modality, but need to be merged.



[Figure 1.1 is a matrix: the five types of cooperation (Transfer, Equivalence,
Specialization, Redundancy, Complementarity) form one axis, the goals
(Translation, Recognition, Fast interaction, Interpretation, Learnability) the
other, and crosses mark which types of cooperation serve which goals.]
Figure 1.1. The TYCOON Theoretical Framework for studying multimodality.
Transfer involves translation, for instance, the user may express a request in
one modality (speech) and get relevant information in another modality (video).
Transfer also involves recognition, i.e. mouse click detection may be transferred to
a speech modality in order to ease the recognition of predictable words (here,
that...). Equivalence enables a faster interaction since it allows the system or the
user to select the fastest modality. Specialization may help the user to interpret
the events produced by the computer: the choice of a given modality adds semantic
information and hence helps the interpretation process. Redundancy also enables a
faster interaction; for instance, if the user types “quit” on the keyboard and utters
“quit”, this redundancy can be used by the system to avoid a confirmation dialogue.
A redundant multimodal output involving both the visual display of a text and a
speech restitution of the same text enables faster learning of the graphical
interface, and thus learnability. Complementarity may improve interpretation,
e.g. when a graphical output is sufficient for an expert but needs to be completed
by a textual output for novice users.

1.4.2. CASE Design Space
The CASE design space describes multimodal interfaces according to three
distinct dimensions: type of fusion, use of modalities and level of abstraction [33].
The level of abstraction dimension differentiates between the multiple levels at
which the data from a given input device can be processed. Use of modalities
expresses the temporal use of different modalities. Finally, fusion of modalities
concerns the possible combination of different types of data.


Figure 1.2 maps the use of modalities and fusion of modalities dimensions in
order to obtain four different categories characterizing multimodal commands:
Concurrent, Alternate, Synergistic and Exclusive. A “concurrent” command thus
makes a parallel use of modalities, but does not combine them. An “alternate”
command combines modalities, but in a sequential manner. A “synergistic”
command combines modalities in a parallel way. Finally, an “exclusive” command
is composed of sequential and independent modalities.

                              USE OF MODALITIES
                           Sequential      Parallel
 FUSION OF    Combined     Alternate       Synergistic
 MODALITIES   Independent  Exclusive       Concurrent

Figure 1.2. The CASE design space.

1.4.3. The CARE Properties
The CARE properties were proposed by Coutaz et al. [8]. They are used in
usability testing of interactive multimodal applications, and can be applied to the
design of input as well as output multimodal interfaces. The four CARE properties
are Complementarity, Assignment, Redundancy and Equivalence.
Complementarity denotes several modalities that convey complementary chunks of
information. Assignment implies that a single modality is assigned to a task.
Redundancy indicates that the same piece of information is conveyed by several
modalities. Equivalence of modalities implies that the user can perform a task
using a modality chosen amongst a set of modalities. The following is a formal
definition of the CARE properties.
Equivalence: Modalities of set M are equivalent for reaching s' from s, if it is
necessary and sufficient to use any one of the modalities. M is assumed to contain at
least two modalities. More formally:

Equivalence(s, M, s') ⇔ (Card(M) > 1) ∧ (∀m ∈ M: Reach(s, m, s'))

where a modality m is an interaction method that an application can use to reach a
goal, Card(M) is the number of modalities in set M, and Reach(s, m, s') means that
the application can reach state s' from state s in one step using modality m.
Equivalence expresses the availability of choice between multiple modalities but
does not impose any form of temporal constraint on them. The Memo application in
Figure 2.1 shows an example of equivalence between several modalities for the
task “remove a note”: users have the choice of speaking “remove this note” or
pressing the “delete” key.
Assignment: Modality m is assigned in state s to reach s', if no other
modality is used to reach s' from s. In contrast to equivalence, assignment expresses
the absence of choice to get from one state to another. Assignment can be defined as:

Assignment(s, m, s') ⇔ Reach(s, m, s') ∧ (∀m' ∈ M: Reach(s, m', s') ⇒ m' = m)

In Figure 2.1, exiting Memo is performed by direct manipulation on the keyboard
only; speech cannot be used as an alternative. Therefore, the application imposes an
assignment for the “exit” task.
Redundancy: Modalities of a set M are used redundantly to reach state s'
from state s, if they have the same expressive power (they are equivalent) and if all
of them are used within the same temporal window tw. In other words, the
application shows repetitive behaviour without increasing its expressive power:

Redundancy(s, M, s', tw) ⇔ Equivalence(s, M, s') ∧ (Sequential(M, tw) ∨ Parallel(M, tw))

where Sequential(M, tw) means that the modalities of M are used sequentially
within a temporal window tw, and Parallel(M, tw) denotes the simultaneous use of
the modalities of M over a finite temporal window tw. For example, the Memo
application in Figure 2.1 supports redundancy between speech and mouse acts to
get a note.
Complementarity: Modalities of a set M must be used in a complementary
way to reach state s' from state s within a temporal window tw, if all of them must
be used to reach s' from s, i.e., none of them taken individually can cover the
target state.
Complementarity(s, M, s', tw) ⇔ (Card(M) > 1) ∧ (Duration(tw) ≠ ∞) ∧
(∀M' ∈ P(M): (M' ≠ M ⇒ ¬Reach(s, M', s'))) ∧ (Sequential(M, tw) ∨ Parallel(M, tw))

Where Duration(tw) is the duration of the time interval tw. As shown in
Figure 2.1, a Memo user can say "remove this" and select a note in the Memo
display. These two modalities complement each other and must be combined to reach
the intended goal "remove the note".
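As a concrete illustration (not part of the CARE formalism itself), the following sketch encodes a Reach relation for the Memo example and evaluates the Equivalence and Assignment predicates over it. All state and modality names are invented for the illustration:

```python
# Hypothetical sketch: checking the CARE Equivalence and Assignment
# predicates over an explicit Reach relation. States and modalities
# are illustrative, loosely following the Memo example.

# Reach(s, m, s'): the application can go from s to s' in one step using m.
REACH = {
    ("note_selected", "speech", "note_removed"),    # say "remove this note"
    ("note_selected", "keyboard", "note_removed"),  # press the "delete" key
    ("running", "keyboard", "exited"),              # exiting: keyboard only
}

def reach(s, m, s2):
    return (s, m, s2) in REACH

def equivalence(s, modalities, s2):
    """Card(M) > 1 and every modality of M reaches s' from s."""
    return len(modalities) > 1 and all(reach(s, m, s2) for m in modalities)

def assignment(s, m, s2, all_modalities):
    """m reaches s' from s and no other modality does."""
    return reach(s, m, s2) and all(
        m2 == m for m2 in all_modalities if reach(s, m2, s2)
    )

MODALITIES = {"speech", "keyboard"}
print(equivalence("note_selected", MODALITIES, "note_removed"))           # True
print(assignment("running", "keyboard", "exited", MODALITIES))            # True
print(assignment("note_selected", "speech", "note_removed", MODALITIES))  # False
```

The last call returns False because the keyboard also reaches "note_removed", so speech is not assigned: the two modalities are equivalent there, not assigned.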

1.5. Introduction to Software Testing
Software testing is an activity that aims at detecting software errors: we
execute the program with specific input values and compare the behavior of the
program with the expected behavior in order to find failures [24].
Exhaustive testing is impossible, because each operation of the software
accepts a very large number of inputs. We must therefore choose a small number of
tests that can be run in the available time.

After each test execution, we must decide whether the observed behavior of
an application was a failure or not. A failure is an undesired behavior. To do so, we
may build an oracle to check the test output.
Testing is different from other quality improvement techniques such as static
verification, inspections, and reviews. It is also different from the debugging and
error-correction process that happens after testing has detected a failure.

1.5.1. Model-Based Testing
A model is a simplified description of a system, based on its requirements
and functions, that helps us understand and predict the behavior of the system.
Model-based testing is a software testing technique in which test cases are
derived from a model of the application under test (AUT). Model-based testing has
been adopted as an integrated part of the testing process.

Model-based testing techniques can generate executable test cases that
include oracle information, such as the expected output values of the AUT, or some
automated checks on the actual output values to see if they are correct. To generate
tests with oracles, the test generator must know enough about the expected behavior
of the AUT to be able to predict or check the AUT output values.
In this thesis, we create a model of the expected IMA behavior, which
captures some of the requirements. Then the model-based testing tools are used to
automatically generate tests from that model.
MBT is considered an effective solution to reduce cost and increase
efficiency in testing and quality assurance of software products. MBT has many
benefits: AUT fault detection, reduced testing cost and time, improved test
quality, requirements defect detection, traceability, and support for requirements
evolution. However, MBT is not easy to apply in practice: it is hard to build
accurate models, it places high demands on testers' skills, and creating the
expected output values for the test cases is a difficult problem.
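The derivation of test cases with oracle information from a model can be sketched as follows. The state machine, its states and its inputs are all invented for the illustration, not taken from an actual AUT:

```python
# Hypothetical sketch: deriving test cases (input sequence plus expected
# final state as oracle) from a finite-state model of the AUT.
from itertools import product

# Transition model: (state, input) -> next state. Names are invented.
MODEL = {
    ("idle", "open"): "editing",
    ("editing", "save"): "idle",
    ("editing", "close"): "idle",
    ("idle", "quit"): "done",
}

def expected_state(start, inputs):
    """Oracle: run the input sequence through the model."""
    state = start
    for i in inputs:
        state = MODEL.get((state, i), "error")
    return state

def generate_tests(start, inputs, length):
    """Enumerate all input sequences of a given length with their oracle."""
    return [(seq, expected_state(start, seq))
            for seq in product(inputs, repeat=length)]

for seq, expected in generate_tests("idle", ["open", "save", "quit"], 2):
    print(seq, "->", expected)
```

Each generated pair is an executable test: feed the input sequence to the AUT and compare its final state against the oracle computed from the model.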

1.5.2. Operational Profile-Based Testing
Operational profile-based testing (OPBT) [49] is a testing technique focusing
on usage distributions; it combines model-based testing, random testing, and
state-transition-based testing. The operational profile consists of a state
machine and probability distributions. The state machine represents the expected
behavior of the AUT. The probability distributions represent the characteristics
of the expected usage of the AUT and are derived from surveys of operational
environments.
The operational profile is used to randomly generate test cases in the form of
state-transition sequences starting from an initial state. The test cases statistically
reflect the characteristics of the expected usage in operational environments, and,
therefore, OPBT is effective in detecting faults that relate to frequent usage in
operational environments and have serious impacts on software reliability.

In OPBT, a formal model called the test model is used. The test model is a
probabilistic state machine that provides an overview of the test results and a
way to evaluate test stopping criteria. If a fault is detected by test case
execution, a special state transition representing the occurrence of the fault,
together with its probability, is added to the test model.
OPBT was proposed as a software testing technique mainly for the rapid
improvement of software reliability, and it is effective in reducing test effort.
One of the challenges in OPBT is deriving the usage distributions; it has been
shown that they can be systematically derived from software execution histories
captured by a test tool.
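A minimal sketch of OPBT test generation follows, assuming an invented operational profile: the states, actions and usage probabilities below are illustrative, not measured data:

```python
# Hypothetical sketch: generating test cases from an operational profile,
# i.e. a state machine whose outgoing transitions carry usage probabilities.
import random

# Profile: state -> list of (action, next_state, usage_probability).
# All names and probabilities are invented for the illustration.
PROFILE = {
    "start":     [("browse", "browsing", 0.7), ("login", "logged_in", 0.3)],
    "browsing":  [("select", "viewing", 0.8), ("quit", "end", 0.2)],
    "logged_in": [("browse", "browsing", 1.0)],
    "viewing":   [("back", "browsing", 0.5), ("quit", "end", 0.5)],
}

def generate_test_case(rng, max_steps=20):
    """Random walk from the initial state, weighted by usage probabilities."""
    state, steps = "start", []
    while state != "end" and len(steps) < max_steps:
        actions = PROFILE[state]
        weights = [p for (_, _, p) in actions]
        action, state, _ = rng.choices(actions, weights=weights)[0]
        steps.append(action)
    return steps

rng = random.Random(42)  # fixed seed so generated suites are reproducible
for _ in range(3):
    print(generate_test_case(rng))
```

Because the walk is weighted by the usage probabilities, frequent user behaviors appear in proportionally more test cases, which is exactly the property that makes OPBT effective for reliability-oriented testing.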


1.5.3. Requirement-Based Testing
Requirements-based testing (RBT) [9] is divided into two phases: ambiguity
reviews and cause-effect graphing. An ambiguity review is a technique for
identifying ambiguities in functional requirements in order to improve their
quality. Cause-effect graphing is a test-case design technique that derives the
minimum number of test cases covering 100 percent of the functional requirements.
The activities of RBT are to define test completion criteria, design test cases
and verify test coverage. The test effort has specific goals and testing is completed
only when the goals have been reached. Logical test cases are defined by four
characteristics: the initial state of the system prior to executing the test, the data, the
inputs, and the expected results.
The RBT methodology delivers maximum coverage with the minimum number of
test cases. RBT also provides quantitative measures of test progress, ensuring
that testing is adequate. It reduces delivery time by allowing testing to be
performed concurrently with the rest of the development activities.
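The cause-effect idea and the four characteristics of a logical test case can be sketched as follows. The causes, effects and the login requirement are invented for the illustration:

```python
# Hypothetical sketch of cause-effect graphing: causes are boolean input
# conditions, effects are boolean functions of the causes, and each
# logical test case fixes the causes and records the expected effects
# (the oracle). The requirement modeled here is invented: access is
# granted only when both the id and the password are valid.
from itertools import product

CAUSES = ["valid_id", "valid_password"]

EFFECTS = {
    "grant_access": lambda c: c["valid_id"] and c["valid_password"],
    "show_error":   lambda c: not (c["valid_id"] and c["valid_password"]),
}

def logical_test_cases():
    """Enumerate cause combinations with their expected effects."""
    cases = []
    for values in product([False, True], repeat=len(CAUSES)):
        causes = dict(zip(CAUSES, values))
        expected = {name: fn(causes) for name, fn in EFFECTS.items()}
        cases.append((causes, expected))
    return cases

for causes, expected in logical_test_cases():
    print(causes, "=>", expected)
```

In practice a cause-effect graphing tool would prune this enumeration to the minimal covering subset; the exhaustive enumeration above is kept only to make the cause/effect mapping explicit.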

1.6. Testing Interactive Multimodal Applications
Designing interactive multimodal applications is a complex activity, because
of the importance of the human-computer interaction aspect. For the same reason,
thoroughly testing such applications is particularly important and requires a lot of
effort. Approaches based on formal specifications automating the development and
the validation activities have been proposed to deal with this complexity.

1.6.1. ICO Method
ICO (Interactive Cooperative Objects) is based on Petri nets to model
event-driven interfaces [37]. ICO copes with concepts borrowed both from the
object-oriented approach (classification, inheritance, polymorphism and use
relationships) and from Petri net theory [39]. Each object is composed of four
components: data structure, operations, presentation, and behavior. ICO is used
to describe the structural and static aspects of applications. The dynamic or
behavioral aspects are modeled by a high-level Petri net with objects, called the
Object Control Structure (ObCS) (Figure 1.5).
To specify interactive applications with ICO, a window is modeled by an
ICO class, such as the one shown in Figure 1.3; its presentation defines the
window's layout, and its ObCS the window's dialogue. The example is an editor for
tuples in a relational database table.

Figure 1.3. A window of the Tuple editor [38].
In Figure 1.3, three different areas can be distinguished in that window:

 The editing area, in which the attributes of a selected tuple may be edited
through the use of standard interface components (radio buttons, check boxes,
simple-line entry fields).
 A scrollable list (list box) shows the tuples of the table, presenting them by
their distinctive attribute (a primary key). Items in this list may be selected by
clicking on them with the mouse.
 A command zone in which database operations (creation, deletion, ...) may
be launched by clicking on command pushbuttons.
This editor allows adding new tuples into the database, deleting tuples,
selecting tuples from those already stored and changing their values. The actions
available to the user change through time and depend on the state of the dialogue.
Those dialogue rules are expressed here informally. One of the goals of ICO
modeling is to make such informal natural-language requirements formal and
unambiguous:
 It is forbidden to Select a tuple from the table when another one is being
edited.

 It is forbidden to Quit the application while the user is editing a tuple. In any
other case it must be possible to quit.
 It is forbidden to Delete a tuple whose value has been modified by the user.
 After a modification of the current tuple, only the actions Add, Replace and
Reset are available.
 The user must be able to act on the items of the editing area at any time.
 Only tuples that satisfy the integrity constraints may be added to the
database.
To model the Tuple Editor with ICO, the class Editor has services, an ObCS
and a presentation part. The description of the class is presented in Figure 1.4. The
ObCS is shown in Figure 1.5.

Class Editor
Services
Reset; Replace; Edit; Delete; Quit; Add;
ObCS (see Figure 1.5)
Presentation

end
Figure 1.4. The class Editor.
In Figure 1.5, the following notations are used:
 the o.correct label denotes a precondition;
 the <o> label denotes an arc carrying a variable;
 the place Default carries the initial marking;
 the transition T1 is related to the edit service;
 the inhibitor arc has its own dedicated symbol.
The places are typed: Selected, List, Default: <o: Tuple>; Edited: <o, d: Tuple>.
The Editor's ObCS must be read in the following way: transitions are
labeled with variable names that are bound to objects when the transition occurs.
A transition may occur when its input places hold the required tokens (objects).
At that time, the related transition action is executed. Actions can create new
objects, delete objects, and update objects. The modified and the newly created
objects are deposited in the output places. The places are typed, which means
that all tokens inside a place must be of the same type.
 Initialization: The initial marking of the ObCS net depends on the actual
contents of the database at the time the window is opened (when the ICO is
created). Figure 1.5 shows an initial marking: the places list, selected and edited
are empty, and the place default contains the template for the first item to be
edited. If the table were not empty, one tuple would be automatically selected
while all the others would be in the place list.
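The firing rule described above can be sketched in code. The places, the token structure and the precondition below are a loose, hypothetical rendering of the edit transition of the Tuple editor, not the actual ObCS semantics:

```python
# Hypothetical sketch of how an ObCS-like net is read: a transition fires
# when its input place holds a required token; the precondition is checked
# on the bound object, and the resulting object is deposited in the output
# place. Place names loosely follow the Tuple editor example.

places = {
    "default":  [{"id": 1, "correct": True}],  # template tuple to edit
    "selected": [],
    "edited":   [],
}

def fire_edit():
    """Transition 'edit': move a tuple token from 'default' to 'edited'."""
    if not places["default"]:           # input place must hold a token
        return False
    o = places["default"].pop(0)        # bind the variable o to the token
    if not o["correct"]:                # precondition o.correct
        places["default"].insert(0, o)  # precondition fails: restore token
        return False
    places["edited"].append(o)          # deposit o in the output place
    return True

print(fire_edit(), places["edited"])  # True [{'id': 1, 'correct': True}]
print(fire_edit())                    # False (default is now empty)
```

The second firing attempt fails because the input place is empty, which is exactly the enabling condition the ObCS reading rule expresses: no required tokens, no occurrence of the transition.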
