
Software Testing
Cognizant Technology Solutions

Table of Contents
1 Introduction to Software
1.1 Evolution of the Software Testing Discipline
1.2 The Testing Process and the Software Testing Life Cycle
1.3 Broad Categories of Testing
1.4 Widely Employed Types of Testing
1.5 The Testing Techniques
1.6 Chapter Summary
2 Black Box and White Box Testing
2.1 Introduction
2.2 Black Box Testing
2.3 Testing Strategies/Techniques
2.4 Black Box Testing Methods
2.5 Black Box (vs) White Box
2.6 White Box Testing
3 GUI Testing
3.1 Section 1 - Windows Compliance Testing
3.2 Section 2 - Screen Validation Checklist
3.3 Specific Field Tests
3.4 Validation Testing - Standard Actions
4 Regression Testing
4.1 What is Regression Testing
4.2 Test Execution
4.3 Change Request
4.4 Bug Tracking
4.5 Traceability Matrix
5 Phases of Testing
5.1 Introduction
5.2 Types and Phases of Testing
5.3 The "V" Model
6 Integration Testing
6.1 Generalization of Module Testing Criteria
7 Acceptance Testing
7.1 Introduction – Acceptance Testing
7.2 Factors Influencing Acceptance Testing
7.3 Conclusion
8 System Testing
8.1 Introduction to System Testing
8.2 Need for System Testing
8.3 System Testing Techniques
8.4 Functional Techniques
8.5 Conclusion
9 Unit Testing
9.1 Introduction to Unit Testing
9.2 Unit Testing – Flow
Results
Unit Testing – Black Box Approach
Unit Testing – White Box Approach
Unit Testing – Field Level Checks
Unit Testing – Field Level Validations
Unit Testing – User Interface Checks
9.3 Execution of Unit Tests
Unit Testing Flow
Disadvantage of Unit Testing
Method for Statement Coverage
Race Coverage
9.4 Conclusion
10 Test Strategy
10.1 Introduction
10.2 Key Elements of Test Management
10.3 Test Strategy Flow
10.4 General Testing Strategies
10.5 Need for Test Strategy
10.6 Developing a Test Strategy
10.7 Conclusion
11 Test Plan
11.1 What is a Test Plan?
Contents of a Test Plan
11.2 Contents (in Detail)
12 Test Data Preparation - Introduction
12.1 Criteria for Test Data Collection
12.2 Classification of Test Data Types
12.3 Organizing the Data
12.4 Data Load and Data Maintenance
12.5 Testing the Data
12.6 Conclusion
13 Test Logs - Introduction
13.1 Factors Defining the Test Log Generation
13.2 Collecting Status Data
14 Test Report
14.1 Executive Summary
15 Defect Management
15.1 Defect
15.2 Defect Fundamentals
15.3 Defect Tracking
15.4 Defect Classification
15.5 Defect Reporting Guidelines
16 Automation
16.1 Why Automate the Testing Process?
16.2 Automation Life Cycle
16.3 Preparing the Test Environment
16.4 Automation Methods
17 General Automation Tool Comparison
17.1 Functional Test Tool Matrix
17.2 Record and Playback
17.3 Web Testing
17.4 Database Tests
17.5 Data Functions
17.6 Object Mapping
17.7 Image Testing
17.8 Test/Error Recovery
17.9 Object Name Map
17.10 Object Identity Tool
17.11 Extensible Language
17.12 Environment Support
17.13 Integration
17.14 Cost
17.15 Ease of Use
17.16 Support
17.17 Object Tests
17.18 Matrix
17.19 Matrix Score
18 Sample Test Automation Tool
18.1 Rational Suite of Tools
18.2 Rational Administrator
18.3 Rational Robot
18.4 Robot Login Window
18.5 Rational Robot Main Window - GUI Script
18.6 Record and Playback Options
18.7 Verification Points
18.8 About SQABasic Header Files
18.9 Adding Declarations to the Global Header File
18.10 Inserting a Comment into a GUI Script
18.11 About Data Pools
18.12 Debug Menu
18.13 Compiling the Script
18.14 Compilation Errors
19 Rational Test Manager
19.1 Test Manager - Results Screen
20 Supported Environments
20.1 Operating System
20.2 Protocols
20.3 Web Browsers
20.4 Markup Languages
20.5 Development Environments
21 Performance Testing
21.1 What is Performance Testing?
21.2 Why Performance Testing?
21.3 Performance Testing Objectives
21.4 Pre-requisites for Performance Testing
21.5 Performance Requirements
22 Performance Testing Process
22.1 Phase 1 – Requirements Study
22.2 Phase 2 – Test Plan
22.3 Phase 3 – Test Design
22.4 Phase 4 – Scripting
22.5 Phase 5 – Test Execution
22.6 Phase 6 – Test Analysis
22.7 Phase 7 – Preparation of Reports
22.8 Common Mistakes in Performance Testing
22.9 Benchmarking Lessons
23 Tools
23.1 LoadRunner 6.5
23.2 WebLoad 4.5
23.3 Architecture Benchmarking
23.4 General Tests
24 Performance Metrics
24.1 Client Side Statistics
24.2 Server Side Statistics
24.3 Network Statistics
24.4 Conclusion
25 Load Testing
25.1 Why is Load Testing Important?
25.2 When Should Load Testing Be Done?
26 Load Testing Process
26.1 System Analysis
26.2 User Scripts
26.3 Settings
26.4 Performance Monitoring
26.5 Analyzing Results
26.6 Conclusion
27 Stress Testing
27.1 Introduction to Stress Testing
27.2 Background to Automated Stress Testing
27.3 Automated Stress Testing Implementation
27.4 Programmable Interfaces
27.5 Graphical User Interfaces
27.6 Data Flow Diagram
27.7 Techniques Used to Isolate Defects
28 Test Case Coverage
28.1 Test Coverage
28.2 Test Coverage Measures
28.3 Procedure-Level Test Coverage
28.4 Line-Level Test Coverage
28.5 Condition Coverage and Other Measures
28.6 How Test Coverage Tools Work
28.7 Test Coverage Tools at a Glance
29 Test Case Points – TCP
29.1 What is a Test Case Point (TCP)
29.2 Calculating the Test Case Points
29.3 Chapter Summary
1 Introduction to Software
1.1 Evolution of the Software Testing discipline
The effective functioning of modern systems depends on our ability to produce software in
a cost-effective way. The term software engineering was first used at a 1968 NATO
workshop in West Germany, which focused on the growing software crisis. Thus the
software crisis of quality, reliability and high costs began long before most of today's
software testers were born.
The attitude towards software testing has undergone a major positive change in recent
years. In the 1950s, when machine languages were used, testing was nothing but
debugging. In the 1960s, when compilers were developed, testing started to be considered
an activity separate from debugging. In the 1970s, when software engineering concepts
were introduced, software testing began to evolve as a technical discipline. Over the last
two decades there has been an increased focus on better, faster and more cost-effective
software. There has also been a growing interest in software safety, protection and
security, and hence an increased acceptance of testing as a technical discipline and a
career choice.
Now, to answer "What is testing?", we can go by the famous definition of Myers: "Testing is
the process of executing a program with the intent of finding errors."
1.2 The Testing Process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process, given below:
1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting
The scope of testing can be tailored to the respective project, but the process mentioned
above is common to any testing activity.
Software testing has been accepted as a separate discipline to the extent that there is a
separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality
assurance process. Right from the requirements study through implementation, testing
needs to be done in every phase. The V-Model below, which pairs the Software
Development Life Cycle with the Software Testing Life Cycle, indicates the various phases
or levels of testing.
1.3 Broad Categories of Testing
Based on the V-Model mentioned above, there are two broad categories of testing
activities that can be performed on software:
• Static Testing
• Dynamic Testing
The verification performed on software work products before an executable is compiled
and created (requirement reviews, design reviews, code reviews, walkthroughs and audits)
is called Static Testing. When we test the software by executing it and comparing actual
against expected results, it is called Dynamic Testing.
1.4 Widely Employed Types of Testing
From the V-Model, we see that there are various levels or phases of testing, namely Unit
Testing, Integration Testing, System Testing, User Acceptance Testing, etc.
Let us look at brief definitions of the widely employed types of testing.
Unit Testing: Testing done on a unit, the smallest testable piece of software, to verify that it
satisfies its functional specification or its intended design structure.
Integration Testing: Testing which takes place as sub-elements are combined (i.e.,
integrated) to form higher-level elements.
Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes)
have not caused unintended effects and that the system still complies with its specified
requirements.
[Figure: SDLC - STLC. The V-Model pairs the development phases (Requirement Study,
High Level Design, Low Level Design) on the left with the testing phases (Unit Testing,
Integration Testing, System Testing, User Acceptance Testing, Production Verification
Testing) on the right.]
System Testing: Testing the software for the required specifications on the intended
hardware.
Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria, enabling the customer to determine whether to accept the
system.
Performance Testing: Evaluating the time taken or response time of the system to perform
its required functions, in comparison with the specified requirements.
Stress Testing: Evaluating a system beyond the limits of its specified requirements or
system resources (such as disk space, memory or processor utilization) to ensure that it
does not break unexpectedly.
Load Testing: A subset of stress testing that verifies a web site can handle a particular
number of concurrent users while maintaining acceptable response times.
Alpha Testing: Testing of a software product or system conducted at the developer's site
by the customer.
Beta Testing: Testing conducted at one or more customer sites by the end users of a
delivered software product or system.
1.5 The Testing Techniques
To perform these types of testing, two testing techniques are widely used. The testing
types above are performed based on the following techniques.
Black-box testing technique: This technique is used for testing based solely on analysis of
requirements (specification, user documentation, etc.). It is also known as functional
testing.
White-box testing technique: This technique is used for testing based on analysis of
internal logic (design, code, etc.), although expected results still come from the
requirements. It is also known as structural testing.
These topics will be elaborated on in the coming chapters.
1.6 Chapter Summary
This chapter covered the introduction and basics of software testing, touching upon:

• Evolution of Software Testing
• The testing process and life cycle
• Broad categories of testing
• Widely employed types of testing
• The testing techniques

2 Black Box and White Box testing
2.1 Introduction
Test Design refers to understanding the sources of test cases, test coverage, how to
develop and document test cases, and how to build and maintain test data. There are two
primary methods by which tests can be designed:
- BLACK BOX
- WHITE BOX
Black-box test design treats the system as a literal "black-box", so it doesn't explicitly use
knowledge of the internal structure. It is usually described as focusing on testing functional
requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and
closed-box.
White-box test design allows one to peek inside the "box", and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. It is used to
detect errors by means of execution-oriented test cases. Synonyms for white-box include:
structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer
the terms "behavioral" and "structural". Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn't strictly forbidden, but it's
still discouraged. In practice, it hasn't proven useful to use a single test design method.
One has to use a mixture of different methods so that they aren't hindered by the limitations
of a particular one. Some call this "gray-box" or "translucent-box" test design, but others
wish we'd stop talking about boxes altogether!
2.2 Black box testing
Black Box Testing is testing without knowledge of the internal workings of the item being
tested. For example, when black box testing is applied to software engineering, the tester
would only know the "legal" inputs and what the expected outputs should be, but not how

the program actually arrives at those outputs. It is because of this that black box testing
can be considered testing with respect to the specifications; no other knowledge of the
program is necessary. For this reason, the tester and the programmer can be independent
of one another, avoiding programmer bias toward his own work. For this kind of testing,
independent test groups are often used.
Though centered around the knowledge of user requirements, black box tests do not
necessarily involve the participation of users. Among the most important black box tests
that do not involve users are functionality testing, volume tests, stress tests, recovery
testing, and benchmarks. Additionally, there are two types of black box test that do involve
users, namely field and laboratory tests. In the following, the most important aspects of
these black box tests will be described briefly.

2.2.1.1 Black box testing - without user involvement
So-called "functionality testing" is central to most testing exercises. Its primary objective is
to assess whether the program does what it is supposed to do, i.e. what is specified in the
requirements. There are different approaches to functionality testing. One is to test each
program feature or function in sequence. The other is to test module by module, i.e. each
function where it is first called.
The objective of volume tests is to find the limitations of the software by processing a huge
amount of data. A volume test can uncover problems related to the efficiency of a system,
e.g. incorrect buffer sizes or excessive memory consumption, or it may simply show that an
error message is needed to tell the user that the system cannot process the given amount
of data.
During a stress test, the system has to process a huge amount of data or perform many
function calls within a short period of time. A typical example could be to perform the same
function from all workstations connected in a LAN within a short period of time (e.g.
sending e-mails, or, in the NLP area, to modify a term bank via different terminals
simultaneously).
The aim of recovery testing is to determine to what extent data can be recovered after a
system breakdown. Does the system provide the possibility to recover all of the data or
only part of it? How much can be recovered, and how? Is the recovered data still correct
and consistent? Recovery testing is very important, particularly for software that must meet
high reliability standards.
The notion of benchmark tests involves the testing of program efficiency. The efficiency of
a piece of software strongly depends on the hardware environment, and therefore
benchmark tests always consider the software/hardware combination. Whereas for most
software engineers benchmark tests are concerned with the quantitative measurement of
specific operations, some also consider user tests that compare the efficiency of different
software systems as benchmark tests. In the context of this document, however,
benchmark tests only denote operations that are independent of personal variables.
2.2.1.2 Black box testing - with user involvement
For tests involving users, methodological considerations are rare in SE literature. Rather,
one may find practical test reports that distinguish roughly between field and laboratory
tests. In the following, only a rough description of field and laboratory tests will be given.
One example is scenario tests. The term "scenario" entered software evaluation in the
early 1990s. A scenario test is a test case that aims at a realistic user background for the
evaluation of software. It is an instance of black box testing where the major objective is to
assess the suitability of a software product for everyday routines. In short, it involves
putting the system to its intended use by its envisaged type of user, performing a
standardised task.
In field tests users are observed while using the software system at their normal working
place. Apart from general usability-related aspects, field tests are particularly useful for
assessing the interoperability of the software system, i.e. how the technical integration of
the system works. Moreover, field tests are the only real means to elucidate problems of
the organisational integration of the software system into existing procedures. Particularly
in the NLP environment this problem has frequently been underestimated. A typical

example of the organisational problem of implementing a translation memory is the

language service of a big automobile manufacturer, where the major implementation
problem is not the technical environment, but the fact that many clients still submit their
orders as print-out, that neither source texts nor target texts are properly organised and
stored and, last but not least, individual translators are not too motivated to change their
working habits.
Laboratory tests are mostly performed to assess the general usability of the system. Due to
the high cost of laboratory equipment, laboratory tests are mostly performed only at big
software houses such as IBM or Microsoft. Since laboratory tests provide testers with many
technical possibilities, data collection and analysis are easier than for field tests.
2.3 Testing Strategies/Techniques
• Black box testing should make use of randomly generated inputs (only a test range
should be specified by the tester), to eliminate any guesswork by the tester as to
the methods of the function (a sketch follows this list)
• Data outside of the specified input range should be tested to check the robustness
of the program
• Boundary cases should be tested (top and bottom of specified range) to make sure
the highest and lowest allowable inputs produce proper output
• The number zero should be tested when numerical data is to be input
• Stress testing should be performed (try to overload the program with inputs to see
where it reaches its maximum capacity), especially with real time systems
• Crash testing should be performed to see what it takes to bring the system down
• Test monitoring tools should be used whenever possible to track which tests have
already been performed and the outputs of these tests to avoid repetition and to
aid in the software maintenance
• Other functional testing techniques include: transaction testing, syntax testing,
domain testing, logic testing, and state testing.
• Finite state machine models can be used as a guide to design functional tests
• According to Beizer, the following is a general order by which tests should be
designed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage, as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate: syntax, loop, state, etc.
6. Any dirty tests not covered by the above.
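The following Java sketch illustrates the random-input strategy mentioned in the first bullet above. The function under test (clamp), its specified range, and the iteration count are illustrative assumptions, not part of the original text; the key point is that each random output is checked against the specification rather than against the implementation.

import java.util.Random;

public class RandomInputTest {
    // Hypothetical function under test: clamp x into the range [lo, hi].
    static int clamp(int x, int lo, int hi) {
        return Math.max(lo, Math.min(hi, x));
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);          // fixed seed so failures are reproducible
        for (int i = 0; i < 10_000; i++) {
            int x = rnd.nextInt();            // random input: no guesswork by the tester
            int r = clamp(x, -100, 100);
            // Check the specification, not a re-implementation of the code:
            boolean inRange = r >= -100 && r <= 100;
            boolean identity = (x < -100 || x > 100) || r == x;
            if (!(inRange && identity)) {
                System.out.println("FAIL for input " + x);
                return;
            }
        }
        System.out.println("All 10,000 random inputs passed");
    }
}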

2.4 Black box testing Methods
2.4.1 Graph-based Testing Methods
• Black-box methods based on the nature of the relationships (links) among the
program objects (nodes); test cases are designed to traverse the entire graph
• Transaction flow testing (nodes represent steps in some transaction and links
represent logical connections between steps that need to be validated)
• Finite state modeling (nodes represent user observable states of the software and
links represent transitions between states)
• Data flow modeling (nodes are data objects and links are transformations from one
data object to another)
• Timing modeling (nodes are program objects and links are sequential connections
between these objects, link weights are required execution times)
2.4.2 Equivalence Partitioning
• Black-box technique that divides the input domain into classes of data from which
test cases can be derived
• An ideal test case uncovers a class of errors that might otherwise require many
arbitrary test cases to be executed before the general error is observed
• Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class is defined
4. If an input condition is Boolean, one valid and one invalid equivalence
class is defined
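A minimal Java sketch of guideline 1, assuming a hypothetical input field that accepts ages from 18 to 65: the range yields one valid and two invalid equivalence classes, and one representative test case is drawn from each.

public class EquivalencePartitionTest {
    // Hypothetical validator under test: accepts ages in the range 18..65.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // One representative input per equivalence class:
        check("valid class (18..65)", isValidAge(40));   // inside the range
        check("invalid class (< 18)", !isValidAge(5));   // below the range
        check("invalid class (> 65)", !isValidAge(90));  // above the range
    }

    static void check(String label, boolean ok) {
        System.out.println((ok ? "PASS " : "FAIL ") + label);
    }
}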
2.4.3 Boundary Value Analysis
• Black-box technique that focuses on the boundaries of the input domain rather
than its center
• BVA guidelines:
1. If an input condition specifies a range bounded by values a and b, test cases
should include a and b, and values just above and just below a and b
2. If an input condition specifies a number of values, test cases should exercise
the minimum and maximum numbers, as well as values just above and just
below the minimum and maximum values
3. Apply guidelines 1 and 2 to output conditions; test cases should be designed
to produce the minimum and maximum output reports
4. If internal program data structures have boundaries (e.g. size limitations),
be certain to test the boundaries
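Continuing the hypothetical 18-to-65 age field from the previous sketch, boundary value analysis applies guideline 1 by testing at and just outside the range limits a = 18 and b = 65:

public class BoundaryValueTest {
    // Same hypothetical validator as in the equivalence partitioning sketch.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Pairs of {input, expected result}: just below a, a, b, just above b.
        int[][] cases = {
            {17, 0}, {18, 1},
            {65, 1}, {66, 0}
        };
        for (int[] c : cases) {
            boolean expected = c[1] == 1;
            System.out.println((isValidAge(c[0]) == expected ? "PASS" : "FAIL")
                    + " boundary input " + c[0]);
        }
    }
}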

2.4.4 Comparison Testing
• Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to
specifications
• Often equivalence class partitioning is used to develop a common set of test cases
for each implementation
2.4.5 Orthogonal Array Testing
• Black-box technique that enables the design of a reasonably small set of test
cases that provide maximum test coverage

• Focus is on categories of faulty logic likely to be present in the software
component (without examining the code)
• Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single-mode faults
2. Detect all double-mode faults
3. Detect multimode faults
2.4.6 Specialized Testing
• Graphical user interfaces
• Client/server architectures
• Documentation and help facilities
• Real-time systems
1. Task testing (test each time dependent task independently)
2. Behavioral testing (simulate system response to external events)
3. Intertask testing (check communications errors among tasks)
4. System testing (check interaction of integrated system software and
hardware)
2.4.7 Advantages of Black Box Testing
• More effective on larger units of code than glass box testing
• Tester needs no knowledge of implementation, including specific programming
languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Will help to expose any ambiguities or inconsistencies in the specifications
• Test cases can be designed as soon as the specifications are complete
2.4.8 Disadvantages of Black Box Testing
• Only a small number of possible inputs can actually be tested; to test every
possible input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not informed of
test cases the programmer has already tried

• May leave many program paths untested

• Cannot be directed toward specific segments of code which may be very complex
(and therefore more error prone)
• Most testing related research has been directed toward glass box testing
2.5 Black Box (Vs) White Box
An easy way to start up a debate in a software testing forum is to ask the difference
between black box and white box testing. These terms are commonly used, yet everyone
seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you’re testing an electronics system. It’s
housed in a black box with lights, switches, and dials on the outside. You must test it
without opening it up, and you can’t see beyond its surface. You have to see if it works just
by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This
is black box testing. Black box software testing is doing the same thing, but with software.
The actual meaning of the metaphor, however, depends on how you define the boundary
of the box and what kind of access the “blackness” is blocking.

An opposite test approach would be to open up the electronics system, see how the
circuits are wired, apply probes internally and maybe even disassemble parts of it. By
analogy, this is called white box testing.
To help understand the different ways that software testing can be divided between black
box and white box techniques, consider the Five-Fold Testing System. It lays out five
dimensions that can be used for examining testing:

1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let’s use this system to understand and clarify the characteristics of black box and white
box testing.
People: Who does the testing?
Some people know how software works (developers) and others just use it (users).
Accordingly, any testing by users or other non-developers is sometimes called “black box”
testing. Developer testing is called “white box” testing. The distinction here is based on
what the person knows or can understand.

Coverage: What is tested?
If we draw the box around the system as a whole, “black box” testing becomes another
name for system testing. And testing the units inside the box becomes white box testing.

This is one way to think about coverage. Another is to contrast testing that aims to cover all
the requirements with testing that aims to cover all the code. These are the two most
commonly used coverage criteria. Both are supported by extensive literature and
commercial tools. Requirements-based testing could be called “black box” because it
makes sure that all the customer requirements have been verified. Code-based testing is
often called “white box” because it makes sure that all the code (the statements, paths, or
decisions) is exercised.
Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based
techniques are targeted at common coding errors. Effective security testing also requires a
detailed understanding of the code and the system architecture. Thus, these techniques
might be classified as “white box”. Another set of risks concerns whether the software will
actually provide value to users. Usability testing focuses on this risk, and could be termed
“black box.”
Activities: How do you test?

A common distinction is made between behavioral test design, which defines tests based
on functional requirements, and structural test design, which defines tests based on the
code itself. These are two design approaches. Since behavioral testing is based on
external functional definition, it is often called “black box,” while structural testing—based
on the code internals—is called “white box.” Indeed, this is probably the most commonly
cited definition for black box and white box testing. Another activity-based distinction
contrasts dynamic test execution with formal code inspection. In this case, the metaphor
maps test execution (dynamic testing) with black box testing, and maps code inspection
(static testing) with white box testing. We could also focus on the tools used. Some tool
vendors refer to code-coverage tools as white box tools, and tools that facilitate applying
inputs and capturing outputs, most notably GUI capture replay tools, as black box tools.
Testing is then categorized based on the types of tools used.
Evaluation: How do you know if you’ve found a bug?
There are certain kinds of software faults that don’t always lead to obvious failures. They
may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are
examples. Certain test techniques seek to make these kinds of problems more visible.
Related techniques capture code history and stack information when faults occur, helping
with diagnosis. Assertions are another technique for helping to make problems more
visible. All of these techniques could be considered white box test techniques, since they
use code instrumentation to make the internal workings of the software more visible. These
contrast with black box techniques that simply look at the official outputs of a program.
White box testing is concerned only with testing the software product; it cannot guarantee
that the complete specification has been implemented. Black box testing is concerned only
with testing the specification; it cannot guarantee that all parts of the implementation have
been tested. Thus black box testing is testing against the specification and will discover
faults of omission, indicating that part of the specification has not been fulfilled. White box
testing is testing against the implementation and will discover faults of commission,
indicating that part of the implementation is faulty. In order to fully test a software product,
both black and white box testing are required.
White box testing is much more expensive than black box testing. It requires the source
code to be produced before the tests can be planned, and it is much more laborious in the
determination of suitable input data and in determining whether the software is correct.
The advice given is to start test planning with a black box test approach as soon as the
specification is available. White box planning should commence as soon as all black box
tests have been successfully passed, with the production of flowgraphs and determination
of paths. The paths should then be checked against the black box test plan and any
additional required test runs determined and applied.
The consequences of test failure at this stage may be very expensive. A failure of a white
box test may result in a change which requires all black box testing to be repeated and the
re-determination of the white box paths.
To conclude, apart from the above-described analytical methods of both glass and black
box testing, there are further constructive means to guarantee high-quality software end
products. Among the most important constructive means are the use of object-oriented
programming tools, the integration of CASE tools, rapid prototyping, and, last but not least,
the involvement of users in both software development and testing procedures.

Summary:
Black box testing can sometimes describe user-based testing (people); system or
requirements-based testing (coverage); usability testing (risk); or behavioral testing or
capture replay automation (activities). White box testing, on the other hand, can sometimes
describe developer-based testing (people); unit or code-coverage testing (coverage);
boundary or security testing (risks); structural testing, inspection or code-coverage
automation (activities); or testing based on probes, assertions, and logs (evaluation).
2.6 WHITE BOX TESTING
White box testing covers software testing approaches that examine the program structure
and derive test data from the program logic. Structural testing is sometimes referred to as
clear-box testing, since "white boxes" are in fact opaque and do not really permit visibility
into the code.
Synonyms for white box testing

• Glass Box testing
• Structural testing
• Clear Box testing
• Open Box Testing
Types of White Box testing
A typical rollout of a product is shown in figure 1 below.



The purpose of white box testing is to:
• Initiate a strategic initiative to build quality throughout the life cycle of a software
product or service
• Provide a complementary function to black box testing
• Perform complete coverage at the component level
• Improve quality by optimizing performance
Practices:
This section outlines some of the general practices comprising the white-box testing
process. In general, white-box testing practices have the following considerations:
1. The allocation of resources to perform class and method analysis and to document
and review the same.
2. Developing a test harness made up of stubs, drivers and test object libraries.
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of
capture/replay/compare, test suite execution, results verification and
documentation capabilities.
1 Code Coverage Analysis


1.1 Basis Path Testing
A testing mechanism proposed by McCabe, whose aim is to derive a logical complexity
measure of a procedural design and use this as a guide for defining a basis set of
execution paths. Test cases that exercise the basis set will execute every statement at
least once.

1.1.1 Flow Graph Notation
A notation for representing control flow similar to flow charts and UML
activity diagrams.

1.1.2 Cyclomatic Complexity
Cyclomatic complexity gives a quantitative measure of the logical complexity. This value
gives the number of independent paths in the basis set, and an upper bound for the
number of tests to ensure that each statement is executed at least once. An independent
path is any path through a program that introduces at least one new set of processing
statements or a new condition (i.e., a new edge). Cyclomatic complexity provides an upper
bound for the number of tests required to guarantee coverage of all program statements.
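The following Java method makes the computation concrete. The method and its inputs are assumptions for the sake of the example: it contains three simple decisions, so its cyclomatic complexity is V(G) = 3 + 1 = 4 (equivalently V(G) = E - N + 2 for a flow graph with E edges and N nodes), and four basis-path test cases suffice to execute every statement at least once.

public class ComplexityExample {
    // Hypothetical method with three decisions (if, else-if, while),
    // giving cyclomatic complexity V(G) = 3 + 1 = 4.
    static int classify(int x) {
        int c = 0;
        if (x < 0) c = -1;        // decision 1
        else if (x == 0) c = 0;   // decision 2
        while (x > 100) {         // decision 3
            x -= 100;
            c++;
        }
        return c;
    }

    public static void main(String[] args) {
        // One representative input per basis path (illustrative choices):
        int[] inputs = { -5, 0, 50, 250 };
        for (int in : inputs)
            System.out.println("classify(" + in + ") = " + classify(in));
    }
}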
1.2 Control Structure testing
1.2.1 Conditions Testing
Condition testing aims to exercise all logical conditions in a program module. Conditions may take the following forms:
• Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
• Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator.
• Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
• Boolean expression: a condition without relational expressions.
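A minimal sketch of condition coverage for a compound condition (the method and values are hypothetical): each simple condition is driven to both true and false.

    public class ConditionDemo {
        // Hypothetical compound condition: a relational expression OR'd
        // with a Boolean variable.
        static boolean canRegister(int age, boolean hasConsent) {
            return (age >= 18) || hasConsent;
        }

        public static void main(String[] args) {
            // Condition coverage: each simple condition takes both truth values.
            System.out.println(canRegister(20, false)); // age>=18 true,  consent false -> true
            System.out.println(canRegister(16, true));  // age>=18 false, consent true  -> true
            System.out.println(canRegister(16, false)); // both simple conditions false -> false
        }
    }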
1.2.2 Data Flow Testing
Data flow testing selects test paths according to the locations of definitions and uses of variables, as in the sketch below.
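For illustration (the method is hypothetical), the variable rate below has two definitions that can reach its single use, so data flow testing requires one test path per definition-use pair:

    public class DataFlowDemo {
        // Hypothetical method: 'rate' has two definitions reaching one use.
        static int discount(int total, boolean member) {
            int rate = 0;                 // definition 1 of rate
            if (member) {
                rate = 10;                // definition 2 of rate
            }
            return total * rate / 100;    // use of rate
        }

        public static void main(String[] args) {
            // One test path per def-use pair of 'rate':
            System.out.println(discount(200, false)); // def 1 reaches the use -> 0
            System.out.println(discount(200, true));  // def 2 reaches the use -> 20
        }
    }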
1.2.3 Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, or unstructured. Note that unstructured loops are not to be tested; rather, they are redesigned. Classic test cases for a simple loop are sketched below.
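A hedged illustration (the method is hypothetical): for a simple loop, the classic cases are to skip the loop entirely, make exactly one pass, two passes, a typical number of passes, and, where a bound n is known, n - 1, n and n + 1 passes.

    public class LoopDemo {
        // Hypothetical simple loop under test.
        static int firstNegativeIndex(int[] data) {
            for (int i = 0; i < data.length; i++) {
                if (data[i] < 0) {
                    return i;
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            System.out.println(firstNegativeIndex(new int[] {}));           // zero passes -> -1
            System.out.println(firstNegativeIndex(new int[] {-7}));         // one pass    -> 0
            System.out.println(firstNegativeIndex(new int[] {1, -7}));      // two passes  -> 1
            System.out.println(firstNegativeIndex(new int[] {1, 2, 3, 4})); // typical run -> -1
        }
    }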
2 Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the
code itself. Basically, the code specification is expressed unambiguously using a formal
language that describes the code's implicit contracts. These contracts specify such
requirements as:
• Conditions that the client must meet before a method is invoked.
• Conditions that a method must meet after it executes.
• Assertions that a method must satisfy at specific points of its
execution
Tools that check DbC contracts at runtime, such as JContract, are used to perform this function.
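A minimal sketch of the idea using plain Java assert statements rather than a dedicated DbC tool (the method and its contract are hypothetical; run with java -ea to enable assertions):

    public class ContractDemo {
        // Hypothetical contract: requires x >= 0 (precondition);
        // ensures result squared is close to x (postcondition).
        static double safeSqrt(double x) {
            assert x >= 0 : "precondition violated: x must be non-negative";
            double result = Math.sqrt(x);
            assert Math.abs(result * result - x) < 1e-9 : "postcondition violated";
            return result;
        }

        public static void main(String[] args) {
            System.out.println(safeSqrt(9.0));  // prints 3.0
            // safeSqrt(-1.0) would fail the precondition when run with -ea
        }
    }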
3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that consume the majority of the CPU time so that problems can be tracked down and performance improved. Options include the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third-party tools such as JaViz may also be used to perform this function.
4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over and by operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate the program design.
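A minimal JUnit 5 sketch of this kind of check; AccountService and its behavior are hypothetical stand-ins for the component under test:

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    class ErrorHandlingTest {
        @Test
        void negativeWithdrawalIsRejected() {
            AccountService service = new AccountService(); // hypothetical class
            // Error-causing test vector: the call must fail cleanly,
            // not corrupt state or die silently.
            assertThrows(IllegalArgumentException.class,
                    () -> service.withdraw("ACC-1", -50));
        }
    }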
5 Transactions
Systems that employ transactions, whether local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each individual property is tested against a reference data set.
Transactions are checked thoroughly for partial and complete commits and rollbacks, encompassing databases and other XA-compliant transaction processors.
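A hedged sketch of an atomicity check over JDBC; the in-memory H2 URL and the accounts table are assumptions made for the example (any JDBC database would do):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RollbackCheck {
        public static void main(String[] args) throws Exception {
            // Assumes an H2 driver on the classpath.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE accounts(id INT, balance INT)");
                conn.setAutoCommit(false);            // begin an explicit transaction
                st.execute("INSERT INTO accounts VALUES (1, 100)");
                conn.rollback();                      // atomicity: the insert must vanish
                ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM accounts");
                rs.next();
                System.out.println("rows after rollback (expect 0): " + rs.getInt(1));
            }
        }
    }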
Advantages of White Box Testing
• Forces the test developer to reason carefully about the implementation
• Approximates the partitioning done by execution equivalence
• Reveals errors in "hidden" code
• Beneficial side-effects
Disadvantages of White Box Testing
• Expensive
• Cases omitted from the code may be missed, since test cases are derived from the implementation
3 GUI Testing
What is GUI Testing?
GUI is the abbreviation for Graphical User Interface. It is essential that any application be user-friendly: the end user should be comfortable while using all the components on screen, and the components should perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI testing can refer to just ensuring that the look and feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved.
The following is a set of guidelines to ensure effective GUI testing; it can be used even as a checklist while testing a product or application.
3.1 Section 1 - Windows Compliance Testing
3.1.1 Application
Start the application by double-clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon. No login is necessary. The main window of the application should have the same caption as the caption of the icon in Program Manager. Closing the application should result in an "Are you sure?" message box. Attempt to start the application twice; this should not be allowed, and you should be returned to the main window. Try to start the application twice as it is loading. On each window, if the application is busy, then the hour glass should be displayed; if there is no hour glass, then some "enquiry in progress" message should be displayed. All screens should have a Help button, and the F1 key should work the same way.
If the window has a minimize button, click it. The window should return to an icon on the bottom of the screen, and this icon should correspond to the original icon under Program Manager. Double-click the icon to return the window to its original size. The window caption for every application should have the name of the application and the window name, especially the error messages. These should be checked for spelling, English and clarity, especially on the top of the screen. Check that the title of the window makes sense. If the screen has a Control menu, then use all un-grayed options.
Check all text on the window for spelling, tense and grammar.
Use TAB to move focus around the window, and SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus, indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field. The text in the Micro Help line should change; check it for spelling, clarity and non-updateable fields. If a field is disabled (grayed) then it should not get focus, and it should not be possible to select it with either the mouse or the TAB key. Try this for every grayed control. A sketch of automating part of this check appears below.
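A hedged sketch of automating the tab-order check with java.awt.Robot; the three-control form is hypothetical, and a display is required to run it:

    import java.awt.FlowLayout;
    import java.awt.KeyboardFocusManager;
    import java.awt.Robot;
    import java.awt.event.KeyEvent;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class TabOrderCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical form under test: two text fields and a button.
            JFrame frame = new JFrame("Demo");
            JTextField name = new JTextField(10);
            JTextField email = new JTextField(10);
            frame.setLayout(new FlowLayout());
            frame.add(name);
            frame.add(email);
            frame.add(new JButton("OK"));
            frame.pack();
            frame.setVisible(true);
            name.requestFocusInWindow();

            Robot robot = new Robot();
            robot.waitForIdle();
            robot.keyPress(KeyEvent.VK_TAB);    // simulate the user pressing TAB
            robot.keyRelease(KeyEvent.VK_TAB);
            robot.waitForIdle();

            // Tab order should run left to right: name -> email.
            boolean ok = KeyboardFocusManager.getCurrentKeyboardFocusManager()
                    .getFocusOwner() == email;
            System.out.println("TAB moved focus to the email field: " + ok);
        }
    }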
Fields that are never updateable should be displayed with black text on a gray background with a black label. All text should be left-justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status. List boxes always have a white background with black text, whether they are disabled or not; all others are gray.
In general, double-clicking is not essential, and everything can be done using both the mouse and the keyboard. All tab buttons should have a distinct access letter.
3.1.2 Text Boxes
Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar; if it doesn't, then the text in the box should be gray or non-updateable (refer to the previous page). Enter text into the box. Try to overflow the text by typing too many characters; this should be stopped. Check the field width with capital Ws. Enter invalid characters: letters in amount fields, and strange characters like +, -, * etc. in all fields. SHIFT and the arrow keys should select characters, and selection should also be possible with the mouse. Double-clicking should select all text in the box.
3.1.3 Option (Radio Buttons)
The left and right arrow keys should move the 'ON' selection, as should up and down. Selection should also be possible with the mouse by clicking.
3.1.4 Check Boxes
Clicking with the mouse on the box, or on the text, should SET/UNSET the box. Pressing SPACE should do the same.
3.1.5 Command Buttons
If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots. All buttons except OK and Cancel should have an access letter, indicated by a letter underlined in the button text; pressing ALT+letter should activate the button. Make sure there is no duplication. For EVERY command button, verify that each of the following activates it: clicking it once with the mouse; tabbing to it and pressing SPACE; tabbing to it and pressing RETURN. Then tab to another type of control (not a command button). One button on the screen should be the default, indicated by a thick black border, and pressing RETURN in any non-command-button control should activate it.
If there is a Cancel button on the screen, then pressing <Esc> should activate it. If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.
3.1.6 Drop Down List Boxes
Pressing the arrow should give a list of options; this list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing Ctrl+F4 should open/drop down the list box. Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. Dropping down with an item selected should display the list with the selected item at the top. Make sure only one space appears and that there is no blank line at the bottom.
3.1.7 Combo Boxes
Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.
3.1.8 List Boxes
Should allow a single selection to be chosen, by clicking with the mouse or using the up and down arrow keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View' or 'Open' button beside the list box, then double-clicking on a line in the list box should act in the same way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear and make sure all the data can be seen in the box.
3.2 Section 2 - Screen Validation Checklist
3.2.1 Aesthetic Conditions:
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless
otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null), this should be defaulted into the field. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.