Tool support for testing (CAST)
Software Testing
ISEB Foundation Certificate Course
Course modules: 1 Principles, 2 Lifecycle, 3 Static testing, 4 Dynamic test techniques, 5 Management, 6 Tools
Contents
- Types of CAST tool
- Why capture/replay is not test automation
- Automating and testing are separate skills
- Best practice
Testing tool classification
- Requirements testing tools
- Static analysis tools
- Test design tools
- Test data preparation tools
- Test running tools - character-based, GUI
- Comparison tools
- Test harnesses and drivers
- Performance test tools
- Dynamic analysis tools
- Debugging tools
- Test management tools
- Coverage measurement tools
Where tools fit
[Diagram: tool types mapped onto the V-model lifecycle (Requirements Analysis, Function Design, Code, Component Test, Integration Test, System Test, Acceptance Test), covering requirements testing, static analysis, test management, test design, test data preparation, coverage measurement, test running, test harnesses & drivers, comparison, dynamic analysis, debugging and performance measurement tools.]
Requirements testing tools
- Automated support for verification and validation of requirements models
  - consistency checking (a simple check is sketched below)
  - animation
- Tool information available from:
  - Ovum Evaluates Software Testing Tools (subscription service)
  - CAST Report, 1999
  - World Wide Web
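To make "consistency checking" concrete, here is a minimal Python sketch of one such check: every cross-reference in a requirements model must point at a requirement that actually exists. The requirement ids, fields and texts are invented for illustration; real requirements testing tools perform far richer model checks and animation.

```python
# Minimal sketch of one consistency check over a toy requirements model:
# every cross-reference must point at a requirement that exists.
REQUIREMENTS = {
    "R1": {"text": "The system shall log every login attempt.", "refs": []},
    "R2": {"text": "Failed logins are handled as in R1.", "refs": ["R1"]},
    "R3": {"text": "Lockout follows the policy in R9.", "refs": ["R9"]},
}

for rid, req in REQUIREMENTS.items():
    for ref in req["refs"]:
        if ref not in REQUIREMENTS:
            print(f"{rid} references undefined requirement {ref}")  # flags R3
```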
Static analysis tools
- Provide information about the quality of software
- Code is examined, not executed
- Objective measures
  - cyclomatic complexity (sketched below)
  - others: nesting levels, size
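A minimal sketch of one objective measure, assuming Python source as the input: cyclomatic complexity, approximated as one plus the number of decision points found by parsing (never executing) the code. Commercial static analysis tools count more constructs and report many more metrics.

```python
import ast

# Node types treated as decision points in this rough approximation.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)                      # parse only - no execution
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

EXAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(EXAMPLE))   # 3: two decision points plus one
```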
Test design tools
- Generate test inputs
  - from a formal specification or CASE repository (see the sketch below)
  - from code (e.g. code not covered yet)
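A sketch of input generation from a very small "formal specification": each field is described by its valid range, and boundary values are derived from it. The field names and ranges are assumptions made for the example.

```python
# Derive boundary-value test inputs from a field's specified valid range.
def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

SPEC = {"age": (18, 65), "quantity": (1, 99)}   # toy specification

for field, (lo, hi) in SPEC.items():
    print(field, boundary_values(lo, hi))
```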
Test data preparation tools
- Data manipulation
  - selected from existing databases or files
  - created according to some rules (sketched below)
  - edited from other sources
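A sketch of data "created according to some rules": customer records are generated with unique ids and ages clustered around the boundaries of an assumed 18-65 limit, then written to a CSV file for the tests to load. The field names and the rule itself are illustrative.

```python
import csv
import random

BOUNDARY_AGES = [17, 18, 19, 64, 65, 66]   # rule: cover an assumed 18-65 limit

def make_customers(n):
    for i in range(n):
        yield {"id": 1000 + i,              # rule: unique, sequential ids
               "name": f"Customer {i}",
               "age": random.choice(BOUNDARY_AGES)}

with open("customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "age"])
    writer.writeheader()
    writer.writerows(make_customers(20))
```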
Test running tools 1
- Interface to the software being tested
- Run tests as though run by a human tester
- Test scripts in a programmable language
- Data, test inputs and expected results held in test repositories (see the data-driven sketch below)
- Most often used to automate regression testing
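A data-driven sketch of what a test running tool's script does: inputs and expected results sit in a small test repository (a CSV file, created inline so the example is self-contained) and every row is replayed against the software under test. discount() stands in for the real system.

```python
import csv

def discount(order_value):                 # stand-in for the system under test
    return 0.1 * order_value if order_value >= 100 else 0.0

# The "test repository": inputs and expected results, one test per row.
with open("discount_tests.csv", "w", newline="") as f:
    f.write("order_value,expected\n50,0\n100,10\n250,25\n")

failures = []
with open("discount_tests.csv") as f:
    for row in csv.DictReader(f):
        actual = discount(float(row["order_value"]))
        if abs(actual - float(row["expected"])) > 1e-9:
            failures.append((row, actual))

print(failures or "all regression tests passed")
```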
Test running tools 2
- Character-based
  - simulates user interaction from dumb terminals
  - captures keystrokes and screen responses
- GUI (Graphical User Interface)
  - simulates user interaction for WIMP applications (Windows, Icons, Mouse, Pointer)
  - captures mouse movement, button clicks and keyboard inputs
  - captures screens, bitmaps, characters, object states
Comparison tools
- Detect differences between actual test results and expected results
  - screens, characters, bitmaps
  - masking and filtering (sketched below)
- Test running tools normally include comparison capability
- Stand-alone comparison tools for files or databases
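A sketch of post-execution comparison with masking: volatile fields (here, timestamps) are filtered out of both outputs before the difference is taken, so only genuine discrepancies are reported. The report texts are invented.

```python
import difflib
import re

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def masked(lines):
    # Replace volatile timestamps so they never cause a spurious difference.
    return [TIMESTAMP.sub("<TIMESTAMP>", line) for line in lines]

expected = ["Report generated 2024-01-05 09:12:44\n", "Total: 120\n"]
actual   = ["Report generated 2024-03-18 14:02:10\n", "Total: 125\n"]

diff = list(difflib.unified_diff(masked(expected), masked(actual),
                                 fromfile="expected", tofile="actual"))
print("".join(diff) or "outputs match")   # only the Total line is flagged
```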
Test harnesses and drivers
- Used to exercise software which does not have a user interface (yet) - see the driver sketch below
- Used to run groups of automated tests or comparisons
- Often custom-built
- Simulators (where testing in the real environment would be too costly or dangerous)
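A sketch of a driver for a component with no user interface yet, using Python's unittest as the harness and a mock object as a simulator for a collaborator that would be too costly to involve in the test (a payment gateway). All names are illustrative.

```python
import unittest
from unittest import mock

def place_order(items, gateway):               # component under test (assumed)
    total = sum(price for _, price in items)
    return gateway.charge(total)

class OrderDriver(unittest.TestCase):
    def test_total_is_charged_once(self):
        gateway = mock.Mock()                  # simulator for the real gateway
        gateway.charge.return_value = "OK"
        result = place_order([("book", 12.0), ("pen", 3.0)], gateway)
        gateway.charge.assert_called_once_with(15.0)
        self.assertEqual(result, "OK")

if __name__ == "__main__":
    unittest.main()
```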
Performance testing tools
- Load generation (sketched below)
  - drive application via user interface or test harness
  - simulates realistic load on the system & logs the number of transactions
- Transaction measurement
  - response times for selected transactions via user interface
- Reports based on logs, graphs of load versus response times
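A sketch of load generation and transaction measurement: a pool of virtual users fires the same transaction concurrently while response times are logged and summarised. transaction() is a placeholder for driving the real system (for example an HTTP request), and the user counts are arbitrary.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    start = time.perf_counter()
    time.sleep(0.05)                 # placeholder for the real transaction
    return time.perf_counter() - start

def run_load(users=20, transactions_per_user=10):
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(users * transactions_per_user)]
        times = [f.result() for f in futures]
    times.sort()
    print(f"{len(times)} transactions, "
          f"median {statistics.median(times):.3f}s, "
          f"95th percentile {times[int(len(times) * 0.95)]:.3f}s")

run_load()
```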
Dynamic analysis tools
- Provide run-time information on software (while tests are run)
  - allocation, use and de-allocation of resources, e.g. memory leaks (sketched below)
  - flag unassigned pointers or pointer arithmetic faults
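A sketch of run-time (dynamic) analysis using Python's tracemalloc: memory allocation is observed while the "tests" run, and comparing snapshots points at the source line whose allocations keep growing, which is how a leak typically shows up. The leaky cache is deliberately faulty.

```python
import tracemalloc

_cache = []

def leaky_operation():
    _cache.append(bytearray(100_000))   # grows forever: nothing is ever evicted

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(100):
    leaky_operation()                   # the "tests" exercising the code
after = tracemalloc.take_snapshot()

for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)                         # the leaky line dominates the report
```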
Debugging tools

- Used by programmers when investigating, fixing and testing faults
- Used to reproduce faults and examine program execution in detail (see the example below)
  - single stepping
  - breakpoints or watchpoints at any statement
  - examine contents of variables and other data
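A sketch of debugger use with pdb, Python's standard debugger: a breakpoint stops the program so the failing call can be single-stepped and its variables inspected, the facilities listed above. Run it interactively to try the commands.

```python
import pdb

def average(values):
    total = sum(values)
    return total / len(values)   # raises ZeroDivisionError for an empty list

if __name__ == "__main__":
    pdb.set_trace()              # breakpoint: 's' steps into average(),
                                 # 'p values' prints its argument, 'n' steps on
    print(average([]))
```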
Test management tools
- Management of testware: test plans, specifications, results
- Project management of the test process, e.g. estimation, scheduling tests, logging results
- Incident management tools (may include workflow facilities to track allocation, correction and retesting)
- Traceability (of tests to requirements, designs) - sketched below
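A sketch of the traceability a test management tool maintains: each test is mapped to the requirements it covers, so requirements with no test at all can be reported. The identifiers are illustrative.

```python
TRACEABILITY = {                 # test id -> requirements it covers
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-3"],
}
REQUIREMENTS = ["REQ-1", "REQ-2", "REQ-3"]

covered = {req for reqs in TRACEABILITY.values() for req in reqs}
print("untested requirements:",
      [r for r in REQUIREMENTS if r not in covered])   # reports REQ-2
```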
Coverage measurement tools
- Objective measure of what parts of the software structure were executed by tests
- Code is instrumented in a static analysis pass
- Tests are run through the instrumented code
- Tool reports what has and has not been covered by those tests, line by line, plus summary statistics (see the sketch below)
- Different types of coverage: statement, branch, condition, LCSAJ, etc.
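A sketch of statement-coverage measurement using Python's standard-library trace module: a single "test" runs under instrumentation and the tool writes an annotated copy of the source showing which lines were never executed. Dedicated coverage tools add branch and condition coverage and richer reporting.

```python
import trace

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"                # never reached by the single test below

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 75)        # the only "test": exercises the pass branch

tracer.results().write_results(show_missing=True, coverdir="coverage_out")
# coverage_out/ now holds an annotated copy of this file in which lines not
# executed under the tracer are flagged - including the 'fail' branch.
```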

Contents (next: Why capture/replay is not test automation)
- Types of CAST tool
- Why capture/replay is not test automation
- Automating and testing are separate skills
- Best practice
Advantages of recording manual tests
- Documents what the tester actually did
  - useful for capturing ad hoc tests (e.g. by end users)
  - may enable software failures to be reproduced
- Produces a detailed "script"
  - records actual inputs
  - can be used by a technical person to implement a more maintainable automated test
- Ideal for one-off tasks
  - such as long or complicated data entry
Captured test scripts
- Will not be very understandable
  - it is a programming language after all!
  - during maintenance you will need to know more than can ever be 'automatically commented'
- Will not be resilient to many software changes
  - a simple interface change can impact many scripts (illustrated below)
- Do not include verification
  - may be easy to add a few simple screen-based comparisons
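An illustration, in plain Python rather than any particular tool's scripting language, of why raw captured scripts age badly: the recording is a flat list of low-level actions with hard-coded details, whereas a reworked script hides them behind named actions so one interface change is fixed in one place. The GUI calls are simulated with print() so the sketch runs stand-alone.

```python
def click(x, y):            print(f"click at ({x}, {y})")
def type_text(text):        print(f"type {text!r}")
def fill_field(name, text): print(f"fill {name!r} with {text!r}")
def press_button(label):    print(f"press {label!r}")

def recorded_login():          # as captured: the meaning of each step is opaque
    click(102, 348)
    type_text("jsmith")
    click(102, 396)
    type_text("secret")
    click(210, 455)            # the OK button - breaks if the layout changes

def maintainable_login(user, password):   # reworked by a technical person
    fill_field("username", user)
    fill_field("password", password)
    press_button("OK")

recorded_login()
maintainable_login("jsmith", "secret")
```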
Compare seldom vs. compare often
Robust tests (compare seldom) and sensitive tests (compare often) trade off against each other on: storage space, failure analysis effort, the risk of missing faults, implementation effort, and susceptibility to changes.
Too much sensitivity = redundancy
[Diagram: test output for three tests, each changing a different field; an unexpected change then occurs that affects the output of every test.]
- If all tests are robust, the unexpected change is missed.
- If all tests are sensitive, they all show the same unexpected change - hence the redundancy.

Automated verification
- There are many choices to be made
  - dynamic vs. post-execution, compare a lot vs. compare a little, resilience to change vs. bug-finding effectiveness
- Scripts can soon become very complex
  - more susceptible to change, harder to maintain
- There is a lot of work involved
  - speed and accuracy of tool use is very important
- Usually there is more verification that can (and perhaps should) be done
  - automation can lead to better testing (not guaranteed!)
Contents (next: Automating and testing are separate skills)
- Types of CAST tool
- Why capture/replay is not test automation
- Automating and testing are separate skills
- Best practice
Effort to automate
- The effort required to automate any one test varies greatly
  - typically between 2 and 10 times the manual test effort
- and depends on:
  - the tool, skills, environment and software under test
  - the existing manual test process, which may be:
    - unscripted manual testing
    - scripted (vague) manual testing
    - scripted (detailed) manual testing
