Test report

1 Test Report
A Test Report is a document that is prepared once the testing of a software product is
complete and the delivery is to be made to the customer. This document would contain a
summary of the entire project and would be presented in such a way that any person
who has not worked on the project would also get a good overview of the testing effort.
Contents of a Test Report
The contents of a test report are as follows:
Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations
These sections are explained as follows:
1.1 Executive Summary
This section would comprise general information regarding the project, the
client, the application, the tools and the people involved, in such a way that it can be
taken as a summary of the Test Report itself, i.e., all the topics mentioned here would be
elaborated in the various sections of the report.
1.2 Overview
This comprises two sections – Application Overview and Testing Scope.
Application Overview – This would include detailed information on the application
under test, the end users and a brief outline of the functionality as well.
Testing Scope – This would clearly outline the areas of the application that would and
would not be tested by the QA team. This is done so that there would not be any
misunderstandings between the customer and QA as regards what needs to be tested
and what does not.
This section would also contain information on Operating System / Browser
combinations if Compatibility testing is included in the testing effort.
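The OS/browser scope can be enumerated as a simple cross product. A minimal sketch follows; the platform and browser names are hypothetical examples, not taken from any real project, and a real matrix would come from the project's requirements:

```python
from itertools import product

# Hypothetical compatibility scope for illustration only.
operating_systems = ["Windows 10", "Windows 11", "macOS 14"]
browsers = ["Chrome", "Firefox", "Edge"]

# Every OS/browser pair the QA team commits to covering.
test_matrix = list(product(operating_systems, browsers))

for os_name, browser in test_matrix:
    print(f"{os_name} / {browser}")
```

In practice, teams often prune this full cross product to the combinations their end users actually run, so the scope section should state which pairs were dropped and why.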

1.3 Test Details
This section would contain the Test Approach, Types of Testing conducted, Test
Environment and Tools Used.
Test Approach – This would discuss the strategy followed for executing the
project. This could include information on how coordination was achieved between
Onsite and Offshore teams, any innovative methods used for automation or for
reducing repetitive workload on the testers, how information and daily / weekly
deliverables were delivered to the client etc.
Types of testing conducted – This section would mention any specific types of
testing performed, e.g., Functional, Compatibility, Performance, Usability, etc., along
with related specifications.
Test Environment – This would contain information on the Hardware and
Software requirements for the project, e.g., server configuration, client machine
configuration, specific software installations required, etc.
Tools used – This section would include information on any tools that were used
for testing the project. They could be functional or performance testing automation
tools, defect management tools, project tracking tools or any other tools which
made the testing work easier.
1.4 Metrics
This section would include details on the total number of test cases executed in the
course of the project, the number of defects found, etc. Calculations like defects found
per test case or the number of test cases executed per day per person would also
be entered in this section. These can be used in calculating the efficiency of the
testing effort.
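The calculations mentioned above are straightforward ratios. A minimal sketch, with made-up totals standing in for the numbers a real test management tool would report:

```python
# Hypothetical end-of-project totals; real figures come from the
# test management tool's records.
test_cases_executed = 480
defects_found = 96
testers = 4
working_days = 20

# Defect density over the executed test cases.
defects_per_test_case = defects_found / test_cases_executed

# Throughput per tester per working day.
cases_per_day_per_person = test_cases_executed / (testers * working_days)

print(f"Defects per test case: {defects_per_test_case:.2f}")
print(f"Test cases per day per person: {cases_per_day_per_person:.1f}")
```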

1.5 Test Results
This section is similar to the Metrics section, but is meant for showcasing the salient
features of the testing effort. In case many defects have been logged for the
project, graphs can be generated accordingly and depicted in this section. The
graphs can be for defects per build, defects based on severity, or defects based on
status, i.e., how many were fixed and how many were rejected.
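The breakdowns behind such graphs are simple aggregations over the defect log. A sketch with a hypothetical log (the build names, severities, and statuses are illustrative, not from any real project):

```python
from collections import Counter

# Hypothetical defect log: (build, severity, status) for each logged defect.
defects = [
    ("build-1", "High",     "Fixed"),
    ("build-1", "Medium",   "Fixed"),
    ("build-2", "Critical", "Fixed"),
    ("build-2", "Low",      "Rejected"),
    ("build-3", "High",     "Fixed"),
]

# These counters are the data series a charting tool would plot.
per_build = Counter(build for build, _, _ in defects)
per_severity = Counter(sev for _, sev, _ in defects)
per_status = Counter(status for _, _, status in defects)
```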
1.6 Test Deliverables
This section would include links to the various documents prepared in the course
of the testing project, e.g., Test Plan, Test Procedures, Test Logs, Release Report,
etc.

1.7 Recommendations
This section would include any recommendations from the QA team to the client on
the product tested. It could also mention the list of known defects which have been
logged by QA but not yet fixed by the development team so that they can be taken
care of in the next release of the application.

2 Defect Management
2.1 Defect
A mismatch between the application and its specification is a defect. A software error is present
when the program does not do what its end user expects it to do.
2.2 Defect Fundamentals
A Defect is a product anomaly or flaw. Defects include such things as omissions and
imperfections found during the testing phases. Symptoms (flaws) of faults contained in
software that is sufficiently mature for production are considered defects. Deviations
from expectation that are to be tracked and resolved are also termed defects.
An evaluation of defects discovered during testing provides the best indication of software
quality. Quality is the indication of how well the system meets the requirements. So in this
context defects are identified as any failure to meet the system requirements.
Defect evaluation is based on methods that range from simple defect counts to rigorous
statistical modeling.
Rigorous evaluation uses assumptions about the arrival or discovery rates of defects
during the testing process. The actual data about defect rates are then fitted to the model.
Such an evaluation estimates the current system reliability and predicts how the reliability
will grow if testing and defect removal continue. This evaluation is described as system
reliability growth modeling.
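One commonly used reliability growth model is the Goel-Okumoto mean value function, m(t) = a(1 - e^(-bt)), where a is the estimated total number of defects and b the discovery rate. The sketch below uses assumed parameter values purely for illustration; in practice both parameters are fitted to the project's actual defect-arrival data:

```python
import math

def expected_defects(t, a=120.0, b=0.15):
    """Goel-Okumoto mean value function: expected cumulative defects
    discovered by time t (e.g., weeks of testing).
    a = assumed total defects, b = assumed discovery rate; real values
    are estimated by fitting the model to observed defect data."""
    return a * (1.0 - math.exp(-b * t))

# Estimated defects still latent after 10 weeks of testing.
remaining_after_10_weeks = 120.0 - expected_defects(10)
```

The curve flattens as t grows, which is what lets such models predict how much additional testing is worthwhile.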

2.2.1 Defect Life Cycle
2.3 Defect Tracking
After a defect has been found, it must be reported to development so that it can be
fixed.
 The Initial State of a defect will be ‘New’.
 The Project Lead of the development team will review the defect and set it to one
of the following statuses:
Open – Accepts the bug and assigns it to a developer.
Invalid Bug – The reported bug is not a valid one as per the requirements/design.
As Designed – This is intended functionality as per the requirements/design.

Deferred – This will be an enhancement.
Duplicate – The bug has already been reported.
Document – If the defect has been set to any of the above statuses apart from Open,
and the testing team does not agree with the development team, it is set to Document
status.
 Once the development team has started working on the defect, the status is set to
WIP (Work in Progress); if the development team is waiting for a go-ahead or
some technical feedback, they will set it to Dev Waiting.
 After the development team has fixed the defect, the status is set to FIXED, which
means the defect is ready to re-test.
 On re-testing, if the defect still exists, the status is set to REOPENED,
which will follow the same cycle as an open defect.

 If the fixed defect satisfies the requirements/passes the test case, it is set to
Closed.
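The life cycle above is a state machine, and encoding the legal transitions makes it easy to validate status changes. A minimal sketch; the status names mirror this document's workflow, but real defect trackers define their own:

```python
# Allowed status transitions, as described in the workflow above.
TRANSITIONS = {
    "New": {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
    "Invalid Bug": {"Document"},
    "As Designed": {"Document"},
    "Deferred": {"Document"},
    "Duplicate": {"Document"},
    "Open": {"WIP", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "WIP": {"Fixed"},
    "Fixed": {"Reopened", "Closed"},
    "Reopened": {"WIP", "Dev Waiting"},  # same cycle as an open defect
}

def move(status, new_status):
    """Return the new status, or raise if the workflow forbids the change."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status
```

A tracker built this way rejects, for example, closing a New defect before the development lead has reviewed it.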
2.4 Defect Classification
The severity of bugs will be classified as follows:
Critical The problem prevents further processing and testing. The Development Team
must be informed immediately and they need to take corrective action
immediately.
High The problem affects selected processing to a significant degree, making it
inoperable, causing data loss, or leading a user to make an incorrect
decision or entry. The Development Team must be informed that day, and they
need to take corrective action within 0 – 24 hours.
Medium The problem affects selected processing, but has a work-around that allows
continued processing and testing. No data loss is suffered. These may be
cosmetic problems that hamper usability or divulge client-specific information.
The Development Team must be informed within 24 hours, and they need to
take corrective action within 24 - 48 hours.
Low The problem is cosmetic, and/or does not affect further processing and testing.
The Development Team must be informed within 48 hours, and they need to
take corrective action within 48 - 96 hours.
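The classification table fixes a response window per severity, which can be captured as data. A sketch using the upper bounds of the windows above; representing "immediately" as 0 hours is an assumption for illustration:

```python
# Upper bounds (in hours) of the windows from the classification table.
# "Immediately" for Critical is represented as 0 here by convention.
SLA = {
    "Critical": {"inform_within": 0, "fix_within": 0},
    "High": {"inform_within": 24, "fix_within": 24},
    "Medium": {"inform_within": 24, "fix_within": 48},
    "Low": {"inform_within": 48, "fix_within": 96},
}

def fix_overdue(severity, hours_since_report):
    """True once the corrective-action window for this severity has passed."""
    return hours_since_report > SLA[severity]["fix_within"]
```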


2.5 Defect Reporting Guidelines
The key to making a good report is providing the development staff with as much
information as necessary to reproduce the bug. This can be broken down into 5
points:
1) Give a brief description of the problem
2) List the steps that are needed to reproduce the bug or problem
3) Supply all relevant information such as version, project and data used.
4) Supply a copy of all relevant reports and data, including copies of the expected
results.
5) Summarize what you think the problem is.
When you are reporting a defect the more information you supply, the easier it will
be for the developers to determine the problem and fix it.
Simple problems can have a simple report, but the more complex the problem, the
more information the developer is going to need.
For example, cosmetic errors may only require a brief description of the screen,
how to get to it, and what needs to be changed.
However, an error in processing will require a more detailed description, such as:
1) The name of the process and how to get to it.
2) Documentation on what was expected. (Expected results)
3) The source of the expected results, if available. This includes spreadsheets,
an earlier version of the software, and any formulas used.
4) Documentation on what actually happened. (Perceived results)
5) An explanation of how the results differed.
6) Identify the individual items that are wrong.
7) If specific data is involved, a copy of the data both before and after the
process should be included.
8) Copies of any output should be included.
As a rule the detail of your report will increase based on a) the severity of the bug,
b) the level of the processing, c) the complexity of reproducing the bug.

Anatomy of a bug report
Bug reports need to do more than just describe the bug. They have to give
developers something to work with so that they can successfully reproduce the
problem.
In most cases, the more information given – correct information – the better. The
report should explain exactly how to reproduce the problem and exactly what the
problem is.
The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static;
developers will have been working on it, and if they've found a bug
it may already have been reported or even fixed. In either case, they
need to know which version to use when testing out the bug.
Product: If you are developing more than one product, identify the product
in question.
Data: Unless you are reporting something very simple, such as a cosmetic
error on a screen, you should include a dataset that exhibits the error.
If you’re reporting a processing error, you should include two
versions of the dataset, one before the process and one after. If the
dataset from before the process is not included, developers will be
forced to try and find the bug based on forensic evidence. With the
data, developers can trace what is happening.
Steps: List the steps taken to recreate the bug. Include all proper menu
names, don’t abbreviate and don’t assume anything.
After you’ve finished writing down the steps, follow them - make
sure you’ve included everything you type and do to get to the
problem. If there are parameters, list them. If you have to enter any
data, supply the exact data entered. Go through the process again
and see if there are any steps that can be removed.

When you report the steps they should be the clearest steps to
recreating the bug.
Description: Explain what is wrong. Try to weed out any extraneous
information, but detail what is wrong. Include a list of what was
expected. Remember to report one problem at a time; don't combine
bugs in one report.
Supporting documentation:
If available, supply documentation. If the process is a report,
include a copy of the report with the problem areas highlighted.

Include what you expected. If you have a report to compare against,
include it and its source information (if it's a printout from a
previous version, include the version number and the dataset used).
This information should be stored in a centralized location so that
Developers and Testers have access to the information. The
developers need it to reproduce the bug, identify it and fix it.
Testers will need this information for later regression testing and
verification.
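The items listed above make up a fixed record, which suggests a simple template. The sketch below is illustrative only; the field names mirror this document's list, not any specific defect tracker's schema:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Hypothetical bug report template based on the items above."""
    version: str                 # product version the bug was seen in
    product: str                 # which product, if more than one exists
    summary: str                 # brief description of the problem
    steps_to_reproduce: list     # exact steps, menu names unabbreviated
    expected_result: str         # what the system should have done
    actual_result: str           # what actually happened
    data_files: list = field(default_factory=list)   # before/after datasets
    attachments: list = field(default_factory=list)  # reports, screenshots

    def is_complete(self):
        """Minimal completeness check before the report is submitted."""
        return bool(self.version and self.steps_to_reproduce and
                    self.expected_result and self.actual_result)
```

A tracker can refuse to accept a report whose `is_complete()` check fails, which enforces the "enough to reproduce" rule mechanically.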
2.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the
information necessary not only to identify the problem but also what is needed to
fix it.
It is not enough to say that something is wrong. The report must also say what the
system should be doing.
The report should be written in clear, concise steps, so that someone who has never
seen the system can follow them and reproduce the problem. It should include
information about the product, including the version number and what data was used.
The more organized the information provided, the better the report will be.

3 Automation
What is Automation?
Automated testing is automating the manual testing process currently in use.
3.1 Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development
projects. As more organizations develop mission-critical systems to support their business
activities, the need is greatly increased for testing methods that support business
objectives. It is necessary to ensure that these systems are reliable, built according to
specification, and have the ability to support business processes. Many internal and
external factors are forcing organizations to ensure a high level of software quality
and reliability.

In the past, most software tests were performed using manual methods. This required a
large staff of test personnel to perform expensive and time-consuming manual test
procedures. Owing to the size and complexity of today's advanced software applications,
manual testing is no longer a viable option for most testing situations.
Every organization has unique reasons for automating software quality activities, but
several reasons are common across industries.
Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software
development dictates that no matter which methods are employed to carry out testing
(manual or automated), they remain repetitious throughout the development lifecycle.
Automation of testing processes allows machines to complete the tedious, repetitive work
while human personnel perform other tasks.
Automation allows the tester to reduce or eliminate the required “think time” or “read time”
necessary for the manual interpretation of when or where to click the mouse or press the
enter key.
An automated test executes the next operation in the test hierarchy at machine speed,
allowing tests to be completed many times faster than the fastest individual.
Furthermore, some types of testing, such as load/stress testing, are virtually
impossible to perform manually.
Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated
methods. The reason is that computers can execute instructions many times faster,
and with fewer errors, than people can.