
<span class='text_page_counter'>(1)</span><div class='page_container' data-page=1>

<b>SOFTWARE TESTING</b>



“Testing is the process of exercising or evaluating a system or
system component by manual or automated means to verify that it satisfies
specified requirements”


Testing is a process used to help identify the correctness,
completeness and quality of developed computer software.


On the whole, testing objectives can be summarized as:


· Testing is a process of executing a program with the intent of finding an
error.


· A good test is one that has a high probability of finding an as-yet
undiscovered error.


</div>
<span class='text_page_counter'>(2)</span><div class='page_container' data-page=2>

Testing is required to ensure that the application meets its objectives for
functionality, performance, reliability, flexibility, ease of use, and
timeliness of delivery.


Testing is needed because:


• Developers tend to miss their own mistakes.
• Detecting defects at an early stage reduces the cost of rework.
• Following a defined test methodology helps avoid project overruns.
• It assures users of the quality and reliability of the software.


</div>
<span class='text_page_counter'>(3)</span><div class='page_container' data-page=3>

• Test early and test often.

• Integrate the application development and testing life cycles. You'll get better
results, and you won't have to mediate between two armed camps in your IT shop.

• Formalize a testing methodology; you'll test everything the same way and you'll get
uniform results.

• Develop a comprehensive test plan; it forms the basis for the testing methodology.

• Use both static and dynamic testing.

• Define your expected results.

• Understand the business reason behind the application. You'll write a better
application and better testing scripts.

• Use multiple levels and types of testing (regression, system, integration, stress and
load).

• Review and inspect the work; it will lower costs.

• Don't let your programmers check their own work; they'll miss their own errors.


</div>
<span class='text_page_counter'>(4)</span><div class='page_container' data-page=4>

 A good test engineer has a 'test to break' attitude.
 An ability to take the point of view of the customer.
 A strong desire for quality.
 Attention to minor details.
 Tact and diplomacy are useful in maintaining a co-operative relationship
with developers.
 The ability to communicate with both technical and non-technical people is
useful.
 Judgment skills are needed to assess high-risk areas of an application on
which to focus testing efforts when time is limited.
</div>
<span class='text_page_counter'>(5)</span><div class='page_container' data-page=5>

 


A project company survives on the number of contacts the company has
and the number of projects the company gets from other firms, whereas a
product company's existence depends entirely on how its product does in
the market.


A project company receives specifications from the customer describing
how the application should be. Since a project company does the same kind
of project for several companies, it gets better over time, knows the likely
issues, and can handle them.


A product company needs to develop its own specifications and make
sure that they are generic. It also has to make sure that the application is
compatible with other applications. In a product company, the application
created will always be new in some way or other, making it more
vulnerable to bugs. When upgrades are made to the different functionalities,
care has to be taken that they do not cause any other module to stop functioning.



</div>
<span class='text_page_counter'>(6)</span><div class='page_container' data-page=6>

Automated vs Manual Testing


<b>Manual Testing</b>                          <b>Automated Testing</b>

 Prone to human errors                  More reliable
 Time consuming                         Time conserving
 Skilled manpower required              No human intervention required once started
 Tests have to be performed each time   Batch testing can be done
</div>
<span class='text_page_counter'>(7)</span><div class='page_container' data-page=7>

<b>WHEN TO STOP </b>


<b>TESTING</b>



This can be difficult to determine. Many modern software applications are
so complex, and run in such an interdependent environment, that complete
testing can never be done. Common factors in deciding when to stop are:

· Deadlines, e.g. release deadlines, testing deadlines
· Test cases completed with a certain percentage passed
· Test budget has been depleted
· Coverage of code, functionality, or requirements reaches a specified point


</div>
<span class='text_page_counter'>(8)</span><div class='page_container' data-page=8>

• ISO – International Organization for Standardization


• SEI CMM – Software Engineering Institute Capability Maturity
Model


• CMMI – Capability Maturity Model Integration
• TMM – Testing Maturity Model (testing dept.)


• PCMM – People Capability Maturity Model (HR dept.)
• SIX SIGMA – zero-defect-oriented process: at most 3.4 defects
per million opportunities. In India, Wipro presently holds the
certification.


<b>SOME BRANDED </b>



</div>
<span class='text_page_counter'>(9)</span><div class='page_container' data-page=9></div>
<span class='text_page_counter'>(10)</span><div class='page_container' data-page=10>

Software QA involves the entire software development PROCESS -
monitoring and improving the process, making sure that any agreed-upon
standards and procedures are followed, and ensuring that problems are
found and dealt with.


It is oriented to 'prevention'.


In simple words, it is a review with the goal of improving the process as
well as the deliverable.


QA: for the entire life cycle.


<b>QC</b> activities focus on finding defects in specific deliverables - e.g.,
are the defined requirements the right requirements? Testing is one
example of a QC activity.


QC is a corrective process.
QC: for the testing part of the SDLC.


</div>
<span class='text_page_counter'>(11)</span><div class='page_container' data-page=11>

Coherent sets of activities for specifying, designing,
implementing and testing software systems


Objectives


• To introduce software lifecycle models


• To describe a number of different lifecycle models
and when they may be used


• To describe outline process models for


requirements engineering, software development,
testing and evolution


</div>
<span class='text_page_counter'>(12)</span><div class='page_container' data-page=12>

A project using the waterfall model moves down a series of steps starting
from an initial idea to a final product. At the end of each step the project team
holds a review to determine if they are ready to move to the next step. If the
product isn’t ready to progress, it stays at that level until it’s ready.


</div>
<span class='text_page_counter'>(13)</span><div class='page_container' data-page=13>

<b>Notice three important things about the waterfall model:</b>


· There’s no way to back up. As soon as you’re on a step,
you need to complete the tasks for that step and then move
on; you can’t go back.


· The steps are discrete; there’s no overlap



· Note that development or coding is only a single block.


<b>Disadvantages:</b> more rework when errors occur; changes are costly; the
time frame is longer; more people are idle during the initial stages; and the
inflexible partitioning of the project into distinct stages makes it hard to
respond to changing requirements.


</div>
<span class='text_page_counter'>(14)</span><div class='page_container' data-page=14>

DEFINITION - The spiral model, also known as the spiral lifecycle model, is a
systems development method (SDM) used in information technology (IT).


This model of development combines the features of the prototyping model and the
waterfall model.


<b>The spiral model is favored for large, expensive, and complicated </b>
<b>projects </b>


<b>SPIRAL MODEL</b>



ADVANTAGES :


Estimates (i.e. budget, schedule, etc.) get more realistic as work
progresses, because important issues are discovered earlier.


It is more able to cope with the (nearly inevitable) changes that software
development generally entails.


</div>
<span class='text_page_counter'>(15)</span><div class='page_container' data-page=15></div>
<span class='text_page_counter'>(16)</span><div class='page_container' data-page=16>

<b>Each time around the spiral involves six steps:</b>


1. Determine the objectives, alternatives and constraints



<b>2. </b>Identify and Resolve Risks


<b>3. </b>Evaluate alternatives


<b>4. </b>Develop and Test the Current level


<b>5. </b>Plan the Next level


<b>6. </b>Decide on the approach for the next level


</div>
<span class='text_page_counter'>(17)</span><div class='page_container' data-page=17>

The V shows the typical sequence of development activities on the left-hand


(downhill) side and the corresponding sequence of test execution activities on the
right-hand (uphill) side.


In fact, the V Model emerged in reaction to some waterfall models that showed
testing as a single phase following the traditional development phases of


requirements analysis, high-level design, detailed design and coding. The waterfall
model did considerable damage by supporting the common impression that testing is
merely a brief detour after most of the mileage has been gained by mainline


development activities. Many managers still believe this, even though testing usually
takes up half of the project time.


The V model describes how the application is constructed, with analysis, design,
coding and testing coordinated. Once coding finishes, the build goes to the tester to
check for bugs; when the tester reports OK, coding of the next part can start
immediately. After that round of coding the build is sent to the tester again, who checks
for bugs and sends the results back, and the programmer can then finish up by
implementing the fixes and completing the project.



</div>
<span class='text_page_counter'>(18)</span><div class='page_container' data-page=18>

This is the model used by most companies. The V model is a model in which
testing is done in parallel with development: the left side of the V reflects
the development activities that provide input for the corresponding testing
activities on the right.


Because testing is a parallel activity, it gives the tester domain
knowledge and enables more value-added, high-quality testing
with greater efficiency. It also reduces time, since test preparation
proceeds while development is still in progress.


</div>
<span class='text_page_counter'>(19)</span><div class='page_container' data-page=19></div>
<span class='text_page_counter'>(20)</span><div class='page_container' data-page=20>

Extreme Programming


A new approach to development based on the development and
delivery of very small increments of functionality.


</div>
<span class='text_page_counter'>(21)</span><div class='page_container' data-page=21></div>
<span class='text_page_counter'>(22)</span><div class='page_container' data-page=22>

Static testing - the review, inspection and validation of development requirements - is
the most effective and cost-efficient way of testing. A structured approach to testing
should use both dynamic and static testing techniques.


Dynamic Testing


Dynamic testing is what is commonly assumed by "testing": executing
software and finding errors.


Two types: structural and functional testing.


</div>
<span class='text_page_counter'>(23)</span><div class='page_container' data-page=23>

<b>Unit Testing</b>

 Requires knowledge of the code
 High level of detail
 Delivers thoroughly tested components to integration
 Stopping criteria: code coverage
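As a sketch of the idea, the hypothetical `word_count` function below is a unit under test, and each test case exercises one behavior in isolation (Python's standard `unittest` module is assumed; the function and names are illustrative, not from the slides):

```python
import unittest

def word_count(text):
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test exercises one behavior of the unit in isolation.
    def test_simple_sentence(self):
        self.assertEqual(word_count("software testing is fun"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)
```

Run with `python -m unittest`; the stopping criterion would then be the code coverage these tests achieve.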



</div>
<span class='text_page_counter'>(24)</span><div class='page_container' data-page=24>

<b>Integration Testing</b>

Strategies:

 Bottom-up: start from the bottom and add one component at a time
 Top-down: start from the top and add one component at a time
 Big-bang: everything at once

Simulation of other components:

 Stubs receive output from test objects
 Drivers generate input to test objects
</div>
<span class='text_page_counter'>(25)</span><div class='page_container' data-page=25>

Driver: a calling program. It provides the facility to invoke a sub-module
in place of the (not yet available) main module.


Stub: a called program. This temporary program is called by the main
module in place of a sub-module.


<b>Top-down Approach:</b>

MAIN
  Sub1 (stub)
  Sub2


<b>Bottom-up Approach:</b>

Driver
  Sub1
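A minimal sketch of both ideas in Python (all names are hypothetical): in top-down integration the real main module calls a stub that stands in for an unfinished sub-module; in bottom-up integration a throwaway driver calls the real sub-module before the main module exists.

```python
# Top-down: MAIN is real, the sub-module is replaced by a stub.
def interest_stub(amount):
    # Stub: a called program; returns a fixed, predictable value
    # in place of the unfinished interest-calculation sub-module.
    return 10.0

def main_module(amount, interest_fn):
    # The real main module, wired to call its sub-module via interest_fn.
    return amount + interest_fn(amount)

# Bottom-up: the sub-module is real, a driver invokes it.
def sub_module(amount):
    return amount * 0.05

def driver():
    # Driver: a calling program that feeds test input to the sub-module.
    return sub_module(1000)

print(main_module(100, interest_stub))  # exercises MAIN against the stub
print(driver())                         # exercises the sub-module via the driver
```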


</div>
<span class='text_page_counter'>(26)</span><div class='page_container' data-page=26>

Functional testing

 Test end-to-end functionality, testing against the complete requirement
 Requirement focus
 Test cases derived from specification
 Use-case focus
 Test selection based on user profile
<b>System Testing</b>



</div>
<span class='text_page_counter'>(27)</span><div class='page_container' data-page=27>

<b>Acceptance Testing</b>

 User (or customer) involved
 Environment as close to field use as possible
 Focus on:
   Building confidence
   Compliance with the defined acceptance criteria in the contract



</div>
<span class='text_page_counter'>(28)</span><div class='page_container' data-page=28>

<b>WHITE BOX TESTING TECHNIQUES</b>


• <b>Statement coverage</b>: executing each and every statement of the
code at least once.


• <b>Decision coverage</b>: executing each decision direction at least
once.

</div>
<span class='text_page_counter'>(29)</span><div class='page_container' data-page=29>

<b>Definition</b>

This technique is used to ensure that every statement in the program is
executed at least once.

<b>Program Sample</b>

//statement 1
//statement 2
If ((A > 1) and (B = 0))
    //sub-statement 1
Else
    //sub-statement 2

<b>Test Conditions</b>

1. (A > 1) and (B = 0)
2. (A <= 1) and (B NOT= 0)
3. (A <= 1) and (B = 0)
4. (A > 1) and (B NOT= 0)

<b>Description</b>

Statement coverage requires only that the if ... else statement be
executed once, not that both sub-statement 1 and sub-statement 2 be
executed; conditions 1 and 2 together execute every statement.

 Minimum level of structural coverage achieved
 Helps to identify unreachable code and remove it if required
 "Null else" problem: it does not ensure the decisions are exercised
completely. Example: if x < 5 then x = x + 3; the x >= 5 direction of the
decision is not enforced, so some paths are not covered
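The slide's sample can be sketched in Python to show that two inputs, one per branch, are enough for full statement coverage (the function and recorded labels are illustrative):

```python
def sample(a, b):
    executed = []
    executed.append("statement 1")
    executed.append("statement 2")
    if a > 1 and b == 0:
        executed.append("sub-statement 1")   # taken for test condition 1
    else:
        executed.append("sub-statement 2")   # taken for e.g. test condition 2
    return executed

# Conditions 1 and 2 from the slide together execute every statement once.
assert "sub-statement 1" in sample(2, 0)   # (A > 1) and (B = 0)
assert "sub-statement 2" in sample(0, 1)   # (A <= 1) and (B NOT= 0)
```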


</div>
<span class='text_page_counter'>(30)</span><div class='page_container' data-page=30>

<b>Definition</b>

A test case design technique in which test cases are designed to
execute all the outcomes of every decision.

<b>Program Sample</b>

IF Y > 1 THEN
    Y = Y + 1
    IF Y > 9 THEN
        Y = Y + 1
    ELSE
        Y = Y + 3
    END
    Y = Y + 2
ELSE
    Y = Y + 4
END

<b>Decision Coverage</b>

No. of paths = 3
Test cases:

1. (Y > 1) and (Y > 9)
2. (Y > 1) and (Y <= 9)
3. (Y <= 1)

<b>Graph</b>: flow graph of the two nested decisions (nodes Y > 1, Y > 9
and branches Y = Y + 1, Y = Y + 3, Y = Y + 2, Y = Y + 4; not reproduced here).
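Transcribed into Python, the three decision-coverage test cases from the slide can be checked directly (the expected results are traced by hand from the sample):

```python
def sample(y):
    if y > 1:
        y = y + 1
        if y > 9:        # inner decision
            y = y + 1
        else:
            y = y + 3
        y = y + 2
    else:
        y = y + 4
    return y

# One test per path, covering every outcome of both decisions:
assert sample(10) == 14   # path 1: (Y > 1) and inner (Y > 9)
assert sample(2) == 8     # path 2: (Y > 1) and inner (Y <= 9)
assert sample(0) == 4     # path 3: (Y <= 1)
```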


</div>
<span class='text_page_counter'>(31)</span><div class='page_container' data-page=31>

<b>Condition Coverage - AND</b>

<b>Definition</b>

 Both parts of the predicate are tested
 The program sample requires all 4 test conditions (2^n combinations)

<b>Program Sample</b>

If ((A > 1) AND (B = 0))
{
    //sub-statement 1
}
Else
{
    //sub-statement 2
}

<b>Test Conditions</b>

1. (A > 1) AND (B = 0)
2. (A <= 1) AND (B NOT= 0)
3. (A <= 1) AND (B = 0)
4. (A > 1) AND (B NOT= 0)


</div>
<span class='text_page_counter'>(32)</span><div class='page_container' data-page=32>

<b>Condition Coverage - OR</b>

<b>Definition</b>

 Both parts of the predicate are tested
 The program sample requires all 4 test conditions (2^n combinations)

<b>Program Sample</b>

If ((A > 1) OR (B = 0))
{
    //sub-statement 1
}
Else
{
    //sub-statement 2
}

<b>Test Conditions</b>

1. (A > 1) OR (B = 0)
2. (A <= 1) OR (B NOT= 0)
3. (A <= 1) OR (B = 0)
4. (A > 1) OR (B NOT= 0)

<b>Truth Table</b>

A > 1        B = 0        RESULT
TRUE    OR   TRUE    ->   TRUE
TRUE    OR   FALSE   ->   TRUE
FALSE   OR   TRUE    ->   TRUE
FALSE   OR   FALSE   ->   FALSE
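A sketch of the OR case in Python, driving the predicate through all 2^n = 4 combinations from the truth table (function and labels are illustrative):

```python
def check(a, b):
    if a > 1 or b == 0:
        return "sub-statement 1"
    return "sub-statement 2"

# (A > 1, B = 0) combinations; only FALSE OR FALSE reaches the else part.
cases = [
    (2, 0, "sub-statement 1"),   # TRUE  OR TRUE  -> TRUE
    (2, 5, "sub-statement 1"),   # TRUE  OR FALSE -> TRUE
    (0, 0, "sub-statement 1"),   # FALSE OR TRUE  -> TRUE
    (0, 5, "sub-statement 2"),   # FALSE OR FALSE -> FALSE
]
for a, b, expected in cases:
    assert check(a, b) == expected
```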



</div>
<span class='text_page_counter'>(33)</span><div class='page_container' data-page=33>

<b>Loop Coverage</b>

 Simple loops
 Nested loops
 Serial / concatenated loops
 Unstructured loops (goto)

<b>Coverage</b>

 Boundary value tests
 Cyclomatic complexity

<b>Example</b>

for (I = 1; I < n; I++)
    printf("Simple Loop");

(Flow graph not reproduced: I = 1; test I < N; print; I++; end.)
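Boundary value tests for the simple loop above can be sketched in Python: exercise the loop zero times, once, and many times.

```python
def simple_loop(n):
    # Mirrors the slide's example: for (I = 1; I < n; I++) printf("Simple Loop");
    passes = 0
    for i in range(1, n):
        passes += 1
    return passes

# Loop coverage via boundary values on the loop bound:
assert simple_loop(1) == 0   # body skipped entirely (I < N false at once)
assert simple_loop(2) == 1   # exactly one pass
assert simple_loop(5) == 4   # typical case, several passes
```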



</div>
<span class='text_page_counter'>(34)</span><div class='page_container' data-page=34>

Types of Testing


The different types of testing that can be implemented are listed below,
followed by explanations of each:

 Black Box Testing
 White Box Testing
 Unit Testing
 Incremental Integration Testing
 Integration Testing
 Functional Testing
 System Testing
 End-to-End Testing
 Sanity Testing
 Regression Testing
 Acceptance Testing
 Load Testing
 Stress Testing
 Performance Testing
 Usability Testing
 Install / Uninstall Testing
 Recovery Testing
 Security Testing
 Compatibility Testing
 Exploratory Testing
 Ad-hoc Testing
 Comparison Testing
 Alpha Testing
 Beta Testing
 Mutation Testing
 Conformance Testing

</div>
<span class='text_page_counter'>(35)</span><div class='page_container' data-page=35>

Black Box Testing

 It can also be termed functional testing
 Tests that examine the observable behavior of software, as evidenced by its
outputs, without reference to internal functions
 It is not based on any knowledge of internal design or code; tests are based on
requirements and functionality
 As automatic code generation and code re-use become more prevalent in
object-oriented programming environments, analysis of the source code itself
becomes less important and functional tests become more important


</div>
<span class='text_page_counter'>(36)</span><div class='page_container' data-page=36>

White Box Testing

 It can also be termed structural testing
 Tests that verify the structure of the software and require complete access to the
object's source code
 It is known as white box testing because all internal workings of the code can be seen
 White-box tests make sure that the software structure itself contributes to proper
and efficient program execution
 It is based on the internal logic of an application's code; tests are based on
coverage of code statements, branches, paths and conditions


</div>
<span class='text_page_counter'>(37)</span><div class='page_container' data-page=37>

Unit testing

 This is the 'micro' scale of testing, exercising particular functions or code modules
 It is typically a combination of structural and functional tests, and is typically done
by programmers and not by testers
 It requires detailed knowledge of the internal program design and code, and may
require test driver modules or test harnesses
 Unit tests are not always easily done unless the application has a well-designed
architecture with tight code

</div>
<span class='text_page_counter'>(38)</span><div class='page_container' data-page=38>

Incremental Integration Testing

 This is continuous testing of an application as new functionality is added
 These tests require that the various aspects of an application's functionality be
independent enough to work separately before all parts of the program are
completed




</div>
<span class='text_page_counter'>(39)</span><div class='page_container' data-page=39>

Integration Testing

 This is testing of combined parts of an application to ensure that they function
together correctly
 The parts can be code modules, individual applications, client and server
applications on a network, etc.


</div>
<span class='text_page_counter'>(40)</span><div class='page_container' data-page=40>

Functional Testing

 It is black-box testing geared to functional requirements and should be done by
testers
 Testing done to ensure that the product functions the way it is designed to,
according to the design specifications and documentation
 This testing can involve areas such as the product's user interface and database
management


</div>
<span class='text_page_counter'>(41)</span><div class='page_container' data-page=41>

System Testing

 This is black-box testing based on the overall requirements specifications
 This testing begins once the modules are integrated enough to perform tests in a
whole-system environment


</div>
<span class='text_page_counter'>(42)</span><div class='page_container' data-page=42>

End-to-End Testing

 This is the 'macro' end of the test scale, similar to system testing
 It involves testing of a complete application environment, as in a real-world
situation


</div>
<span class='text_page_counter'>(43)</span><div class='page_container' data-page=43>

Sanity Testing

 Initial testing effort to determine whether a new software version is performing
well enough to accept it for a major testing effort.


</div>
<span class='text_page_counter'>(44)</span><div class='page_container' data-page=44>

Regression Testing

 This is re-testing of the product/software to ensure that all reported bugs have been
fixed and that implemented changes have not affected other functions
 It is always difficult to determine the amount of re-testing required, especially when
the software is at the end of the development cycle
 These tests apply to all phases wherever changes are being made
 This testing also ensures that reported product defects have been corrected for each
new release


</div>
<span class='text_page_counter'>(45)</span><div class='page_container' data-page=45>

Acceptance Testing

 This can be described as the final testing, based on specifications of the end-user
or the customer
 It can also be based on use by end-users/customers over some limited period of time
 This testing is often used in a Web environment, where "virtual clients" perform
typical tasks such as browsing, purchasing items and searching databases contained
within your web site
 "Probing clients" record the exact server response times while this testing runs


</div>
<span class='text_page_counter'>(46)</span><div class='page_container' data-page=46>

Load Testing

 Testing an application under heavy loads
 For example, testing of a Web site under a range of loads to determine at what point
the system's response time degrades or fails


</div>
<span class='text_page_counter'>(47)</span><div class='page_container' data-page=47>

Stress Testing

 This term is often used interchangeably with 'load' and 'performance' testing
 It is system functional testing while under unusually heavy loads, heavy repetition
of certain actions or inputs, input of large numerical values, or large complex
queries to a database system
 It is always aimed at finding the limits at which the system will fail through
abnormal quantity or frequency of inputs. Examples could be:
  - higher rates of inputs
  - data rates an order of magnitude above 'normal'
  - test cases that require maximum memory or other resources
  - test cases that cause 'thrashing' in a virtual operating system
  - test cases that cause excessive 'hunting' for data on disk systems
 This testing can also attempt to determine combinations of otherwise normal
inputs that cause failures


</div>
<span class='text_page_counter'>(48)</span><div class='page_container' data-page=48>

Performance Testing

 This term is often used interchangeably with 'stress' and 'load' testing
 This testing can be used to understand the application's scalability, to benchmark
its performance in an environment, or to identify the bottlenecks in high hit-rate
Web sites
 It checks run-time performance in the context of the integrated system
 It may require special software instrumentation
 Ideally, these types of testing are defined in the requirements documentation or
in QA and test plans


</div>
<span class='text_page_counter'>(49)</span><div class='page_container' data-page=49>

Usability Testing

 This is testing for 'user-friendliness'
 The target will always be the end-user or customer
 Techniques such as interviews, surveys and video recording of user sessions can be
used in this type of testing


</div>
<span class='text_page_counter'>(50)</span><div class='page_container' data-page=50>

Install / Uninstall Testing



</div>
<span class='text_page_counter'>(51)</span><div class='page_container' data-page=51>

Recovery Testing

 Testing performed to determine how well a system recovers from crashes,
hardware failures or other catastrophic problems
 This is the forced failure of the software in a variety of ways to verify that
recovery occurs properly
 Systems need to be fault tolerant; at the same time, processing faults should not
cause the overall system to fail


</div>
<span class='text_page_counter'>(52)</span><div class='page_container' data-page=52>

Security Testing

 This testing is performed to determine how well the system protects against
unauthorized internal or external access, willful damage, etc.; this can include:
  - attempted penetration of the system by 'outside' individuals for fun or
personal gain
  - disgruntled or dishonest employees
 During this testing the tester plays the role of the individual trying to penetrate
the system
 A large range of methods may be used:
  - attempt to acquire passwords through external clerical means
  - use custom software to attack the system
  - overwhelm the system with requests


</div>
<span class='text_page_counter'>(53)</span><div class='page_container' data-page=53>

Compatibility Testing

 Testing how well the software performs in a particular hardware / software /
operating system / network environment



</div>
<span class='text_page_counter'>(54)</span><div class='page_container' data-page=54>

Exploratory testing



 Tests based on creativity


</div>
<span class='text_page_counter'>(55)</span><div class='page_container' data-page=55>

Ad-hoc Testing

 Similar to exploratory testing
 The only difference is that these tests are often taken to mean that the testers have
significant understanding of the software before testing it


</div>
<span class='text_page_counter'>(56)</span><div class='page_container' data-page=56>

Comparison Testing

 This testing compares a piece of software's weaknesses and strengths to competing
products
 For some applications where reliability is critical, redundant hardware and software
may be used, and independently developed versions can be used
 Testing is conducted for each version with the same test data to ensure all provide
identical output
 All the versions are run with a real-time comparison of results
 When the outputs of versions differ, investigations are made to determine whether
there is a defect


</div>
<span class='text_page_counter'>(57)</span><div class='page_container' data-page=57>

Alpha Testing

 This is testing of an application when development is nearing completion; it is
mostly conducted at the developer's site by a customer
 The customer uses the software with the developer 'looking over the shoulder'
and recording errors and usage problems
 Testing is conducted in a controlled environment
 Minor design changes can still be made as a result of this testing


</div>
<span class='text_page_counter'>(58)</span><div class='page_container' data-page=58>

Beta Testing

 Testing conducted when development and testing are completed and remaining
bugs and problems need to be found before the final release
 It is 'live' testing in an environment not controlled by the developer
 The customer records errors and problems and reports difficulties at regular intervals
 Testing is conducted at one or more customer sites


</div>
<span class='text_page_counter'>(59)</span><div class='page_container' data-page=59>

Mutation Testing

 A method of determining whether a set of test data or test cases is useful
 Various code changes ('bugs') are deliberately introduced, and the program is
retested with the original test data/cases to determine whether the bugs are detected
 Proper implementation requires large computational resources
 A mutated program differs from the original
 The mutants are tested until the results differ from those obtained from the
original program
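A toy illustration in Python (the functions and suite are hypothetical): a mutant is made by deliberately changing `+` to `*`, and a useful test suite is one that 'kills' the mutant by failing on it:

```python
def add(a, b):
    return a + b          # original program under test

def add_mutant(a, b):
    return a * b          # deliberately introduced 'bug': + changed to *

def run_suite(fn):
    """Run the existing test cases against fn; True means all pass."""
    try:
        assert fn(2, 3) == 5
        assert fn(-1, 1) == 0
        return True
    except AssertionError:
        return False

assert run_suite(add) is True          # suite passes on the original
assert run_suite(add_mutant) is False  # the mutant is detected ('killed')
```

A suite that left the mutant undetected would be a sign its test data is too weak.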


</div>
<span class='text_page_counter'>(60)</span><div class='page_container' data-page=60>

Conformance Testing

 Testing conducted to verify that the implementation conforms to industry
standards


 Producing tests for the behavior of an implementation to be sure that it provides the


</div>
<span class='text_page_counter'>(61)</span><div class='page_container' data-page=61>

Economics of Continuous Testing


                <b>Traditional Testing</b>                 <b>Continuous Testing</b>
Accumulated     Accumulated Errors          Accumulated Errors      Accumulated
Test Cost       Remaining                   Remaining               Cost

0               20                          10                      $10
0               40                          15                      $25
0               60                          18                      $42
$480            12                          4                       $182
$1690           0                           0                       $582



</div>
<span class='text_page_counter'>(62)</span><div class='page_container' data-page=62>

<b>Error:</b>

An error is an undesirable deviation from requirements. Any problem, or
cause of many problems, that stops the system from performing its
functionality is referred to as an error.


<b>Bug:</b>

Any missing functionality, or any action performed by the system that is
not supposed to be performed, is a bug. A bug is an error found BEFORE
the application goes into production. Any of the following may be the
reason for the birth of a bug:

1. Wrong functionality
2. Missing functionality


</div>
<span class='text_page_counter'>(63)</span><div class='page_container' data-page=63>

<b>Defect:</b>

A defect is a variance from the desired attribute of a system or application.
A defect is an error found AFTER the application goes into production.
Defects are commonly categorized into two types:

1. Deviation from the product specification
2. Variance from customer/user expectation


<b>Failure:</b>

The absence of an expected response to a request; any expected action
that does not happen can be referred to as a failure.


<b>Fault:</b>


</div>
<span class='text_page_counter'>(64)</span><div class='page_container' data-page=64>




</div>
<span class='text_page_counter'>(65)</span><div class='page_container' data-page=65>

<b>STLC (Testing Life cycle)</b>



Test Plan



Test Design



Test Execution



Test Log



Defect Tracking



</div>
<span class='text_page_counter'>(66)</span><div class='page_container' data-page=66>

<b>Test Case</b>

A set of test data and test programs (test scripts) and their expected results. A
test case validates one or more system requirements and generates a pass or
fail.


<b>Test Scenario</b>

A set of test cases that ensure that the business process flows are
tested from end to end. They may be independent tests or a series of
tests that follow each other, each dependent on the output of the
previous one.


</div>
<span class='text_page_counter'>(67)</span><div class='page_container' data-page=67>

Equivalence Partitioning: An approach where classes of inputs are categorized for
product or function validation. This usually does not include combinations of inputs, but
rather a single representative value per class. For example, a given function may
have several classes of input that can be used for positive testing. If the function expects an
integer and receives an integer as input, this would be considered a positive test


assertion. On the other hand, if a character or any input class other than an integer is
provided, this would be considered a negative test assertion or condition.


E.g.: Verify a credit limit within a given range (1,000 – 2,000). Here we can identify 3
conditions:


1. < 1000


2. Between 1,000 and 2,000
3. >2000
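The three conditions above map to one test value per class. A minimal sketch, where `validate_credit_limit` is a hypothetical validator for the 1,000 – 2,000 rule:

```python
def validate_credit_limit(amount):
    """Hypothetical validator: accepts only credit limits in the 1,000-2,000 range."""
    return 1_000 <= amount <= 2_000

# One representative value from each equivalence class is enough:
assert validate_credit_limit(500) is False    # class 1: < 1,000
assert validate_credit_limit(1_500) is True   # class 2: between 1,000 and 2,000
assert validate_credit_limit(2_500) is False  # class 3: > 2,000
```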


Error Guessing



E.g.: Date Input – February 30, 2000
Decimal Digit – 1.99.
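Guessed inputs like these can be checked against the standard library, which rejects impossible dates such as February 30; `is_valid_date` is an illustrative helper:

```python
from datetime import date

def is_valid_date(year, month, day):
    """Return True only if the components form a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

assert is_valid_date(2000, 2, 29) is True   # 2000 was a leap year
assert is_valid_date(2000, 2, 30) is False  # the guessed invalid input above
```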


Boundary Value Analysis


</div>
<span class='text_page_counter'>(68)</span><div class='page_container' data-page=68>

<b>BVA (Boundary Value Analysis): here we can define </b>
<b>boundaries for Size and Range</b>


Field: Age
<b>Size: three</b>
Range:


Min    Pass
Min-1  Fail
Min+1  Pass
Max    Pass
Max-1  Pass
Max+1  Fail

</div>
<span class='text_page_counter'>(69)</span><div class='page_container' data-page=69>

<b>Test Scenarios - Sample</b>



<b>FS Reference: 3.2.1.Deposit </b>


<b>An order capture for deposit contains fields like Client Name, Amount, Tenor </b>


<b>and interest for the deposit.</b>


<b>Business Rule: </b>


 <b><sub>If tenor is greater than 10 months, the interest rate should be greater than 10%; </sub></b>
<b>otherwise a warning should be given by the application.</b>


 <b><sub>If Tenor greater than 12 months, then the order should not proceed.</sub></b>


<b>Test Scenario ID | Client Name | Amount | Tenor | Interest | Warning</b>


Dep/01 | 123    | >0                 | 12 months    | 0 < interest < 10%           | Warning
Dep/02 | abc    | <0                 | 6 months     | <0                           | Nogo
Dep/03 | 12ab   | With two decimals  | 11 months    | With two decimals, rate 11%  | No Warning
Dep/04 | Ab.Pvt | With four decimals | 1.5 months   | With four decimals           | No Warning
Dep/05 | abc    | Character          | Blank        | Character                    | No Warning
Dep/06 | abc    | >0                 | Invalid date | >100                         | No Warning
Dep/07 | abc    | >0                 | <system date | >0                           | No Warning
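The business rules above can be sketched as a small validator and exercised with the scenario values. `check_deposit` is a hypothetical function with simplified inputs (tenor as a number of months, interest as a percentage):

```python
def check_deposit(tenor_months, interest_pct):
    """Sketch of the FS 3.2.1 rules, simplified inputs.

    "no-go" if tenor exceeds 12 months; "warning" if tenor exceeds 10
    months but the interest rate is not above 10%; otherwise "ok".
    """
    if tenor_months > 12:
        return "no-go"
    if tenor_months > 10 and interest_pct <= 10:
        return "warning"
    return "ok"

assert check_deposit(13, 12) == "no-go"   # order must not proceed past 12 months
assert check_deposit(12, 8) == "warning"  # Dep/01-style scenario
assert check_deposit(11, 11) == "ok"      # Dep/03: 11 months at 11%, no warning
```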


</div>
<span class='text_page_counter'>(70)</span><div class='page_container' data-page=70>

<b>Test Cases</b>



Test Cases will be defined, which will form the basis for mapping


the test cases to the actual transaction types that will be used for
the integrated testing.


Test cases give values / qualifiers to the attributes that the
test condition can have.


A test case is the end state of a test condition, i.e., it cannot be
decomposed or broken down further.


Test cases contain the Navigation Steps, Instructions, Data
and Expected Results required to execute the test case(s).
They cover transfer of control between components.


They cover transfer of data between components (in both
directions).
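The attributes listed above (navigation steps, instructions, data, expected results) can be captured in a minimal record. A sketch, with illustrative field names and IDs:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal test-case record mirroring the attributes listed above."""
    case_id: str
    navigation_steps: list = field(default_factory=list)
    instructions: str = ""
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""

tc = TestCase(
    case_id="TC_Dep_01",
    navigation_steps=["Log in", "Open Deposits", "Click New Order"],
    instructions="Capture a deposit order with the data below.",
    test_data={"tenor_months": 12, "interest_pct": 8},
    expected_result="Application shows a warning",
)
assert tc.case_id == "TC_Dep_01"
assert len(tc.navigation_steps) == 3
```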


</div>
<span class='text_page_counter'>(71)</span><div class='page_container' data-page=71>

<b>Test Data</b>



Test Data could be related to both inputs and maintenance that


are required to execute the application. Data for executing the
test scenarios should be clearly defined.


The test team can prepare this with the support of the database team and
domain experts, or by revamping existing production data.


<i>Example:</i>


Business rule: if the Interest to be Paid is more than 8 % <b>and</b> the Tenor of the
deposit exceeds one month, then the system should give a warning.


To populate an Interest to be Paid field of a deposit, we can give 9.5478 and
make the Tenor as two months for a particular deposit.


This will trigger the warning in the application.
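The example rule can be sketched directly; `needs_warning` is a hypothetical name:

```python
def needs_warning(interest_pct, tenor_months):
    """Business rule from the example: warn when interest > 8% AND tenor > 1 month."""
    return interest_pct > 8 and tenor_months > 1

# The test data chosen above: 9.5478% interest on a two-month deposit.
assert needs_warning(9.5478, 2) is True
# Dropping either value below its threshold removes the warning.
assert needs_warning(7.5, 2) is False
assert needs_warning(9.5478, 1) is False
```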



</div>
<span class='text_page_counter'>(72)</span><div class='page_container' data-page=72>

<b>Test Conditions</b>



A Test Condition is all possible combinations and validations
that can be attributed to a requirement in the


specification. Determining the conditions is important for:


1. Deciding on the architecture of the testing approach
2. Evolving the design of the test scenarios


3. Ensuring test coverage


The possible condition types that can be built are


• Positive condition: Polarity of the value given for test is to


comply with the condition existence.


• <sub>Negative condition:</sub><sub> Polarity of the value given for test is not </sub>


to comply with the condition existence.


• Boundary condition: Polarity of the value given for test is to


assess the extreme values of the condition


• <sub>User Perspective condition:</sub><sub> Polarity of the value given for </sub>
test is to reflect how an end user would actually exercise the condition.



</div>
<span class='text_page_counter'>(73)</span><div class='page_container' data-page=73>

A defect is an improper program condition that is generally the result of an
error. Not all errors produce program defects, as with incorrect comments or
some documentation errors. Conversely, a defect could result from such
non-programming causes as improper program packaging or handling.


<b>Software Defects</b>






<b>Defect Categories</b>


<b>Wrong</b>: The specifications have been implemented incorrectly.


<b>Extra</b>: A requirement incorporated into the product that was not specified.


<b>Missing</b>: A specified requirement is not in the built product.
</div>
<span class='text_page_counter'>(74)</span><div class='page_container' data-page=74>

Step 1:Identify the module for which the Use Case belongs.


Step 2:Identify the functionality of the Use Case with respect to the overall
functionality of the system.


Step 3:Identify the Actors involved in the Use Case.


Step 4:Identify the pre-conditions.



Step 5:Understand the Business Flow of the Use Case.


Step 6:Understand the Alternate Business Flow of the Use Case.


Step 7:Identify any post-conditions and special requirements.


Step 8:Identify the Test Conditions from Use Case / Business Rule’s and make a Test
Condition Matrix Document – Module Wise for each and every Use Case.


Step 9:Identify the main functionality of the module and document a complete Test
scenario Document for the Business Flow (include any actions made in the alternate
business flow if applicable)


Step 10:For every test scenarios, formulate the test steps based on a navigational flow
of the application with the test condition matrix in a specific test case template.


</div>
<span class='text_page_counter'>(75)</span><div class='page_container' data-page=75>

Role of Documentation in Testing



 Testing practices should be documented so that they are repeatable


 Specifications, designs, business rules, inspection reports, configurations, code


changes, test plans, test cases, bug reports, user manuals, etc. should all be
documented


 Change management for documentation should be used if possible


 Ideally a system should be developed for easily finding and obtaining documents



</div>
<span class='text_page_counter'>(76)</span><div class='page_container' data-page=76>

<b>Under the condition where the bug report is invalid</b>.
The question is


what are the comments that the developer left to indicate that it is
invalid? If there are none, you need to discuss this with the developer.
The reasons that they may have are many:


1) You didn't understand the system under test correctly because
1a) the requirements have changed


1b) you didn't get the whole picture


2) You were testing against the wrong version of software, or
configuration, with the wrong OS, or wrong browser


3) You made an assumption that was incorrect


4) Your bug was not repeatable (in which case they may mark it as
"works for me"), or if it was repeatable it was because the memory
was already corrupted after the first instance, but you can't


reproduce it on a clean machine (again, could be a "works for me" bug).
Just remember that a bug report isn't you writing a law that the
developer is obliged to obey; it is the start of a conversation.


</div>
<span class='text_page_counter'>(77)</span><div class='page_container' data-page=77>

Traceability Matrix



Traceability Matrix ensures that each requirement has been


traced to a specification in the Use Cases and Functional


Specifications to a test condition/case in the test scenario and
Defects raised during Test Execution, thereby achieving
one-to-one test coverage.


The entire process of traceability is time consuming. To
simplify it, a tool such as Rational RequisitePro or Test Director
can maintain the specifications of the documents. These
are then mapped correspondingly. The specifications have to be
loaded into the system by the user.


Even though it is a time consuming process, it helps in finding the
'ripple' effect of altering a specification. The impacts on test
conditions can immediately be identified using the trace matrix.
A traceability matrix should be prepared mapping requirements to
test cases.
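The mapping can be sketched as a simple lookup; the IDs below are illustrative, and a real project would keep this in a tool such as Test Director:

```python
# Minimal traceability matrix: requirement -> test cases -> defects (sketch).
matrix = {
    "REQ-001": {"test_cases": ["TC-01", "TC-02"], "defects": ["DEF-07"]},
    "REQ-002": {"test_cases": ["TC-03"], "defects": []},
}

def ripple_effect(requirement_id):
    """Return the test cases impacted when a requirement changes."""
    return matrix.get(requirement_id, {}).get("test_cases", [])

assert ripple_effect("REQ-001") == ["TC-01", "TC-02"]
assert ripple_effect("REQ-999") == []  # unknown requirement: nothing impacted
```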


</div>
<span class='text_page_counter'>(78)</span><div class='page_container' data-page=78>

What is Test Management?



Test management is a method of organizing application test assets
and artifacts — such as


Test requirements
Test plans


Test documentation
Test scripts


Test results


To enable easy accessibility and reusability. Its aim is to deliver


quality applications in less time.


</div>
<span class='text_page_counter'>(79)</span><div class='page_container' data-page=79>

<b>Test Strategy</b>



Scope of Testing
Types of Testing
Levels of Testing
Test Methodology
Test Environment
Test Tools


Entry and Exit Criteria
Test Execution


Roles and Responsibilities
Risks and Contingencies
Defect Management


</div>
<span class='text_page_counter'>(80)</span><div class='page_container' data-page=80>

<b>Test Requirements</b>



The test team gathers the test requirements from the following baselined documents:
Customer Requirements Specification (CRS)


Functional Specification (FS) – Use Case, Business Rule, System Context
Non – Functional Requirements (NFR)


High Level Design Document (HLD)
Low Level Design Document (LLD)
System Architecture Document



Prototype of the application
Database Mapping Document
Interface Related Document


Other project-related documents such as e-mails and minutes of meetings.
Knowledge Transfer Sessions from the Development Team


</div>
<span class='text_page_counter'>(81)</span><div class='page_container' data-page=81>

<b>Configuration Management</b>


<b>Configuration Management</b>


Software Configuration Management is an umbrella activity that is applied
throughout the software process. SCM identifies, controls, audits and reports


modifications that invariably occur while software is being developed and after it has
been released to a customer. All information produced as part of software


engineering becomes part of the software configuration. The configuration is organized in a
manner that enables orderly control of change.


The following is a sample list of Software Configuration Items:


 <sub> Management plans (Project Plan, Test Plan, etc.) </sub>


 <sub> Specifications (Requirements, Design, Test Case, etc.) </sub>


 <sub> Customer Documentation (Implementation Manuals, User Manuals, Operations </sub>


Manuals, On-line help Files)



 <sub> Source Code (PL/1, Fortran, COBOL, Visual Basic, Visual C, etc.) </sub>
 <sub> Executable Code (Machine readable object code, exe's, etc.) </sub>


 <sub> Libraries (Runtime Libraries, Procedures, %include Files, API's, DLL's, etc.) </sub>
 <sub> Databases (Data being Processed, Data a program requires, test data, Regression </sub>


test data, etc.)


</div>
<span class='text_page_counter'>(82)</span><div class='page_container' data-page=82>

Automated Testing Tools



 WinRunner, LoadRunner, TestDirector from Mercury Interactive
 QARun, QALoad from Compuware


 Rational Robot, Site Load and SQA Manager from Rational
 SilkTest, SilkPerformer from Segue


</div>
<span class='text_page_counter'>(83)</span><div class='page_container' data-page=83>

<i><b>Test attributes</b></i>


<b>To different degrees, good tests have these attributes:</b>


• <b>Power</b>. When a problem exists, the test will reveal it.


• <b>Valid</b>. When the test reveals a problem, it is a genuine problem.


• <b>Value</b>. It reveals things your clients want to know about the product or project.


• <b>Credible</b>. Your client will believe that people will do the things that are done in this test.


• <b>Representative </b>of events most likely to be encountered by the user. (xref. Musa's <i>Software</i>



<i>Reliability Engineering).</i>


• <b>Non-redundant. </b>This test represents a larger group of tests that address the same risk.


• <b>Motivating</b>. Your client will want to fix the problem exposed by this test.


• <b>Performable</b>. It can be performed as designed.


• <b>Maintainable</b>. Easy to revise in the face of product changes.


• <b>Repeatable</b>. It is easy and inexpensive to reuse the test.


• <b>Pop</b>. (<i>short for Karl Popper</i>) It reveals things about our basic or critical assumptions.


• <b>Coverage</b>. It exercises the product in a way that isn't already taken care of by other tests.


• <b>Easy to evaluate</b>.


• <b>Supports troubleshooting. </b>Provides useful information for the debugging programmer.


• <b>Appropriately complex. </b>As the program gets more stable, you can hit it with more complex
tests and more closely simulate use by experienced users.


• <b>Accountable</b>. You can explain, justify, and prove you ran it.


• <b>Cost</b>. This includes time and effort, as well as direct costs.



</div>
<span class='text_page_counter'>(84)</span><div class='page_container' data-page=84>

<b>Test Project Manager</b>




Customer Interface
Master Test Plan
Test Strategy


Project Technical Contact


Interaction with Development Team
Review Test Artifacts


Defect Management


<b>Test Lead</b>




Module Technical Contact
Test Plan Development


Interaction with Module Team
Review Test Artifacts


Defect Management


Test Execution Summary
Defect Metrics Reporting



<b>Test Engineers</b>




Prepare Test Scenarios


Develop Test Conditions/Cases
Prepare Test Scripts


Test Coverage Matrix


Execute Tests as Scheduled
Defect Log


<b>Test Tool Specialist</b>




<b> Prepare Automation Strategy</b>
Capture and Playback Scripts
Run Test Scripts


Defect Log


<b>Roles & Responsibilities</b>



<b>Support Group for Testing</b>





</div>
