Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, and inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is run, which is generally considered the beginning of the testing stage. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). Typical techniques for this are using stubs/drivers or executing the code from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.
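To make the stub/driver idea concrete, here is a minimal sketch in Python, assuming hypothetical names such as OrderProcessor and PaymentGatewayStub: a hand-written stub stands in for an unfinished dependency so that one module can be exercised dynamically before the rest of the program exists.

# Minimal sketch of dynamic testing with a stub and a driver.
# OrderProcessor is the unit under test; the real payment gateway is not
# finished yet, so a stub supplies canned responses.

class PaymentGatewayStub:
    """Stands in for the real payment service during early dynamic testing."""
    def charge(self, amount):
        # Always approve; the real gateway would call an external service.
        return {"status": "approved", "amount": amount}

class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# A simple test driver: exercises the module and reports the outcome.
if __name__ == "__main__":
    processor = OrderProcessor(PaymentGatewayStub())
    assert processor.checkout(42.0), "checkout should succeed with the stubbed gateway"
    print("OrderProcessor exercised successfully against the stub")

A driver and stub of this kind are typically discarded once the real dependency and a proper test harness become available.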
The box approach
Software
testing methods are traditionally divided into white- and black-box testing.
These two approaches are used to describe the point of view that a test
engineer takes when designing test cases.
White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, or structural testing) tests the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
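As a minimal sketch of this path-oriented view (the function and values are hypothetical, not taken from any particular system), the tester reads the code, sees one branch, and picks one input per path:

# White-box sketch: inputs are chosen by reading the code so that both paths
# through the branch are exercised.

def classify_temperature(celsius):
    if celsius >= 38.0:        # path 1: fever branch
        return "fever"
    return "normal"            # path 2: default branch

def test_fever_path():
    assert classify_temperature(39.2) == "fever"    # drives the first path

def test_normal_path():
    assert classify_temperature(36.6) == "normal"   # drives the second path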
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
API testing (application programming interface) - testing of the application using public and private APIs
Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods - intentionally introducing faults to gauge the efficacy of testing strategies (a sketch of this idea follows the list)
Mutation testing methods
Static testing methods
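As an illustration of the fault-injection idea, the sketch below uses Python's unittest.mock to force a dependency to raise an error so that the caller's fallback behaviour is exercised; fetch_config and load_settings are hypothetical names, not part of any real library.

# Fault injection sketch: force a dependency to fail and check that the
# caller degrades gracefully instead of crashing.
from unittest import mock

def fetch_config(path):
    with open(path) as f:          # in real code this might hit disk or network
        return f.read()

def load_settings(path):
    try:
        return fetch_config(path)
    except OSError:
        return "default-settings"  # fall back to defaults on failure

def test_load_settings_survives_io_fault():
    # Inject an I/O fault into the dependency for the duration of the test.
    with mock.patch(__name__ + ".fetch_config", side_effect=OSError("disk unavailable")):
        assert load_settings("any/path") == "default-settings"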
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
100% statement coverage ensures that every statement in the program is executed at least once, though it does not guarantee that every branch or path through the control flow is taken. Coverage is helpful in building confidence in correct functionality, but it is not sufficient on its own, since the same code may process different inputs correctly or incorrectly.
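A small, hypothetical example makes this limitation concrete: a single test can execute every statement of the function below, yet the defect only shows up for other inputs.

# Hypothetical example: one test gives 100% statement coverage of discount(),
# passes, and still misses a defect.

def discount(price, rate):
    # Defect: the rate is subtracted as an absolute amount, not applied as a percentage.
    return price - rate

def test_discount():
    # This single case executes every statement and happens to pass,
    # because 100 - 10 equals the correct 10% discount on 100.
    assert discount(100, 10) == 90

# A different input exposes the defect despite full statement coverage:
# discount(50, 10) returns 40, but a 10% discount on 50 should be 45.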
Black-box testing
Black-box
testing treats the software as a "black box", examining functionality
without any knowledge of internal implementation. The tester is only aware of
what the software is supposed to do, not how it does it. Black-box testing
methods include: equivalence partitioning, boundary value analysis, all-pairs
testing, state transition tables, decision table testing, fuzz testing,
model-based testing, use case testing, exploratory testing and
specification-based testing.
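As a hedged sketch of equivalence partitioning combined with boundary value analysis, suppose the specification says that ages 18 through 65 are accepted. The tester derives cases from the partitions and their boundaries alone, without looking at the implementation; is_valid_age here is only a stand-in for the real system under test.

# Black-box sketch: test cases derived purely from an assumed specification,
# "ages 18 through 65 are accepted".
import pytest

def is_valid_age(age):
    # Stand-in implementation for the system under test.
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (40, True),   # representative value from the valid partition
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected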
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that, for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though they are usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
One advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because testers do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system, and acceptance. It typically comprises most, if not all, testing at higher levels, but can also dominate unit testing.
Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up the testing environment, for instance by seeding a database, and can observe the state of the product being tested after performing certain actions. For instance, in testing a database product the tester may fire an SQL query against the database and then inspect the database to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This applies particularly to data type handling, exception handling, and so on.
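A minimal grey-box sketch, assuming a hypothetical register_user function and using an in-memory SQLite database purely to keep the example self-contained: the tester seeds the schema, drives the system through its public interface, and then inspects the stored state directly.

# Grey-box sketch: seed a database, act through the public interface,
# then verify internal state with a direct query.
import sqlite3

def register_user(conn, name):
    # Externally visible behaviour under test: registering a user.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def test_register_user_persists_row():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # seed the schema
    register_user(conn, "alice")  # drive the system as a user would
    # Grey-box step: inspect internal state directly to confirm the expected change.
    rows = conn.execute("SELECT name FROM users").fetchall()
    assert rows == [("alice",)]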