SOFTWARE TESTING GLOSSARY
27 branch point: See decision. |
28 branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes. |
30 bug seeding: See error seeding. |
31 C-use: See computation data use. |
32 capture/playback tool: A test
tool that records test input as it is sent to the software under test. The
input cases stored can then be used to reproduce the test at a later time. |
33 capture/replay tool: See capture/playback tool. |
34 CAST: Acronym
for computer-aided software testing. |
35 cause-effect graph: A
graphical representation of inputs or
stimuli (causes) with their associated outputs (effects),
which can be used to design test cases.
|
36 cause-effect graphing: A test case design technique in which test cases are
designed by consideration of cause-effect graphs. |
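As an illustrative sketch (the login rule and names below are invented for the example), the causes and effect can be tabulated and each cause combination turned into a test case:

    # Causes: c1 = user is registered, c2 = password is correct.
    # Effect: login succeeds only when both causes hold.
    def login_succeeds(registered, password_ok):
        return registered and password_ok

    # Test cases derived from the cause combinations and their expected effect.
    cases = [((True, True), True), ((True, False), False), ((False, True), False)]
    for (c1, c2), effect in cases:
        assert login_succeeds(c1, c2) == effect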
37 certification: The
process of confirming that a system or component complies with its specified requirements and is
acceptable for operational use. From [IEEE]. |
38 Chow's coverage metrics: See N-switch
coverage. [Chow] |
39 code coverage: An
analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention. |
40 code-based testing: Designing
tests based on objectives derived from the implementation (e.g., tests that
execute specific control flow paths or use specific data items). |
41 compatibility testing: Testing whether
the system is compatible with other systems with which it should communicate.
|
42 complete path testing: See exhaustive testing. |
43 component: A minimal
software item for which a separate specification is available. |
45 computation data use: A data use not in a condition. Also called C-use. |
46 condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A AND B is not. |
47 condition coverage: See branch condition coverage. |
48 condition outcome: The evaluation of a condition to TRUE or FALSE. |
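A minimal Python sketch of the distinction (the function is invented for illustration): each comparison below is a condition, while the complete Boolean expression controlling the if statement is the decision (see decision).

    def in_range(x, low, high):
        # "x >= low" and "x <= high" are each a condition (no Boolean operators).
        # The full expression "x >= low and x <= high" contains the Boolean
        # operator "and", so it is not a condition; it is the decision that
        # selects between the two branch outcomes.
        if x >= low and x <= high:
            return True
        return False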
49 conformance criterion: Some
method of judging whether or not the component's action on a particular specified input value conforms to the specification. |
50 conformance testing: The
process of testing that an implementation conforms to the specification on which it is based. |
51 control flow: An
abstract representation of all possible sequences of events in a program's
execution. |
52 control flow graph: The
diagrammatic representation of the possible alternative control flow paths through a component. |
53 control flow path: See path.
|
54 conversion testing: Testing
of programs or procedures used to convert data from existing systems for use
in replacement systems. |
55 correctness: The degree
to which software conforms to its specification. |
56 coverage: The
degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite. |
57 coverage item: An entity
or property used as a basis for testing.
|
58 data definition: An executable statement where a variable is assigned a value. |
59 data definition C-use coverage:
The percentage of data definition C-use pairs in a component that are exercised by a test case suite. |
60 data definition C-use pair: A data definition and
computation data use, where the data use uses
the value defined in the data definition.
|
61 data definition P-use coverage:
The percentage of data definition
P-use pairs in a component that are exercised by a test case suite. |
62 data definition P-use pair: A data definition and
predicate data use, where the data use uses
the value defined in the data definition.
|
63 data definition-use coverage: The
percentage of data definition-use pairs in a component that are exercised by a test case suite. |
64 data definition-use pair: A data definition and
data use, where the data use uses
the value defined in the data definition.
|
65 data definition-use testing: A test case design technique for a component in which test cases are
designed to execute data definition-use pairs. |
66 data flow coverage: Test coverage measure based on variable usage within the
code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc. |
67 data flow testing: Testing
in which test cases are designed based on variable usage within the
code. |
68 data use: An executable statement where the value of a variable is
accessed. |
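A minimal Python sketch of these data-flow terms (the function and variable names are invented for illustration):

    def scale(n):
        x = n + 1      # data definition of x: x is assigned a value
        if x > 10:     # predicate data use (P-use) of x: x appears in a condition
            return 0
        return x * 2   # computation data use (C-use) of x: a use outside any condition

    # (x = n + 1, x > 10) is a data definition P-use pair;
    # (x = n + 1, x * 2) is a data definition C-use pair.
    # The test cases scale(20) and scale(3) together exercise both pairs.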
69 debugging: The
process of finding and removing the causes of failures
in software. |
70 decision: A program
point at which the control flow has two or more alternative routes. |
71 decision condition: A condition within a decision.
|
72 decision coverage: The
percentage of decision outcomes that have been exercised by a test case suite. |
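A worked sketch (the component and test values are invented for illustration): the component below contains two decisions, hence four decision outcomes, and the two test cases shown exercise three of them, giving 75% decision coverage.

    def classify(n):
        if n < 0:       # decision 1: outcomes TRUE / FALSE
            return "negative"
        if n == 0:      # decision 2: outcomes TRUE / FALSE
            return "zero"
        return "positive"

    # Test case suite: classify(-5) exercises decision 1 TRUE;
    # classify(0) exercises decision 1 FALSE and decision 2 TRUE;
    # decision 2 FALSE is never exercised.
    print(100.0 * 3 / 4)   # 75.0 per cent decision coverage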
73 decision outcome: The
result of a decision (which therefore determines the control flow alternative taken). |
74 design-based testing: Designing
tests based on objectives derived from the architectural or detail design of
the software (e.g., tests that execute specific invocation paths or probe the
worst case behaviour of algorithms). |
75 desk checking: The testing
of software by the manual simulation of its execution. |
76 dirty testing: See negative testing.
[Beizer] |
77 documentation testing: Testing
concerned with the accuracy of documentation. |
78 domain: The set
from which values are selected. |
79 domain testing: See equivalence partition testing. |
80 dynamic analysis: The
process of evaluating a system or component based upon its behaviour during execution. |
81 emulator: A device,
computer program, or system that accepts the same inputs
and produces the same outputs
as a given system. |
82 entry point: The first
executable statement within a component. |
83 equivalence class: A portion
of the component's input
or output domains for which the component's behaviour is assumed to be the same from the component's specification. |
84 equivalence partition: See equivalence class. |
85 equivalence partition coverage: The
percentage of equivalence classes generated for the component, which have been exercised by a test case suite. |
86 equivalence partition testing: A test case design technique for a component in which test cases are
designed to execute representatives from equivalence classes. |
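An illustrative sketch (the age check and its boundaries are invented for the example): the specification below implies three equivalence classes, and one representative from each becomes a test case.

    def is_valid_age(age):
        # Hypothetical specification: ages 0 to 120 inclusive are valid.
        return 0 <= age <= 120

    # Equivalence classes: below the range, within the range, above the range.
    representatives = [(-3, False), (45, True), (200, False)]
    for value, expected in representatives:
        assert is_valid_age(value) == expected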
87 error: A human
action that produces an incorrect result. [IEEE] |
88 error guessing: A test case design technique where the experience of the
tester is used to postulate what faults might
occur, and to design tests specifically to expose them. |
89 error seeding: The
process of intentionally adding known faults
to those already in a computer program for the purpose of monitoring the rate
of detection and removal, and estimating the number of faults
remaining in the program. |
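One commonly used estimate based on such seeding, sketched with invented numbers (it assumes real faults are detected at the same rate as seeded ones):

    seeded = 20          # faults intentionally added
    seeded_found = 15    # seeded faults detected by testing
    real_found = 30      # unseeded faults detected by testing

    estimated_real_total = real_found * seeded / seeded_found     # 40.0
    estimated_real_remaining = estimated_real_total - real_found  # 10.0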
90 executable statement: A statement which, when compiled, is translated into object
code, which will be executed procedurally when the program is running and may
perform an action on program data. |
91 exercised: A program
element is exercised by a test case when
the input value causes the execution of that element, such as a statement, branch,
or other structural element. |
92 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables. |
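For a component with small, finite input domains this is feasible; a sketch with an invented two-input Boolean component:

    from itertools import product

    def xor(a, b):
        return a != b

    # Every combination of the two Boolean inputs: 2 x 2 = 4 test cases.
    for a, b in product([False, True], repeat=2):
        print(a, b, xor(a, b))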
93 exit point: The last executable statement within a component. |
94 expected outcome: See predicted outcome. |
95 facility testing: See functional test case design. |
96 failure: Deviation
of the software from its expected delivery or service. |
97 fault: A manifestation of an error in software. A fault, if encountered, may cause a failure. |
98 feasible path: A path
for which there exists a set of input values and execution conditions which causes it to be
executed. |
99 feature testing: See functional test case design. |
100 functional specification: The
document that describes in detail the characteristics of the product with
regard to its intended capability. [BS 4778, Part 2] |
101 functional test case design: Test case selection
that is based on an analysis of the specification of the component without reference to its internal workings. |
102 glass box testing: See structural test case design. |
103 incremental testing: Integration testing where system components are integrated into the system one at a time
until the entire system is integrated. |
104 independence:
Separation of responsibilities which ensures the accomplishment of objective
evaluation. After [do178b]. |
105 infeasible path: A path
which cannot be exercised by any set of possible input values. |
106 input: A variable
(whether stored within a component or outside it) that is read by the component. |
107 input domain: The set
of all possible inputs. |
108 input value: An
instance of an input. |
109 inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham]. |
110 installability testing: Testing
concerned with the installation procedures for the system. |
111 instrumentation: The
insertion of additional code into the program in order to collect information
about program behaviour during program execution. |
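A crude sketch of instrumentation (the component and probe names are invented for illustration): the inserted statements only record which branch outcomes were executed and are not part of the component's function.

    hits = {"then": 0, "else": 0}   # inserted probe data

    def absolute(n):
        if n < 0:
            hits["then"] += 1       # inserted probe
            return -n
        hits["else"] += 1           # inserted probe
        return n

    absolute(-4)
    absolute(7)
    print(hits)                     # {'then': 1, 'else': 1}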
112 instrumenter: A software
tool used to carry out instrumentation. |
113 integration: The
process of combining components into larger assemblies. |
114 integration testing: Testing
performed to expose faults
in the interfaces and in the interaction between integrated components. |
143 path testing: A test case design technique in which test cases are
designed to execute paths
of a component. |
144 performance testing: Testing
conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE] |
145 portability testing: Testing
aimed at demonstrating the software can be ported to specified hardware or
software platforms. |
146 precondition: Environmental
and state conditions which must be fulfilled before the component can be executed with a particular input value. |
147 predicate: A logical
statement which evaluates to TRUE or FALSE, normally to direct the execution path
in code. |
148 predicate data use: A data use in
a predicate. |
149 predicted outcome: The behaviour predicted by the specification of an object under specified conditions. |
150 program instrumenter: See instrumenter. |
151 progressive testing: Testing
of new features after regression testing of previous features. [Beizer] |
152 pseudo-random: A series
which appears to be random but is in fact generated according to some
prearranged sequence. |
153 recovery testing: Testing
aimed at verifying the system's ability to recover from varying degrees of failure.
|
154 regression testing: Retesting
of a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made. |
155 requirements-based testing: Designing
tests based on objectives derived from requirements for the software
component (e.g., tests that exercise specific functions or probe the
non-functional constraints such as performance or security). See functional test case design. |
156 result: See outcome.
|
157 review: A process
or meeting during which a work product, or set of work products, is presented
to project personnel, managers, users or other interested parties for comment
or approval. [IEEE] |
158 security testing: Testing
whether the system meets its specified security objectives. |
159 serviceability testing: See maintainability testing. |
160 simple subpath: A subpath
of the control flow graph in which no program part is executed more
than necessary. |
161 simulation: The
representation of selected behavioural characteristics of one physical or
abstract system by another system. [ISO 2382/1]. |
162 simulator: A device,
computer program or system used during software verification, which behaves or operates like a given system
when provided with a set of controlled inputs.
[IEEE,do178b] |
163 source statement: See statement. |
164 specification: A
description of a component's function in terms of its output values for specified input values under specified preconditions. |
165 specified input: An input for
which the specification predicts an outcome.
|
166 state transition: A
transition between two allowable states of a system or component. |
167 state transition testing: A test case design technique in which test cases are
designed to execute state transitions. |
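A minimal sketch (the two-state door model is invented for illustration): the allowable transitions are tabulated and each test case drives one of them.

    # (current state, event) -> next state
    transitions = {("closed", "open"): "open", ("open", "close"): "closed"}

    def next_state(state, event):
        return transitions[(state, event)]

    # Test cases designed to execute each state transition once.
    assert next_state("closed", "open") == "open"
    assert next_state("open", "close") == "closed"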
168 statement: An entity
in a programming language which is typically the smallest indivisible unit of
execution. |
169 statement coverage: The
percentage of executable statements in a component that have been exercised by a test case suite. |
170 statement testing: A test case design technique for a component in which test cases are
designed to execute statements. |
171 static analysis: Analysis
of a program carried out without executing the program. |
172 static analyzer: A tool
that carries out static analysis. |
173 static testing: Testing
of an object without execution on a computer. |
174 statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. |
175 storage testing: Testing whether
the system meets its specified storage objectives. |
176 stress testing: Testing
conducted to evaluate a system or component at or beyond the limits of its
specified requirements. [IEEE] |
177 structural coverage: Coverage
measures based on the internal structure of the component. |
178 structural test case design: Test case selection
that is based on an analysis of the internal structure of the component. |
179 structural testing: See structural test case design. |
180 structured basis testing: A test case design technique in which test cases are
derived from the code logic to achieve 100% branch coverage. |
181 structured walkthrough: See walkthrough. |
182 stub: A skeletal
or special-purpose implementation of a software module, used to develop or
test a component that calls or is otherwise dependent on it. After
[IEEE]. |
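A minimal sketch (the checkout component and payment service are invented for illustration): the stub stands in for the real service so the calling component can be tested in isolation.

    def checkout(total, payment_service):
        # Component under test: depends on an external payment service.
        return "paid" if payment_service.charge(total) else "declined"

    class PaymentServiceStub:
        # Skeletal stand-in for the real service; always approves the charge.
        def charge(self, amount):
            return True

    assert checkout(9.99, PaymentServiceStub()) == "paid"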
183 subpath: A
sequence of executable statements within a component. |
184 symbolic evaluation: See symbolic
execution. |
185 symbolic execution: A static analysis technique that derives a symbolic expression for program paths. |
186 syntax testing: A test case design technique for a component or system in which test case design
is based upon the syntax of the input.
|
187 system testing: The
process of testing an integrated system to verify that it meets
specified requirements. [Hetzel] |
188 technical requirements
testing: See non-functional requirements testing. |
189 test automation: The use
of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting
functions. |
190 test case: A set of inputs,
execution preconditions, and expected
outcomes developed for a particular objective, such as to exercise
a particular program path or
to verify compliance with a specific requirement. After [IEEE,do178b] |
191 test case design technique: A method
used to derive or select test cases.
|
192 test case suite: A
collection of one or more test cases for
the software under test. |
193 test comparator: A test
tool that compares the actual outputs
produced by the software under test with the expected outputs
for that test case. |
194 test completion criterion: A
criterion for determining when planned testing
is complete, defined in terms of a test measurement technique. |
195 test coverage: See coverage.
|
196 test driver: A program
or test tool used to execute software against a test case suite. |
197 test environment: A
description of the hardware and software environment in which the tests will
be run, and any other software with which the software under test interacts
when under test including stubs
and test drivers. |
198 test execution: The
processing of a test case suite by the software under test, producing an outcome. |
199 test execution technique: The
method used to perform the actual test execution, e.g. manual, capture/playback tool, etc. |
200 test generator: A program
that generates test cases in
accordance to a specified strategy or heuristic. |
201 test harness: A testing tool
that comprises a test driver and a test comparator. |
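A minimal sketch of a harness (the component and test values are invented for illustration): a test driver executes the software under test against a test case suite, and a comparator checks each actual outcome against the expected outcome.

    def add(a, b):   # software under test
        return a + b

    # Test case suite: (inputs, expected outcome) for each test case.
    suite = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

    def run_suite(component, suite):
        failures = []
        for inputs, expected in suite:       # test driver: executes the tests
            actual = component(*inputs)
            if actual != expected:           # test comparator: actual vs expected
                failures.append((inputs, expected, actual))
        return failures

    print(run_suite(add, suite))             # [] when every test case passes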
202 test measurement technique: A method
used to measure test coverage items. |
203 test outcome: See outcome.
|
204 test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice. |
205 test procedure: A document
providing detailed instructions for the execution of one or more test cases. |
206 test records: For each
test, an unambiguous record of the identities and versions of the component under test, the test
specification, and actual outcome. |
207 test script: Commonly
used to refer to the automated test procedure used
with a test harness. |
208 test specification: For each test case,
the coverage item, the initial state of the software under test,
the input, and the predicted outcome. |
209 test target: A set of test completion criteria. |
210 testing: The process of exercising
software to verify that it satisfies specified requirements and to detect errors.
|
211 thread testing: A
variation of top-down testing where the progressive integration of components follows the implementation of subsets of the
requirements, as opposed to the integration of components by successively lower levels. |
212 top-down testing: An approach
to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. |
213 unit testing: See component testing. |
214 usability testing: Testing
the ease with which users can learn and use a product. |
215 validation:
Determination of the correctness of the products of software development with
respect to the user needs and requirements. |
216 verification: The
process of evaluating a system or component to determine whether the products of the given
development phase satisfy the conditions imposed at the start of that phase.
[IEEE] |
217 volume testing: Testing
where the system is subjected to large volumes of data. |
218 walkthrough: A review
of requirements, designs or code characterized by the author of the object
under review guiding the progression of the review.
|