Tuesday, August 2, 2011

ISTQB GLOSSARY

Chapter 1


1.1 Bug, defect, error, failure, fault, mistake, quality, risk


Bug (defect): A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect, if
encountered during execution, may cause a failure of the component or system.


Error (mistake): A human action that produces an incorrect result.

Failure: Deviation of the component or system from its expected delivery, service or result.

Fault: See defect.

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.



1.2 Debugging, requirement, review, test case, testing, test objective.


Debugging: The process of finding, analyzing and removing the causes of failures in
software.


Requirement: A condition or capability needed by a user to solve a problem or achieve an
objective that must be met or possessed by a system or system component to satisfy a
contract, standard, specification, or other formally imposed document. [After IEEE 610]


Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

Test case: A set of input values, execution preconditions, expected results and execution
postconditions, developed for a particular objective or test condition, such as to exercise a
particular program path or to verify compliance with a specific requirement. [After IEEE
610]
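As a rough illustration (the `divide` function and the field names are hypothetical, not part of the ISTQB or IEEE definitions), the parts of a test case can be written out directly in Python:

```python
# A hypothetical test case expressed as data: input values,
# preconditions, expected result and postconditions.
test_case = {
    "objective": "verify division returns the exact quotient",
    "preconditions": ["inputs are numeric", "divisor is non-zero"],
    "inputs": {"a": 10, "b": 4},
    "expected_result": 2.5,
    "postconditions": ["no error is raised"],
}

# The (illustrative) component under test.
def divide(a, b):
    return a / b

# Executing the test case: compare the actual to the expected result.
actual = divide(**test_case["inputs"])
assert actual == test_case["expected_result"]
```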

Testing: The process consisting of all life cycle activities, both static and dynamic, concerned
with planning, preparation and evaluation of software products and related work products
to determine that they satisfy specified requirements, to demonstrate that they are fit for
purpose and to detect defects.


Test objective: A reason or purpose for designing and executing a test.


1.3 Exhaustive testing

Exhaustive testing: A test approach in which the test suite comprises all combinations of
input values and preconditions.
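A quick back-of-the-envelope calculation shows why this approach is impractical for all but trivial inputs. Assuming, for the sake of illustration, a form with just three 32-bit integer fields:

```python
# Number of input combinations for three 32-bit integer fields;
# preconditions would multiply this number further.
fields = 3
bits_per_field = 32
combinations = 2 ** (bits_per_field * fields)
print(combinations)  # 79228162514264337593543950336 (about 7.9e28)
```

Even at a billion test executions per second, running them all would take far longer than the age of the universe, which is why risk and priorities are used to focus testing effort instead.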


1.4 Confirmation testing, retesting, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test strategy, test suite, test summary report, testware.


Confirmation testing (retesting): Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]

Incident: Any event occurring that requires investigation. [After IEEE 1008]

Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Test basis: All documents from which the requirements of a component or system can be
inferred. The documentation on which the test cases are based. If a document can be
amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.



Test coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

Test data: Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.

Test execution: The process of running a test on the component or system under test,
producing actual result(s).


Test log: A chronological record of relevant details about the execution of tests. [IEEE 829]

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
[After IEEE 829]

Test procedure specification (test procedure): A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

Test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite: A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.

Test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up
procedures, files, databases, environment, and any additional software or utilities used in
testing. [After Fewster and Graham]





1.5: Error guessing, independence.

Error guessing: A test design technique where the experience of the tester is used to
anticipate what defects might be present in the component or system under test as a result
of errors made, and to design tests specifically to expose them.

Independence of testing: Separation of responsibilities, which encourages the
accomplishment of objective testing. [After DO-178b]


Chapter 2

2.1: Commercial off-the-shelf (COTS), iterative-incremental development model, validation, verification, V-model.


COTS: Acronym for Commercial Off-The-Shelf software. See off-the-shelf software.

Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.


Iterative-incremental development model: A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.


Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

V-model: A framework to describe the software development life cycle activities from
requirements specification to maintenance. The V-model illustrates how testing activities
can be integrated into each phase of the software development life cycle.



2.2: Alpha testing, beta testing, component testing (also known as unit, module or program testing), driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test level, test-driven development, test environment, user acceptance testing.



Alpha testing: Simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site, but outside the development organization.
Alpha testing is often employed for off-the-shelf software as a form of internal acceptance
testing.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.



Component testing (also known as unit, module or program testing):

The testing of individual software components. [After IEEE 610]


Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

Field testing: See beta testing.

Functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

Integration: The process of combining components or systems into larger assemblies.

Integration testing: Testing performed to expose defects in the interfaces and in the
interactions between integrated components or systems. See also component integration
testing, system integration testing.

Non-functional requirement: A requirement that does not relate to functionality, but to
attributes such as reliability, efficiency, usability, maintainability and portability.


Robustness testing: Testing to determine the robustness of the software product.

Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called
component. [After IEEE 610]
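A minimal sketch of how a stub and a driver fit together (the `checkout` function and `GatewayStub` class are invented for illustration): the stub stands in for a component that the code under test calls, while the driver is the test code that does the calling.

```python
# Component under test: depends on a payment gateway that it calls.
def checkout(total, gateway):
    """Charges the total via the gateway and returns a receipt status."""
    if gateway.charge(total):
        return "paid"
    return "declined"

# Stub: a skeletal, special-purpose replacement for the called component.
class GatewayStub:
    def __init__(self, will_succeed):
        self.will_succeed = will_succeed

    def charge(self, amount):
        # No real charging happens; the stub just returns a canned answer.
        return self.will_succeed

# Driver: test code that controls and calls the component under test.
def run_checkout_tests():
    assert checkout(9.99, GatewayStub(True)) == "paid"
    assert checkout(9.99, GatewayStub(False)) == "declined"
    return "ok"

print(run_checkout_tests())  # ok
```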

System testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

Test-driven development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
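A small sketch of the test-first rhythm (the `slugify` function is a made-up example): the test is written before the implementation, fails at first, and then just enough code is written to make it pass.

```python
# Step 1: write the test first. Running it now fails,
# because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  ISTQB  Glossary ") == "istqb-glossary"

# Step 2: write just enough code to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # now passes; the next cycle adds another failing test
```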

Test environment: An environment containing hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test. [After IEEE 610]

User acceptance testing: See acceptance testing.

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]


2.3: Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification-based testing, stress testing, structural testing, usability testing, white-box testing.



Black-box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.

Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
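A tiny illustration of the difference between the coverage kinds (the `classify` function is invented for this purpose): a single test can execute some statements while leaving others, and one decision outcome, untouched.

```python
def classify(n):
    # One decision with two outcomes; three statements in total.
    if n < 0:
        return "negative"
    return "non-negative"

# classify(-1) alone leaves the last statement unexecuted:
# 2 of 3 statements covered, and only the True outcome of the decision.
assert classify(-1) == "negative"

# Adding classify(3) reaches 100% statement and decision coverage.
assert classify(3) == "non-negative"
```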

Functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

Interoperability testing: The process of testing to determine the interoperability of a
software product. See also functionality testing.

Functionality testing: The process of testing to determine the functionality of a software
product.

Load testing: A type of performance testing conducted to evaluate the behavior of a
component or system with increasing load, e.g. numbers of parallel users and/or numbers
of transactions, to determine what load can be handled by the component or system. See
also performance testing, stress testing.

Performance testing: The process of testing to determine the performance of a software
product. See also efficiency testing.
Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing, load testing.

Maintainability testing: The process of testing to determine the maintainability of a software product.

Portability testing: The process of testing to determine the portability of a software product.

Reliability testing: The process of testing to determine the reliability of a software product.

Security: Attributes of software products that bear on their ability to prevent unauthorized
access, whether accidental or deliberate, to programs and data. [ISO 9126] See also
functionality.

Security testing: Testing to determine the security of the software product. See also
functionality testing.


Specification-based testing: See black box testing.

Structural testing: See white box testing.

Usability testing: Testing to determine the extent to which the software product is
understood, easy to learn, easy to operate and attractive to the users under specified
conditions. [After ISO 9126]

White-box testing: Testing based on an analysis of the internal structure of the component or system.


2.4: Impact analysis, maintenance testing.

Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified
requirements.

Maintenance testing: Testing the changes to an operational system or the impact of a
changed environment to an operational system.







Chapter 3

3.1: Dynamic testing, static testing, static technique.

Dynamic testing: Testing that involves the execution of the software of a component or
system.

Static testing: Testing of a component or system at specification or implementation level
without execution of that software, e.g. reviews or static code analysis.


3.2: Entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.


Entry criteria: The set of generic and specific conditions for permitting a process to go
forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a
task from starting which would entail more (wasted) effort compared to the effort needed
to remove the failed entry criteria. [Gilb and Graham]

Formal review: A review characterized by documented procedures and requirements, e.g.
inspection.

Informal review: A review not based on a formal (documented) procedure.

Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a
documented procedure. [After IEEE 610, IEEE 1028] See also peer review.

Metric: A measurement scale and the method used for measurement. [ISO 14598]

Moderator/inspection leader: The leader and main person responsible for an inspection or other review process.

Peer review: A review of a software work product by colleagues of the producer of the
product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

Scribe: The person who records each defect mentioned and any suggestions for process
improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.


Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review.

Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review.

3.3 Compiler, complexity, control flow, data flow, static analysis


Compiler: A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]

Complexity: The degree to which a component or system has a design and/or internal
structure that is difficult to understand, maintain and verify. See also cyclomatic
complexity.

Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
[After McCabe]
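A worked example (the function below is invented for illustration): one if and one while give a control flow graph with 5 nodes, 6 edges and 1 connected part, so the formula yields 6 - 5 + 2 = 3, which for structured code matches the rule of thumb "number of decisions + 1".

```python
def f(x):
    if x < 0:        # decision 1
        x = -x
    while x > 10:    # decision 2
        x -= 10
    return x

# Control flow graph of f: N = 5 nodes (if-test, negate, while-test,
# loop body, return), L = 6 edges, P = 1 connected part.
L, N, P = 6, 5, 1
complexity = L - N + 2 * P
assert complexity == 3  # two decisions + 1
```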

Data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

Static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
