The field of software testing uses an international jargon, in which the International Software Testing Qualifications Board (ISTQB) plays a role in maintaining a consistent interpretation of its terms and concepts. We have built a searchable mechanism that lets you find not only the terms themselves but also search within their definitions.
If you find a term or definition missing, please let us know.
Standard Glossary of Terms used in Software Testing
There are 31 terms in this list beginning with the letter E.
efficiency
The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]
efficiency testing
The process of testing to determine the efficiency of a software product.
EFQM (European Foundation for Quality Management) excellence model
A non-prescriptive framework for an organisation’s quality management system, defined and owned by the European Foundation for Quality Management, based on five ‘Enabling’ criteria (covering what an organisation does), and four ‘Results’ criteria (covering what an organisation achieves).
elementary comparison testing
A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]
emotional intelligence
The ability, capacity, and skill to identify, assess, and manage the emotions of one’s self, of others, and of groups.
emulator
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.
entry criteria
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]
entry point
An executable statement or process step which defines a point at which a given process is intended to begin.
equivalence class
See equivalence partition.
equivalence partition
A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
equivalence partition coverage
The percentage of equivalence partitions that have been exercised by a test suite.
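As an illustration (the partition names below are hypothetical, not part of the definition), the percentage can be computed as the share of identified partitions that the test suite has exercised:

```python
# Hypothetical illustration of equivalence partition coverage: the
# percentage of identified partitions that a test suite has exercised.
identified_partitions = {"negative", "zero", "small", "large"}
exercised_partitions = {"negative", "small", "large"}

coverage = 100 * len(exercised_partitions & identified_partitions) / len(identified_partitions)
print(f"{coverage:.0f}% equivalence partition coverage")  # 75%
```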
equivalence partitioning
A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
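A minimal sketch of the technique (the function and boundary values are hypothetical, invented for illustration): the input domain is split into partitions assumed to behave uniformly, and one representative per partition is tested.

```python
# Hypothetical function under test: classifies an age into a category
# and rejects negative values.
def classify_age(age):
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# One representative per equivalence partition: each partition is assumed
# to behave the same, so a single test case per partition covers it.
representatives = {
    -1: ValueError,   # invalid partition: negative ages
    10: "minor",      # valid partition: 0-17
    30: "adult",      # valid partition: 18-64
    80: "senior",     # valid partition: 65 and up
}

for age, expected in representatives.items():
    if expected is ValueError:
        try:
            classify_age(age)
            raise AssertionError("expected ValueError")
        except ValueError:
            pass
    else:
        assert classify_age(age) == expected
print("all partitions exercised")
```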
error
A human action that produces an incorrect result. [After IEEE 610]
error guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
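A possible sketch of error guessing in practice (the parser and the guessed inputs are hypothetical): the tester probes the kinds of inputs that experience says commonly trigger defects, rather than deriving cases from the specification.

```python
# Hypothetical function under test: parses a quantity from user input.
def parse_quantity(text):
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Inputs anticipated from typical past errors: empty input, whitespace,
# negative numbers, non-integers, leading zeros, surrounding spaces.
guessed_inputs = ["", "  ", "-1", "3.5", "007", " 42 "]

for text in guessed_inputs:
    try:
        result = parse_quantity(text)
        print(f"{text!r} -> {result}")
    except ValueError as exc:
        print(f"{text!r} -> rejected ({exc})")
```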
error seeding
See fault seeding.
error seeding tool
See fault seeding tool.
error tolerance
The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]
establishing (IDEAL)
The phase within the IDEAL model where the specifics of how an organization will reach its destination are planned. The establishing phase consists of the activities: set priorities, develop approach and plan actions. See also IDEAL.
exception handling
Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
executable statement
A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
exercised
A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
exhaustive testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
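A small sketch (with made-up example domains) of why exhaustive testing is usually impractical: the suite grows as the product of all input domain sizes.

```python
# Sketch contrasting exhaustive testing with the combinatorial growth
# that usually makes it infeasible (example values are hypothetical).
from itertools import product

# Three input fields, each with a handful of possible values.
browsers = ["Firefox", "Chrome", "Safari"]
locales = ["en", "nl", "de", "fr"]
flags = [True, False]

# An exhaustive test suite covers every combination of input values.
exhaustive_suite = list(product(browsers, locales, flags))
print(len(exhaustive_suite))  # 3 * 4 * 2 = 24 combinations

# With realistic domains (say, ten fields of ten values each), the suite
# would need 10**10 cases, which is why exhaustive testing is rarely done.
```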
exit criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]
exit point
An executable statement or process step which defines a point at which a given process is intended to cease.
expected outcome
See expected result.
expected result
The behavior predicted by the specification, or another source, of the component or system under specified conditions.
experience-based technique
See experience-based test design technique.
experience-based test design technique
Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
exploratory testing
An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
extreme programming (XP)
A software engineering methodology used within agile software development whereby core practices are programming in pairs, doing extensive code review, unit testing of all code, and simplicity and clarity in code. See also agile software development.