The field of software testing uses an international vocabulary, and the International Software Testing Qualifications Board (ISTQB) plays a role in maintaining a consistent explanation of its terms and concepts. We have built a searchable mechanism that lets you find not only the terms themselves but also search their definitions.
If a term or definition is missing, please let us know.
Standard Glossary of Terms used in Software Testing
There are 48 terms in this list beginning with the letter P.
pass
A test is deemed to pass if its actual result matches its expected result.
path
A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
pointer
A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed. [IEEE 610]
process
A set of interrelated activities, which transform inputs into outputs. [ISO 12207]
project
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
priority
The level of (business) importance assigned to an item, e.g. defect.
pair programming
A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
pair testing
Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
pairwise testing
A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing.
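To illustrate pair coverage, the following sketch uses hypothetical parameters and a naive greedy selection (a real pairwise tool would typically use an orthogonal-array or optimized all-pairs construction). It enumerates every pair of parameter values and picks full combinations until each pair appears in at least one test case:

```python
from itertools import combinations, product

# Hypothetical test parameters, for illustration only.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Linux", "Windows", "macOS"],
    "locale": ["en", "nl"],
}
names = list(params)

# Every discrete combination of each pair of input parameters.
required_pairs = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        required_pairs.add(((a, va), (b, vb)))

# Greedily keep full combinations that still cover an uncovered pair.
tests, uncovered = [], set(required_pairs)
for candidate in product(*params.values()):
    case = dict(zip(names, candidate))
    covered = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
    if covered & uncovered:
        tests.append(case)
        uncovered -= covered
    if not uncovered:
        break

print(len(tests), "test cases cover all", len(required_pairs), "pairs")
```

The point of the technique is that far fewer test cases than the full cartesian product (here 12 combinations) suffice to cover every pair of values.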
Pareto analysis
A statistical technique in decision making that is used for selection of a limited number of factors that produce significant overall effect. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%).
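A minimal sketch of the idea, using invented defect counts: tally defects per cause, sort descending, and keep the "vital few" causes that account for roughly 80% of all defects.

```python
from collections import Counter

# Hypothetical defect log: each entry names the module blamed for the defect.
defects = (["ui"] * 40 + ["parser"] * 25 + ["db"] * 15
           + ["auth"] * 10 + ["logging"] * 6 + ["docs"] * 4)

counts = Counter(defects).most_common()      # causes, largest first
total = sum(n for _, n in counts)

cumulative, vital_few = 0, []
for cause, n in counts:
    cumulative += n
    vital_few.append(cause)
    if cumulative / total >= 0.80:           # the 80% cut-off
        break

print(vital_few)  # -> ['ui', 'parser', 'db']
```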
partition testing
See equivalence partitioning. [Beizer]
pass/fail criteria
Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
path coverage
The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
path sensitizing
Choosing a set of input values to force the execution of a given path.
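A small sketch of the idea on an invented function with two decisions (hence four entry-to-exit paths): for each path, one input pair is chosen that forces execution down exactly that path.

```python
def classify(x, y):
    # Two independent decisions -> four paths from entry to exit.
    if x > 0:
        r = "pos"
    else:
        r = "nonpos"
    if y % 2 == 0:
        r += "-even"
    else:
        r += "-odd"
    return r

# Path sensitizing: inputs selected to force each of the four paths.
path_inputs = {(1, 2): "pos-even", (1, 3): "pos-odd",
               (0, 2): "nonpos-even", (0, 3): "nonpos-odd"}
for (x, y), expected in path_inputs.items():
    assert classify(x, y) == expected
```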
path testing
A white box test design technique in which test cases are designed to execute paths.
peer review
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
performance
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.
performance indicator
A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]
performance profiling
Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.
performance testing
The process of testing to determine the performance of a software product. See also efficiency testing.
performance testing tool
A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
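The two facilities named above can be sketched in a few lines. This is a toy stand-in, not a real tool: the transaction is simulated with a sleep, load generation uses a thread pool as the "multiple users", and response times are measured per transaction and summarized.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Placeholder for the system under test; a real tool would send
    # an HTTP request or database query here (assumption).
    time.sleep(0.01)

def timed():
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start   # response time measurement

# Load generation: 20 concurrent simulated users running 100 transactions.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(lambda _: timed(), range(100)))

timings.sort()
print(f"median={timings[50]*1000:.1f}ms  p95={timings[94]*1000:.1f}ms")
```

A real performance testing tool would additionally log each measurement and report graphs of load against response time, as the definition notes.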
phase test plan
A test plan that typically addresses one test phase. See also test plan.
portability
The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
portability testing
The process of testing to determine the portability of a software product.
post-execution comparison
Comparison of actual and expected results, performed after the software has finished running.
post-project meeting
See retrospective meeting.
postcondition
Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
precondition
Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
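Preconditions and postconditions appear in practice as test setup and teardown. A minimal sketch with an invented file-store scenario: the precondition is an empty working directory before execution, the postcondition is that the environment is restored afterwards.

```python
import os
import tempfile

def setup_precondition():
    """Precondition: an empty working directory exists before the test runs."""
    return tempfile.mkdtemp()

def teardown_postcondition(workdir):
    """Postcondition: the environment is returned to a known state."""
    for name in os.listdir(workdir):
        os.remove(os.path.join(workdir, name))
    os.rmdir(workdir)

# Test procedure: fulfil the precondition, execute, check the postcondition.
workdir = setup_precondition()
try:
    path = os.path.join(workdir, "record.txt")
    with open(path, "w") as f:
        f.write("data")
    assert os.path.exists(path)          # the actual test check
finally:
    teardown_postcondition(workdir)

assert not os.path.exists(workdir)       # postcondition fulfilled
```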
predicted outcome
See expected result.
probe effect
The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
problem management
See defect management.
problem report
See defect report.
procedure testing
Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.
process assessment
A disciplined evaluation of an organization’s software processes against a reference model. [after ISO 15504]
process cycle test
A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap] See also procedure testing.
process improvement
A program of activities designed to improve the performance and maturity of the organization’s processes, and the result of such a program. [CMMI]
process model
A framework wherein processes of the same nature are classified into an overall model, e.g. a test improvement model.
product risk
A risk directly related to the test object. See also risk.
product-based quality
A view of quality, wherein quality is based on a well-defined set of quality attributes. These attributes must be measured in an objective and quantitative way. Differences in the quality of products of the same type can be traced back to the way the specific quality attributes have been implemented. [After Garvin] See also manufacturing-based quality, quality attribute, transcendent-based quality, user-based quality, value-based quality.
production acceptance testing
See operational acceptance testing.
program testing
See component testing.
project retrospective
A structured way to capture lessons learned and to create specific action plans for improving on the next project or next project phase.
project risk
A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk.
project test plan
See master test plan.
pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence.
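In testing, pseudo-random sequences matter because a seeded generator replays the same "random" sequence every run, making randomized tests reproducible. A minimal sketch:

```python
import random

# Same seed -> same prearranged sequence, though it looks random.
rng = random.Random(42)
first = [rng.randint(0, 99) for _ in range(5)]

rng = random.Random(42)   # reseed with the identical seed
second = [rng.randint(0, 99) for _ in range(5)]

assert first == second    # the sequence is fully reproducible
print(first)
```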