29 July 2021

Technical jargon glossary

The field of software testing uses an international jargon, and the International Software Testing Qualifications Board (ISTQB) plays a role in maintaining a consistent explanation of its terms and concepts. We have built a searchable tool that lets you not only look up the terms themselves but also search their definitions.

If you find a term or definition missing, please let us know.

Standard Glossary of Terms used in Software Testing

There are 48 terms in this list beginning with the letter P.
P
pass
A test is deemed to pass if its actual result matches its expected result.
path
A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
pointer
A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed. [IEEE 610]
pretest
See intake test.
problem
See defect.
process
A set of interrelated activities, which transform inputs into outputs. [ISO 12207]
project
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
priority
The level of (business) importance assigned to an item, e.g. defect.
pair programming
A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
pair testing
Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
pairwise testing
A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing.
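To illustrate, a minimal Python sketch of the idea behind pairwise testing; the parameters and their values are invented for illustration. It enumerates every discrete pair of values across each pair of parameters, then checks that a hand-picked set of six test cases covers all of them, where exhaustive testing would need 3 × 2 × 2 = 12 cases.

```python
from itertools import combinations, product

# Hypothetical parameters of a print dialog (values are invented).
params = {
    "os": ["windows", "macos", "linux"],
    "browser": ["chrome", "firefox"],
    "paper": ["a4", "letter"],
}

def required_pairs(params):
    """All discrete value pairs across every pair of input parameters."""
    pairs = set()
    for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
        for v1, v2 in product(v1s, v2s):
            pairs.add(((p1, v1), (p2, v2)))
    return pairs

def covered_pairs(tests, params):
    """Value pairs actually exercised by a given set of test cases."""
    names = list(params)
    covered = set()
    for test in tests:
        for p1, p2 in combinations(names, 2):
            covered.add(((p1, test[p1]), (p2, test[p2])))
    return covered

# Six hand-picked test cases instead of the 12 exhaustive combinations.
tests = [
    {"os": "windows", "browser": "chrome",  "paper": "a4"},
    {"os": "windows", "browser": "firefox", "paper": "letter"},
    {"os": "macos",   "browser": "chrome",  "paper": "letter"},
    {"os": "macos",   "browser": "firefox", "paper": "a4"},
    {"os": "linux",   "browser": "chrome",  "paper": "a4"},
    {"os": "linux",   "browser": "firefox", "paper": "letter"},
]
# Every required pair is covered by at least one test case.
assert required_pairs(params) <= covered_pairs(tests, params)
```

Real pairwise tools use covering-array algorithms to pick such a set automatically; the point here is only the coverage criterion itself.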
Pareto analysis
A statistical technique in decision making that is used for selection of a limited number of factors that produce a significant overall effect. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%).
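A minimal Python sketch of the selection step; the defect counts per root cause are invented for illustration. It sorts causes by defect count and keeps the smallest set that accounts for at least 80% of all defects.

```python
# Hypothetical defect counts per root cause (numbers are invented).
defects_by_cause = {
    "requirements": 120,
    "coding": 45,
    "config": 15,
    "docs": 10,
    "hardware": 7,
    "other": 3,
}

def pareto_causes(counts, threshold=0.8):
    """Smallest set of causes accounting for `threshold` of all defects."""
    total = sum(counts.values())
    selected, running = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(cause)
        running += n
        if running / total >= threshold:
            break
    return selected

# Two of the six causes already produce over 80% of the defects.
print(pareto_causes(defects_by_cause))
```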
partition testing
See equivalence partitioning. [Beizer]
pass/fail criteria
Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
path coverage
The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
path sensitizing
Choosing a set of input values to force the execution of a given path.
path testing
A white box test design technique in which test cases are designed to execute paths.
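A minimal Python sketch of path testing and path sensitizing on an invented two-decision function: two independent decisions give four paths, and each test case below uses inputs chosen to force exactly one of them.

```python
def classify(n):
    """Invented example component with two decisions, hence four paths."""
    if n < 0:          # decision 1
        sign = "neg"
    else:
        sign = "non-neg"
    if n % 2 == 0:     # decision 2
        parity = "even"
    else:
        parity = "odd"
    return f"{sign}/{parity}"

# Path sensitizing: one input per path, each forcing a distinct route.
assert classify(-2) == "neg/even"
assert classify(-1) == "neg/odd"
assert classify(2) == "non-neg/even"
assert classify(1) == "non-neg/odd"
```

With these four cases the test suite reaches 100% path coverage of this component.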
peer review
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
performance
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.
performance indicator
A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]
performance profiling
Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.
performance testing
The process of testing to determine the performance of a software product. See also efficiency testing.
performance testing tool
A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
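A minimal Python sketch of the measurement half of such a tool: it runs a transaction repeatedly and logs a response time for each run. The transaction here is a stand-in; real tools also generate load from many simulated users in parallel.

```python
import time

def measure(transaction, runs=5):
    """Log response times for repeated runs of a transaction."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)
    return timings

# Invented stand-in transaction; a real tool would fire e.g. an HTTP request.
timings = measure(lambda: sum(range(10_000)))
assert len(timings) == 5 and all(t >= 0 for t in timings)
```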
phase test plan
A test plan that typically addresses one test phase. See also test plan.
portability
The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
portability testing
The process of testing to determine the portability of a software product.
post-execution comparison
Comparison of actual and expected results, performed after the software has finished running.
post-project meeting
See retrospective meeting.
postcondition
Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
precondition
Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
predicted outcome
See expected result.
probe effect
The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
problem management
See defect management.
problem report
See defect report.
procedure testing
Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.
process assessment
A disciplined evaluation of an organization’s software processes against a reference model. [after ISO 15504]
process cycle test
A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap] See also procedure testing.
process improvement
A program of activities designed to improve the performance and maturity of the organization’s processes, and the result of such a program. [CMMI]
process model
A framework wherein processes of the same nature are classified into an overall model, e.g. a test improvement model.
product risk
A risk directly related to the test object. See also risk.
product-based quality
A view of quality, wherein quality is based on a well-defined set of quality attributes. These attributes must be measured in an objective and quantitative way. Differences in the quality of products of the same type can be traced back to the way the specific quality attributes have been implemented. [After Garvin] See also manufacturing-based quality, quality attribute, transcendent-based quality, user-based quality, value-based quality.
production acceptance testing
See operational acceptance testing.
program instrumenter
See instrumenter.
program testing
See component testing.
project retrospective
A structured way to capture lessons learned and to create specific action plans for improving on the next project or next project phase.
project risk
A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk.
project test plan
See master test plan.
pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence.