ISTQB Foundation Learning Objectives


Learning Objectives for Fundamentals of Testing:

Summary Points

1.1 What is Testing?

FL-1.1.1 (K1) Identify typical objectives of testing

For any given project, the objectives of testing may include:
- To prevent defects by evaluating work products such as requirements, user stories, design, and code
- To verify whether the test object is complete and validate if it works as the users and other stakeholders expect
- To build confidence in the level of quality of the test object
- To find defects and failures, and thus reduce the level of risk of inadequate software quality
- To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the quality of their test object
- To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object's compliance with such requirements or standards

During component testing, an objective may be to find as many failures as possible so that the underlying defects are identified and fixed early.
Another objective may be to increase the code coverage of the component tests.

During acceptance testing, confirm that the system works as expected and satisfies requirements.
Give information to stakeholders about the risk of releasing the system at a given time.

FL-1.1.2 (K2) Differentiate testing from debugging

Executing tests can show failures that are caused by defects in the software
Debugging is the development activity that finds, analyzes, and fixes such defects.

Testers are responsible for initial and confirmation tests.
Developers do the debugging and the associated component and component integration testing.

1.2 Why is Testing Necessary?

FL-1.2.1 (K2) Give examples of why testing is necessary

Using appropriate test techniques, applied with the appropriate expertise, at the appropriate test levels, and at the appropriate points in the SDLC, can reduce the frequency of problematic deliveries.

- Having testers involved in requirements reviews or user story refinement can detect defects in these work products, reducing the risk of incorrect or untestable features being developed.
- Having testers work closely with system designers increases each party's understanding of the design, reduces the risk of fundamental design defects, and enables tests to be identified at an early stage.
- Having testers work with developers while code is under development increases understanding of the code and how to test it. This reduces the risk of defects within the code and the tests.
- Having testers verify and validate software prior to release detects failures that might otherwise have been missed, and supports the process of removing the defects that caused the failures. This increases the likelihood that software meets stakeholder needs and satisfies requirements.

FL-1.2.2 (K2) Describe the relationship between testing and quality assurance and give examples of how testing contributes to higher quality

Quality Management includes all activities that direct and control an organization with regard to quality.
Quality Assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved.
Quality Control involves various activities, including test activities, that support the achievement of appropriate levels of quality.
Testing contributes to the achievement of quality in a variety of ways.

FL-1.2.3 (K2) Distinguish between error, defect, and failure

A person can make an ERROR (MISTAKE)
This can lead to the introduction of a DEFECT (FAULT, BUG)
If a defect in the code is executed, this may cause a FAILURE.

Errors may occur due to time pressure, human fallibility, insufficient skills, miscommunication, or misunderstandings.
Failures can be caused by defects or by environmental conditions (e.g. hardware or infrastructure problems)
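
As a minimal illustration (a hypothetical example, not taken from the syllabus), the sketch below shows how a programmer's error introduces a defect into the code, and how that defect only produces a failure once the code is executed:

# Hypothetical illustration of error -> defect -> failure.
def average(values):
    # ERROR: the programmer mistakenly divides by (len(values) + 1),
    # which introduces a DEFECT (fault, bug) into the code.
    return sum(values) / (len(values) + 1)

# The defect only leads to a FAILURE when this code is executed and the
# actual result deviates from the expected result.
expected, actual = 4.0, average([2, 4, 6])
print(f"expected={expected}, actual={actual}, failure={expected != actual}")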

False Positives are reported as defects but are not defects; False Negatives are tests that do not detect defects they should have detected

FL-1.2.4 (K2) Distinguish between the root cause of a defect and its effects

ROOT CAUSES of defects are the earliest actions or conditions that contributed to creating the defects.
Root Cause Analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

Effects are the business outcomes (e.g. customer complaints) resulting from the defects and their root causes.

1.3 Seven Testing Principles

FL-1.3.1 (K2) Explain the seven testing principles

1. Testing shows the presence of defects, not their absence
2. Exhaustive testing is impossible
3. Early testing saves time and money
4. Defects cluster together
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence of errors is a fallacy

1.4 Test Process

FL-1.4.1 (K2) Explain the impact of context on the test process

Contextual factors that influence the test process for an organization include, but are not limited to:
- Software dev lifecycle model and project methodologies used
- Test levels and test types being considered
- Product and project risks
- Business domain
- Operational constraints (budgets, time, complexity, contractual and regulatory requirements)
- Organizational policies and practices
- Required internal and external standards

Test Basis (i.e. requirements, supported devices) should have measurable coverage criteria to support KPIs and demonstrate achievement of software test objectives.

FL-1.4.2 (K2) Describe the test activities and respective tasks within the test process

Test Planning involves the activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context.
Test Monitoring involves the on-going comparison of actual progress against planned progress.
Test Control involves taking actions necessary to meet the objectives of the test plan. Test monitoring and control are supported by the evaluation of exit criteria (a.k.a. Definition of Done).
Test Analysis is when the Test Basis is analyzed to identify testable features and define associated test conditions or "What To Test"
Test Analysis defect types include ambiguities, omissions, inconsistencies, inaccuracies, contradictions, superfluous statements
Test Design elaborates the Test Conditions into high-level Test Cases or "How To Test"
Test Design includes designing and prioritizing test cases; identifying test data for test conditions and test cases; designing the test environment and specifying infrastructure and tools; and capturing bi-directional traceability between test basis, test conditions, and test cases.
Test Implementation is for the testware to be created or completed, including sequencing test cases into test procedures.
Test Implementation includes test procedures, automated test scripts, test suites, the test environment with harnesses and simulators; preparing test data; and bi-directional traceability including test procedures and test suites.
Test Execution is where tests are run in accordance with their test execution schedule
Test Execution activities include recording test items and test objects, executing tests, comparing actual with expected results, analyzing anomalies, reporting defects, logging outcomes, and validating bi-directional traceability.
Test Completion activities collect data from completed test activities to consolidate experience, testware, and relevant information.
Test Completion activities occur at project milestones.
Test completion activities include closing defect reports, entering change requests, creating a test summary report, finalizing and archiving test assets, handing testware over to maintenance teams, analyzing lessons learned, and improving test process maturity.

FL-1.4.3 (K2) Differentiate the work products that support the test process

Test Planning Work Products typically include one or more test plans and the Test Basis
Test Monitoring and Control Work Products typically include various types of test reports (i.e. progress reports)
Test Analysis Work Products include defined and prioritized test conditions, and defects in the Test Basis
Test Design Work Products include high-level test cases, identification of necessary test data, design of test environment, and identification of infrastructure and tools.
Test Implementation Work Products include Test Procedures, Test Suites, and a Test Execution schedule
Test Execution Work Products include documentation of the status of individual test cases or test procedures, defect reports, and documentation about which test items, test objects, test tools, and testware were involved in the testing.
Test Completion Work Products include Test Summary Reports, action items for improvement, change requests, product backlog items, and finalized testware.

FL-1.4.4 (K2) Explain the value of maintaining traceability between the test basis and test work products

Good traceability supports:
- Analyzing the impact of changes
- Making testing auditable
- Meeting IT governance criteria
- Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis
- Relating technical aspects of testing to stakeholders in terms that they can understand
- Providing information to assess product quality, process capability, and project progress against business goals

1.5 The Psychology of Testing

FL-1.5.1 (K1) Identify the psychological factors that influence the success of testing

Psychological factors: criticism of product and author, confirmation bias, other cognitive biases, blaming the bearer of bad news
- Start with collaboration rather than battles
- Emphasize the benefits of testing
- Communicate test results and findings in a neutral, fact-based way without criticizing the person
- Try to understand how the other person feels
- Confirm that the other person has understood what has been said and vice versa

Most people tend to align their plans and behaviors with the objectives set by the team, management, and other stakeholders, with minimal personal bias.

FL-1.5.2 (K2) Explain the difference between the mindset required for test activities and the mindset required for development activities

The primary objective of development is to design and build a product.
The objectives of testing include verifying and validating the product, and finding defects prior to release.
A tester's mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and motivation for good and positive communications and relationships.
A developer's mindset may include elements of a tester's mindset, but developers are typically more interested in designing and building solutions.
Confirmation bias makes it difficult to become aware of errors.
Independent testers increase defect detection effectiveness, and bring a perspective different from work product authors due to different cognitive biases.

Learning Objectives for Testing Throughout the Software Development Lifecycle

2.1 Software Development Lifecycle Models

FL-2.1.1 (K2) Explain the relationships between software development activities and test activities in the software development lifecycle

There are several characteristics of good testing:
- For every development activity, there is a corresponding test activity
- Each test level has test objectives specific to that level
- Test analysis and design for a given test level begin during the corresponding development activity
- Testers participate in discussions and review work products as soon as drafts are available

Sequential: linear, sequential flow of activities where any phase begins only when the previous phase is completed, with no overlap
Incremental: Development in pieces, growing incrementally
Iterative: Groups of features are specified, designed, built, and tested in a series of cycles, often of a fixed duration (Rational Unified Process, Scrum, Kanban, Spiral)

FL-2.1.2 (K1) Identify reasons why software development lifecycle models must be adapted to the context of project and product characteristics

Software lifecycle models should be selected and adapted to the context of project and product characteristics, and based on the project goal, the product developed, business priorities, and product and project risks.
Software development lifecycle models may be combined (e.g. a V-model for the system, Agile for the front-end).
Reasons why SDLC models should be adapted to the context include:
- Difference in product risks (complex or simple product)
- Part of a project or program (sequential and agile development)
- Short time to deliver a product to market

2.2 Test Levels

FL-2.2.1 (K2) Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities

Test levels are test activities that are organized and managed together:
- Component Testing
- Integration Testing
- System Testing
- Acceptance Testing

Test levels are characterized by the following attributes
- Specific objectives
- Test basis, referenced to derive test cases
- Test object (what is being tested)
- Typical defects and failures
- Specific approaches and responsibilities

2.3 Test Types

FL-2.3.1 (K2) Compare functional, non-functional, and white-box testing

Functional testing evaluates functions that the system should perform, as described in requirements, specifications, and user stories.
Functional testing is performed at all test levels, using black-box techniques to derive test conditions and measure functional coverage; it may require specialized skills and knowledge.

FL-2.3.2 (K1) Recognize that functional, non-functional, and white-box tests occur at any test level

Non-functional testing evaluates characteristics such as usability, efficiency, and security, to show how well a system behaves (per ISO/IEC 25010).
White-box testing derives tests from the internal structure or implementation, including code, architecture, workflows, and data flows.
Coverage measures such as code coverage (percentage of code exercised) and coverage of interfaces and architecture in integration testing are applied.

FL-2.3.3 (K2) Compare the purposes of confirmation testing and regression testing

Confirmation testing: after a defect is fixed, the steps to reproduce the failure are re-executed to confirm whether the original defect has been successfully fixed.
Regression testing: tests to detect unintended side effects of a change on other parts of the code.
Regression and confirmation testing are performed at all test levels.

2.4 Maintenance Testing

FL-2.4.1 (K2) Summarize triggers for maintenance testing

Triggers for maintenance testing include planned and unplanned changes:
- Modification, which includes planned enhancements, corrective and emergency changes, changes of the operational environment, upgrades, and patches for defects and vulnerabilities.
- Migration (such as from one platform to another) and retirement at end of life, including archiving of data.
- Testing restore/ retrieve procedures after archiving for long retention periods
- Regression testing to ensure that any functionality that remains in service still works

FL-2.4.2 (K2) Describe the role of impact analysis in maintenance testing

Impact analysis evaluates changes that were made for a maintenance release to identify the intended consequences and potential side effects of a change.
Impact analysis may be done before a change is made, to help decide if the change should be made based on the potential consequences to other parts of the system.

Impact analysis can be affected by:
- Specifications (out of date or missing)
- Test cases not documented or out of date
- Bi-directional traceability between tests and test basis not maintained
- Weak tool support
- People lacking domain or system knowledge
- Insufficient software maintainability

Learning Objectives for Static Testing

3.1 Static Testing Basics

FL-3.1.1 (K1) Recognize types of software work product that can be examined by the different static testing techniques

Work products that can be examined using static testing include:
- Specifications and requirements
- Epics, User Stories, acceptance criteria
- Architecture and design specifications
- Code
- Testware, test plans, test cases, test procedures, automated test scripts
- User guides
- Web pages
- Contracts, project plans, schedules, budget planning
- Configuration setup and infrastructure setup
- Models - activity diagrams

FL-3.1.2 (K2) Use examples to describe the value of static testing

Static testing enables the early detection of defects prior to dynamic testing.
Defects found early are cheaper to remove and lower costs by avoiding having to update other work products.
Additional benefits include:
- Detecting and correcting defects more efficiently, and prior to dynamic test execution
- Identifying defects not easily found by dynamic testing
- Preventing defects in design or coding (inconsistencies, ambiguities, contradictions, omissions, redundancies, etc.)
- Increasing development productivity
- Reducing development cost and time
- Reducing testing cost and time
- Reducing total cost of quality
- Improving communication between team members

FL-3.1.3 (K2) Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software lifecycle

Static testing finds defects in work products directly, with much less effort, focusing on consistency and internal quality of work products
Dynamic testing identifies failures when the software is run, depending on externally visible behaviors.

Defects that are easier and cheaper to find through static testing include:
- Requirement defects
- Design defects
- Coding defects
- Deviations from standards
- Incorrect interface specifications
- Security vulnerabilities
- Gaps or inaccuracies in test basis traceability or coverage
- Most types of maintainability defects (i.e. improper modularization, poor reusability of components, code that is difficult to analyze and modify)

3.2 Review Process

FL-3.2.1 (K2) Summarize the activities of the work product review process

The review process comprises the following main activities:
- Planning
- Initiate Review
- Individual Review (Individual Preparation)
- Issue Communication and Analysis
- Fixing and Reporting

FL-3.2.2 (K1) Recognize the different roles and responsibilities in a formal review

A formal review will include the roles below:
- Author: creates work, fixes defects
- Management: review planning and execution, monitors and controls
- Facilitator (Moderator): runs review meetings, mediates, determines success
- Review Leader: takes overall responsibility for the review and decides who is involved
- Reviewers: subject experts, stakeholders, identify potential defects
- Scribe: collates and records defects and decisions

FL-3.2.3 (K2) Explain the differences between different review types: informal review, walkthrough, technical review, and inspection

Informal Review: purpose is to detect potential defects, but does not follow a formal process (pairing, peer review)
Walkthrough: purpose is to find defects and improve the product. A scribe is mandatory and checklists are optional.
Technical Review: purpose is to gain consensus and detect potential defects, and also to generate new ideas and improvements. Reviewers should be peers of the author, a scribe is mandatory, and reviews should be facilitated. Checklists are optional.
Inspection: purpose is to find defects; it follows clearly defined roles, requires individual preparation, and follows a formal process with rules and checklists.

FL-3.2.4 (K3) Apply a review technique to a work product to find defects

Review techniques can uncover defects:
- Ad hoc
- Checklist-based
- Scenarios and dry runs (structured guidelines)
- Perspective-based
- Role-based

FL-3.2.5 (K2) Explain the factors that contribute to a successful review

Organizational success factors include:
- Clear objectives as measurable exit criteria
- Suitable and appropriate review types
- Adequate notice, time, and workload
- Management support and quality integration

People-related success factors include:
- Right people to meet review objectives (i.e. skill sets)
- Testers are valid reviewers
- Adequate time and attention to detail
- Small chunks of workload
- Defects handled objectively
- Respect for time
- Atmosphere of trust and engagement
- Culture of learning and process improvement

Learning Objectives for Test Techniques

4.1 Categories of Test Techniques

FL-4.1.1 (K2) Explain the characteristics, commonalities, and differences between black-box test techniques, white-box test techniques, and experience-based test techniques

Black-box test techniques include:
- Test conditions, test cases, and test data are derived from a test basis (requirements, specifications, user stories)
- Test cases detect gaps or deviations between the requirements and the implementation
- Coverage is measured based on the items tested in the test basis

White-box test techniques include:
- Test conditions, test cases, and test data are derived from code, architecture, design, or structure of the software
- Coverage is measured on items tested within a selected structure

Experience-based test techniques include:
- Test cases are derived from the tester's knowledge and experience of the expected use of the software, its environment, likely defects, and the distribution of those defects.

Refer to ISO 29119-4 for descriptions of test techniques and corresponding coverage measures.

4.2 Black-box Test Techniques

FL-4.2.1 (K3) Apply equivalence partitioning to derive test cases from given requirements

Equivalence partitions for both valid and invalid values:
- Valid values should be accepted. Valid equivalence partition
- Invalid values should be rejected. Invalid equivalence partition
- Partition may be divided into sub-partitions
- Each value must belong to one and only one equivalence partition
- Invalid equivalence partitions should be tested individually

For 100% coverage, test cases must cover all identified partitions (including invalid partitions) by using a minimum of one value from each partition.
Coverage is measured as the number of equivalence partitions tested, divided by the total number of identified equivalence partitions.
Equivalence partitioning is applicable at all levels.
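
A minimal sketch of how equivalence partitioning might be applied, assuming a hypothetical age field that accepts whole numbers from 18 to 65 (the field, values, and partition names are assumptions for illustration):

# Sketch: equivalence partitioning for a hypothetical "age" field that
# accepts whole numbers from 18 to 65 inclusive (values are assumptions).
partitions = {
    "invalid_below": 10,   # invalid partition: age < 18
    "valid": 40,           # valid partition: 18 <= age <= 65
    "invalid_above": 80,   # invalid partition: age > 65
}

def accepts_age(age):
    # Stub standing in for the system under test.
    return 18 <= age <= 65

# One representative value per partition gives 100% partition coverage:
# 3 partitions tested / 3 partitions identified = 100%.
for name, value in partitions.items():
    expected = name == "valid"
    assert accepts_age(value) == expected, f"{name} partition failed"
print("equivalence partition coverage: 3/3 = 100%")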

FL-4.2.2 (K3) Apply boundary value analysis to derive test cases from given requirements

Boundary Value Analysis (BVA) is applicable for numeric or sequential data. The minimum and maximum values of a partition are its boundary values.
Two-point boundary values are the boundary value itself (valid) and the closest value just outside the partition (invalid).
Three-point boundary values are before, at, and just over the boundary.
Boundary Value Analysis can be applied at all levels. Boundary coverage is the number of boundary values tested, divided by the total number of identified boundary test values.
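
A companion sketch of boundary value analysis on the same hypothetical 18-65 age field, showing two-point and three-point boundary values (all values are assumptions):

# Sketch: boundary value analysis for the hypothetical 18-65 age field.
# Two-point BVA: each boundary value and its closest invalid neighbour.
two_point_values = [17, 18, 65, 66]
# Three-point BVA: the values just before, at, and just after each boundary.
three_point_values = [17, 18, 19, 64, 65, 66]

def accepts_age(age):
    return 18 <= age <= 65  # stub standing in for the system under test

for age in two_point_values:
    print(age, "accepted" if accepts_age(age) else "rejected")
# Boundary coverage = boundary values tested / boundary values identified.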

FL-4.2.3 (K3) Apply decision table testing to derive test cases from given requirements

Decision tables record complex business rules. Conditions (inputs) and resulting actions (outputs) are recorded in rows.
Each column corresponds to a decision rule that defines a unique combination of conditions which results in the execution of the actions associated with that rule.

A full decision table has enough columns (test cases) to cover every combination of conditions.
By deleting columns that do not affect the outcome, the number of test cases can be decreased.
The minimum coverage standard is one test case per decision rule; decision table testing can be applied at any test level.
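
A minimal sketch of decision table testing, assuming a hypothetical loan-approval rule with two conditions and one action (conditions, action, and rules are illustrative assumptions):

# Sketch: decision table for a hypothetical loan-approval rule.
# Conditions: has_income, good_credit   Action: approve loan
decision_table = [
    # (has_income, good_credit) -> approve
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def approve_loan(has_income, good_credit):
    return has_income and good_credit  # stub standing in for the system under test

# Minimum coverage: one test case per decision rule (column).
for (has_income, good_credit), expected in decision_table:
    assert approve_loan(has_income, good_credit) == expected
print(f"{len(decision_table)} decision rules tested")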

FL-4.2.4 (K3) Apply state transition testing to derive test cases from given requirements

A State Transition diagram shows the possible software states, and how the software enters, exits, and transitions between states.
A State Transition table shows the valid transitions, events, and resulting actions for valid transitions.
State transition testing is used for menu-based applications, or when modeling a business scenario
Coverage is measured by the number of states or transitions tested, divided by the total number of identified states or transitions.
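
A minimal sketch of state transition testing, assuming a hypothetical login workflow (states, events, and transitions are assumptions for illustration):

# Sketch: state transition table for a hypothetical login flow.
# Only valid transitions are listed; invalid events leave the state unchanged.
transitions = {
    ("logged_out", "login_ok"):     "logged_in",
    ("logged_out", "login_failed"): "locked_out",
    ("logged_in",  "logout"):       "logged_out",
    ("locked_out", "reset"):        "logged_out",
}

def next_state(state, event):
    return transitions.get((state, event), state)  # stub system under test

# Transition coverage = transitions exercised / transitions identified.
exercised = 0
for (state, event), expected in transitions.items():
    assert next_state(state, event) == expected
    exercised += 1
print(f"transition coverage: {exercised}/{len(transitions)} = 100%")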

FL-4.2.5 (K2) Explain how to derive test cases from a use case

Use cases are associated with actors (e.g. users) and subjects (the component or system).
Use cases specify some behavior, described by interactions and activities; pre-conditions, post-conditions, and natural language are also used.

Use cases also extend beyond basic behavior to include exceptional behavior and error handling.
Coverage can be measured by the total number of use case behaviors tested divided by the total number of use case behaviors.

4.3 White-box Test Techniques

FL-4.3.1 (K2) Explain statement coverage

Statement testing exercises the executable statements in the code.
Coverage is measured as the number of statements executed by the tests, divided by the total number of executable statements in the test object (expressed as a percentage).
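
A minimal sketch of statement coverage, using a hypothetical function with four executable statements (the function and figures are assumptions):

# Sketch: statement coverage for a small hypothetical function.
def apply_discount(price, is_member):
    discount = 0             # statement 1
    if is_member:            # statement 2
        discount = 10        # statement 3
    return price - discount  # statement 4

# A single test with is_member=True executes all 4 executable statements:
# statement coverage = 4 / 4 = 100%, even though the False outcome of the
# decision has not been exercised.
assert apply_discount(100, True) == 90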

FL-4.3.2 (K2) Explain decision coverage

Decision Testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes.
Decision test cases follow the control flows that occur from a decision point (e.g. an IF statement has true and false outcomes).

Decision Coverage is measured as the number of decision outcomes executed by the tests, divided by the total number of decision outcomes in the test object (expressed as a percentage).
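
A companion sketch of decision coverage for the same hypothetical function, showing that both decision outcomes need to be exercised:

# Sketch: decision coverage for the same hypothetical function.
def apply_discount(price, is_member):
    discount = 0
    if is_member:            # one decision with two outcomes: True and False
        discount = 10
    return price - discount

# Two tests exercise both decision outcomes:
# decision coverage = 2 outcomes executed / 2 outcomes = 100%.
assert apply_discount(100, True) == 90    # True outcome
assert apply_discount(100, False) == 100  # False outcome
# Note: the single True-outcome test gave 100% statement coverage but only
# 50% decision coverage; 100% decision coverage also yields 100% statement coverage.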

FL-4.3.3 (K2) Explain the value of statement and decision coverage

When 100% statement coverage is achieved, it ensures that all executable statements in the code have been tested at least once.

When 100% decision coverage is achieved, it executes all decision outcomes (for true and false outcomes)

Achieving 100% decision coverage ensures 100% statement coverage, but not vice versa.

4.4 Experience-based Test Techniques

FL-4.4.1 (K2) Explain error guessing

Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester's knowledge of:
- How the application worked in the past
- What kind of errors tend to be made
- Failures that have occurred in other applications

Create a list of all possible errors, defects, and failures; design tests to expose those failures and the defects that caused them.

FL-4.4.2 (K2) Explain exploratory testing

In Exploratory Testing, informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution.

In session-based testing, exploratory testing is conducted using a defined time-box, and the tester uses a test charter containing test objectives to guide the testing.

Exploratory testing is used when there are inadequate specifications, significant time pressure, or to complement formal testing. Exploratory testing incorporates the use of other black-box, white-box, and experience-based techniques.

FL-4.4.3 (K2) Explain checklist-based testing

In Checklist-based testing, testers design, implement, and execute tests to cover test conditions found in a checklist. Checklists can be built based on experience, knowledge, or known failure modes.

Checklists can support various functional and non-functional test types.

Learning Objectives for Test Management

5.1 Test Organization

FL-5.1.1 (K2) Explain the benefits and drawbacks of independent testing

Benefits of independent testing:
- Independent testers recognize different kinds of failures due to different backgrounds, perspectives, etc.
- Independent testers can verify, challenge, or disprove assumptions made by stakeholders
- Independent testers can report on the test object without (political) pressure from the organization that hired them.

Potential drawbacks of test independence:
- Isolation from development team, leading to lack of collaboration
- Developers may lose a sense of responsibility for quality
- Independent testers may be seen as a bottleneck
- Independent testers may lack important information

FL-5.1.2 (K1) Identify the tasks of a test manager and tester

Test Manager tasks may include:
- Develop and review test policy and test strategy
- Plan the test activities
- Write and coordinate test plans
- Initiate the analysis, design, implementation, and execution of tests
- Monitor test progress and results, check the status of test criteria, and facilitate test completion
- Prepare and deliver test progress reports and test summary reports
- Adapt planning based on test results and progress, and take actions needed for test control
- Set up the defect management system and adequate configuration management of testware
- Introduce suitable metrics
- Support selection and implementation of tools
- Decide about the implementation of test environments
- Advocate for testers, test team, and test profession
- Develop the skills and careers of testers

Tester tasks include:
- Review and contribute to test plans
- Analyze, review, assess requirements, user stories, acceptance criteria, specs, and test basis
- Identify and document test conditions
- Define, set up, and verify test environments
- Design and implement test cases and test procedures
- Prepare and acquire test data
- Create a detailed test execution schedule
- Execute tests, evaluate results, document deviations
- Use appropriate tools for test process
- Automate tests as needed
- Evaluate non-functional characteristics (efficiency, reliability, usability, security, compatibility, portability)
- Review tests developed by others

5.2 Test Planning and Estimation

FL-5.2.1 (K2) Summarize the purpose and content of a test plan

A test plan outlines test activities for development and maintenance projects.
Planning is influenced by the test policy and test strategy
Test planning is a continuous activity and is performed throughout the product's lifecycle.
Planning may be documented in a master test plan and in separate test plans for test levels (e.g. system testing, acceptance testing)

Test Planning activities include:
- Scope, objectives, and risks of testing
- Overall approach of testing
- Integrating and coordinating test activities into software lifecycle activities
- Making decisions about what to test, and the people and resources needed for test activities
- Scheduling of test analysis, design, implementation, execution, and evaluation activities
- Selecting metrics for test monitoring and control
- Budgeting for test activities
- Determining the level of detail and structure for test documentation

Refer to ISO 29119-3 for Test Plan structure

FL-5.2.2 (K2) Differentiate between various test strategies

A Test Strategy is a generalized description of the test process, usually at the product or organizational level:
- Analytical: Analysis of a factor (Risk-based)
- Model-Based: Tests are designed based on some model of a required aspect of the product (e.g. a business process, state model, or reliability growth model)
- Methodical: Systematic use of predefined tests or test conditions (i.e. look and feel for mobile and web pages)
- Process-Compliant (Standard-Compliant): Involves analyzing, designing, and implementing tests based on external rules and standards.
- Directed (Consultative): Driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts.
- Regression-averse: Avoid regression of existing capabilities, reuse existing testware, automation, test suites.
- Reactive: Reacting to events occurring during test execution or in response to knowledge gained from prior results (e.g. Exploratory Testing)

The Test Approach tailors the Test Strategy for a particular project or release, and is the starting point for selecting Test Techniques, Test Levels, and Test Types; and for defining Entry Criteria and Exit Criteria.

The selected approach depends on the Context and may consider factors such as risks, safety, available resources and skills, technology, system characteristics, test objectives, and regulations.

FL-5.2.3 (K2) Give examples of potential entry and exit criteria

Entry criteria (Definition of Ready) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, time-consuming, costly, and risky.

Exit criteria (Definition of Done) define what conditions must be achieved in order to declare a test level or a set of tests completed.

Entry and exit criteria should be defined for each test level and test type, and will differ based on test objectives

Typical entry criteria:
- Availability of testable requirements, user stories, models
- Availability of test items that have met the exit criteria for prior test levels
- Availability of test environment
- Availability of necessary test tools
- Availability of test data and other necessary resources

Typical exit criteria:
- Planned tests have been executed
- A defined level of coverage has been achieved
- The number of unresolved defects is within an agreed limit
- The number of estimated remaining defects is sufficiently low
- The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient

It is also common for test activities to be curtailed due to budget expended, scheduled time completed, and pressure to bring the product to market.
In such cases, project stakeholders and business owners may review and accept the risk to go live without further testing.

FL-5.2.4 (K3) Apply knowledge of prioritization, and technical and logical dependencies, to schedule test execution for a given set of test cases

The test execution schedule should take into account such factors as prioritization, dependencies, confirmation tests, regression tests, and the most efficient sequence for executing the tests.

Execute the test cases with the highest priority first. If a test case with a higher priority depends on a test case with a lower priority, the lower-priority test case must be executed first.

If there are dependencies across test cases, they must be ordered appropriately regardless of their relative priorities.

Trade-offs between efficiency of test execution vs. adherence to prioritization must be made.
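
A minimal sketch of one way such a schedule could be derived, assuming hypothetical test case IDs, priorities, and dependencies (the inheritance of priority along dependencies is one possible design choice, not a mandated algorithm):

# Sketch: ordering test cases by priority while respecting dependencies.
test_cases = {
    # id: (priority, depends_on)  -- a lower number means a higher priority
    "TC1": (1, []),
    "TC2": (3, []),
    "TC3": (1, ["TC2"]),   # high priority, but depends on lower-priority TC2
    "TC4": (2, ["TC1"]),
}

def effective_priority(tc):
    # A test case inherits the priority of any higher-priority test case that
    # depends on it, so required predecessors are pulled forward.
    priority = test_cases[tc][0]
    for other, (_, deps) in test_cases.items():
        if tc in deps:
            priority = min(priority, effective_priority(other))
    return priority

def schedule():
    ordered, done = [], set()
    remaining = set(test_cases)
    while remaining:
        # Among test cases whose dependencies are already executed,
        # pick the one with the highest effective priority (ties by id).
        ready = [tc for tc in remaining if set(test_cases[tc][1]) <= done]
        ready.sort(key=lambda tc: (effective_priority(tc), tc))
        nxt = ready[0]
        ordered.append(nxt)
        done.add(nxt)
        remaining.remove(nxt)
    return ordered

print(schedule())  # ['TC1', 'TC2', 'TC3', 'TC4'] -- TC2 runs before TC3 and TC4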

FL-5.2.5 (K1) Identify factors that influence the effort related to testing

Factors influencing the test effort include:
- Product characteristics (risks, quality, size, complexity, required level of detail, legal and regulatory compliance)
- Development process characteristics (stability and maturity of the organization, development model, test approach, tools used, test process, time pressure)
- People characteristics (skills and experience, team cohesion and leadership)
- Test results (number and severity of defects found, amount of rework required)

FL-5.2.6 (K2) Explain the difference between two estimation techniques: the metrics-based technique and the expert-based technique

Two common techniques:
- Metrics-based: estimates the test effort based on metrics of former similar projects, or based on typical values (e.g. burn-down charts, defect removal charts)
- Expert-based: estimates the test effort based on the experience of the owners of the testing tasks or by experts (e.g. planning poker, Wideband Delphi estimation)
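
A minimal sketch of a metrics-based estimate, assuming hypothetical figures from a similar past project:

# Sketch: metrics-based estimation using figures from a similar past project
# (all numbers are hypothetical assumptions).
past_test_cases = 400
past_effort_person_days = 80   # -> 0.2 person-days per test case
new_test_cases = 550

effort_per_case = past_effort_person_days / past_test_cases
estimate = new_test_cases * effort_per_case
print(f"metrics-based estimate: {estimate:.0f} person-days")  # 110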

5.3 Test Monitoring and Control

FL-5.3.1 (K1) Recall metrics used for testing

Metrics can be collected during and at the end of test activities to assess:
- Progress against planned schedule and budget
- Adequacy of the test approach
- Effectiveness of test activities relative to the objectives

Common test metrics include:
- Percentage of planned work done in test case preparation
- Percentage of planned work done in test environment preparation
- Test case execution (number of test cases run/not run, test cases passed/failed, test conditions passed/failed)
- Defect information (defect density, defects found/fixed, failure rate, confirmation test results)
- Test coverage of requirements, user stories, acceptance criteria, risks, code
- Task completion, resource allocation, effort
- Cost of testing, cost/benefit analysis

FL-5.3.2 (K2) Summarize the purposes, contents, and audiences for test reports

The purpose of test reporting is to summarize and communicate test activity information, both during and at the end of test activity.

Test Progress Report is prepared during a test activity.
Test Summary Report is prepared at the end of a test activity.

Test Progress Reports are prepared by the Test Manager and include:
- Status of test activities and progress against the test plan.
- Factors impeding progress
- Testing planned for the next reporting period
- Quality of the test object.

Test Summary Reports are issued by the Test Manager when the exit criteria are reached and include:
- Summary of testing performed
- Information on what occurred during a test period
- Deviations from plan (schedule, duration, effort)
- Status of testing and product quality with respect to Exit Criteria or Definition of Done
- Factors that have blocked or continue to block progress
- Metrics of defects, test cases, test coverage, activity progress, and resource consumption.
- Residual risks
- Reusable test work products produced

Test reports should be tailored to the audience and prepared according to ISO 29119-3 (Test Progress, Test Completion)

5.4 Configuration Management

FL-5.4.1 (K2) Summarize how configuration management supports testing

The purpose of Configuration Management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the product and project lifecycle.

Configuration Management may involve ensuring the following:
- All test items are uniquely identified, version controlled, tracked for changes, and related to each other.
- All items of testware are uniquely identified, version controlled, tracked for changes, related to each other, and related to versions of the test item(s) so that traceability can be maintained throughout the test process.
- All identified documents and software items are referenced unambiguously in test documentation.

During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.

5.5 Risks and Testing

FL-5.5.1 (K1) Define risk level by using likelihood and impact

Risk involves the possibility of an event in the future which has negative consequences.
The level of risk is determined by:
- Likelihood of the event
- Impact (the harm) from that event
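
A minimal sketch of combining likelihood and impact into a risk level, assuming 1-5 rating scales and a simple multiplication rule (a common convention, not mandated by the syllabus):

# Sketch: risk level as a function of likelihood and impact, each rated 1-5.
def risk_level(likelihood, impact):
    return likelihood * impact

risks = {
    "payment calculation wrong": (2, 5),  # unlikely, but severe harm
    "help page typo":            (4, 1),  # likely, but little harm
}
for name, (likelihood, impact) in risks.items():
    print(name, "->", risk_level(likelihood, impact))
# payment calculation wrong -> 10 ; help page typo -> 4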

FL-5.5.2 (K2) Distinguish between project and product risks

Product Risk involves the possibility that a work product (i.e. specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders.
When Product Risks are associated with specific quality characteristics (e.g. functional suitability, reliability, usability, security), they are called Quality Risks, and can include:
- Software might not perform its intended function per the specification
- Software might not perform its intended function per the user, customer, or stakeholder needs
- System architecture may not support a non-functional requirement
- A particular computation may perform incorrectly
- A loop control structure may be coded incorrectly
- Response times may be inadequate for transaction processing
- User experience (UX) feedback might not meet product expectations

Project Risk involves situations that may have a negative effect on a project's ability to achieve its objectives, and examples include:
- Project Issues (Delays, Inaccurate estimates, late changes leading to re-work)
- Organizational Issues (Skills, Personnel, availability of users, business or subject experts)
- Political Issues: (problems with communication, failure to follow up on issues found in testing, improper attitude toward or expectations of testing)
- Technical Issues: (Requirements not defined, Test Environment not ready on time, Late data conversion, weaknesses in development process, Poor defect management)
- Supplier Issues: (Third party failure to deliver, contractual issues)

Project risks may affect both development and testing activities.

FL-5.5.3 (K2) Describe, by using examples, how product risk analysis may influence the thoroughness and scope of testing

Risk-based approaches involve product risk analysis, which includes the identification of product risks and the assessment of each risk's likelihood and impact.
Product risk information guides test planning; the specification, preparation, and execution of test cases; and test monitoring and control.

For a risk-based approach, the results of product risk analysis are used to:
- Determine the test techniques to be employed
- Determine particular levels and types of testing to be performed
- Determine the extent of testing to be carried out
- Prioritize testing in an attempt to find critical defects as early as possible
- Determine whether any activities in addition to testing could be employed to reduce risk

Risk-based testing draws on collective knowledge and insight of stakeholders to carry out product risk analysis:
- Analyse (and re-evaluate) what can go wrong (risks)
- Determine which risks are important to deal with
- Implement actions to mitigate those risks
- Make contingency plans to deal with risks should they become actual events

Testing may identify new risks, help to determine what should be mitigated, and lower uncertainty about risks

5.6 Defect Management

FL-5.6.1 (K3) Write a defect report, covering a defect found during testing

Any defects identified should be investigated and tracked from discovery and classification to their resolution (correction of the defects and confirmation testing of the solution; deferral to a subsequent release; acceptance as a permanent product limitation)

Testers should attempt to minimize the number of false positives reported as defects.

Defects may be reported during coding, static analysis, reviews, or during dynamic testing, or use of a software product.

Typical defect reports have the following objectives:
- Provide developers and other parties with information about any adverse event to enable them to identify specific effects, isolate the problem, and correct the potential defects or otherwise resolve the problem
- Provide test managers a means of tracking the quality of the work product and the impact on the testing
- Provide ideas for development and test process improvement.

A defect report filed during dynamic testing typically includes:
- Identifier
- Title and short summary of defect being reported
- Date of the defect report, issuing organization, and author
- Identification of the test item (configuration item being tested), and environment
- Development lifecycle phase in which the defect was observed
- Description of the defect to enable reproduction and resolution, including logs, dumps, screenshots, and recordings.
- Expected and actual results
- Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
- Urgency/ priority to fix
- State of the defect report (i.e. open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)
- Conclusion, recommendations and approvals
- Global issues, changes resulting from a defect
- Change history, actions taken to isolate, repair, and confirm defect as fixed
- References, including the test case that revealed the problem

A defect report structure can be found in ISO 29119-3 (Incident Reports)
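
A minimal sketch of how such a defect report could be represented as a data structure (field names and example values are illustrative assumptions, not a mandated format):

# Sketch: a minimal defect report covering some of the fields listed above.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_item: str
    environment: str
    lifecycle_phase: str
    steps_to_reproduce: str
    expected_result: str
    actual_result: str
    severity: str          # impact on stakeholder interests
    priority: str          # urgency to fix
    state: str = "open"
    references: list = field(default_factory=list)

report = DefectReport(
    identifier="DR-0042",
    title="Total price not updated after removing item from cart",
    date="2021-03-31",
    author="Tester A",
    test_item="webshop v2.3.1",
    environment="staging, Chrome 89",
    lifecycle_phase="system testing",
    steps_to_reproduce="Add two items, remove one, observe the total.",
    expected_result="Total reflects one remaining item.",
    actual_result="Total still shows the price of two items.",
    severity="major",
    priority="high",
    references=["TC-137"],
)
print(report.identifier, report.state)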

Learning Objectives for Test Tools

6.1 Test tool considerations

FL-6.1.1 (K2) Classify test tools according to their purpose and the test activities they support

Test tools can have one or more of the following purposes, depending on the context:
- Automating repetitive tasks or tasks that require significant resources when done manually
- Supporting manual test activities
- Allowing for more consistent testing and a higher level of defect reproducibility
- Automating activities that cannot be executed manually
- Increasing the reliability of testing

Some test tools can be intrusive and affect the actual outcome of the test (Probe Effect).

Categories of tools for test activities include ((D) indicates tools more likely to be used by developers):
- Management of testing and testware: test management tools, requirements, defect, configuration, continuous integration
- Static Testing: static analysis tools (D)
- Test Design and Implementation: Model-based, test data preparation
- Test Execution and Logging: Test execution, coverage (D), test harnesses (D)
- Performance Measurement and Dynamic Analysis: Performance testing, dynamic analysis (D)
- Specialized Testing Needs

FL-6.1.2 (K1) Identify benefits and risks of test automation

Potential benefits of using tools to support test execution include:
- Reduction in repetitive manual work
- Greater consistency and repeatability
- More objective assessment
- Easier access to information about testing

Potential risks of using tools to support testing include:
- Unrealistic expectations
- Under-estimating time, cost, and effort for initial introduction of the tool
- Under-estimating time and effort needed to achieve significant and continuing benefits
- Under-estimating effort required to maintain the test work products
- Tool may be relied on too much
- Version control of test work products may be neglected
- Relationships and interoperability issues between critical tools may be neglected (requirements management, configuration management, defect management tools, tools from multiple vendors)
- Tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
- Vendor may provide a poor response for support, upgrades, and defect fixes
- Open source project may be suspended
- A new platform or technology may not be supported by the tool
- May be no clear ownership of the tool (i.e. mentoring, updates)

FL-6.1.3 (K1) Remember special considerations for test execution and test management tools

Test Execution Tools execute test objects using automated test scripts:
- Capture/playback (capturing) test approach: recorded scripts may be unstable when unexpected events occur or the software under test changes
- Data-driven Test approach: this approach uses a more generic test script to be executed with different data
- Keyword-driven Test approach: generic script processes keywords (action words) to process the associated test data.
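
A minimal sketch of the data-driven approach, where one generic script is executed against several rows of test data (the login function and data are hypothetical); a keyword-driven script would similarly map keywords such as "login" to reusable actions like this:

# Sketch: a data-driven test -- one generic script executed with different data rows.
login_data = [
    # (username, password, expected_outcome)
    ("alice", "correct-pw", True),
    ("alice", "wrong-pw",   False),
    ("",      "any-pw",     False),
]

def login(username, password):
    return username == "alice" and password == "correct-pw"  # stub under test

for username, password, expected in login_data:
    assert login(username, password) == expected
print(f"{len(login_data)} data rows executed against one generic script")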

Test Management Tools often need to interface with other tools or spreadsheets to:
- Produce useful information in a format that fits the needs of the organization
- Maintain consistent traceability to requirements (requirements management tool)
- Link with test object version information (configuration management tool)

6.2 Effective use of tools

FL-6.2.1 (K1) Identify the main principles for selecting a tool

The main considerations for selecting a tool in an organization include:
- Assessment of the organization's maturity, strengths, and weaknesses
- Identification of opportunities for an improved test process supported by tools
- Understanding of the technologies used by the test object(s) for compatibility
- Understanding the build and continuous integration tools already in use within the organization, for compatibility and integration
- Evaluation of the tool against clear requirements and objective criteria
- Consideration of whether the tool is available for a free trial period
- Evaluation of the vendor (training, support, commercial) or support for open source tools
- Identification of internal requirements for coaching and mentoring in the use of the tool
- Evaluation of training needs, considering the skills of those working directly with the tool
- Consideration of pros and cons of various licensing models
- Estimation of a cost-benefit ratio based on a concrete business case

A Proof-of-Concept evaluation should be done to establish whether the tool performs effectively with the software under test and within the current infrastructure, and to identify any changes needed to that infrastructure to use the tool effectively.

FL-6.2.2 (K1) Recall the objectives for using pilot projects to introduce tools

Introducing the selected tool into an organization generally starts with a pilot project, which has the following objectives:
- Gaining in-depth knowledge about the tool (strengths, weaknesses)
- Evaluating how the tool fits with existing processes and practices, and determining what has to change
- Deciding on standard ways of using and managing the tool (i.e. libraries, modularity)
- Assessing whether the benefits will be achieved at a reasonable cost.
- Understanding metrics that the tool should collect and report.

FL-6.2.3 (K1) Identify the success factors for evaluation, implementation, deployment, and on-going support of test tools in an organization

Success factors for evaluation, implementation, deployment, and on-going support of tools within an organization include:
- Rolling out the tool to the rest of the organization incrementally.
- Adapting and improving processes to fit the use of the tool
- Providing training, coaching, mentoring for tool users
- Defining guidelines for the use of the tool
- Monitoring tool use and benefits
- Providing support to users of a given tool
- Gathering lessons learned

Separate organizations and third-party suppliers may need to be integrated.
-- DanielZrymiak - 31 Mar 2021