-- DanielZrymiak - 23 Apr 2020



ISTQB Foundation Software Testing Knowledge Base





Resources and References

Certifications

  • Foundation
  • Advanced Tester


Fundamentals of Testing

What Is Testing?

Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. As described in section 1.4, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analyzing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.
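The distinction can be illustrated with a short Python sketch (the `discount` function and its checks are invented for illustration and are not part of the syllabus):

```python
# A hypothetical component under test (invented for illustration).
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Dynamic testing: the code is executed and its observed result is checked.
assert discount(100.0, 20) == 80.0
assert discount(19.99, 0) == 19.99

# Static testing would instead examine the source above without running it.
# For example, a reviewer might notice that `percent` is never validated:
# a negative percent silently increases the price -- a defect that a review
# or static analysis could find before the code is ever executed.
```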

Why Is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Principles of Testing

A number of testing principles have been suggested over the past 50 years and offer general guidelines common for all testing.

1. Testing shows the presence of defects, not their absence

2. Exhaustive testing is impossible

3. Early testing saves time and money

4. Defects cluster together

5. Beware of the pesticide paradox

6. Testing is context dependent

7. Absence-of-errors is a fallacy
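Principle 2 can be made concrete with a little arithmetic. The sketch below counts the input combinations for just two 32-bit integer parameters; the one-million-tests-per-second rate is an assumed figure:

```python
# Even a single 32-bit integer input has 2**32 possible values;
# a pair of such inputs has 2**64 combinations.
single_input = 2 ** 32
pair_of_inputs = single_input ** 2

# At an assumed rate of one million test executions per second:
seconds = pair_of_inputs / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{pair_of_inputs} combinations would take about {years:,.0f} years")
```

Hence exhaustive testing is impossible for all but trivial cases, and risk analysis, test techniques, and priorities are used to focus the effort instead.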

Test Process

There are common sets of test activities without which testing is less likely to achieve its established objectives. Together, these sets of test activities form a test process. The specific test process appropriate in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organization’s test strategy.

A test process consists of the following main groups of activities:
  •  Test planning

  •  Test monitoring and control

  •  Test analysis

  •  Test design

  •  Test implementation

  •  Test execution

  •  Test completion

Each main group of activities is composed of constituent activities, which in turn consist of multiple individual tasks that vary from one project or release to another.

Psychology of Testing

An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality (see sections 1.1 and 1.2). To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.


Testing Throughout the Software Development Lifecycle

Software Development Lifecycle Models

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

2.1.1 Software Development and Software Testing

It is an important part of a tester's role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:
  •  For every development activity, there is a corresponding test activity

  •  Each test level has test objectives specific to that level

  •  Test analysis and design for a given test level begin during the corresponding development activity

  •  Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

This syllabus categorizes common software development lifecycle models as follows:
  •  Sequential development models

  •  Iterative and incremental development models

Test Levels

Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, consisting of the activities described in section 1.4, performed in relation to software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle. The test levels used in this syllabus are:
  •  Component testing

  •  Integration testing

  •  System testing

  •  Acceptance testing

Test levels are characterized by the following attributes:
  •  Specific objectives

  •  Test basis, referenced to derive test cases

  •  Test object (i.e., what is being tested)

  •  Typical defects and failures

  •  Specific approaches and responsibilities

For every test level, a suitable test environment is required.
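At the component testing level, individual units are tested in isolation. A minimal sketch using Python's standard unittest module follows; the leap-year function is a hypothetical test object, and its specification (the Gregorian leap-year rule) serves as the test basis:

```python
import unittest

# Hypothetical test object (invented for illustration).
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearComponentTest(unittest.TestCase):
    """Component-level tests: the test object is a single function,
    the test basis is the Gregorian leap-year specification."""

    def test_common_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_400th_year_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Run with: python -m unittest <module name>
```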

Test Types

A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives. Such objectives may include:
    •  Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness

    •  Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability

    •  Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified

    •  Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behavior resulting from software or environment changes (regression testing)
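Confirmation and regression testing can be sketched as follows; the `average` function and its assumed past defect (an unhandled empty list) are invented for illustration:

```python
# Hypothetical function after a defect fix: the assumed original defect was
# that an empty list raised ZeroDivisionError.
def average(values):
    if not values:          # the fix under test
        return 0.0
    return sum(values) / len(values)

# Confirmation testing: re-run the test that previously exposed the defect,
# to confirm the fix actually works.
assert average([]) == 0.0

# Regression testing: re-run existing tests to look for unintended changes
# in behavior caused by the fix.
assert average([2, 4, 6]) == 4.0
assert average([5]) == 5.0
```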

Maintenance Testing

Once deployed to production environments, software and systems need to be maintained. Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability.

When any changes are made as part of maintenance, maintenance testing should be performed, both to evaluate the success with which the changes were made and to check for possible side-effects (e.g., regressions) in parts of the system that remain unchanged (which is usually most of the system).

Static Testing

Static Testing Basics

Static testing relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products (i.e., static analysis). Both types of static testing assess the code or other work product being tested without actually executing the code or work product being tested.

Static analysis is important for safety-critical computer systems (e.g., aviation, medical, or nuclear software), but static analysis has also become important and common in other settings. For example, static analysis is an important part of security testing. Static analysis is also often incorporated into automated software build and distribution tools, for example in Agile development, continuous delivery, and continuous deployment.
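As a minimal illustration of tool-driven static testing, the sketch below uses Python's standard `ast` module to flag bare `except:` clauses without ever executing the code being inspected (the inspected snippet itself is invented):

```python
import ast

SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source: str):
    """A toy static-analysis check: flag `except:` clauses with no exception
    type. The source is inspected as a syntax tree, never executed."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

# Reports the line number of each bare except found in SOURCE.
print(find_bare_excepts(SOURCE))
```

Real static-analysis tools apply hundreds of such checks, plus data-flow and security analyses, as part of the build pipeline.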

Review Process, Types, Techniques

Reviews vary from informal to formal. Informal reviews are characterized by not following a defined process and not having formal documented output. Formal reviews are characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the software development lifecycle model, the maturity of the development process, the complexity of the work product to be reviewed, any legal or regulatory requirements, and/or the need for an audit trail.

The focus of a review depends on the agreed objectives of the review (e.g., finding defects, gaining understanding, educating participants such as testers and new team members, or discussing and deciding by consensus).

Test Techniques

The purpose of a test technique is to help identify test conditions, test cases, and test data.
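For example, boundary value analysis, one of the black-box techniques covered at Foundation level, derives test cases from the edges of value ranges. A sketch, assuming a hypothetical rule that order quantities must be between 1 and 100 inclusive:

```python
# Hypothetical rule under test: quantity must be 1..100 inclusive.
def is_valid_quantity(qty: int) -> bool:
    return 1 <= qty <= 100

# Test cases derived from the boundaries of the valid partition
# and the invalid partitions on either side of it:
boundary_cases = {
    0: False,    # just below the lower boundary (invalid)
    1: True,     # lower boundary
    2: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary (invalid)
}
for value, expected in boundary_cases.items():
    assert is_valid_quantity(value) == expected
```

Six targeted cases cover the places where off-by-one defects typically hide, instead of testing all possible quantities.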


Test Management

Test Planning and Estimation

A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organization, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources.

As the project and test planning progress, more information becomes available and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product's lifecycle. (Note that the product’s lifecycle may extend beyond a project's scope to include the maintenance phase.) Feedback from test activities should be used to recognize changing risks so that planning can be adjusted. Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing. Test planning activities may include the following and some of these may be documented in a test plan:
  •  Determining the scope, objectives, and risks of testing

  •  Defining the overall approach of testing

  •  Integrating and coordinating the test activities into the software lifecycle activities

  •  Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out

  •  Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)

  •  Selecting metrics for test monitoring and control

  •  Budgeting for the test activities

  •  Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:
  •  The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values

  •  The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts

For example, in Agile development, burndown charts are an example of the metrics-based approach: effort remaining is captured and reported, and then fed into the team’s velocity to determine the amount of work the team can do in the next iteration. Planning poker is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience (ISTQB-CTFL-AT).

Within sequential projects, defect removal models are examples of the metrics-based approach, where volumes of defects and time to remove them are captured and reported, which then provides a basis for estimating future projects of a similar nature.
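A minimal sketch of the metrics-based technique, using invented figures from hypothetical past projects:

```python
# Metrics-based estimation sketch (all figures are invented for illustration):
# estimate test effort for a new project from the test-to-development effort
# ratio observed on similar past projects.
past_projects = [
    # (development effort in person-days, test effort in person-days)
    (200, 60),
    (150, 40),
    (300, 95),
]

ratios = [test / dev for dev, test in past_projects]
average_ratio = sum(ratios) / len(ratios)

new_dev_effort = 250  # person-days planned for the new project
estimated_test_effort = new_dev_effort * average_ratio
print(f"Estimated test effort: {estimated_test_effort:.0f} person-days")
```

In practice such an estimate would be cross-checked against expert judgment, since the historical projects are rarely perfectly comparable.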

Test Monitoring and Control

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and should be used to assess test progress and to measure whether the test exit criteria, or the testing tasks associated with an Agile project's definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported. Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include:
  •  Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)

  •  Changing the test schedule due to availability or unavailability of a test environment or other resources

  •  Re-evaluating whether a test item meets an entry or exit criterion due to rework
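Exit-criteria checking during monitoring can be automated; the sketch below uses invented targets and measurements:

```python
# Exit criteria and measured values are invented for illustration.
exit_criteria = {
    "requirement_coverage": 1.00,   # all requirements must be covered
    "test_pass_rate": 0.95,
    "open_critical_defects": 0,
}

measured = {
    "requirement_coverage": 0.97,
    "test_pass_rate": 0.96,
    "open_critical_defects": 1,
}

def unmet_criteria(criteria, actual):
    """Return the names of exit criteria that are not yet satisfied."""
    failed = []
    for name, target in criteria.items():
        value = actual[name]
        # For defect counts, lower is better; for the rest, higher is better.
        ok = value <= target if name == "open_critical_defects" else value >= target
        if not ok:
            failed.append(name)
    return failed

# Any unmet criteria would trigger test control actions,
# such as re-prioritizing tests or adjusting the schedule.
print(unmet_criteria(exit_criteria, measured))
```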

Configuration Management

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.

To properly support testing, configuration management may involve ensuring the following:
  •  All test items are uniquely identified, version controlled, tracked for changes, and related to each other

  •  All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process

  •  All identified documents and software items are referenced unambiguously in test documentation

Risk and Testing

Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders. When the product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), product risks are also called quality risks.

Project risk involves situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives.

Risk is used to focus the effort required during testing. It is used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce the probability of an adverse event occurring, or to reduce the impact of an adverse event. Testing is used as a risk mitigation activity, to provide information about identified risks, as well as providing information on residual (unresolved) risks.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk’s likelihood and impact. The resulting product risk information is used to guide test planning, the specification, preparation and execution of test cases, and test monitoring and control. Analyzing product risks early contributes to the success of a project.

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis. To ensure that the likelihood of a product failure is minimized, risk management activities provide a disciplined approach to:
  •  Analyze (and re-evaluate on a regular basis) what can go wrong (risks)

  •  Determine which risks are important to deal with

  •  Implement actions to mitigate those risks

  •  Make contingency plans to deal with the risks should they become actual events

In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks.
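A common way to operationalize product risk analysis is to score each risk as likelihood times impact and prioritize testing accordingly; a sketch with invented risks and values:

```python
# Product risk analysis sketch: score = likelihood x impact
# (risks and ratings are invented for illustration; scales are 1..5).
product_risks = [
    {"risk": "payment calculation wrong", "likelihood": 3, "impact": 5},
    {"risk": "slow report generation",    "likelihood": 4, "impact": 2},
    {"risk": "login fails under load",    "likelihood": 2, "impact": 4},
]

for r in product_risks:
    r["score"] = r["likelihood"] * r["impact"]

ranked = sorted(product_risks, key=lambda r: r["score"], reverse=True)

# Highest-scoring risks get tested first and most thoroughly.
for r in ranked:
    print(f'{r["score"]:>2}  {r["risk"]}')
```

The resulting ranking guides where to concentrate test design, execution, and reporting effort.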

Defect Management

Any defects identified should be investigated and should be tracked from discovery and classification to their resolution (e.g., correction of the defects and successful confirmation testing of the solution, deferral to a subsequent release, acceptance as a permanent product limitation, etc.). In order to manage all defects to resolution, an organization should establish a defect management process which includes a workflow and rules for classification.

Typical defect reports have the following objectives:
  •  Provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s), as needed or to otherwise resolve the problem

  •  Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)

  •  Provide ideas for development and test process improvement
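A defect-management workflow can be modeled as a set of allowed state transitions; the states and transitions below are an assumed example, not a standard:

```python
# Defect workflow sketch: allowed state transitions (invented example).
TRANSITIONS = {
    "new":      {"open", "rejected"},
    "open":     {"fixed", "deferred"},
    "fixed":    {"closed", "reopened"},  # closed after successful confirmation testing
    "reopened": {"fixed"},
    "deferred": {"open"},
    "rejected": set(),
    "closed":   set(),
}

def advance(current: str, target: str) -> str:
    """Move a defect to a new state, enforcing the workflow rules."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# A defect's path from discovery to resolution:
state = "new"
for step in ("open", "fixed", "closed"):
    state = advance(state, step)
print(state)
```

Enforcing transitions like this is what keeps every defect tracked from discovery and classification through to resolution.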


Tool Support For Testing

Test Tool Considerations

Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g., commercial or open source), and technology used. Tools are classified in this syllabus according to the test activities that they support.

Some tools clearly support only or mainly one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be provided as an integrated suite.

Effective Use of Tools

Success factors for evaluation, implementation, deployment, and on-going support of tools within an organization include:
  •  Rolling out the tool to the rest of the organization incrementally

  •  Adapting and improving processes to fit with the use of the tool

  •  Providing training, coaching, and mentoring for tool users

  •  Defining guidelines for the use of the tool (e.g., internal standards for automation)

  •  Implementing a way to gather usage information from the actual use of the tool

  •  Monitoring tool use and benefits

  •  Providing support to the users of a given tool

  •  Gathering lessons learned from all users

It is also important to ensure that the tool is technically and organizationally integrated into the software development lifecycle, which may involve separate organizations responsible for operations and/or third party suppliers.




Topic revision: r3 - 07 Apr 2021, DanielZrymiak