Tags: concept, Quality Assurance, Software Engineering, Observability, Reliability

Testing

Systematic verification of software to detect defects, assess quality, and ensure conformance to requirements.

Maturity: Established
Complexity: Medium

Classification

  • Medium
  • Technical
  • Architectural
  • Intermediate

Technical context

  • Continuous Integration systems (Jenkins, GitHub Actions)
  • Issue trackers (Jira, GitHub Issues)
  • Test frameworks (pytest, JUnit, Selenium)

Principles & goals

  • Early and frequent testing reduces cost and risk.
  • Automation ensures repeatability and speed.
  • Tests should be independent, deterministic, and maintainable.
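A minimal pytest-style sketch of the independence and determinism principles; the `shuffle_deck` helper is hypothetical, not part of this catalog:

```python
import random

def shuffle_deck(cards, seed=None):
    """Return a shuffled copy; a fixed seed makes the result deterministic."""
    rng = random.Random(seed)
    shuffled = list(cards)
    rng.shuffle(shuffled)
    return shuffled

def test_shuffle_is_deterministic_with_seed():
    cards = list(range(10))
    # Same seed, same order: this assertion can never flake.
    assert shuffle_deck(cards, seed=42) == shuffle_deck(cards, seed=42)

def test_shuffle_preserves_cards():
    # Independent of the previous test: builds its own input.
    cards = list(range(10))
    assert sorted(shuffle_deck(cards, seed=1)) == cards
```

Seeding the random source is one common way to keep otherwise nondeterministic code testable.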
Phase: Build
Scope: Team, Domain, Enterprise

Compromises

  • Excessive trust in insufficient tests leads to production defects.
  • High test maintenance diverts resources from feature development.
  • Flaky tests undermine CI processes and developer productivity.

Best practices

  • Follow the test pyramid: many unit, moderate integration, few UI tests.
  • Run tests early and often in CI (shift-left).
  • Identify and prioritize fixing flaky tests.
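One hedged way to encode the pyramid in pytest is custom markers, so CI can run the fast layer first; the `unit` and `integration` marker names are illustrative and would need registering in `pytest.ini` to avoid warnings:

```python
import pytest

# Markers split the suite into pyramid layers; CI runs the fast
# layer first (shift-left) and the slower layers afterwards.
@pytest.mark.unit
def test_parse_amount():
    assert int("42") == 42

@pytest.mark.integration
def test_join_order_line():
    # Stands in for a slower cross-component check.
    assert ", ".join(["a", "b"]) == "a, b"
```

`pytest -m unit` then selects only the fast layer, while a nightly job can run the full suite.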

I/O & resources

Inputs

  • Requirements, acceptance criteria, user stories
  • Test frameworks and testing tools
  • Test data, mocks and staging infrastructure

Outputs

  • Test reports and failure logs
  • Coverage and quality metrics
  • Release recommendations and risk assessments

Description

Testing is the systematic verification of software to find defects, deviations, and quality issues. It includes strategies, techniques and test types (unit, integration, system, acceptance), plus automation and metrics for assessment. The goal is more reliable software, reduced risk, and early detection of regressions.

Benefits

  • Early bug detection reduces correction costs.
  • Improved software quality and user satisfaction.
  • Better decision basis for releases and risk management.

Limitations

  • Complete test coverage is often not economically achievable.
  • Automation requires initial effort and maintenance.
  • Poor test quality can give false confidence.

Metrics

  • Test coverage

    Percentage of code or requirements covered by tests.

  • Mean Time to Detect (MTTD)

    Average time to detect a defect after it is introduced.

  • Production defect rate

    Number of critical defects per release in production.
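A sketch of how two of these metrics might be computed from hypothetical defect records; the record layout (introduced, detected, found-in-production) is an assumption for illustration, not a prescribed schema:

```python
from datetime import datetime

# Hypothetical defect records: (introduced, detected, found_in_production)
defects = [
    (datetime(2024, 1, 1), datetime(2024, 1, 3), False),
    (datetime(2024, 1, 2), datetime(2024, 1, 10), True),
    (datetime(2024, 1, 5), datetime(2024, 1, 6), True),
]

def mean_time_to_detect(records):
    """Average days between a defect's introduction and its detection (MTTD)."""
    deltas = [(found - made).days for made, found, _ in records]
    return sum(deltas) / len(deltas)

def production_defect_rate(records, releases):
    """Production defects per release."""
    prod = sum(1 for *_, in_prod in records if in_prod)
    return prod / releases

assert mean_time_to_detect(defects) == (2 + 8 + 1) / 3
assert production_defect_rate(defects, releases=2) == 1.0
```

In practice these figures would be derived from the issue tracker and release history rather than hand-built tuples.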

Use cases & scenarios

Unit-first strategy in microservices

Teams perform extensive unit tests per service and integrate them into CI to obtain fast feedback.
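A hedged sketch of such a per-service unit test, using `unittest.mock` to isolate a hypothetical HTTP client so the test stays fast and network-free; `order_total` and `get_price` are invented names:

```python
from unittest.mock import Mock

# Hypothetical service-layer function: sums prices fetched via a client.
def order_total(client, item_ids):
    prices = [client.get_price(i) for i in item_ids]
    return sum(prices)

def test_order_total_without_network():
    # The real HTTP client is replaced, so the test is isolated and fast.
    client = Mock()
    client.get_price.side_effect = [10, 5]
    assert order_total(client, ["a", "b"]) == 15
    client.get_price.assert_called_with("b")  # last call checked
```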

End-to-end tests for user flows

E2E tests validate complete user flows in a reproducible test environment before release.
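Illustrative only: a real E2E test would drive the deployed system through a browser or API driver (e.g. Selenium); this in-memory stand-in merely shows the shape of a user-flow assertion:

```python
# FakeShop is a hypothetical in-memory stand-in for the system under test.
class FakeShop:
    def __init__(self):
        self.cart, self.orders = [], []

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self):
        self.orders.append(list(self.cart))
        self.cart.clear()
        return len(self.orders)  # order number

def test_purchase_flow_end_to_end():
    shop = FakeShop()
    shop.add_to_cart("book")
    order_no = shop.checkout()
    # The whole flow is verified: order recorded, cart emptied.
    assert order_no == 1
    assert shop.orders == [["book"]]
    assert shop.cart == []
```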

Test pyramid for prioritization

The test pyramid prioritizes unit over integration and UI tests to balance speed and coverage.

Implementation steps

1. Define test goals and acceptance criteria based on requirements.
2. Set up an initial test infrastructure with CI integration.
3. Prioritize and implement test cases (unit → integration → E2E).
4. Introduce metrics, monitoring and regular test maintenance.
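Step 2 (CI integration) could look like the following GitHub Actions sketch; job name, Python version and the `slow` marker are placeholder assumptions to adapt per project:

```yaml
# Sketch only — adjust versions, dependencies and marker names.
name: tests
on: [push, pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest -m "not slow" --maxfail=1   # fast feedback first
```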

⚠️ Technical debt & bottlenecks

  • Missing test coverage in critical modules.
  • Monolithic test environments with slow runtimes.
  • Outdated test data and missing data strategies.

Bottlenecks

  • Slow test runtimes
  • Flaky tests
  • Insufficient test data

Anti-patterns

  • Automating all tests without prioritization leads to high maintenance.
  • Measuring only test coverage as a quality indicator.
  • Running tests only at the end of the project (late testing).

Challenges

  • Unclear acceptance criteria make test design hard.
  • Dependencies on external systems without isolation cause unstable tests.
  • Overreliance on manual smoke checks.

Required skills

  • Test design and analysis techniques
  • Automation scripting and tool knowledge
  • Domain and architecture understanding

Enabling qualities

  • Fast feedback cycles
  • Automatability and repeatability
  • Observability and measurability

Constraints

  • Limited resources for test infrastructure
  • Regulatory requirements for test documentation
  • Legacy systems with hard-to-test architecture