Concept · Software Engineering · Quality Assurance · DevOps · Reliability

Unit Testing

Isolated, automated tests for individual code units to ensure correctness and prevent regressions.

Established
Low

Classification

  • Low
  • Technical
  • Design
  • Intermediate

Technical context

  • CI/CD systems (e.g., GitLab CI, GitHub Actions)
  • Test frameworks (e.g., JUnit, pytest)
  • Mocking libraries and test doubles

Principles & goals

  • Isolation: Tests should be independent and deterministic (see the sketch below).
  • Speed: Unit tests must be fast to execute.
  • Automation: Tests are integrated into the CI pipeline.
Build
Team, Domain
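
A minimal sketch of what isolation and determinism can look like in a pytest-style test file; is_expired and the fixed timestamps are hypothetical illustration code, not part of any referenced codebase.

    # Hypothetical unit: the current time is passed in as a parameter instead of
    # being read via datetime.now() inside the function, so tests stay deterministic.
    from datetime import datetime, timedelta

    def is_expired(created_at: datetime, ttl: timedelta, now: datetime) -> bool:
        return now - created_at > ttl

    def test_not_expired_within_ttl():
        created = datetime(2024, 1, 1, 12, 0, 0)
        now = datetime(2024, 1, 1, 12, 30, 0)
        assert is_expired(created, timedelta(hours=1), now) is False

    def test_expired_after_ttl():
        created = datetime(2024, 1, 1, 12, 0, 0)
        now = datetime(2024, 1, 1, 14, 0, 0)
        assert is_expired(created, timedelta(hours=1), now) is True

Because the clock value is injected rather than read from the environment, the tests never depend on wall-clock time and run in microseconds.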

Compromises

  • Overfocus on coverage instead of meaningful tests.
  • Flaky tests reduce trust in the test suite.
  • Insufficient tests allow regressions in integration paths.

Best practices

  • Write small, focused tests that verify a single behavior.
  • Use mocks sparingly and only for true external dependencies (see the sketch after this list).
  • Automate test execution in CI and keep runtimes short.
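
A hedged sketch of these practices using Python's standard unittest.mock: the test verifies one behavior, and only the true external boundary (the standard-library SMTP client) is replaced with a mock. send_welcome and the addresses are hypothetical illustration code.

    import smtplib
    from unittest.mock import patch

    def send_welcome(email: str) -> str:
        """Builds the greeting and hands it to the external SMTP boundary."""
        message = f"Welcome, {email}!"
        client = smtplib.SMTP("localhost")
        client.sendmail("noreply@example.com", [email], message)
        client.quit()
        return message

    def test_send_welcome_builds_greeting():
        # Only the external dependency is mocked; the message logic runs for real.
        with patch("smtplib.SMTP") as smtp_cls:
            result = send_welcome("dev@example.com")
        assert result == "Welcome, dev@example.com!"
        smtp_cls.return_value.sendmail.assert_called_once()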

I/O & resources

Inputs:

  • Source code of the unit under test
  • Test framework and assertion libraries
  • Definition of expected interfaces and behavior

Outputs:

  • Automated test cases with assertions
  • Reports on test runs and failures
  • Regression detection on code changes

Description

Unit testing isolates individual software components to verify correctness and detect regressions early. Tests are automated, fast, and designed to run frequently during development and in CI pipelines. Effective unit testing improves design by encouraging modular code and provides quick feedback to developers.

Benefits:

  • Enables early detection of defects during development.
  • Supports refactoring via automated regression checks.
  • Improves code quality and design through focused, modular tests.

Limitations:

  • Does not cover integration or system-level issues.
  • High maintenance effort with frequent interface changes.
  • Misconfigured mocks can give a false sense of security.

Metrics

  • Test runtime

    Average execution time of the unit test suite per commit.

  • Test coverage

    Percentage of code covered by unit tests (line/branch); see the measurement sketch after this list.

  • Failure rate per commit

    Share of commits that trigger unit test failures in CI.
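
A minimal sketch of how the runtime and coverage metrics above might be collected locally, assuming pytest plus the pytest-cov plugin are installed; mypkg is a hypothetical package name.

    # Runs the unit test suite programmatically and reports branch coverage.
    # Equivalent CLI: pytest --cov=mypkg --cov-branch --cov-report=term-missing
    import sys
    import time
    import pytest

    if __name__ == "__main__":
        start = time.perf_counter()
        exit_code = pytest.main([
            "--cov=mypkg",                # hypothetical package under test
            "--cov-branch",               # branch coverage, not just line coverage
            "--cov-report=term-missing",  # list uncovered lines in the terminal
        ])
        print(f"suite runtime: {time.perf_counter() - start:.2f}s")
        sys.exit(exit_code)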

Use cases & scenarios

JUnit suite for a service class

Focus on individual service methods with mocking of dependencies.
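
Since this card's other examples use pytest, here is the same pattern sketched in Python rather than Java: a hypothetical OrderService takes its repository as a constructor argument, so the test can inject a unittest.mock.Mock in place of the real dependency.

    from unittest.mock import Mock

    class OrderService:
        """Hypothetical service whose collaborator is injected via the constructor."""
        def __init__(self, repository):
            self.repository = repository

        def total_for_customer(self, customer_id: int) -> float:
            orders = self.repository.find_by_customer(customer_id)
            return sum(order["amount"] for order in orders)

    def test_total_sums_all_customer_orders():
        repository = Mock()
        repository.find_by_customer.return_value = [
            {"amount": 10.0},
            {"amount": 32.5},
        ]
        service = OrderService(repository)

        assert service.total_for_customer(42) == 42.5
        repository.find_by_customer.assert_called_once_with(42)

Constructor injection mirrors the style commonly used with JUnit and Mockito: the service never creates its own dependency, so the test can substitute a double without patching.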

pytest modules for Python library

Parametrized tests verify varied inputs and edge cases automatically.
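
A brief sketch of that parametrized style; slugify and the expected outputs are hypothetical illustration code.

    import pytest

    def slugify(text: str) -> str:
        """Hypothetical unit: lowercases and joins words with hyphens."""
        return "-".join(text.lower().split())

    @pytest.mark.parametrize("raw,expected", [
        ("Hello World", "hello-world"),          # typical input
        ("  spaced   out  ", "spaced-out"),      # repeated/leading whitespace
        ("already-slugged", "already-slugged"),  # no change needed
        ("", ""),                                # edge case: empty string
    ])
    def test_slugify_handles_varied_inputs(raw, expected):
        assert slugify(raw) == expected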

CI test run against pull request

Automatic test run prevents introducing regressive changes.

Implementation steps

  1. Identify a clear unit and its interfaces.
  2. Set up a test framework and write baseline test cases.
  3. Integrate tests into CI and ensure continuous execution.
  4. Maintain the suite regularly: refactor tests when APIs change.

⚠️ Technical debt & bottlenecks

  • Large monolithic test suites with long runtimes.
  • Insufficiently modular codebase limiting unit testability.
  • Lack of test data isolation causing non-deterministic outcomes.
  • Ongoing concerns: test maintenance, flaky tests, environment parity.
  • Unit tests that are actually integration tests and hit external systems.
  • Excessive focus on coverage metrics without prioritizing critical paths.
  • Mocking the logic under test instead of external dependencies.
  • Lack of maintenance leads to outdated or misleading tests.
  • Ignoring flaky tests instead of fixing them reduces trust.
  • Hidden dependencies between tests lead to hard-to-debug failures (see the sketch below).
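
One way to keep test data isolated and deterministic, sketched with pytest's built-in tmp_path fixture; save_record and load_record are hypothetical helpers.

    import json

    def save_record(path, record):
        """Hypothetical unit under test: persists a record as JSON."""
        path.write_text(json.dumps(record))

    def load_record(path):
        return json.loads(path.read_text())

    def test_round_trip_uses_isolated_data(tmp_path):
        # tmp_path is a fresh, per-test temporary directory provided by pytest,
        # so no test can observe files written by another test.
        target = tmp_path / "record.json"
        save_record(target, {"id": 1, "status": "open"})
        assert load_record(target) == {"id": 1, "status": "open"}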

Required skills

  • Programming skills and modular code design
  • Familiarity with test frameworks and mocking
  • Knowledge of CI integration and test automation

Success factors

  • Modularity of components for isolated testing
  • Fast feedback cycles through short test runtimes
  • Stable interfaces to reduce test maintenance

Constraints

  • Limited access to real dependencies requires mocks/stubs.
  • Legacy code can be hard to isolate.
  • Test data management must be designed to be deterministic.