Manual Testing
A practical, non-automated testing method for detecting defects, usability issues, and deviations from expected behavior through human inspection.
Classification
- Complexity: Medium
- Impact area: Organizational
- Decision type: Organizational
- Organizational maturity: Intermediate
Compromises
- Lack of automation leads to delayed regression testing.
- Unstructured sessions may result in incomplete coverage.
- Dependence on individual tester knowledge and experience.
Mitigations
- Short, timeboxed exploratory sessions with clear goals.
- Use checklists and session charters for repeatability.
- Log findings immediately into an issue tracker (see the sketch below).
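A minimal sketch of the last mitigation, assuming a generic REST issue tracker; the endpoint, token handling, and payload fields are illustrative assumptions, not any specific product's API.

```python
import requests

# Illustrative endpoint and token -- not a specific tracker's real API.
TRACKER_URL = "https://tracker.example.com/api/issues"
API_TOKEN = "set-me-from-a-secrets-store"

def log_finding(title: str, steps: list[str], severity: str = "minor") -> str:
    """Create an issue for a finding observed during a manual session."""
    payload = {
        "title": title,
        "description": "Steps to reproduce:\n"
        + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1)),
        "labels": ["manual-testing", severity],
    }
    response = requests.post(
        TRACKER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["id"]  # assumes the tracker returns the new issue id
```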
I/O & resources
Inputs
- Requirement or user story descriptions
- Build/release instance and test environment
- Accessible test data and, where needed, user accounts
Outputs
- Defect reports with reproduction steps (a sketch follows this list)
- Summary and detailed reports for decision bodies
- Recommendations for test case automation
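To make the defect-report output concrete, a minimal sketch of one possible structure; the field names are assumptions for illustration, not a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """A finding from a manual session, ready to file in an issue tracker."""
    title: str
    environment: str                  # build/release instance tested against
    steps_to_reproduce: list[str]
    expected: str
    actual: str
    severity: str = "minor"
    attachments: list[str] = field(default_factory=list)  # screenshots, logs

    def to_markdown(self) -> str:
        """Render the report in a form most trackers accept."""
        steps = "\n".join(
            f"{i}. {s}" for i, s in enumerate(self.steps_to_reproduce, start=1)
        )
        return (
            f"## {self.title}\n"
            f"Environment: {self.environment}\n\n"
            f"### Steps to reproduce\n{steps}\n\n"
            f"Expected: {self.expected}\n"
            f"Actual: {self.actual}\n"
        )
```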
Description
Manual testing is a hands-on method for validating software quality through human testers. It covers exploratory, functional, and regression testing without automation, and excels at usability checks, visual inspection, and ad-hoc investigations. It complements automated testing by supporting risk discovery, validation of acceptance criteria, and exploratory test design.
✔ Benefits
- Finds usability and UI issues that are hard to detect automatically.
- Rapid risk discovery for new or unclear requirements.
- Flexible for ad-hoc investigations and exploratory sessions.
✖ Limitations
- Limited scalability: high manual effort for large test surfaces.
- Hard to reproduce results without clear documentation.
- Tester subjectivity can hinder consistent evaluation.
Metrics
- Defect detection rate
Share of defects found per unit of testing effort; indicates the effectiveness of manual sessions.
- Test cycle time
Time from test start to completion for defined scenarios; measures efficiency.
- Escaped defects
Number of defects discovered in production that were not found before release. (A computation sketch follows this list.)
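A minimal sketch of how these metrics could be computed from simple counts and timestamps; the function names and inputs are assumptions for illustration.

```python
from datetime import datetime

def defect_detection_rate(defects_found: int, effort_hours: float) -> float:
    """Defects found per unit of testing effort (here: per hour)."""
    return defects_found / effort_hours

def test_cycle_time(start: datetime, end: datetime) -> float:
    """Elapsed hours from test start to completion for the defined scenarios."""
    return (end - start).total_seconds() / 3600

def escaped_defect_ratio(found_in_production: int, found_before_release: int) -> float:
    """Share of all known defects that were only discovered in production."""
    total = found_in_production + found_before_release
    return found_in_production / total if total else 0.0

# Example: 12 defects in 16 session hours; 3 escapes vs. 12 caught pre-release.
print(defect_detection_rate(12, 16.0))  # 0.75 defects per hour
print(escaped_defect_ratio(3, 12))      # 0.2
```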
Examples & implementations
Exploratory testing for feature launch
A team ran exploratory sessions before launch and discovered several usability issues that were fixed prior to release.
Acceptance test with PO
The product owner interactively validated acceptance criteria; minor adjustments were documented immediately.
Regression test after hotfix
After an urgent hotfix, manual smoke and regression tests were run to identify side effects; recurring checks like these are candidates for automation (see the sketch below).
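As a sketch of that automation candidate: recurring smoke checks could be promoted from a manual checklist to a small pytest suite. The base URL and endpoints here are illustrative assumptions about the test environment.

```python
import requests

BASE_URL = "https://staging.example.com"  # illustrative test environment

def test_health_endpoint_responds():
    """Smoke check: the service is reachable after the hotfix deployment."""
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_login_page_renders():
    """Smoke check: the login page still serves its form."""
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "<form" in response.text
```

Each test function corresponds to one former checklist item, so the manual checklist shrinks as coverage moves into the suite.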
Implementation steps
1. Define goals and focus areas (scope & risks).
2. Prepare the test environment and ensure access.
3. Conduct exploratory and structured sessions (a charter sketch follows this list).
4. Document, prioritize, and track results.
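One way to keep step 3 repeatable is a lightweight session charter; a minimal sketch, assuming a simple in-house structure rather than any particular tool.

```python
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class SessionCharter:
    """Charter for a timeboxed exploratory session."""
    mission: str                                  # what to explore and why
    areas: list[str]                              # focus areas (scope & risks)
    timebox: timedelta = timedelta(minutes=60)
    findings: list[str] = field(default_factory=list)

    def record(self, finding: str) -> None:
        """Log a finding immediately so it can be triaged after the session."""
        self.findings.append(finding)

charter = SessionCharter(
    mission="Probe the checkout flow for usability issues before launch",
    areas=["payment form", "order confirmation"],
)
charter.record("Error message overlaps the submit button on small screens")
```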
⚠️ Technical debt & bottlenecks
Technical debt
- Missing automation for recurring smoke tests.
- No central checklists or session charters established.
- Incomplete documentation of exploratory sessions.
Known bottlenecks
Misuse examples
- Relying solely on manual tests despite a high release cadence.
- Manual testing without prioritization leads to wasted effort.
- Testers fail to document findings sufficiently for reproduction.
Typical traps
- Underestimating the effort for regression testing.
- Confusing exploratory findings with completed tests.
- Lack of context information hampers defect analysis.
Architectural drivers
Constraints
- Time constraints before releases limit scope.
- Not every platform allows full manual checks.
- Experienced testers may be limited in availability.