Catalog
concept#Software Engineering#Architecture#Reliability

Concurrency

Concepts and patterns for executing program parts concurrently, including synchronization and concurrency control.

Concurrency is the study and practice of executing multiple computations in overlapping time periods, including threads, processes and asynchronous tasks.
Established
High

Classification

  • High
  • Technical
  • Architectural
  • Intermediate

Technical context

  • Operating system threads and scheduler
  • Language runtimes such as the JVM or CLR
  • Message brokers and asynchronous queues

Principles & goals

  • Minimize shared state and prefer immutability.
  • Use explicit synchronization only where necessary; prefer simpler models.
  • Design for fault tolerance, idempotence and safe retries.
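The first of these principles can be sketched in code. A minimal Python example (the chunked word count and the `total_words` helper are illustrative assumptions, not part of the source): each task works only on its own input and returns a value, so no shared mutable state ever needs a lock — combining results happens in one place after all tasks finish.

```python
from concurrent.futures import ThreadPoolExecutor

# Each task returns its own partial result instead of mutating
# shared state; the combine step runs once, after all tasks finish.
def count_words(chunk):
    return sum(len(line.split()) for line in chunk)

def total_words(chunks):
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(count_words, chunks))

chunks = [("a b", "c"), ("d e f",), ("g",)]
print(total_words(chunks))  # 7
```

Because no task writes to memory another task reads, this sketch needs no synchronization primitives at all.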
Build
Domain, Team

Use cases & scenarios

Compromises

Risks

  • Deadlocks can block resources and cause service outages.
  • Race conditions cause inconsistent states and data corruption.
  • Improper use of concurrency primitives increases maintenance difficulty.

Mitigations

  • Use immutable data structures where possible.
  • Prefer higher abstractions (actors, tasks) over raw locks.
  • Automate deterministic tests and stress tests.
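As a sketch of preferring higher abstractions over raw locks, a thread-safe `queue.Queue` can coordinate a producer and a consumer without any explicit lock in application code (the doubling worker and sentinel value are made-up illustrations, assuming Python as the implementation language):

```python
import queue
import threading

# The queue handles all synchronization internally; application
# code never touches a lock. A None sentinel tells the worker to stop.
def consume(q, results):
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)

q = queue.Queue()
results = []
worker = threading.Thread(target=consume, args=(q, results))
worker.start()
for i in range(5):
    q.put(i)
q.put(None)
worker.join()
print(results)  # [0, 2, 4, 6, 8]
```

With a single consumer, results arrive in FIFO order; the same structure scales to multiple workers, at which point ordering is no longer guaranteed.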

I/O & resources

  • Requirements profile (latency, throughput, data consistency)
  • System architecture and existing runtime environment
  • Workload characteristics (I/O-bound vs CPU-bound)
  • Design decisions on models and synchronization mechanisms
  • Test strategy with deterministic and heuristic tests
  • Metrics and monitoring for throughput, latency and errors

Description

Concurrency is the study and practice of executing multiple computations in overlapping time periods, including threads, processes and asynchronous tasks. It covers synchronization, coordination, and hazards like race conditions and deadlocks. The goal is to ensure correctness and performance when resources are accessed concurrently across components and systems.

  • Better utilization of CPU and I/O resources through parallelism.
  • Improved system responsiveness and throughput.
  • Scalability by distributing work across multiple execution units.

  • Increased design and testing effort due to concurrency bugs.
  • Race conditions and Heisenbugs are difficult to reproduce.
  • Synchronization overhead can reduce performance.

  • Throughput

    Number of processed requests per time unit under concurrent load.

  • Latency

    Time to complete a request, especially under contention.

  • Concurrency-related defects

    Count of discovered race conditions, deadlocks and inconsistencies in operation and tests.
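Throughput and latency can be measured together under concurrent load. A minimal Python sketch (the simulated handler and pool sizes are assumptions for illustration): each request records its own latency, while throughput is completed requests divided by wall-clock time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Latency is measured per request; throughput is requests completed
# per unit of wall-clock time across the whole concurrent run.
def handle_request(i):
    start = time.perf_counter()
    time.sleep(0.01)  # simulated I/O-bound work
    return time.perf_counter() - start

n = 20
wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    latencies = list(pool.map(handle_request, range(n)))
wall = time.perf_counter() - wall_start
print(f"throughput: {n / wall:.0f} req/s, "
      f"max latency: {max(latencies) * 1000:.1f} ms")
```

In production, the same two numbers would come from metrics and monitoring rather than inline timers, but the relationship holds: adding workers raises throughput only until contention pushes per-request latency up.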

Web server with worker threads

An HTTP server uses a pool of worker threads to handle requests in parallel.
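A minimal sketch of this scenario, assuming Python's standard library (`ThreadingHTTPServer` spawns a thread per connection rather than using a fixed-size pool, which is close enough to illustrate the pattern):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Each incoming connection is handled on its own thread, so one slow
# request does not block other clients.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a worker thread"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"
response = urllib.request.urlopen(url).read().decode()
print(response)
server.shutdown()
```

A production server would bound the number of threads (e.g. with a pool) to cap memory use and contention under load.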

Real-time trading system

Parallel processing of market feeds and order handling with focus on latency and consistency.

Map-reduce job

Data is distributed across workers, reduced and aggregated.
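A toy word-count version of this scenario, sketched in Python (a thread pool stands in for distributed workers; for CPU-bound work, processes or separate machines would avoid the interpreter lock):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

# Map phase: each worker counts words in its own chunk independently.
def map_chunk(chunk):
    return Counter(word for line in chunk for word in line.split())

# Reduce phase: partial counters are merged into one aggregate result.
def word_count(chunks):
    with ThreadPoolExecutor() as pool:
        return reduce(lambda a, b: a + b, pool.map(map_chunk, chunks), Counter())

print(word_count([["a b a"], ["b c"]]))
```

The key property is that map tasks share nothing and the reduce operation is associative, so the work parallelizes without locks.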

  1. Requirements analysis: identify which parts need parallelization.
  2. Choose model: select threading, actor model, or async tasks.
  3. Design: minimize shared state, define clear interfaces.
  4. Implementation: apply primitives and libraries consistently.
  5. Test and monitor: set up deterministic tests, load tests and runtime monitoring.
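For the testing step, one way to make contention likely instead of accidental is to release all threads at once with a barrier — a simple stress-test sketch in Python (thread and operation counts are arbitrary assumptions):

```python
import threading

# A Barrier releases every thread at the same instant, maximizing
# contention on the shared counter instead of leaving it to chance.
def stress(n_threads=8, n_ops=1000):
    barrier = threading.Barrier(n_threads)
    lock = threading.Lock()
    total = 0

    def worker():
        nonlocal total
        barrier.wait()  # all threads start together
        for _ in range(n_ops):
            with lock:
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

print(stress())  # 8000 on every run if synchronization is correct
```

Running such a test repeatedly (and with the lock removed, to confirm it can fail) gives more confidence than a single passing run, since concurrency bugs are probabilistic.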

⚠️ Technical debt & bottlenecks

  • Legacy code with global state that is hard to parallelize.
  • Insufficient concurrency tests causing later defects.
  • Ad-hoc synchronization without documentation or standard libraries.
Mutex contention · I/O bound · Lock convoy
  • Undertested concurrent changes deployed to production without load tests.
  • Neglected error handling in async tasks leading to silent data loss.
  • Using locks to solve every synchronization issue without design review.
  • False confidence in tests without covering non-deterministic scenarios.
  • Underestimating overhead from context switches and synchronization.
  • Lack of monitoring so sporadic deadlocks remain undetected.
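The deadlock risk listed above typically arises from acquiring locks in inconsistent order. A minimal avoidance sketch in Python (the account-transfer scenario is an illustrative assumption): imposing one global acquisition order removes the circular wait that deadlocks require.

```python
import threading

# Two crossing transfers (a→b and b→a) deadlock if each thread grabs
# its source lock first. Sorting by account id fixes one global
# acquisition order, so a circular wait can never form.
balances = {"a": 100, "b": 50}
locks = {"a": threading.Lock(), "b": threading.Lock()}

def transfer(src, dst, amount):
    first, second = sorted((src, dst))  # fixed order, not call order
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

transfer("a", "b", 30)
transfer("b", "a", 10)
print(balances)  # {'a': 80, 'b': 70}
```

Lock ordering is one of several remedies; others include lock timeouts, try-lock with backoff, or avoiding multiple locks entirely via message passing.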
  • Experience in concurrent programming and synchronization primitives
  • Debugging and analysis skills for race conditions and deadlocks
  • Knowledge of performance measurement and load testing
  • Low latency
  • High throughput
  • Fault tolerance and robustness
  • Hardware limits: CPU cores, memory and cache coherence.
  • The memory model of the language and runtime constrains observable behaviour.
  • Regulatory requirements for consistency and availability.