Catalog
concept#Cloud#Platform#Architecture#DevOps

Serverless Computing

An operational and architectural paradigm where cloud providers manage runtimes and scaling, while developers focus on functions and event-driven logic.

Serverless computing describes a cloud execution model where applications run in provider-managed runtime environments and developers do not manage server infrastructure.
Established
Medium

Classification

  • Medium
  • Technical
  • Architectural
  • Intermediate

Technical context

  • Object storage (e.g., S3)
  • Message and event systems (e.g., Kafka, SNS)
  • API gateways and authentication services

Principles & goals

  • Design for short-lived executions and idempotency
  • Decoupling via events
  • Observability and automatic scaling as first-class requirements
Build
Enterprise, Domain, Team
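
The design principles above can be sketched as a minimal handler: short-lived, stateless, and safe to retry. The event shape and the in-memory dedupe store are illustrative assumptions; a real function would use a durable store for idempotency keys.

```python
# Minimal sketch of an idempotent, short-lived event handler.
# The event shape and the in-memory dedupe set are illustrative assumptions;
# in production the processed-IDs set would be a durable store (e.g., a table).

processed_ids = set()

def handler(event):
    """Process one event; safe to retry because duplicates are skipped."""
    event_id = event["id"]
    if event_id in processed_ids:        # idempotency: ignore replayed events
        return {"status": "duplicate", "id": event_id}
    result = event["payload"].upper()    # placeholder for the real, short-lived work
    processed_ids.add(event_id)          # record completion after success
    return {"status": "ok", "id": event_id, "result": result}
```

Because the handler records completion, an at-least-once event source can redeliver the same event without side effects.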

Use cases & scenarios

Compromises

  • Vendor lock-in from provider-specific features
  • Unpredictable costs at high invocation volumes without controls
  • Complexity in debugging and distributed failures

Mitigations

  • Keep functions short, stateless and idempotent
  • Use dead-letter queues for failure handling
  • Configure limits and cost budgets early
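
The dead-letter-queue mitigation can be sketched in miniature: bound the retries, and route events that still fail to a DLQ for later inspection. The `process` callback and the list standing in for a queue are illustrative assumptions; real DLQs are provider features configured on the function or event source.

```python
# Sketch of bounded retries with a dead-letter queue (DLQ).
# `process` and the list used as a DLQ are illustrative stand-ins;
# actual DLQs are provider-managed queues configured per function.

def process_with_dlq(event, process, dlq, max_attempts=3):
    """Try processing up to max_attempts; route persistent failures to the DLQ."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return process(event)
        except Exception as exc:
            last_error = exc
    dlq.append({"event": event, "error": str(last_error)})  # keep the event for replay
    return None
```

Keeping the failed event (not just the error) in the DLQ is what makes later replay possible once the defect is fixed.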

I/O & resources

Inputs

  • Cloud account and permissions
  • Definition of events and triggers
  • Observability and monitoring stack

Outputs

  • Scalable functions with monitoring data
  • Usage-based cost reports
  • Automated error-handling paths

Description

Serverless computing describes a cloud execution model in which applications run in provider-managed runtime environments and developers do not manage server infrastructure. It emphasizes event-driven functions, automatic scaling and pay-per-use billing; this reduces operational burden and shifts architectural and development decisions across teams and organizations.

  • Reduced infrastructure operational burden
  • Fine-grained scaling and cost efficiency for variable load
  • Faster iteration by focusing on code instead of servers

  • Limits on execution duration and resources per invocation
  • Potential cold-start latencies
  • Challenges with long-running or stateful workloads

  • Cold-start latency

    Time until first usable response after inactivity; relevant for latency SLAs.

  • Cost per million invocations

    Monetary metric to estimate usage-dependent costs.

  • Error rate per invocation

    Share of failed executions; indicator for reliability and resilience.
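
The cost-per-million-invocations metric lends itself to a back-of-the-envelope estimate. The price constants below are illustrative assumptions, not any provider's actual rates; the formula (request charge plus GB-seconds of compute) mirrors the common usage-based billing model.

```python
# Back-of-the-envelope estimate for usage-based serverless billing.
# Price constants are illustrative assumptions, not actual provider rates.

PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate monthly cost from invocation count, duration, and memory size."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g., 10M invocations/month at 120 ms average with 256 MB memory
print(round(monthly_cost(10_000_000, 120, 256), 2))
```

Running such an estimate early makes the "unpredictable costs at high invocation volumes" compromise concrete before it shows up on a bill.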

File processing via object storage trigger

Upload triggers a function that processes images and stores metadata.
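
This scenario can be sketched as a handler that receives an S3-style notification; the nested `Records`/`s3` event shape is an assumption modeled on that format, and the actual download/processing of the image is omitted.

```python
# Sketch of an object-storage-triggered function.
# The Records/s3 event shape is assumed (modeled on S3 notifications);
# the storage client and actual image processing are omitted.

def handle_upload(event):
    """React to upload notifications and derive metadata for each object."""
    metadata = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real function: fetch the object, process the image, store results.
        metadata.append({"bucket": bucket, "key": key, "type": key.rsplit(".", 1)[-1]})
    return metadata
```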

Real-time notifications via event streams

Events generate notifications distributed by serverless functions.
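
A minimal sketch of this fan-out pattern, assuming a hypothetical batch shape and a `send` callback standing in for the actual push, email, or chat delivery:

```python
# Sketch of a stream-consumer function fanning events out as notifications.
# The batch shape and the `send` callback are illustrative assumptions.

def fan_out(batch, send):
    """Turn each stream record into one notification per subscriber."""
    sent = 0
    for record in batch["records"]:
        message = f"{record['type']}: {record['detail']}"
        for subscriber in record.get("subscribers", []):
            send(subscriber, message)  # delivery channel is up to the callback
            sent += 1
    return sent
```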

Webhook-driven API endpoints

External services send webhooks that trigger functions for processing and forwarding.
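
Webhook-triggered functions should verify the sender before processing; a common pattern is an HMAC signature over the request body. The shared secret and the exact signature format below are assumptions — check the sending service's documentation for its scheme.

```python
import hashlib
import hmac

# Sketch of webhook verification before triggering downstream processing.
# The secret and hex-digest signature format are assumptions; real services
# document their own signing scheme (header name, prefix, algorithm).

SECRET = b"shared-webhook-secret"  # assumed; normally loaded from a secret store

def verify_and_process(body: bytes, signature: str):
    """Reject webhooks whose HMAC-SHA256 signature does not match the body."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):  # timing-safe comparison
        raise ValueError("invalid webhook signature")
    return {"accepted": True, "size": len(body)}
```

`hmac.compare_digest` is used instead of `==` so that signature comparison does not leak timing information.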

  1. Analyze workloads suitable for serverless
  2. Create a prototype with typical event flow
  3. Introduce monitoring, retries and cost alerts

⚠️ Technical debt & bottlenecks

  • Lock-in via provider-specific SDKs
  • Orphaned functions without lifecycle management
  • Lack of observability standards for functions

Bottlenecks

  • Cold starts
  • Provider limits
  • Network and I/O latencies

Anti-patterns

  • Running long-running DB migrations in functions
  • Distributing large binaries as direct function packages
  • Configuring unlimited retries without backoff
  • Ignoring cold-start strategies for latency requirements
  • Neglecting provider limits and throttling
  • Missing end-to-end tests for distributed flows
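
The alternative to unlimited retries can be sketched as bounded retries with exponential backoff and jitter. The delay values are illustrative; tune the base delay and attempt count to the downstream service's limits.

```python
import random
import time

# Sketch of bounded retries with exponential backoff and jitter, as an
# alternative to configuring unlimited retries. Delay values are illustrative.

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry `operation` with exponentially growing, jittered delays."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                                       # give up; let the DLQ catch it
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

The jitter factor spreads retries from many concurrent invocations over time, so a transient outage does not trigger a synchronized retry storm.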
Required skills

  • Cloud architecture understanding
  • Experience with event-driven systems
  • Knowledge of observability and cost control
Primary goals

  • Event-driven processing
  • Cost efficiency for variable load
  • Minimization of operational burden
  • Maximum execution duration per function depends on provider
  • Limited resources per invocation (memory/CPU)
  • Provider-specific APIs and configurations