Serverless Computing
An operational and architectural paradigm where cloud providers manage runtimes and scaling, while developers focus on functions and event-driven logic.
Classification
- Complexity: Medium
- Impact area: Technical
- Decision type: Architectural
- Organizational maturity: Intermediate
Compromises
- Vendor lock-in from provider-specific features
- Unpredictable costs at high invocation volumes without controls
- Complexity in debugging and distributed failures
Recommendations
- Keep functions short, stateless, and idempotent (see the sketch after this list)
- Use dead-letter queues for failure handling
- Configure limits and cost budgets early
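The following sketch illustrates the first two recommendations, assuming at-least-once event delivery. The event fields, the in-memory dedup set (a stand-in for a durable key-value store), and the `do_work` helper are hypothetical; failures raised out of the handler would be re-driven by the platform and, with a dead-letter queue configured on the trigger, eventually parked there.

```python
processed_ids = set()  # stand-in for a durable store (e.g. a key-value table)


def handler(event, context):
    """Idempotent, stateless handler: repeated delivery of the same event
    produces the same result and no duplicate side effects."""
    event_id = event["id"]  # hypothetical field carrying a unique event id

    # Skip events that were already handled (at-least-once delivery is common).
    if event_id in processed_ids:
        return {"status": "duplicate", "id": event_id}

    result = do_work(event)          # short, self-contained business logic
    processed_ids.add(event_id)      # in production: conditional write to the durable store
    return {"status": "ok", "id": event_id, "result": result}


def do_work(event):
    # Placeholder for the actual processing step.
    return len(event.get("payload", ""))
```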
I/O & resources
Inputs
- Cloud account and permissions
- Definition of events and triggers
- Observability and monitoring stack
Outputs
- Scalable functions with monitoring data
- Usage-based cost reports
- Automated error-handling paths
Description
Serverless computing describes a cloud execution model in which applications run in provider-managed runtime environments and developers do not manage server infrastructure. It emphasizes event-driven functions, automatic scaling, and pay-per-use billing; it reduces operational burden and shifts architectural and development decisions across teams and organizations.
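As a minimal illustration of the model, the sketch below uses the AWS-Lambda-style `handler(event, context)` entry point; other providers expose similar signatures. The payload fields and the discount logic are purely hypothetical.

```python
import json


def handler(event, context):
    """Entry point invoked by the platform for each event.

    The provider manages the runtime, scaling and billing; the code only
    reacts to the event payload. Field names below are illustrative.
    """
    order_id = event.get("orderId")          # hypothetical payload field
    total = float(event.get("total", 0.0))   # hypothetical payload field

    # Pure, stateless business logic; no servers or processes to manage.
    discounted = round(total * 0.9, 2)

    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order_id, "discountedTotal": discounted}),
    }
```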
✔ Benefits
- Reduced infrastructure operational burden
- Fine-grained scaling and cost efficiency for variable load
- Faster iteration by focusing on code instead of servers
✖ Limitations
- Limits on execution duration and resources per invocation
- Potential cold-start latencies
- Challenges with long-running or stateful workloads
Metrics
- Cold-start latency
Time until first usable response after inactivity; relevant for latency SLAs.
- Cost per million invocations
Monetary metric to estimate usage-dependent costs (a worked example follows this list).
- Error rate per invocation
Share of failed executions; indicator for reliability and resilience.
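A worked sketch of the cost metric: request charges plus compute time billed in GB-seconds. The default prices below are placeholders for illustration, not current list prices of any provider.

```python
def monthly_cost(invocations, avg_duration_ms, memory_gb,
                 price_per_million_requests=0.20, price_per_gb_second=0.0000166667):
    """Estimate usage-based cost: request charges plus compute (GB-seconds).

    The default prices are placeholders for illustration only.
    """
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * memory_gb
    compute_cost = gb_seconds * price_per_gb_second
    return round(request_cost + compute_cost, 2)


# Example: 50 million invocations, 120 ms average duration, 0.5 GB memory
print(monthly_cost(50_000_000, 120, 0.5))
```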
Examples & implementations
File processing via object storage trigger
Upload triggers a function that processes images and stores metadata.
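A hedged sketch of this flow in the style of an S3-triggered AWS Lambda function; the metadata table name is an assumption, Pillow is assumed to be packaged with the function, and error handling is omitted for brevity.

```python
import io

import boto3
from PIL import Image  # assumes Pillow is packaged with the function

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("image-metadata")  # hypothetical table name


def handler(event, context):
    """Triggered by an object-created event; reads the image and stores metadata."""
    for record in event["Records"]:                      # S3 event structure
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        width, height = Image.open(io.BytesIO(body)).size

        table.put_item(Item={"key": key, "bucket": bucket,
                             "width": width, "height": height})
```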
Real-time notifications via event streams
Events generate notifications distributed by serverless functions.
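A hedged sketch in the style of a Kinesis-triggered function that fans events out as SNS notifications; the topic ARN and payload shape are assumptions.

```python
import base64
import json

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-central-1:123456789012:order-events"  # hypothetical


def handler(event, context):
    """Consumes a batch of stream records and fans out notifications."""
    for record in event["Records"]:
        # Kinesis delivers base64-encoded payloads.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Order update",
            Message=json.dumps(payload),
        )
```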
Webhook-driven API endpoints
External services send webhooks that trigger functions for processing and forwarding.
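A hedged sketch of such an endpoint in the style of an API-Gateway proxy integration; the signature header name, the environment variables, and the SQS hand-off are assumptions.

```python
import hashlib
import hmac
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["WEBHOOK_QUEUE_URL"]      # hypothetical environment variable
SECRET = os.environ["WEBHOOK_SECRET"].encode()   # shared secret with the sender


def handler(event, context):
    """API-Gateway-style handler: verify the webhook signature, then forward."""
    body = event.get("body") or ""
    signature = event.get("headers", {}).get("x-signature", "")

    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return {"statusCode": 401, "body": "invalid signature"}

    # Hand off quickly; downstream processing happens asynchronously.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```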
Implementation steps
Analyze workloads suitable for serverless
Create a prototype with typical event flow
Introduce monitoring, retries with backoff, and cost alerts (see the retry sketch below)
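A provider-agnostic sketch of the retry handling in the last step: bounded, jittered exponential backoff around a flaky downstream call. The wrapped `call` is a placeholder; cost alerts and budgets would be configured in the provider's billing tooling rather than in function code.

```python
import random
import time


def with_backoff(call, max_attempts=4, base_delay=0.2):
    """Retry a flaky downstream call with bounded, jittered exponential backoff.

    Unbounded retries without backoff are a common misuse; keep the attempt
    count small so the function stays within its execution-time limit.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise                      # let the platform route the event to the DLQ
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            time.sleep(delay)
```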
⚠️ Technical debt & bottlenecks
Technical debt
- Lock-in via provider-specific SDKs (see the adapter sketch after this list)
- Orphaned functions without lifecycle management
- Lack of observability standards for functions
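One way to contain the SDK lock-in listed above is a ports-and-adapters seam that confines provider calls to one module; the sketch below assumes an object-store use case, and the class and method names are illustrative.

```python
from typing import Protocol

import boto3


class ObjectStore(Protocol):
    """Provider-neutral port used by the business logic."""
    def read(self, key: str) -> bytes: ...
    def write(self, key: str, data: bytes) -> None: ...


class S3ObjectStore:
    """Adapter that confines the provider-specific SDK to one module."""
    def __init__(self, bucket: str):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def read(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    def write(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)
```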
Known bottlenecks
Misuse examples
- Running long-running DB migrations in functions
- Distributing large binaries as direct function packages
- Configuring unlimited retries without backoff
Typical traps
- Ignoring cold-start strategies for latency-sensitive requirements (see the initialization sketch after this list)
- Neglecting provider limits and throttling
- Missing end-to-end tests for distributed flows
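A common cold-start mitigation is to move heavy initialization to module scope so it runs once per execution environment rather than on every invocation; the sketch below assumes an AWS-style handler and a hypothetical table name.

```python
import json
import os

import boto3

# Module-level initialization runs once per execution environment, so warm
# invocations skip it; only cold starts pay this cost.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))  # hypothetical


def handler(event, context):
    """Reuses the pre-initialized client instead of recreating it per call."""
    item_id = event.get("id", "unknown")
    table.put_item(Item={"id": item_id, "payload": json.dumps(event)})
    return {"statusCode": 200}
```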
Architectural drivers
Constraints
- Maximum execution duration per function depends on the provider (see the deadline-aware sketch below)
- Limited resources per invocation (memory/CPU)
- Provider-specific APIs and configurations
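To work within the duration constraint, a handler can check its remaining time budget and stop early; the sketch below uses the AWS-Lambda-style `context.get_remaining_time_in_millis()` call, and the per-item `process` step is a placeholder.

```python
SAFETY_MARGIN_MS = 5_000  # stop well before the hard execution limit


def handler(event, context):
    """Process items in chunks and stop before the platform deadline.

    `context.get_remaining_time_in_millis()` is the AWS-Lambda-style API;
    other providers expose similar deadline information.
    """
    items = event.get("items", [])
    processed = 0

    for item in items:
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break                      # leftover items should be re-queued
        process(item)                  # placeholder for the actual work
        processed += 1

    return {"processed": processed, "remaining": len(items) - processed}


def process(item):
    pass  # hypothetical per-item processing step
```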