Cloud Native
A conceptual approach for designing, delivering and operating applications optimized for elastic cloud environments.
Classification
- Complexity: High
- Impact area: Organizational
- Decision type: Architectural
- Organizational maturity: Intermediate
Technical context
Principles & goals
Use cases & scenarios
Compromises
- Lack of platform ownership leads to sprawl and inefficiency.
- Misplaced expectations of "the cloud" can cause uncontrolled cost increases.
- Security gaps arise from uncoordinated configurations and permissions.
Countermeasures:
- Consistent use of declarative infrastructure and GitOps workflows (see the sketch after this list).
- Secure default configurations and automated security checks.
- Provide and document platform APIs for developers.
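As a hedged illustration of the declarative, GitOps-style workflow recommended above, the sketch below compares a desired state (as it might be declared in a Git repository) against the observed state and derives the actions needed to converge them. The function and state names are illustrative and not tied to any specific tool such as Argo CD or Flux.

```python
# Minimal sketch of a GitOps-style reconciliation step (illustrative names,
# not the logic of any particular tool).

def plan_actions(desired: dict[str, dict], observed: dict[str, dict]) -> list[str]:
    """Compare declared (desired) and observed state and list converging actions."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name} with {spec}")
        elif observed[name] != spec:
            actions.append(f"update {name} to {spec}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")  # prune drift that is not declared in Git
    return actions

# Desired state as it might be rendered from manifests stored in Git.
desired = {"checkout-api": {"image": "checkout:1.4.2", "replicas": 3}}
# Observed state as reported by the runtime environment.
observed = {"checkout-api": {"image": "checkout:1.4.1", "replicas": 3},
            "legacy-job": {"image": "job:0.9", "replicas": 1}}

for action in plan_actions(desired, observed):
    print(action)
```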
I/O & resources
Inputs:
- Cloud infrastructure or cluster provider
- Container image registry and CI/CD pipeline
- Observability tooling and SLO definitions
Outputs:
- Platform-backed deployments and self-service features
- Metrics for availability, latency, and cost
- Automated scaling and resilience mechanisms (see the sketch after this list)
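To make the automated-scaling output more concrete, the following sketch applies the proportional scaling rule used by horizontal autoscalers such as the Kubernetes HPA: the desired replica count scales the current count by the ratio of observed to target utilization. The utilization figures and the replica cap are illustrative assumptions.

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float, max_replicas: int = 10) -> int:
    """Proportional scaling rule (HPA-style):
    desired = ceil(current * observed / target), bounded to [1, max_replicas]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(1, min(desired, max_replicas))

# Example: 4 replicas at 90% CPU with a 60% target suggests scaling out to 6.
print(desired_replicas(current_replicas=4, current_utilization=0.9,
                       target_utilization=0.6))
```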
Description
Cloud Native describes principles for designing, building, and operating applications that run in elastic cloud environments. It emphasizes containerization, microservices, dynamic orchestration, and declarative infrastructure. The goal is high scalability, resilience and rapid delivery through platforms and automated operational practices.
✔ Benefits
- Improved scalability via elastic infrastructure and automatic scaling.
- Faster time-to-market due to containerization and CI/CD pipelines.
- Higher reliability through isolation and orchestrated recovery (see the sketch after this list).
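The reliability benefit rests on mechanisms that detect failures and recover automatically. As a minimal sketch of that principle (not the recovery logic of any particular orchestrator), the code below retries a flaky call with exponential backoff and jitter; `flaky_call` is a made-up stand-in for an unreliable dependency.

```python
import random
import time

def with_retries(operation, attempts: int = 5, base_delay: float = 0.2):
    """Retry a failing operation with exponential backoff and jitter,
    mirroring how platforms and clients recover from transient faults."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Illustrative flaky dependency that succeeds roughly half the time.
def flaky_call():
    if random.random() < 0.5:
        raise ConnectionError("upstream temporarily unavailable")
    return "ok"

print(with_retries(flaky_call))
```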
✖ Limitations
- Increased operational overhead and required platform expertise.
- Complexity in distributed systems (networking, consistency, debugging).
- Not all legacy applications are economically suitable for migration.
Trade-offs
Metrics
- Mean Time To Recovery (MTTR): time to recover after an outage.
- Release frequency: number of deployments per time unit.
- Cost per user session: operational costs relative to user activity (see the sketch after this list).
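One hedged way to derive these metrics from basic incident, deployment, and billing records is sketched below; all field names and figures are invented for illustration.

```python
from datetime import datetime, timedelta

# Illustrative raw records; in practice these come from incident tooling,
# the CI/CD system, and the cloud provider's billing export.
incidents = [  # (start, end) of outages
    (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 9, 40)),
    (datetime(2024, 5, 20, 14, 0), datetime(2024, 5, 20, 14, 20)),
]
deployments = 42             # deployments in the observation window
window = timedelta(days=30)  # observation window
cloud_cost_eur = 12_500.0    # operating cost in the window
user_sessions = 1_250_000    # user sessions in the window

mttr = sum(((end - start) for start, end in incidents), timedelta()) / len(incidents)
release_frequency = deployments / (window.days / 7)  # deployments per week
cost_per_session = cloud_cost_eur / user_sessions

print(f"MTTR: {mttr}")
print(f"Release frequency: {release_frequency:.1f} per week")
print(f"Cost per user session: {cost_per_session * 100:.3f} cents")
```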
Examples & implementations
- Kubernetes-based platform at a payments provider: Kubernetes and a service-oriented architecture used to improve scalability and resilience.
- Microservice architecture at a media company: a monolith decomposed into small services with independent deployability and observability.
- Platform-as-a-Service for developer teams: an internal platform provides self-service for deployments, CI/CD, and monitoring (see the sketch after this list).
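As a sketch of what self-service can mean on such an internal platform, the following hypothetical request model validates a team's deployment request before handing it to the delivery pipeline. The `DeploymentRequest` type, the allowed environments, and the policy limits are assumptions, not a real platform API.

```python
from dataclasses import dataclass

ALLOWED_ENVIRONMENTS = {"dev", "staging", "prod"}  # illustrative policy

@dataclass
class DeploymentRequest:
    """Hypothetical self-service request a product team submits to the platform."""
    service: str
    image: str
    environment: str
    replicas: int = 2

    def validate(self) -> list[str]:
        errors = []
        if self.environment not in ALLOWED_ENVIRONMENTS:
            errors.append(f"unknown environment '{self.environment}'")
        if ":" not in self.image or self.image.endswith(":latest"):
            errors.append("image must be pinned to an explicit tag")
        if not 1 <= self.replicas <= 20:
            errors.append("replicas must be between 1 and 20")
        return errors

request = DeploymentRequest(service="checkout-api", image="checkout:1.4.2",
                            environment="staging", replicas=3)
print(request.validate() or "accepted, handing over to the delivery pipeline")
```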
Implementation steps
1. Define vision and goals; prioritize critical use cases.
2. Create a platform backlog and assign responsibilities.
3. Provision base infrastructure (cluster, registry, CI).
4. Migrate incrementally; introduce observability and SLOs (see the sketch after this list).
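Step 4 ties migration to observability and SLOs. The sketch below shows, with made-up numbers, how an availability SLO translates into an error budget and how much of that budget the observed error rate consumes.

```python
def error_budget_report(slo: float, total_requests: int, failed_requests: int) -> str:
    """Relate an availability SLO to the observed error rate.
    Example: a 99.9% SLO allows 0.1% of requests to fail in the window."""
    budget = 1.0 - slo                      # allowed failure ratio
    observed = failed_requests / total_requests
    consumed = observed / budget            # share of the budget already spent
    return (f"SLO {slo:.3%}: error budget {budget:.3%}, "
            f"observed error rate {observed:.3%}, "
            f"budget consumed {consumed:.0%}")

# Illustrative numbers for a 30-day window.
print(error_budget_report(slo=0.999, total_requests=10_000_000, failed_requests=6_000))
```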
⚠️ Technical debt & bottlenecks
Technical debt
- Insufficiently automated deployments in early phases.
- Non-standardized configurations across clusters.
- Missing tests for platform operator actions (see the sketch after this list).
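Even small automated checks reduce these debts; the sketch below is a hypothetical unit test asserting that every cluster configuration carries a common baseline before operator actions are applied. The required keys are assumptions for illustration.

```python
import unittest

# Hypothetical baseline every cluster configuration must satisfy.
REQUIRED_KEYS = {"owner", "cost_center", "log_retention_days"}

def validate_cluster_config(config: dict) -> list[str]:
    """Return the baseline keys missing from a cluster configuration."""
    return sorted(REQUIRED_KEYS - config.keys())

class ClusterConfigBaselineTest(unittest.TestCase):
    def test_standardized_config_passes(self):
        config = {"owner": "team-payments", "cost_center": "4711",
                  "log_retention_days": 30}
        self.assertEqual(validate_cluster_config(config), [])

    def test_missing_keys_are_reported(self):
        self.assertEqual(validate_cluster_config({"owner": "team-media"}),
                         ["cost_center", "log_retention_days"])

if __name__ == "__main__":
    unittest.main()
```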
Known bottlenecks
Misuse examples
- Containerization used only as packaging, without CI/CD integration or observability.
- Use of cloud features without cost monitoring, leading to budget overruns.
- Migrating all systems at once instead of incrementally, which carries high risk.
Typical traps
- Unclear ownership of platform components leads to diffusion of responsibility.
- Missing SLOs and observability prevent targeted improvements.
- Ignoring data and integration requirements during design.
Required skills
Architectural drivers
Constraints
- Regulatory requirements may limit cloud usage.
- Budget limits and cost control require governance.
- Legacy integrations can complicate migration.