Type: Concept · Tags: Observability, Reliability, DevOps, Security

Logging

The structured recording of events and state data for debugging, monitoring and compliance.

Classification

  • Established
  • Medium
  • Technical
  • Architectural
  • Intermediate

Technical context

  • Log forwarders such as Fluentd, Logstash or Vector.
  • Indexing and search systems like Elasticsearch or Loki.
  • Monitoring and alerting systems (Prometheus, Grafana).

Principles & goals

  • Standardize format and field names across services.
  • Separate event data from metrics; use structured formats.
  • Define retention and access control based on risk and compliance.

Phase: Run
Scope: Team, Domain, Enterprise

Compromises

  • Exposing sensitive data in logs without masking.
  • Excessive logging load impacts system performance.
  • Incorrect timestamps or missing correlation IDs hinder analysis.

Mitigations

  • Use structured formats (e.g. JSON) and fixed field names.
  • Introduce correlation IDs and propagate them through the request path.
  • Mask or avoid PII before central storage (a masking sketch follows this list).
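
A minimal masking sketch in Python, assuming e-mail addresses are the PII to redact; the regex, logger name and placeholder text are illustrative, and real rules should follow your own PII inventory.

    import logging
    import re

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # assumed PII pattern

    class PiiMaskingFilter(logging.Filter):
        """Redact e-mail addresses from the message before it leaves the process."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = EMAIL_RE.sub("[redacted-email]", str(record.msg))
            return True  # keep the record, just mutated

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("app")
    logger.addFilter(PiiMaskingFilter())
    logger.info("password reset requested for alice@example.com")
    # -> password reset requested for [redacted-email]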

I/O & resources

Inputs

  • Code instrumentation for structured log output.
  • Central pipeline (forwarder, ingest, indexer).
  • Definitions for retention, masking and access rights.

Outputs

  • Searchable central log database or index.
  • Dashboards, alerts and audit reports.
  • Anonymized export artifacts for analysis.

Description

Logging is the structured recording of events, states and metrics in applications and infrastructure. It supports debugging, monitoring, compliance and forensic analysis. Effective logging defines format, contextual metadata, retention and access controls to balance usability, performance and privacy. It requires trade-offs around volume, retention and centralization.

Benefits

  • Improved troubleshooting and faster incident resolution.
  • Foundation for monitoring, KPIs and SLA tracking.
  • Support for compliance, audits and forensic analysis.

Risks

  • Large volumes of data can cause storage and cost issues.
  • Unstructured logs are hard to search and correlate.
  • Incorrect retention can cause privacy breaches or data loss.

Metrics

  • Log throughput (entries/s)

    Number of log entries produced or indexed per second.

  • Median write latency

    Time until log entries are persisted and available.

  • Cost per GB per month

    Monthly storage cost relative to volume (a sizing sketch follows).
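
A back-of-the-envelope sizing in Python; the throughput, entry size and unit price are illustrative assumptions, not vendor quotes.

    # Rough sizing; every constant here is an assumption.
    entries_per_s = 2_000        # sustained log throughput
    avg_entry_bytes = 500        # average size of one structured entry
    price_per_gb_month = 0.10    # assumed hot-storage price in USD

    gb_per_month = entries_per_s * avg_entry_bytes * 86_400 * 30 / 1e9
    cost = gb_per_month * price_per_gb_month
    print(f"{gb_per_month:,.0f} GB/month -> ${cost:,.0f}/month")
    # 2,592 GB/month -> $259/month, before replication and index overhead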

Use cases & scenarios

Centralized JSON logging

All services emit structured JSON logs to a central pipeline for search and alerting.
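
A minimal sketch of the emitting side with Python's standard logging module; the service name and field set are assumptions. Each record becomes one JSON object per line on stdout, which a forwarder can ship unchanged.

    import json
    import logging
    import sys
    from datetime import datetime, timezone

    class JsonFormatter(logging.Formatter):
        """Render each record as a single JSON line."""
        def format(self, record: logging.LogRecord) -> str:
            return json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "level": record.levelname,
                "service": "checkout",  # assumed service name
                "logger": record.name,
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler(sys.stdout)  # stdout is picked up by the forwarder
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])
    logging.getLogger("orders").info("order created")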

Correlation IDs for distributed traces

Request IDs are propagated across services to correlate events across system boundaries.
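
One way to do this inside a Python service is a contextvars-backed logging filter, sketched below; the ID format and the fallback to a fresh UUID are assumptions.

    import contextvars
    import logging
    import uuid

    # Holds the correlation ID for the current request context.
    request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id", default="-")

    class CorrelationFilter(logging.Filter):
        """Attach the current request ID to every record."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.request_id = request_id.get()
            return True

    logging.basicConfig(format="%(asctime)s %(request_id)s %(levelname)s %(message)s")
    logger = logging.getLogger("api")
    logger.addFilter(CorrelationFilter())

    def handle_request(incoming_id: str | None) -> None:
        # Reuse the upstream ID if present so events correlate across services.
        request_id.set(incoming_id or str(uuid.uuid4()))
        logger.warning("payment declined")

    handle_request("req-7f3a")  # logs: ... req-7f3a WARNING payment declined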

Retention policy for compliance

Logs are archived in a tamper-evident manner per regulatory requirements and deleted after retention period.
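
Retention can be enforced mechanically at the storage layer. The sketch below uses boto3 against an assumed S3 bucket; the bucket name, prefix and periods are placeholders rather than a real schedule, and tamper evidence would additionally require S3 Object Lock or an equivalent WORM feature.

    import boto3

    s3 = boto3.client("s3")  # assumes credentials are configured
    s3.put_bucket_lifecycle_configuration(
        Bucket="audit-logs-archive",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "retain-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],  # cold archive
                "Expiration": {"Days": 2555},  # ~7 years, then automatic deletion
            }]
        },
    )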

Implementation steps

  1. Audit existing logs and formats.
  2. Define schema and field conventions and adapt instrumentation (a schema sketch follows this list).
  3. Build the central pipeline, configure retention and access, and integrate alerting.
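
Step 2 might start from a shared event type like the one below; the field names and types are illustrative, not a standard.

    import json
    from dataclasses import asdict, dataclass

    @dataclass(frozen=True)
    class LogEvent:
        """Minimal shared log schema."""
        timestamp: str   # RFC 3339, always UTC
        level: str       # DEBUG | INFO | WARN | ERROR
        service: str     # emitting service, e.g. "checkout"
        message: str     # human-readable summary
        request_id: str  # correlation ID, "-" if none

    event = LogEvent("2024-01-01T12:00:00Z", "INFO", "checkout", "order created", "req-7f3a")
    print(json.dumps(asdict(event)))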

⚠️ Technical debt & bottlenecks

  • Legacy unstructured log formats that hinder migration.
  • Missing governance for field names and schema versions.
  • Monolithic, non-scalable central log stores.

Bottlenecks scale with log volume, indexing rate and network throughput.

Anti-patterns

  • Storing passwords or card data in plaintext.
  • Excessive debug logging in production systems without sampling (see the sampling sketch after this list).
  • Using different field names for the same concepts.
  • Assuming logs are automatically searchable without indexing.
  • Ignoring time zones and timestamp precision.
  • Missing access logging for the log data itself.
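
A sampling filter along these lines keeps DEBUG noise bounded in production; the 1% rate is an assumed knob, and INFO and above always pass.

    import logging
    import random

    class DebugSampler(logging.Filter):
        """Let through only a fraction of DEBUG records."""
        def __init__(self, rate: float = 0.01) -> None:
            super().__init__()
            self.rate = rate

        def filter(self, record: logging.LogRecord) -> bool:
            if record.levelno > logging.DEBUG:
                return True  # never drop INFO and above
            return random.random() < self.rate

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("hot-path")
    logger.addFilter(DebugSampler(rate=0.01))
    for i in range(10_000):
        logger.debug("cache miss for key %s", i)  # roughly 100 of these survive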

Required skills

  • Knowledge of structured log formatting and schema design.
  • Operational know-how for pipeline, scaling and cost management.
  • Understanding of privacy requirements and masking techniques.

Related topics

  • Troubleshooting and time-to-resolution
  • Observability and alerting
  • Compliance and data retention

Constraints

  • Budget constraints for storage and retention
  • Privacy regulations (GDPR, retention requirements)
  • Performance impact with synchronous logging (a queue-based sketch follows)
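
For the last constraint, Python's stdlib QueueHandler/QueueListener pair decouples request threads from slow sinks; the queue size and the file sink standing in for a network target are assumptions.

    import logging
    import logging.handlers
    import queue

    # Producers enqueue records; a background thread does the blocking writes.
    log_queue: queue.Queue = queue.Queue(maxsize=10_000)  # bounded to cap memory
    slow_sink = logging.FileHandler("app.log")            # stand-in for a network sink

    listener = logging.handlers.QueueListener(log_queue, slow_sink)
    listener.start()

    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(logging.handlers.QueueHandler(log_queue))

    root.info("request served")  # returns immediately; the write happens off-thread
    listener.stop()              # flush remaining records on shutdown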