Concept · Tags: AI, Governance, Product, Reliability, Security

High-Risk AI Systems

Concept for identifying and governing AI applications that pose significant risks to rights, safety, or health.

Classification

  • Maturity: Emerging
  • Risk level: High
  • Scope: Organizational
  • Complexity: Intermediate

Technical context

  • Identity and access management (IAM) systems
  • Security information and event management (SIEM)
  • MLOps and model registry platforms
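
As a minimal sketch of how the model-registry piece might look, assuming MLflow as the registry and a scikit-learn model; the registry name "credit-risk-scorer" and all parameters are illustrative:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train a stand-in model; in practice this is the governed production model.
X, y = make_classification(n_samples=500, random_state=0)
model = GradientBoostingClassifier(max_depth=4, random_state=0).fit(X, y)

# Log parameters and the artifact, then register a named, versioned model
# so every deployment is traceable to an auditable artifact.
with mlflow.start_run() as run:
    mlflow.log_param("max_depth", 4)
    mlflow.sklearn.log_model(model, artifact_path="model")

mlflow.register_model(f"runs:/{run.info.run_id}/model", "credit-risk-scorer")
```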

Principles & goals

  • Risk-based approach
  • Transparency and explainability
  • Accountability and documentation

Build vs. buy: Build
Scope: Enterprise, Domain

Compromises

  • Misclassification of sensitive systems despite formal compliance
  • Treating compliance as a false sense of security rather than real risk reduction
  • Excessive bureaucracy and operational delays

Countermeasures

  • Regular impact assessments throughout the lifecycle
  • Model cards and documentation for transparency (see the sketch below)
  • Continuous monitoring and retraining to handle drift
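
To make the model-card item concrete, here is a minimal sketch of a machine-readable card; the schema and every value are illustrative, not a mandated format:

```python
# Minimal machine-readable model card; all fields and values are illustrative.
model_card = {
    "model": "credit-risk-scorer",
    "version": "1.3.0",
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope": ["Fully automated rejection without human review"],
    "training_data": {
        "source": "internal applications snapshot",
        "sha256": "<checksum of the training snapshot>",
    },
    "evaluation": {"validation_auc": 0.91, "false_positive_rate": 0.07},
    "fairness_checks": ["demographic parity gap", "equalized odds"],
    "limitations": "Performance degrades for applicants with sparse histories.",
}
```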

I/O & resources

  • Model artifacts and training/test data
  • Impact and risk assessments
  • Legal requirements and compliance criteria
  • Conformity dossier and audit reports
  • Monitoring dashboards and alerts
  • Risk mitigation and operational runbooks

Description

High-risk AI systems are AI applications that pose significant risks to fundamental rights, safety, or health. The concept covers classification, governance, risk assessment and mandatory conformity measures. It aims to ensure robustness, transparency and human-centric safeguards throughout the system lifecycle. Organizations must combine processes, technical controls and documentation to mitigate risks.

Benefits

  • Protection of fundamental rights and safety
  • Clearer accountability and auditability
  • Reduced liability and reputational risk

Drawbacks

  • Regulatory requirements can vary by region
  • High implementation and maintenance effort
  • Possible constraints on agile product development

Metrics

  • False positive rate (FPR)

    Share of actual negatives incorrectly flagged as positive; relevant for false alarms and user burden.

  • Model drift rate

    Rate at which model performance degrades over time, requiring recalibration or retraining.

  • Time to incident resolution

    Average time from detection to remediation of a security-relevant or functional incident.
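
A minimal sketch of how the first two metrics might be computed in Python; the helper names are illustrative, and the population stability index (PSI) stands in here as one common drift proxy:

```python
import numpy as np

def false_positive_rate(y_true, y_pred) -> float:
    """FPR = FP / (FP + TN): share of actual negatives flagged positive."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    fp = int(np.sum(~y_true & y_pred))
    tn = int(np.sum(~y_true & ~y_pred))
    return fp / (fp + tn) if (fp + tn) else 0.0

def population_stability_index(reference, current, bins: int = 10) -> float:
    """PSI between a reference score distribution and current production
    scores; values above roughly 0.2 are commonly treated as drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the shares to avoid division by zero and log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))
```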

Use cases & scenarios

European AI regulation - classification example

The EU AI Act lists examples of high-risk systems, such as applicant-selection (recruitment) tools or medical diagnostic systems.

Clinical image analysis under regulatory constraints

A hospital clinically validated a diagnostic imaging model and added monitoring and documentation processes before deployment.

Bank implementation for credit decisions

A bank introduced control and appeal procedures for automated credit decisions to reduce risk and discrimination.

Implementation steps

1. Identify the system and assess its high-risk relevance
2. Conduct an impact assessment and prioritize risks
3. Define technical and organizational measures
4. Establish monitoring, reporting, and evidence collection
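
A minimal sketch of how steps 1, 2, and 4 might be captured as a structured record (Python 3.10+); all class and field names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class ImpactAssessment:
    system_name: str
    risk_level: RiskLevel                      # step 1: classification result
    affected_rights: list[str]                 # step 2: e.g. non-discrimination
    mitigations: list[str] = field(default_factory=list)   # step 3
    assessed_on: date = field(default_factory=date.today)
    review_due: date | None = None             # lifecycle re-assessment deadline

    def requires_conformity_evidence(self) -> bool:
        # Step 4 evidence collection applies to high-risk systems.
        return self.risk_level is RiskLevel.HIGH
```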

⚠️ Technical debt & bottlenecks

  • Lack of reproducibility of training runs
  • Incomplete model and data documentation
  • Monolithic pipelines without versioning
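
A minimal sketch of one way to address the reproducibility item: pin random seeds and derive a deterministic run fingerprint; both function names are illustrative:

```python
import hashlib
import json
import random

import numpy as np

def set_seeds(seed: int) -> None:
    """Pin random sources so a training run can be repeated exactly
    (framework-specific seeds, e.g. for torch, would be added likewise)."""
    random.seed(seed)
    np.random.seed(seed)

def run_fingerprint(config: dict, data_checksum: str, seed: int) -> str:
    """Deterministic ID over config, data, and seed, stored alongside the
    model artifact so an audit can trace exactly what produced it."""
    payload = json.dumps(
        {"config": config, "data": data_checksum, "seed": seed},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
```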

Key challenges

  • Data quality
  • Scalability
  • Domain expertise

Anti-patterns

  • Using sensitive health data without an impact assessment
  • Deploying to production without robustness tests
  • Overreliance on explainability tools as sole safeguard
  • Underestimating data drift
  • Confusing compliance with actual safety
  • Insufficient domain expert involvement in evaluation

Required skills

  • AI safety and robustness expertise
  • Data and bias analysis skills
  • Knowledge of AI-related law and compliance

Key requirements

  • Legal requirements and auditability
  • Robustness and fault tolerance
  • Explainability and transparency

Constraints

  • Legal obligations and retention requirements
  • Limited access to labeled training data
  • Budget and resource constraints