High-Risk AI Systems
Concept for identifying and governing AI applications that pose significant risks to rights, safety, or health.
Classification
- Complexity: High
- Impact area: Organizational
- Decision type: Organizational
- Organizational maturity: Intermediate
Principles & goals
- Regular impact assessments throughout the lifecycle
- Model cards and documentation for transparency
- Continuous monitoring and retraining for drift
Compromises
- Misclassification of sensitive systems despite compliance
- Compliance as false security instead of real risk reduction
- Excessive bureaucracy and operational delays
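The "model cards and documentation" safeguard above can be sketched as a minimal, auditable record. This is an illustrative data structure, not a standardized model-card schema; all field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card: the facts an auditor needs to trace a model."""
    name: str
    version: str
    intended_use: str
    training_data: str                      # description or dataset reference
    known_limitations: list[str] = field(default_factory=list)
    risk_level: str = "high"                # per internal classification

    def summary(self) -> str:
        limits = "; ".join(self.known_limitations) or "none documented"
        return (f"{self.name} v{self.version} ({self.risk_level}-risk): "
                f"{self.intended_use}. Limitations: {limits}")

# Hypothetical example entry
card = ModelCard(
    name="triage-model",
    version="1.2.0",
    intended_use="Prioritize radiology worklists",
    training_data="Internal CT dataset, 2019-2023",
    known_limitations=["Not validated for pediatric cases"],
)
print(card.summary())
```

In practice such records would be versioned alongside the model artifacts listed under "I/O & resources".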
I/O & resources
- Model artifacts and training/test data
- Impact and risk assessments
- Legal requirements and compliance criteria
- Conformity dossier and audit reports
- Monitoring dashboards and alerts
- Risk mitigation and operational runbooks
Description
High-risk AI systems are AI applications that pose significant risks to fundamental rights, safety, or health. The concept covers classification, governance, risk assessment and mandatory conformity measures. It aims to ensure robustness, transparency and human-centric safeguards throughout the system lifecycle. Organizations must combine processes, technical controls and documentation to mitigate risks.
✔ Benefits
- Protection of fundamental rights and safety
- Clearer accountability and auditability
- Reduced liability and reputational risk
✖ Limitations
- Regulatory requirements can vary by region
- High implementation and maintenance effort
- Possible constraints on agile product development
Metrics
- False Positive Rate (FPR)
Share of negative cases incorrectly flagged as positive; drives false alarms and user burden.
- Model drift rate
Rate at which model performance degrades over time, signalling when recalibration or retraining is needed.
- Time to incident resolution
Average duration from detection to remediation of a security-relevant or functional incident.
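Two of these metrics can be computed with a short, self-contained sketch. The function names and sample data are illustrative:

```python
from datetime import datetime, timedelta

def false_positive_rate(predictions: list[bool], labels: list[bool]) -> float:
    """FPR = FP / (FP + TN): share of true negatives wrongly flagged positive."""
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    tn = sum(1 for p, y in zip(predictions, labels) if not p and not y)
    return fp / (fp + tn) if (fp + tn) else 0.0

def mean_time_to_resolution(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Average duration from detection to remediation over all incidents."""
    total = sum(((fixed - detected) for detected, fixed in incidents), timedelta())
    return total / len(incidents)

preds  = [True, False, True, False]
labels = [True, False, False, False]
print(false_positive_rate(preds, labels))  # 1 FP out of 3 negatives
```

Drift rate, by contrast, requires a reference window of past performance and is usually tracked by the monitoring stack rather than computed ad hoc.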
Examples & implementations
European AI regulation - classification example
The EU AI Act draft lists examples of high-risk systems, such as applicant selection or medical diagnostic systems.
Clinical image analysis under regulatory constraints
A hospital performed clinical validation of a model and added monitoring and documentation processes.
Bank implementation for credit decisions
A bank developed control and appeal procedures to reduce risks and discrimination.
Implementation steps
Identify system and assess high-risk relevance
Conduct impact assessment and prioritize risks
Define technical and organizational measures
Establish monitoring, reporting and evidence collection
⚠️ Technical debt & bottlenecks
Technical debt
- Lack of reproducibility of training runs
- Incomplete model and data documentation
- Monolithic pipelines without versioning
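The reproducibility and versioning gaps above can be narrowed with very little machinery: pin seeds and fingerprint the full run configuration so a training run can be re-created and referenced in the conformity dossier. A minimal sketch, not tied to any specific MLOps tool:

```python
import hashlib
import json
import random

def run_fingerprint(config: dict) -> str:
    """Stable hash over the canonicalized run configuration."""
    canonical = json.dumps(config, sort_keys=True)   # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical run configuration
config = {"seed": 42, "lr": 1e-3, "dataset": "claims-2023-v4"}
random.seed(config["seed"])  # same seed -> same sampling order
print(run_fingerprint(config))
```

Storing the fingerprint next to the model artifact lets auditors match a deployed model back to the exact configuration that produced it.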
Misuse examples
- Using sensitive health data without an impact assessment
- Deploying to production without robustness tests
- Overreliance on explainability tools as sole safeguard
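The missing robustness tests above can be caught with even a minimal pre-deployment smoke test: small input perturbations should not flip a decision. The toy `model` threshold rule below is purely illustrative:

```python
def model(income: float, debt: float) -> bool:
    """Toy credit decision: approve when the debt ratio is below 40%."""
    return debt / income < 0.40

def robust_under_noise(income: float, debt: float, eps: float = 0.01) -> bool:
    """Decision must be stable under +/- eps relative perturbation of inputs."""
    base = model(income, debt)
    perturbed = [
        model(income * (1 + s1 * eps), debt * (1 + s2 * eps))
        for s1 in (-1, 1) for s2 in (-1, 1)
    ]
    return all(p == base for p in perturbed)

print(robust_under_noise(50_000, 10_000))   # -> True: far from the threshold
print(robust_under_noise(50_000, 19_950))   # -> False: flips near the threshold
```

Real robustness suites go further (distribution shift, adversarial inputs), but even this catches decisions that hinge on measurement noise.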
Typical traps
- Underestimating data drift
- Confusing compliance with actual safety
- Insufficient domain expert involvement in evaluation
Constraints
- Legal obligations and retention requirements
- Limited access to labeled training data
- Budget and resource constraints