Type: concept
Tags: AI, Software Engineering, Governance, Product

Human-Centered AI

A concept for designing AI systems that places human needs, values and workflows at the center.

Classification

  • Emerging
  • Medium
  • Organizational
  • Intermediate

Technical context

  • User research tools and survey systems
  • ML model monitoring and explainability toolchains
  • Governance and compliance platforms

Principles & goals

  • User-centeredness: understand needs and contexts first.
  • Transparency: make decisions and uncertainties understandable (see the sketch below).
  • Accountability: clearly define responsibilities and consequences.
Phase: Discovery
Scope: Enterprise, Domain, Team
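
The transparency principle above can be made concrete at the service layer: instead of returning a bare prediction, the system returns the decision together with its confidence and a plain-language rationale. The sketch below is a minimal illustration; `ExplainedDecision`, `present_decision`, and the 0.7 confidence threshold are hypothetical, not an existing API.

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    """A decision enriched with the context a user needs to judge it."""
    outcome: str          # what the system decided
    confidence: float     # model confidence in [0, 1]
    rationale: str        # plain-language explanation of the main factors
    limits: str           # known limitations / when to escalate to a human

def present_decision(outcome: str, confidence: float, top_factors: list[str]) -> ExplainedDecision:
    """Wrap a raw model output so uncertainty and limits stay visible to the user."""
    rationale = "Main factors: " + ", ".join(top_factors)
    limits = (
        "Low confidence - please review manually."
        if confidence < 0.7  # illustrative threshold, not a recommendation
        else "Automated assessment; you can request a human review at any time."
    )
    return ExplainedDecision(outcome, round(confidence, 2), rationale, limits)

# Example: surface uncertainty instead of hiding it behind a bare label.
decision = present_decision("needs_review", 0.62, ["short credit history", "high utilization"])
print(decision)
```

Keeping uncertainty and known limits in the same payload as the decision makes it harder for downstream interfaces to drop them silently.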

Trade-offs & risks

  • Apparent user-centeredness without genuine participation (tokenism).
  • Excessive trust in explanatory surfaces despite model uncertainty.
  • Neglecting systemic impacts in favor of individual usability.

Mitigations

  • Integrate early and continuous user involvement.
  • Provide transparency about limits and uncertainties.
  • Perform interdisciplinary reviews and pre-release checks.

I/O & resources

  Inputs:

  • User research and context analyses
  • Model and data quality evaluations
  • Legal and ethical frameworks

  Outputs:

  • Designs and interfaces with explainability
  • Governance policies and responsibility allocation
  • Metrics for monitoring benefit and harm

Description

Human-centered AI focuses on designing and developing AI systems that prioritize human needs, values, and workflows. It combines user-centered design, ethical guidelines, and technical robustness to create trustworthy, transparent, and accountable AI. Applicable across product strategy, architecture, and organizational governance.

Benefits

  • Higher user trust and better acceptance of AI features.
  • Reduced risks through early identification of potential harm.
  • Better product decisions by incorporating real user needs.

Limitations

  • Requires additional effort for research and testing.
  • Not all quality requirements can be addressed through user-centered methods alone.
  • Conflicts between user benefit and regulatory requirements are possible.

Metrics

  • Trust index

    Measures user trust via surveys and behavioral data.

  • User-value KPI

    Measures the impact of the AI feature on concrete usage goals.

  • Bias and fairness metrics

    Quantitative indicators for monitoring systematic biases.
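
As a concrete example of the bias and fairness metrics listed above, the sketch below computes a demographic parity difference (the gap in positive-prediction rates between groups) from logged decisions. The records and group labels are made up for illustration; real monitoring would read from production logs.

```python
from collections import defaultdict

def positive_rates(records: list[dict]) -> dict[str, float]:
    """Positive-prediction rate per group from logged (group, prediction) records."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["prediction"] == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(records: list[dict]) -> float:
    """Largest gap in positive-prediction rates across groups; 0.0 means parity."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Illustrative decision log with a protected attribute ("group").
log = [
    {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
    {"group": "A", "prediction": 0}, {"group": "B", "prediction": 1},
    {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
]
print(positive_rates(log))                  # {'A': 0.67, 'B': 0.33} (approx.)
print(demographic_parity_difference(log))   # ~0.33 -> worth investigating
```

Which fairness notion is appropriate (demographic parity, equal opportunity, calibration) depends on the use case and should be decided in the interdisciplinary reviews mentioned under mitigations.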

Use cases & scenarios

Google People + AI Guidebook (design example)

Practical guidance for user-centered AI interaction and design decisions.

Organizational policies aligned with OECD principles

Implementing principles for responsible AI use within governance processes.

Explainable recommender services with user testing

Pilot combining explanations, feedback loops and acceptance measurement.
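
The recommender pilot above combines three elements: recommendations with attached explanations, a feedback channel, and an acceptance measure. The sketch below is a self-contained toy version of that loop; the tag-overlap scoring, the catalog, and the feedback handling are placeholders for illustration.

```python
import random

def recommend(user_history: list[str], catalog: dict[str, set[str]], k: int = 3):
    """Rank items by tag overlap with the user's history and attach a reason."""
    history_tags = {t for item in user_history for t in catalog.get(item, set())}
    scored = []
    for item, tags in catalog.items():
        if item in user_history:
            continue
        overlap = tags & history_tags
        scored.append((len(overlap), item, overlap))
    scored.sort(reverse=True)
    return [
        {"item": item, "reason": f"because you viewed items tagged {sorted(overlap)}"}
        for _, item, overlap in scored[:k]
    ]

feedback: list[bool] = []   # True = user accepted the recommendation

def record_feedback(accepted: bool) -> None:
    feedback.append(accepted)

def acceptance_rate() -> float:
    """Simple acceptance measure for the pilot's evaluation."""
    return sum(feedback) / len(feedback) if feedback else 0.0

# Illustrative catalog and interaction.
catalog = {"doc1": {"ml", "ethics"}, "doc2": {"ml", "ux"}, "doc3": {"law"}}
for rec in recommend(["doc1"], catalog):
    print(rec["item"], "-", rec["reason"])
    record_feedback(random.choice([True, False]))   # stand-in for real user feedback
print("acceptance rate:", acceptance_rate())
```

In a real pilot the acceptance rate would be tracked per user segment and compared against the trust index from the metrics above.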

Implementation steps

  1. Conduct needs analysis and involve stakeholders
  2. Build prototypes and test with users
  3. Define governance policies and set up monitoring
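
Step 3 ("set up monitoring") can start small: a scheduled job recomputes the agreed metrics and raises an alert when they cross thresholds defined in the governance policy. The sketch below assumes metric values like those computed earlier; the metric names and threshold values are illustrative, not recommendations.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("hcai-monitoring")

# Thresholds would come from the governance policy; these values are illustrative.
THRESHOLDS = {
    "demographic_parity_difference": 0.10,   # alert if the gap exceeds 10 points
    "trust_index": 0.60,                     # alert if measured trust drops below 60%
}

def check_metrics(current: dict[str, float]) -> list[str]:
    """Compare current metric values against policy thresholds and collect alerts."""
    alerts = []
    if current["demographic_parity_difference"] > THRESHOLDS["demographic_parity_difference"]:
        alerts.append("fairness gap above policy threshold")
    if current["trust_index"] < THRESHOLDS["trust_index"]:
        alerts.append("user trust below policy threshold")
    return alerts

# In production this would run on a schedule against fresh logs and survey data.
snapshot = {"demographic_parity_difference": 0.18, "trust_index": 0.72}
for alert in check_metrics(snapshot):
    log.warning(alert)   # route to the responsible owner defined in the policy
```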

⚠️ Technical debt & bottlenecks

  • Missing tooling for continuous bias monitoring
  • Inconsistent explanation APIs across components (see the sketch below)
  • Insufficiently documented governance decisions
Related capabilities: user-research-capacity, explainable-models, governance-processes
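
One way to address the inconsistent explanation APIs noted above is a shared contract that every AI component implements, so user interfaces and audit tooling consume explanations uniformly. The sketch below uses a Python Protocol; `Explainable`, `CreditScorer`, and the field names are hypothetical, not an existing internal API.

```python
from typing import Protocol

class Explainable(Protocol):
    """Shared explanation contract every AI component is expected to implement."""
    def explain(self, prediction_id: str) -> dict:
        """Return factors, confidence, and known limits for one prediction."""
        ...

class CreditScorer:
    """Hypothetical component conforming to the shared contract."""
    def explain(self, prediction_id: str) -> dict:
        return {
            "prediction_id": prediction_id,
            "factors": ["payment history", "credit utilization"],
            "confidence": 0.81,
            "limits": "trained on data up to 2023; thin-file applicants are uncertain",
        }

def render_explanation(component: Explainable, prediction_id: str) -> str:
    """UI and audit trails call one function, regardless of the component."""
    e = component.explain(prediction_id)
    return f"{prediction_id}: {', '.join(e['factors'])} (confidence {e['confidence']:.0%})"

print(render_explanation(CreditScorer(), "req-42"))
```

A shared contract like this also makes explanation coverage auditable across components, which helps with the under-documented governance decisions listed above.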

Anti-patterns

  • Deploying an AI feature without user tests to build trust
  • Misleading explanations that conceal uncertainties
  • Personalization without checking for discriminatory effects
  • Confusing explainability with correctness
  • Too narrow user segments overlooking systemic effects
  • Overestimating technical solutions for social problems

Required skills

  • User research and usability testing
  • Basic understanding of ML models
  • Knowledge of ethics, law and governance

Key challenges

  • Transparency requirements towards users and auditors
  • Scalability of feedback and monitoring processes
  • Interdisciplinary integration of design, law and ML

Constraints

  • Data protection and regulatory requirements
  • Limited resources for user research
  • Technical limits in explainability and robustness