Providing guidelines for the responsible use of AI in health

Best practice development

Frameworks for safe and equitable AI in healthcare



Help us develop the next generation of best practices for healthcare AI


Draft Responsible Health AI Framework

We’ve drafted a framework for responsible health AI and invite public review and comment. The framework, built around an Assurance Standards Guide, provides considerations for ensuring standards are met when deploying AI in healthcare. The draft was developed through a consensus-based approach, drawing on the expertise and knowledge of diverse stakeholders from across the healthcare ecosystem.

A set of draft companion documents, the Assurance Reporting Checklists, provides criteria for evaluating standards across the AI lifecycle, from identifying a use case and developing a product to deployment and monitoring.


Our philosophy on AI model evaluation

Developer’s Responsibility: Evaluate the AI model thoroughly before deployment to ensure it meets safety and performance standards.

End-User’s Responsibility: Conduct local evaluations to ensure the AI tool fits the specific needs and conditions of the health system.

End-User’s Monitoring Responsibility: Monitor AI tool performance over time, ensuring it remains effective and adapts to changing conditions.
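The monitoring responsibility above can be illustrated with a minimal sketch. This is not CHAI's prescribed method; it is a hypothetical example of one way an end user might track a deployed model's accuracy over successive evaluation windows and flag degradation relative to a baseline established during local pre-deployment evaluation. All function names, thresholds, and data here are illustrative assumptions.

```python
def window_accuracy(predictions, labels):
    """Fraction of predictions matching ground-truth labels in one window."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def monitor(windows, baseline_accuracy, tolerance=0.05):
    """Return (window_index, accuracy, alert) triples for each window.

    An alert fires when accuracy falls more than `tolerance` below the
    baseline established during local pre-deployment evaluation.
    """
    report = []
    for i, (preds, labels) in enumerate(windows):
        acc = window_accuracy(preds, labels)
        alert = acc < baseline_accuracy - tolerance
        report.append((i, acc, alert))
    return report


# Illustrative run: the model was locally validated at 0.90 accuracy;
# the second monitoring window shows degraded performance and alerts.
windows = [
    ([1, 1, 0, 1], [1, 1, 0, 1]),  # all four predictions correct
    ([1, 0, 0, 1], [1, 1, 0, 1]),  # one of four predictions wrong
]
print(monitor(windows, baseline_accuracy=0.90))
```

In practice a health system would replace the toy accuracy metric with clinically meaningful measures and tie alerts into its governance process, but the structure — periodic evaluation against a locally validated baseline — is the point of the sketch.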

AI That Serves All of Us

Our coalition intends to develop a framework with health equity in mind, aiming to address algorithmic bias.

We created the Coalition for Health AI (CHAI™) to welcome a diverse array of stakeholders to listen, learn, and collaborate to drive the development, evaluation, and appropriate use of AI in healthcare.

Our purpose

Founding Partners
