CHAI Global Summit @ HLTH 2024, LAS VEGAS, October 18, 2024 — Today the Coalition for Health AI (CHAI) advanced draft frameworks for certifying independent Assurance Labs and for standardizing the output of the labs that will test health AI models through CHAI Model Cards — a kind of ingredient and nutrition label. The CHAI certification process and model card design are expected to be available by the end of April 2025, following a process of review and feedback by CHAI members, partners, and the public.
“This has been a pivotal year for CHAI on our journey to help enable trusted independent assurance of AI solutions with local monitoring and validation. We are thrilled with the progress of our CHAI workgroups who represent a diversity of perspectives and expertise across the health ecosystem,” said Brian Anderson, MD, CEO of CHAI. “These frameworks for certification and basic transparency are building blocks of responsible health AI. They will help to streamline the path for AI innovation, build trust with patients and clinicians, and position health systems and solution innovators ahead of emerging state and federal regulations.”
CHAI certification of independent quality assurance labs
The draft CHAI certification program framework was created with the ANSI National Accreditation Board and several emerging quality assurance labs using ISO/IEC 17025, the predominant international standard for testing and calibration laboratories. Among the requirements are mandatory disclosure of conflicts of interest between assurance labs and model developers, and the protection of data and intellectual property. This standard was also used by the Office of the National Coordinator for Health Information Technology (ONC) for the Electronic Health Record (EHR) certification program. Additionally, the certification program integrates data quality and integrity requirements derived from FDA’s draft guidance on the use of high-quality real-world data, CHAI testing and evaluation metrics sourced from the various working groups, and alignment with the National Academy of Medicine’s AI Code of Conduct.
“As a 35-year MedTech industry veteran, I have experienced years of frustration at the gaps in product development and governance practices between health delivery organizations and MedTech companies,” said Eric Henry, Senior Quality Systems and Compliance Advisor in the FDA and Life Sciences practice of King & Spalding. “I am encouraged by CHAI’s commitment to map its quality assurance laboratory guidelines to internationally recognized consensus medical device and healthcare standards. This effort will lead to a more harmonized view of what ‘good’ looks like in the evaluation of AI-enabled health technology across these two critical players in the healthcare ecosystem.”
CHAI Model Card, a Health AI “Nutrition Label”
The draft CHAI Model Card presents a standard template that provides a degree of transparency, with key information to support the evaluation of AI solution performance and safety. The Model Card includes the identity of the developer, intended uses, targeted patient populations, AI model type, data types, key performance metrics, security and compliance accreditations, maintenance requirements, known risks and out-of-scope uses, known bias and ethical considerations, and third-party information (e.g., relevant clinical studies).
The Model Card was designed by a workgroup representing a range of stakeholders including regional health systems, EHR solution vendors, medical device makers, and health AI leaders and startups.
It was designed as a starting point for those reviewing AI models during the procurement process and for EHR vendors who need to comply with the ONC Health IT Certification Program (HTI-1). CHAI completed an assessment of all of the HTI-1 requirements and gathered consensus recommendations from clinicians, health system organizational data custodians, and developers about what additional information should be included beyond the existing regulatory requirements.
“AI’s rapid advancement in healthcare is not just an opportunity — it’s a call to action. We must transform principles into tangible steps that can foster trust and accountability. A common, user-friendly applied model card can help serve as a powerful tool for clinicians, health systems, and patients alike,” said Christine Swisher, PhD, VP of Health Data Intelligence at Oracle Health and leader of the CHAI Model Card Workgroup.
“The rapid evolution of AI in healthcare has created a landscape that can feel unregulated and fragmented. CHAI’s efforts to introduce a standardized model card represent a crucial step toward ensuring transparency, safety, and trust in AI-driven clinical applications,” said Demetri Giannikopoulos, Chief Transformation Officer at Aidoc and CHAI Model Card Workgroup member. “By establishing a common framework that aligns with federal regulations, we are moving beyond theoretical discussions and building the foundation for scalable, reliable, and ethical AI solutions that can be adopted across the healthcare ecosystem. This initiative ensures that every AI solution can be rigorously evaluated, delivering real value to clinicians, patients, and healthcare organizations alike.”
The initial drafts of the certification program and model cards will be presented at the CHAI Global Summit at HLTH 2024. CHAI will proactively engage stakeholders from across the healthcare ecosystem, including patient advocates, under-resourced local health systems, and startups, for additional feedback. To provide feedback, please use the assurance lab form and the model card form.
About CHAI
The mission of CHAI (Coalition for Health AI) is to be the trusted source of guidelines for responsible AI in health that serves all. It aims to ensure high-quality care, foster trust among users, and meet growing healthcare needs. As a coalition bringing together leaders and experts representing health systems, startups, government, and patient advocates, CHAI has established diverse working groups focusing on the privacy and security, fairness, transparency, usefulness, and safety of AI algorithms.
Press Contact:
Andrea Heuer
[email protected]
(917) 914-5563