The health sector is in the midst of a transformative shift, driven by the increasing deployment of artificial intelligence (AI). While AI holds immense potential to improve diagnostics, personalize treatment, strengthen predictive analytics for claims, and make healthcare delivery more efficient overall, it also introduces significant challenges related to safety, fairness, and privacy. Policymakers and regulators are now grappling with how to balance these opportunities against the risks AI poses.
What are Quality Assurance Labs?
The Coalition for Health AI (CHAI) is leading the development of a diverse and independent network of certified Quality Assurance Labs aimed at promoting AI adoption by providing independent evaluations of AI models, with a focus on transparency and ethical deployment. These labs would complement local validation and monitoring.
Quality Assurance Labs will help address key concerns by evaluating the following:
- Technical Performance and Bias: Quality Assurance Labs will assess AI models’ accuracy, identifying any biases in their predictions.
- Subgroup Performance: Labs will ensure that AI models perform well across different patient populations, preventing disparities in care and supporting equity-based evaluations of AI models (a minimal sketch of this kind of check follows this list).
- Pre-Deployment Simulations: Quality Assurance Labs will simulate AI model deployment, helping organizations understand the real-world implications of integrating AI into their clinical and non-clinical workflows.
- Usability in Prospective Studies: AI models must be tested in practical settings to ensure they are effective and user-friendly for health professionals.
- Ongoing Monitoring: Once deployed, models will continue to be monitored to ensure they do not degrade in performance or become biased over time.
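To make the subgroup-performance idea concrete, the sketch below shows one way an evaluation lab might compare a model's discrimination across patient populations. It is a minimal illustration, not CHAI's methodology: the column names (y_true, y_score), the choice of AUROC as the metric, and the 0.05 performance-gap threshold are assumptions chosen for the example.

```python
# Illustrative sketch only: CHAI has not published evaluation code, and the
# metric, column names, and threshold below are assumptions used to show the
# general shape of a subgroup performance check.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute AUROC for each patient subgroup in a labeled evaluation set.

    Expects columns 'y_true' (observed outcome) and 'y_score' (model output);
    these names are placeholders, not a CHAI-specified schema.
    """
    rows = []
    for group, subset in df.groupby(group_col):
        rows.append({
            "subgroup": group,
            "n": len(subset),
            "auroc": roc_auc_score(subset["y_true"], subset["y_score"]),
        })
    results = pd.DataFrame(rows)
    # Flag subgroups that fall well below the best-performing subgroup;
    # the 0.05 gap is an arbitrary example threshold, not a certification rule.
    results["flagged"] = results["auroc"] < results["auroc"].max() - 0.05
    return results
```

In practice, a lab would run a check like this on representative, high-quality evaluation data and report the per-subgroup results alongside the model's overall performance, so that adopters can see where the model may underperform before deployment.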
CHAI’s certification program will initially focus on labs that are already performing AI evaluations in the pre-deployment setting, certifying them for conformance to ISO/IEC 17025, which helps ensure that Quality Assurance Labs are competent and free of commercial conflicts of interest. Additionally, CHAI-certified Quality Assurance Labs will need to adhere to the FDA’s guidance on the use of high-quality real-world data when creating test data sets for AI model evaluations. The goal of CHAI’s certification framework is to standardize the evaluation process for AI models in health. The framework will achieve the following:
- Standardization and Consistency: All certified labs will follow the same standards, ensuring comparable evaluations across different models and labs.
- Transparency and Trust: The results of evaluations will be publicly available, building trust in AI systems among healthcare providers, payors, patients, and regulators.
- Ethical Considerations: By emphasizing the need for diverse data sets, the framework will help ensure that AI models are fair and equitable, reducing the risk of biased outcomes.
Who Will Use Quality Assurance Labs?
The target users of Quality Assurance Labs include AI vendors that need to demonstrate the quality of their models and health systems or payors seeking to adopt AI. Quality Assurance Labs will also benefit health sector organizations that develop their own AI models. Additionally, many lesser-resourced community health systems and rural health clinics lack the in-house expertise to validate models locally. Quality Assurance Labs can provide third-party validation, ensuring that models work as intended and identifying bias affecting under-represented populations.
As AI becomes increasingly embedded in the health sector, the need for trusted, transparent, and ethical AI systems is greater than ever. CHAI’s vision for Quality Assurance Labs offers a promising solution to the challenges of AI validation, providing health systems and payors with the tools they need to adopt AI confidently. By creating a standardized, certified network of labs, CHAI is laying the foundation for a future in which AI can be safely and effectively integrated into healthcare, driving better outcomes for all of us.
Media Inquiries: [email protected]