Predictive AI Work Group

Objective: Develop an Implementation Guide (IG) that provides best practice guidance and a testing & evaluation (T&E) framework for evaluators to leverage when assessing the responsible use of predictive AI solutions in healthcare across the entire AI lifecycle.

T&E Framework

Metrics, Measures, and Methods per CHAI Principle & Considerations

Testing & Evaluation (T&E) Objectives:

  • Determine how to practically evaluate the CHAI Assurance Standards (AS) Guide principles for a sepsis use case by defining metrics, measures, and methods across the AI lifecycle
  • Describe how to apply those metrics, measures, and methods using a real-world predictive AI use case (sepsis); an illustrative sketch follows the list of principles below

Principles covered:

  • Usefulness
  • Fairness, Equity, and Bias Management
  • Transparency
  • Safety
  • Privacy and Security
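To make "metrics, measures, and methods" concrete, the minimal Python sketch below shows how a few candidate metrics for a sepsis model might be computed. Everything in it is a hypothetical placeholder (synthetic labels and risk scores, an assumed "A"/"B" subgroup split, an assumed 0.5 alert threshold), assuming NumPy and scikit-learn are available; it illustrates the shape of an evaluation, not the IG's actual metric set.

    # Hedged sketch: synthetic data and an assumed alert threshold, illustration only.
    import numpy as np
    from sklearn.metrics import (
        brier_score_loss, precision_score, recall_score, roc_auc_score,
    )

    rng = np.random.default_rng(seed=0)

    # Hypothetical evaluation set: 1 = sepsis within the prediction window.
    y_true = rng.integers(0, 2, size=1000)
    # Hypothetical model risk scores, loosely correlated with the labels.
    y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.2, size=1000), 0.0, 1.0)
    # Hypothetical subgroup labels for fairness checks.
    group = rng.choice(["A", "B"], size=1000)

    threshold = 0.5  # assumed alert threshold; a deploying site would set its own
    y_alert = (y_score >= threshold).astype(int)

    # Usefulness: discrimination overall, plus alert precision/sensitivity at threshold.
    print("AUROC:", roc_auc_score(y_true, y_score))
    print("PPV at threshold:", precision_score(y_true, y_alert))
    print("Sensitivity at threshold:", recall_score(y_true, y_alert))

    # Fairness, equity, and bias management: compare performance across subgroups.
    for g in ("A", "B"):
        mask = group == g
        print(f"AUROC (group {g}):", roc_auc_score(y_true[mask], y_score[mask]))

    # Safety/reliability proxy: calibration error of the risk scores.
    print("Brier score:", brier_score_loss(y_true, y_score))

In the IG itself, each principle would pair metrics like these with measures (thresholds, acceptance criteria) and methods (when, how, and on what data to compute them across the AI lifecycle).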

BP Framework

Best Practice Guidance per CHAI Principle & Considerations

Best Practice (BP) Objective: Review a set of best practice guidance (e.g., business, data, technical, and security requirements) within the context of a sepsis use case; a sketch of how such requirements might be captured follows the list below

  • Business Requirements
  • Data Requirements
  • Technical Requirements
  • Security Requirements
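As a hedged illustration of how the requirement categories above might be captured in a reviewable form, the Python sketch below encodes a tiny hypothetical checklist; the Requirement class, the category names, and every statement and evidence entry are assumptions for illustration, not CHAI-defined structures.

    # Hedged sketch: a hypothetical structure for recording best-practice
    # requirements for a sepsis model; all entries are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        category: str   # "business" | "data" | "technical" | "security"
        statement: str  # what must hold before or at deployment
        evidence: str   # artifact a reviewer would check

    sepsis_requirements = [
        Requirement("business", "The alert workflow has a named clinical owner",
                    "governance charter"),
        Requirement("data", "Training data reflects the deploying site's case mix",
                    "cohort comparison report"),
        Requirement("technical", "The model meets a pre-specified AUROC floor on held-out data",
                    "validation report"),
        Requirement("security", "PHI in the inference pipeline is encrypted in transit and at rest",
                    "security assessment"),
    ]

    # A reviewer could walk the checklist category by category.
    for req in sepsis_requirements:
        print(f"[{req.category}] {req.statement} (evidence: {req.evidence})")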

Co-Chairs

Sonya Makhni
Mayo Clinic Platform

Raj Ratwani
MedStar Health

Geralyn Miller
Microsoft

Han-Chin Ching
AWS

Marzyeh Ghassemi
MIT

Initial Members

Michelle Un – CVS
Jessica Handley – MedStar Health
Bernard Bizzo – Mass Gen
Saad Alam – AdventHealth
Benjamin Rader – Boston Children's
Jon McManus – Sharp
Ram Rimal – UNC Health
Melissa Davis – Yale
Girish Nadkarni – Mt. Sinai
Naveen Raja – UC Riverside
Rachel Tripp – Mayo
Chaitanya Shivade – Amazon
Olga Matlin – Amazon
Chris Lindsell – Duke
Keith Morse – Stanford
Amelia Sattler – Stanford
[email protected] – Unifi AI
Micah Seller – Intel
Sarah Gebauer – RAND Corporation
Niels Olson – College of American Pathologists
Eric Henry – King & Spalding
Gary Weissman – Penn Medicine
Walter Wiggins – Greensboro Radiology
Jon Ng – Iterative Health
Andy Beck – PathAI
Dereck Paul – Glass Health
Daniel Nadler – OpenEvidence
Brett Moran – Parkland
Demetri Giannikopoulos – Aidoc
Kristyn Looney – IU Health
Laura Patton – Oracle
Tristan Naumann – Microsoft
Erik Duhaime – Centaur Labs
Kadija Ferryman – Johns Hopkins
Laks Meyyappan – CVS
Allan Fong – MedStar Health
Sayon Dutta – Mass Gen
William La Cava – Boston Children's
Rachini Moosavi – UNC Health
Tristan Markwell – Providence
Daniella Meeker – Yale
Matthew Solomon – Kaiser Permanente
Arasha Kia – Mt. Sinai
Jason Adams – UC Davis
Ricardo Henao – Duke
Priya Desai – Stanford
Mary Richards – Amputee Coalition
Enes Hosgor – Gesund AI
Stephen Konya – ONC
Pramod Misra – Unifi AI
Colleen Houlahan – Cleveland Clinic
Atul Kanvinde – Shepherd Center
Jansen Seheult – College of American Pathologists
Divya Chhabra – Dosis
Nita Farahany – Duke
Mozziyar Etemadi – Northwestern
Christos Davatzikos – Penn Medicine
Andrei Petrus – Lucem Health
Dom Pimenta – Tortus
Subbu Venkatraman – Eko Health
Chris Mansi – Viz.ai
Jim Min – Cleerly
James Leo – MemorialCare
Elizabeth Johnson – Montana State University
Craig Norquist – HonorHealth

Phases of Work

PHASE 1

Week of August 26th: Usefulness, Usability & Efficacy

  • Review metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Implementation Guide (Use Case Overview and Testing Metrics/Measures)

Week of September 2nd: Fairness, Equity & Bias Management and Safety & Reliability

  • Review feedback from the previous meeting and incorporate it into the draft technical Implementation Guide
  • Review metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Implementation Guide (Use Case Overview and Testing Metrics/Measures)

Week of September 9th: Transparency, Intelligibility & Accountability

  • Review feedback from the previous meeting and incorporate it into the draft technical Implementation Guide
  • Review metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Implementation Guide (Use Case Overview and Testing Metrics/Measures)

Week of September 16th: Security & Privacy

  • Review feedback from the previous meeting and incorporate it into the draft technical Implementation Guide
  • Review metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Implementation Guide (Use Case Overview and Testing Metrics/Measures)

Week of September 23rd: Align on the drafted set of best practices and metrics/methods, and discuss next steps for the HLTH Conference

  • Work Group presents findings, cross-pollinates across principles, obtains feedback, identifies gaps, and ensures alignment

Week of September 30th:  Review Period

  • CHAI Board Member review

Week of October 7th: Review Period

  • Work Group Member review

Week of October 14th: Final Work Group edits

  • Prepare initial version of Implementation Guide and HLTH presentation slides
  • Present at the HLTH Conference on October 19th

PHASE 2

November – December 2024

  • Continue the approach above to complete the first version of the Implementation Guide

PHASE 3

Spring 2025

  • Obtain feedback from the CHAI member community on the Implementation Guide
  • Incorporate feedback and share the guide publicly

Meeting Notes