Generative AI Work Group

Objective: Develop an Implementation Guide (IG) that provides best practice guidance and a testing & evaluation (T&E) framework for evaluators to leverage when assessing the responsible use of generative AI solutions in healthcare across the entire AI lifecycle.

T&E Framework

Metrics, Measures, and Methods per CHAI Principle & Considerations

Testing & Evaluation (T&E) Objective:

  • Determine how to practically evaluate the CHAI Assurance Standards (AS) Guide principles for a sepsis use case by defining metrics, measures, and methods across the AI lifecycle
  • Describe how to apply those metrics, measures, and methods using a real-world predictive AI use case (sepsis); an illustrative sketch follows this list
  • Usefulness
  • Fairness, Equity, and Bias Management
  • Transparency
  • Safety
  • Privacy and Security
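To make the idea of a metric, measure, and method concrete, here is a minimal, hypothetical sketch for one principle (Fairness, Equity, and Bias Management) in a sepsis prediction setting: the metric is AUROC, the measure is AUROC reported per patient subgroup, and the method is flagging subgroup gaps above a tolerance. The simulated data, subgroup names, and tolerance below are invented for illustration and are not part of the CHAI guidance.

# Hypothetical illustration only: compares sepsis-model discrimination (AUROC)
# across two invented patient subgroups. All data and names are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)                                   # 1 = sepsis, 0 = no sepsis
scores = np.clip(0.3 * labels + rng.normal(0.4, 0.2, size=n), 0, 1)   # simulated model risk scores
groups = rng.choice(["group_a", "group_b"], size=n)                   # hypothetical subgroup attribute

# Measure: AUROC overall and per subgroup; method: report the largest gap.
overall_auroc = roc_auc_score(labels, scores)
per_group = {g: roc_auc_score(labels[groups == g], scores[groups == g])
             for g in np.unique(groups)}
gap = max(per_group.values()) - min(per_group.values())

print(f"Overall AUROC: {overall_auroc:.3f}")
for g, auroc in per_group.items():
    print(f"  {g}: {auroc:.3f}")
print(f"Subgroup AUROC gap: {gap:.3f} (acceptable gap is use-case specific)")

Actual subgroups, metrics, and acceptance thresholds would come from the Technical Implementation Guide itself, defined per lifecycle stage.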

BP Framework

Best Practice Guidance per CHAI Principle & Considerations

Best Practice (BP) Objective: Review a set of best practice guidance (e.g., business, data, technical, and security requirements) within the context of a sepsis use case

  • Business Requirements
  • Data Requirements
  • Technical Requirements
  • Security Requirements

Co-Chairs

Rebecca Mishuris
Mass. General

Karandeep Singh
UC San Diego

Zack Lipton
Abridge

Initial Members

Judy Gichoya, Emory
Adam Rodman, Harvard
Jon McManus, Sharp Healthcare
Meta Tshilombo, American Diabetes Association
Abdul Tariq, Children’s Hospital of Philadelphia
Sara Murray, UCSF
Hua Xu, Yale
Daniel Yang, Kaiser Permanente
Vincent Liu, Kaiser Permanente
David Talby, John Snow Labs
Kim Nolen, Pfizer
Dinh Nguyen, Kaiser Permanente
Lisa Lehmann, Harvard
Chris Schubert, CVS
Zach Hettinger, MedStar
Dax Dickenson, AdventHealth
Timothy Driscoll, Boston Children’s
Tristan Markwell, Providence
David McSwain, UNC Health
Prem Timsina, Mt. Sinai
Matt Gill, Mayo Clinic
Xiaoyu Miao, Amazon
Chaitanya Shivade, Amazon
Monica Agrawal, Duke
Dev Dash, Stanford
Omar Escontrias, National Health Council
Brigham Hyde, Atropos Health
Samta Shukla, BCBS Minnesota
Gary Marchant, Arizona State University
Jim St. Clair, Mississippi AI Collaborative
Pramod Misra, Unifi AI
Rebecca Gwilt, Nixon Gwilt Law
Daniel Samarov, Twistle (Health Catalyst)
Shreya Shah, Stanford
Troy Foster, Stanford
Byron Yount, Mercy
Ashish Atreja, UC Davis
Tyler Rhoades, Cleveland Clinic
Hooman Rashidi, College of American Pathologists
Chris Matheson, EBSCO
Erica Lesyshyn, EBSCO
Tinglong Dai, Johns Hopkins
Christine Swisher, Oracle
Celena Wheeler, Oracle

Phases of Work

Phase One
Week of August 26th: Usefulness, Usability & Efficacy – Review best practices, metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Technical Implementation Guide (Best Practices, Use Case Overview, and Testing Metrics/Measures)
Week of September 2nd: Fairness, Equity & Bias Management – Review feedback from the previous meeting and incorporate it into the draft Technical Implementation Guide

– Review best practices, metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Technical Implementation Guide (Best Practices, Use Case Overview, and Testing Metrics/Measures)

Week of September 9th: Transparency, Intelligibility & Accountability – Review feedback from the previous meeting and incorporate it into the draft Technical Implementation Guide

– Review best practices, metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Technical Implementation Guide (Best Practices, Use Case Overview, and Testing Metrics/Measures)

Week of September 16th: Security & Privacy – Review feedback from the previous meeting and incorporate it into the draft Technical Implementation Guide

– Review best practices, metrics, measures, and methods collected from WG members to date; obtain feedback, identify gaps, and discuss details to include in the Technical Implementation Guide (Best Practices, Use Case Overview, and Testing Metrics/Measures)

Week of September 23rd: Align on the drafted set of best practices and methods/metrics, and discuss next steps for the HLTH Conference
Week of September 30th: Review Period – CHAI Board Member review
Week of October 7th: Review Period – Work Group Member review
Week of October 14th: Final Work Group edits – Prepare the initial version of the Technical Implementation Guide and HLTH presentation slides

– Present at the HLTH Conference on October 19th

Phase Two
November – December 2024 – Continue the above approach to complete the first version of the Technical Implementation Guide
Phase Three
Spring 2025 – Obtain feedback on the Technical Implementation Guide from the CHAI member community

– Incorporate feedback and share with the public

Meeting Notes