
Can Electronic Care Planning Using AI Summarization Yield Equal Documentation Quality? (EASY eDocQ)

David A. Dorr, Nicole G. Weiskopf, Jean M. Sabile, Emma Young, Dylan Mathis, Samantha Weller, Jacob Alex, Kimberly Doerr, Steven Bedrick
medRxiv, Jan 2025

Abstract

Importance: Data, information, and knowledge in health care have expanded exponentially over the last 50 years, leading to significant challenges with information overload and complex, fragmented care plans. Generative AI has the potential to facilitate the summarization and integration of data and information into knowledge and wisdom, enabling efficient care planning.

Objective: To understand the value of AI-generated summarization, in the form of short synopses, at the care transition from hospital to first outpatient visit.

Design: Using a de-identified data set of recently hospitalized patients with multiple chronic illnesses, we applied the data-information-knowledge-wisdom (DIKW) framework to train both clinicians and an open-source generative AI large language model (LLM) system to produce summarized patient assessments after hospitalization. Both sets of synopses were rated blind, in random order, by clinician judges.

Participants: De-identified patients with multiple chronic conditions and a recent hospitalization. Raters were physicians at various levels of training.

Main outcomes: Accuracy, succinctness, synthesis, and usefulness of synopses, measured on a standardized scale with scores > 80% indicating success.

Results: AI and clinicians summarized 80 patients with 10% overlap. In blinded trials, AI synopses were rated as useful 75% of the time versus 76% for human-generated ones. AI had lower succinctness ratings for the Data synopsis task (55-67%) than humans (84-86%). For accuracy and synthesis, AI had near-equal or better scores in the other domains (AI: 72%-79%; humans: 68%-84%), with the AI's best scores in Wisdom. Interrater agreement was moderate, indicating differing preferences for synopsis content, and did not vary between AI- and human-created synopses.

Discussion: AI-created synopses were nearly equivalent to human-created ones; they were slightly longer and did not always synthesize individual data elements as humans did. Given the rapid introduction of these tools into clinical care, our framework and evaluation protocol provide strong benchmarking capabilities for developers and implementers.

Question: Can a generative AI large language model be trained to generate accurate and useful patient synopses through chart summarization for use in outpatient settings after hospital discharge?

Findings: Using the data-information-knowledge-wisdom framework, clinicians and an open-source AI system were trained to summarize charts; the resulting synopses were rated blindly using a standardized index. AI synopses were rated as useful 75% of the time versus 76% for human-generated ones, and AI synopses scored highest in Wisdom for accuracy and synthesis. Interrater agreement was moderate but did not vary between AI and human.

Meaning: This study provides a concrete, replicable protocol for benchmarking LLM summarization outputs and demonstrates general equivalence to human-created synopses for outpatient use after care transitions.

Competing Interest Statement: The authors have declared no competing interest.

Funding Statement: This study did not receive any funding.

Author Declarations:

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained. Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below: The Oregon Health & Science University Institutional Review Board waived ethical approval for this work.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients, or participants themselves) outside the research group, so cannot be used to identify individuals. Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance). Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable. Yes

Data Availability: All data produced in the present study are available upon reasonable request to the authors. The data on which the work was done are available on PhysioNet: https://physionet.org/content/mimiciii/1.4/
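The core of the evaluation protocol — per-domain success rates from blinded ratings, plus an interrater agreement check between clinician judges — can be sketched in a few lines. The sketch below is illustrative only: the ratings are invented, and Cohen's kappa is one common choice of agreement statistic (the abstract does not state which statistic the authors used).

```python
from collections import Counter


def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)


# Hypothetical "useful" ratings (1 = useful, 0 = not) from two judges
# on the same ten synopses; real ratings would come from the blinded review.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

success_rate = sum(rater_a) / len(rater_a)  # fraction rated useful by rater A
meets_threshold = success_rate > 0.80       # the paper's > 80% success criterion
kappa = cohens_kappa(rater_a, rater_b)
```

With these invented ratings, kappa lands near 0.52, which sits in the conventional "moderate" agreement band (0.41-0.60) under the widely used Landis-Koch interpretation, consistent with the level of agreement the abstract reports.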

