Leveraging generative AI for clinical evidence synthesis needs to ensure trustworthiness

Gongbo Zhang, Qiao Jin, Denis Jered McInerney, Yong Chen, Fei Wang, Curtis L. Cole, Qian Yang, Yanshan Wang, Bradley A. Malin, Mor Peleg, Byron C. Wallace, Zhiyong Lu, Chunhua Weng, Yifan Peng

Research output: Contribution to journal › Article › peer-review


Evidence-based medicine promises to improve the quality of healthcare by grounding medical decisions and practices in the best available evidence. The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing evidential information. Recent advancements in generative AI, exemplified by large language models, hold promise in facilitating this arduous task. However, developing accountable, fair, and inclusive models remains a complicated undertaking. In this perspective, we discuss the trustworthiness of generative AI in the context of automated summarization of medical evidence.

Original language: English
Article number: 104640
Journal: Journal of Biomedical Informatics
State: Published - May 2024

Bibliographical note

Publisher Copyright:
© 2024


Keywords

  • Evidence-based medicine
  • Large language models
  • Medical evidence summarization
  • Trustworthy generative AI

ASJC Scopus subject areas

  • Health Informatics
  • Computer Science Applications


