TY - JOUR
T1 - A community-of-practice-based evaluation methodology for knowledge intensive computational methods and its application to multimorbidity decision support
AU - Van Woensel, William
AU - Tu, Samson W.
AU - Michalowski, Wojtek
AU - Abidi, Syed Sibte Raza
AU - Abidi, Samina
AU - Alonso, Jose Ramon
AU - Bottrighi, Alessio
AU - Carrier, Marc
AU - Edry, Ruth
AU - Hochberg, Irit
AU - Rao, Malvika
AU - Kingwell, Stephen
AU - Kogan, Alexandra
AU - Marcos, Mar
AU - Martínez Salvador, Begoña
AU - Michalowski, Martin
AU - Piovesan, Luca
AU - Riaño, David
AU - Terenziani, Paolo
AU - Wilk, Szymon
AU - Peleg, Mor
N1 - Publisher Copyright:
© 2023 Elsevier Inc.
PY - 2023/6
Y1 - 2023/6
N2 - Objective: The study has dual objectives. Our first objective (1) is to develop a community-of-practice-based evaluation methodology for knowledge-intensive computational methods. We target a whitebox analysis of the computational methods to gain insight into their functional features and inner workings. In more detail, we aim to answer evaluation questions on (i) the support offered by computational methods for functional features within the application domain; and (ii) in-depth characterizations of the underlying computational processes, models, data and knowledge of the computational methods. Our second objective (2) involves applying the evaluation methodology to answer questions (i) and (ii) for knowledge-intensive clinical decision support (CDS) methods, which operationalize clinical knowledge as computer-interpretable guidelines (CIGs); we focus on multimorbidity CIG-based clinical decision support (MGCDS) methods that target multimorbidity treatment plans. Materials and methods: Our methodology directly involves the research community of practice in (a) identifying functional features within the application domain; (b) defining exemplar case studies covering these features; and (c) solving the case studies using their developed computational methods. Research groups detail their solutions and functional feature support in solution reports. Next, the study authors (d) perform a qualitative analysis of the solution reports, identifying and characterizing common themes (or dimensions) among the computational methods. This methodology is well suited to whitebox analysis, as it directly involves the respective developers in studying the inner workings and feature support of the computational methods. Moreover, the established evaluation parameters (e.g., features, case studies, themes) constitute a reusable benchmark framework, which can be used to evaluate new computational methods as they are developed. We applied our community-of-practice-based evaluation methodology to MGCDS methods. Results: Six research groups submitted comprehensive solution reports for the exemplar case studies. Solutions for two of these case studies were reported by all groups. We identified four evaluation dimensions: detection of adverse interactions, management strategy representation, implementation paradigms, and human-in-the-loop support. Based on our whitebox analysis, we present answers to the evaluation questions (i) and (ii) for MGCDS methods. Discussion: The proposed evaluation methodology includes features of illuminative and comparison-based approaches, focusing on understanding rather than judging/scoring or identifying gaps in current methods. It involves answering evaluation questions with the direct involvement of the research community of practice, who participate in setting up evaluation parameters and solving exemplar case studies. Our methodology was successfully applied to evaluate six MGCDS knowledge-intensive computational methods. We established that, while the evaluated methods provide a multifaceted set of solutions with different benefits and drawbacks, no single method currently provides a comprehensive solution for MGCDS. Conclusion: We posit that our evaluation methodology, applied here to gain new insights into MGCDS, can be used to assess other types of knowledge-intensive computational methods and answer other types of evaluation questions. Our case studies can be accessed at our GitHub repository (https://github.com/william-vw/MGCDS).
AB - Objective: The study has dual objectives. Our first objective (1) is to develop a community-of-practice-based evaluation methodology for knowledge-intensive computational methods. We target a whitebox analysis of the computational methods to gain insight into their functional features and inner workings. In more detail, we aim to answer evaluation questions on (i) the support offered by computational methods for functional features within the application domain; and (ii) in-depth characterizations of the underlying computational processes, models, data and knowledge of the computational methods. Our second objective (2) involves applying the evaluation methodology to answer questions (i) and (ii) for knowledge-intensive clinical decision support (CDS) methods, which operationalize clinical knowledge as computer-interpretable guidelines (CIGs); we focus on multimorbidity CIG-based clinical decision support (MGCDS) methods that target multimorbidity treatment plans. Materials and methods: Our methodology directly involves the research community of practice in (a) identifying functional features within the application domain; (b) defining exemplar case studies covering these features; and (c) solving the case studies using their developed computational methods. Research groups detail their solutions and functional feature support in solution reports. Next, the study authors (d) perform a qualitative analysis of the solution reports, identifying and characterizing common themes (or dimensions) among the computational methods. This methodology is well suited to whitebox analysis, as it directly involves the respective developers in studying the inner workings and feature support of the computational methods. Moreover, the established evaluation parameters (e.g., features, case studies, themes) constitute a reusable benchmark framework, which can be used to evaluate new computational methods as they are developed. We applied our community-of-practice-based evaluation methodology to MGCDS methods. Results: Six research groups submitted comprehensive solution reports for the exemplar case studies. Solutions for two of these case studies were reported by all groups. We identified four evaluation dimensions: detection of adverse interactions, management strategy representation, implementation paradigms, and human-in-the-loop support. Based on our whitebox analysis, we present answers to the evaluation questions (i) and (ii) for MGCDS methods. Discussion: The proposed evaluation methodology includes features of illuminative and comparison-based approaches, focusing on understanding rather than judging/scoring or identifying gaps in current methods. It involves answering evaluation questions with the direct involvement of the research community of practice, who participate in setting up evaluation parameters and solving exemplar case studies. Our methodology was successfully applied to evaluate six MGCDS knowledge-intensive computational methods. We established that, while the evaluated methods provide a multifaceted set of solutions with different benefits and drawbacks, no single method currently provides a comprehensive solution for MGCDS. Conclusion: We posit that our evaluation methodology, applied here to gain new insights into MGCDS, can be used to assess other types of knowledge-intensive computational methods and answer other types of evaluation questions. Our case studies can be accessed at our GitHub repository (https://github.com/william-vw/MGCDS).
KW - Benchmarking
KW - Clinical
KW - Computer-interpretable clinical guidelines
KW - Decision support systems
KW - Evaluation study
KW - Multimorbidity
UR - http://www.scopus.com/inward/record.url?scp=85159773859&partnerID=8YFLogxK
U2 - 10.1016/j.jbi.2023.104395
DO - 10.1016/j.jbi.2023.104395
M3 - Article
C2 - 37201618
AN - SCOPUS:85159773859
SN - 1532-0464
VL - 142
JO - Journal of Biomedical Informatics
JF - Journal of Biomedical Informatics
M1 - 104395
ER -