Assessing the expertise of researchers has garnered increased interest recently. This heightened focus arises from the growing emphasis on interdisciplinary science and the resulting need to form expert teams. When forming these teams, coordinators must assess expertise in fields that are often very different from their own. The conventional reliance on signals of success, prestige, and academic impact can unintentionally perpetuate biases in the assessment process: this traditional approach favors senior researchers and those affiliated with prestigious institutions, potentially overlooking talented individuals from underrepresented backgrounds or institutions. This paper addresses the challenge of determining expertise by proposing a methodology that leverages the relevance of a researcher's recent publication track record to the proposed research as a "sensemaking" signal. We introduce a novel α-relevance metric between the trained embedding over the titles and abstracts of a researcher's recent publications and the embedding of a call, and show that high values of α-relevance indicate expertise in the field of the call. By evaluating the α-relevance threshold, we establish a robust framework for the assessment process. For the evaluation, we use (1) NIH grant-winning records and the researchers' publications obtained from Scopus and (2) a grant-submission dataset from a research university and the corresponding researchers' publications. Additionally, we investigate the optimal time window required to capture a researcher's expertise based on their publication timeline. Considering the temporal relationship between grant winnings and publications, we identify the most informative time window reflecting the researcher's relevant contributions. The data-driven methodology transcends traditional signals of success, promoting a fair evaluation of a researcher's relevance to the proposed research. By leveraging objective indicators, we aim to facilitate the formation of expert teams across disciplines while mitigating biases in assessing expertise.
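The abstract does not specify the embedding model or how the α-relevance score is aggregated, so the following is a minimal sketch, assuming publications and the call text are embedded into a shared vector space and α-relevance is taken as the cosine similarity between the centroid of a researcher's publication embeddings and the call embedding. The function names `alpha_relevance` and `indicates_expertise`, the centroid aggregation, and the 0.6 threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def alpha_relevance(pub_embeddings: np.ndarray, call_embedding: np.ndarray) -> float:
    """Cosine similarity between the centroid of a researcher's recent
    publication embeddings (one vector per title+abstract) and the
    embedding of a grant call. Aggregation by centroid is an assumption."""
    centroid = pub_embeddings.mean(axis=0)
    denom = np.linalg.norm(centroid) * np.linalg.norm(call_embedding)
    return float(centroid @ call_embedding) / denom

def indicates_expertise(pub_embeddings: np.ndarray,
                        call_embedding: np.ndarray,
                        threshold: float = 0.6) -> bool:
    """Flag a researcher as relevant when alpha-relevance clears the
    threshold; 0.6 is a placeholder -- the paper derives its threshold
    empirically from the grant datasets."""
    return alpha_relevance(pub_embeddings, call_embedding) >= threshold
```

Any sentence-level encoder that maps titles and abstracts into the same space as the call text would slot into this sketch; the key design choice the paper evaluates is the threshold separating relevant from non-relevant researchers.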
Bibliographical note
We collected several datasets of grant submissions, grant wins, and the grant authors' publication lists. Two grant datasets were collected: one of winning submissions to NIH grant calls, and the other of submissions made by a research university's scientists, together with their outcomes. In addition, we have the publication records of all scientists in our datasets. The datasets differ in the information available for the grants and the submissions: the Research University (RU) dataset contains precise submission dates but lacks grant call release and expiration dates, while the NIH dataset provides call release and expiration dates but does not contain proposals' submission dates. Furthermore, the submission period varies across calls in the NIH dataset, ranging from several months to over three years. We detail here the process of obtaining the various datasets.
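Since the two datasets anchor a researcher's "recent" publications to different dates (submission date for RU; call release or expiration date for NIH), a time-window filter is the natural preprocessing step. A minimal sketch follows, assuming each publication carries a `date` field; the dict layout and the 3-year default window are hypothetical, as the paper determines the most informative window empirically.

```python
from datetime import date, timedelta

def publications_in_window(publications, anchor, window_years=3):
    """Keep publications dated within `window_years` before the anchor.

    `anchor` is the submission date (RU dataset) or the call release/
    expiration date (NIH dataset). The 3-year default is illustrative."""
    start = anchor - timedelta(days=365 * window_years)
    return [p for p in publications if start <= p["date"] <= anchor]

# Example: publications relevant to a call that expired on 2020-06-30.
pubs = [{"title": "Paper A", "date": date(2019, 5, 1)},
        {"title": "Paper B", "date": date(2014, 1, 1)}]
print(publications_in_window(pubs, date(2020, 6, 30)))  # keeps Paper A only
```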
© 2023, Binghamton University Libraries. All rights reserved.
ASJC Scopus subject areas
- Applied Mathematics
- Modeling and Simulation
- Statistical and Nonlinear Physics