Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation

Cynthia Lokker*, Elham Bagheri, Wael Abdelkader, Rick Parrish, Muhammad Afzal, Tamara Navarro, Chris Cotoi, Federico Germini, Lori Linkins, R. Brian Haynes, Lingyang Chu, Alfonso Iorio

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Background: Identifying practice-ready, evidence-based journal articles in medicine is a challenge due to the sheer volume of biomedical research publications. Newer approaches to support evidence discovery apply deep learning techniques to improve the efficiency and accuracy of classifying sound evidence.

    Objective: To determine how well deep learning models using variants of Bidirectional Encoder Representations from Transformers (BERT) identify high-quality evidence with high clinical relevance from the biomedical literature for consideration in clinical practice.

    Methods: We fine-tuned variants of BERT models (BERT-Base, BioBERT, BlueBERT, and PubMedBERT) and compared their performance in classifying articles against methodological quality criteria. The dataset used for fine-tuning included the titles and abstracts of >160,000 PubMed records from 2012-2020 that were relevant to human health and had been manually labeled against established critical appraisal criteria for methodological rigor. The data were randomly divided 80:10:10 into training, validation, and test sets. In addition to the full, unbalanced training set, the training data were randomly undersampled into four balanced datasets to assess performance and select the best-performing model. From each of the four balanced sets, the one model that maintained sensitivity (recall) at ≥99% was selected, and the four selected models were ensembled. The best-performing model was evaluated in a prospective, blinded test and applied to an established reference standard, the Clinical Hedges dataset.

    Results: In training, three of the four selected best-performing models were trained using BioBERT-Base. The ensembled model did not boost performance compared with the best individual model, so a single BioBERT-based model (named DL-PLUS) was selected for further testing because it was computationally more efficient. In a prospective evaluation conducted with blinded research associates, the model had high recall (>99%) and 60% to 77% specificity, and it saved >60% of the work required to identify high-quality articles.

    Conclusions: Deep learning using pretrained language models and a large dataset of classified articles produced models with improved specificity while maintaining >99% recall. The resulting DL-PLUS model identifies high-quality, clinically relevant articles from PubMed at the time of publication. The model improves the efficiency of a literature surveillance program, allowing faster dissemination of appraised research.
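
    To make the fine-tuning workflow concrete, the sketch below shows how a BioBERT-style model could be fine-tuned on concatenated titles and abstracts for this kind of binary quality classification. The paper does not specify the software stack, checkpoint, file names, or hyperparameters; the Hugging Face transformers/datasets stack, the CSV files, and all values shown here are illustrative assumptions, not the authors' actual implementation.

    ```python
    # Minimal sketch, assuming the Hugging Face transformers/datasets stack.
    # Checkpoint, file names, and hyperparameters are hypothetical.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)
    from datasets import load_dataset

    MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # a BioBERT-Base checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Hypothetical CSVs with "title", "abstract", and a binary "label" column
    # (1 = meets critical appraisal criteria); the three files mirror the
    # 80:10:10 train/validation/test division described in the Methods.
    dataset = load_dataset("csv", data_files={"train": "train.csv",
                                              "validation": "val.csv",
                                              "test": "test.csv"})

    def tokenize(batch):
        # Concatenate title and abstract, truncating to BERT's 512-token limit.
        text = [t + " " + a for t, a in zip(batch["title"], batch["abstract"])]
        return tokenizer(text, truncation=True, max_length=512)

    dataset = dataset.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="dl-plus-sketch",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             evaluation_strategy="epoch")

    trainer = Trainer(model=model, args=args,
                      train_dataset=dataset["train"],
                      eval_dataset=dataset["validation"],
                      tokenizer=tokenizer)
    trainer.train()
    ```

    In practice, the classification threshold on the model's output probabilities would then be calibrated on the validation set so that sensitivity (recall) stays at ≥99%, as described in the Methods, before measuring specificity on the held-out test set.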
    Original language: English
    Article number: 104384
    Journal: Journal of Biomedical Informatics
    Volume: 142
    DOIs
    Publication status: Published (VoR) - 8 May 2023

    Keywords

    • bioinformatics
    • machine learning
    • evidence-based medicine
    • literature retrieval
    • medical informatics
    • natural language processing
