TY - JOUR
T1 - Predicting article quality scores with machine learning
T2 - The U.K. Research Excellence Framework
AU - Thelwall, Mike
AU - Kousha, Kayvan
AU - Wilson, Paul
AU - Makita, Meiko
AU - Abdoli, Mahshid
AU - Stuart, Emma
AU - Levitt, Jonathan
AU - Knoth, Petr
AU - Cancellieri, Matteo
N1 - Publisher Copyright:
© 2023 Mike Thelwall, Kayvan Kousha, Paul Wilson, Meiko Makita, Mahshid Abdoli, Emma Stuart, Jonathan Levitt, Petr Knoth, and Matteo Cancellieri.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - National research evaluation initiatives and incentive schemes choose between simplistic quantitative indicators and time-consuming peer/expert review, sometimes supported by bibliometrics. Here we assess whether machine learning could provide a third alternative, estimating article quality using multiple bibliometric and metadata inputs. We investigated this using provisional three-level REF2021 peer review scores for 84,966 articles submitted to the U.K. Research Excellence Framework 2021, each matching a Scopus record 2014–18 and with a substantial abstract. We found that accuracy is highest in the medical and physical sciences Units of Assessment (UoAs) and economics, reaching 42% above the baseline (72% overall) in the best case. This is based on 1,000 bibliometric inputs and half of the articles used for training in each UoA. Prediction accuracies above the baseline for the social science, mathematics, engineering, arts, and humanities UoAs were much lower or close to zero. The Random Forest Classifier (standard or ordinal) and Extreme Gradient Boosting Classifier algorithms performed best of the 32 tested. Accuracy was lower if UoAs were merged or replaced by Scopus broad categories. We increased accuracy with an active learning strategy and by selecting articles with higher prediction probabilities, but this substantially reduced the number of scores predicted.
AB - National research evaluation initiatives and incentive schemes choose between simplistic quantitative indicators and time-consuming peer/expert review, sometimes supported by bibliometrics. Here we assess whether machine learning could provide a third alternative, estimating article quality using multiple bibliometric and metadata inputs. We investigated this using provisional three-level REF2021 peer review scores for 84,966 articles submitted to the U.K. Research Excellence Framework 2021, each matching a Scopus record 2014–18 and with a substantial abstract. We found that accuracy is highest in the medical and physical sciences Units of Assessment (UoAs) and economics, reaching 42% above the baseline (72% overall) in the best case. This is based on 1,000 bibliometric inputs and half of the articles used for training in each UoA. Prediction accuracies above the baseline for the social science, mathematics, engineering, arts, and humanities UoAs were much lower or close to zero. The Random Forest Classifier (standard or ordinal) and Extreme Gradient Boosting Classifier algorithms performed best of the 32 tested. Accuracy was lower if UoAs were merged or replaced by Scopus broad categories. We increased accuracy with an active learning strategy and by selecting articles with higher prediction probabilities, but this substantially reduced the number of scores predicted.
KW - artificial intelligence
KW - bibliometrics
KW - citation analysis
KW - machine learning
KW - scientometrics
UR - http://www.scopus.com/inward/record.url?scp=85163028826&partnerID=8YFLogxK
U2 - 10.1162/qss_a_00258
DO - 10.1162/qss_a_00258
M3 - Article
AN - SCOPUS:85163028826
SN - 2641-3337
VL - 4
SP - 547
EP - 573
JO - Quantitative Science Studies
JF - Quantitative Science Studies
IS - 2
ER -