Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/154620
Full metadata record
DC Field | Value | Language
dc.contributor.author | Goh, Yeow Chong | en_US
dc.contributor.author | Cai, Xin Qing | en_US
dc.contributor.author | Theseira, Walter | en_US
dc.contributor.author | Ko, Giovanni | en_US
dc.contributor.author | Khor, Khiam Aik | en_US
dc.date.accessioned | 2021-12-29T07:31:28Z | -
dc.date.available | 2021-12-29T07:31:28Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Goh, Y. C., Cai, X. Q., Theseira, W., Ko, G. & Khor, K. A. (2020). Evaluating human versus machine learning performance in classifying research abstracts. Scientometrics, 125(2), 1197-1212. https://dx.doi.org/10.1007/s11192-020-03614-2 | en_US
dc.identifier.issn | 0138-9130 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/154620 | -
dc.description.abstract | We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against a support vector machine (SVM) ML algorithm at classifying European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups. On average, ML is more accurate than human classifiers, across a variety of training and test datasets, and across evaluation panels. ML classifiers trained on different training sets are also more reliable than human classifiers, meaning that different ML classifiers are more consistent in assigning the same classifications to any given abstract, compared to different human classifiers. While the top five percentile of human classifiers can outperform ML in limited cases, selecting and training such classifiers is likely costly and difficult compared to training ML models. Our results suggest ML models are a cost-effective and highly accurate method for addressing problems in comparative bibliometric analysis, such as harmonising the discipline classifications of research from different funding agencies or countries. | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | NRF2014-NRF-SRIE001-027 | en_US
dc.relation.ispartof | Scientometrics | en_US
dc.rights | © The Author(s) 2020. All rights reserved. | en_US
dc.subject | Engineering::Mechanical engineering | en_US
dc.title | Evaluating human versus machine learning performance in classifying research abstracts | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.contributor.department | Talent Recruitment and Career Support (TRACS) | en_US
dc.identifier.doi | 10.1007/s11192-020-03614-2 | -
dc.identifier.pmid | 32836529 | -
dc.identifier.scopus | 2-s2.0-85088147629 | -
dc.identifier.issue | 2 | en_US
dc.identifier.volume | 125 | en_US
dc.identifier.spage | 1197 | en_US
dc.identifier.epage | 1212 | en_US
dc.subject.keywords | Discipline Classification | en_US
dc.subject.keywords | Text Classification | en_US
dc.description.acknowledgement | The study was partially funded by the Singapore National Research Foundation, Grant No. NRF2014-NRF-SRIE001-027 | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
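
The abstract above describes comparing human classifiers against a support vector machine trained to assign research abstracts to discipline-based evaluation panels. As a rough illustration of that kind of pipeline (the paper does not publish its implementation; the panel labels, example texts, and scikit-learn choices below are assumptions), a TF-IDF representation feeding a linear SVM is a common baseline for this task:

```python
# Minimal sketch, not the authors' code: TF-IDF features + linear SVM
# for assigning abstracts to discipline panels. All texts and labels
# here are hypothetical stand-ins for a real labelled training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: (abstract text, evaluation panel label).
train_texts = [
    "We prove sharp bounds for sparse matrix factorisation ...",
    "A randomised trial of a novel immunotherapy for melanoma ...",
    "Panel data evidence on the labour market effects of migration ...",
]
train_labels = ["Mathematics", "Life Sciences", "Social Sciences"]

# TF-IDF turns each abstract into a sparse weighted bag-of-words vector;
# LinearSVC then fits one-vs-rest separating hyperplanes over it.
panel_classifier = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LinearSVC(),
)
panel_classifier.fit(train_texts, train_labels)

# Assign an unseen abstract to the closest panel.
print(panel_classifier.predict(
    ["We estimate the wage effects of minimum wage legislation ..."]
))
```

A linear SVM is a natural choice here because TF-IDF vectors are high-dimensional and sparse, a regime where linear margin classifiers tend to be both accurate and cheap to train.
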
Appears in Collections: MAE Journal Articles; TRACS Posters and Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.