Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/172177
Title: DaisyRec 2.0: benchmarking recommendation for rigorous evaluation
Authors: Sun, Zhu; Fang, Hui; Yang, Jie; Qu, Xinghua; Liu, Hongyang; Yu, Di; Ong, Yew-Soon; Zhang, Jie
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Sun, Z., Fang, H., Yang, J., Qu, X., Liu, H., Yu, D., Ong, Y. & Zhang, J. (2022). DaisyRec 2.0: benchmarking recommendation for rigorous evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7), 8206-8226. https://dx.doi.org/10.1109/TPAMI.2022.3231891
Project: RG90/20
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Abstract: Recently, a critical issue has loomed large in the field of recommender systems: there are no effective benchmarks for rigorous evaluation, which leads to unreproducible evaluation and unfair comparison. We therefore conduct studies from both theoretical and experimental perspectives, aiming to benchmark recommendation for rigorous evaluation. For the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review of 141 papers published at eight top-tier conferences from 2017 to 2020. We then classify them into model-independent and model-dependent hyper-factors, and accordingly define and discuss different modes of rigorous evaluation in depth. For the experimental study, we release the DaisyRec 2.0 library, which integrates these hyper-factors to perform rigorous evaluation, and use it to conduct a holistic empirical study unveiling the impact of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and reporting the performance of ten state-of-the-art methods across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays the foundation for further investigation.
URI: https://hdl.handle.net/10356/172177
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2022.3231891
Schools: School of Computer Science and Engineering
Organisations: A*STAR Centre for Frontier AI Research
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
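Note: the abstract above reports results across six top-N evaluation metrics. As context only, here is a minimal sketch of one widely used such metric, NDCG@k, computed for a single user. This is not the DaisyRec 2.0 API; the function name and example data are illustrative assumptions.

    import math

    def ndcg_at_k(ranked_items, relevant_items, k):
        # Discounted cumulative gain over the top-k recommended items;
        # a hit at 1-indexed position p contributes 1 / log2(p + 1).
        dcg = sum(
            1.0 / math.log2(pos + 2)
            for pos, item in enumerate(ranked_items[:k])
            if item in relevant_items
        )
        # Ideal DCG: all held-out relevant items ranked at the very top.
        ideal_hits = min(len(relevant_items), k)
        idcg = sum(1.0 / math.log2(pos + 2) for pos in range(ideal_hits))
        return dcg / idcg if idcg > 0 else 0.0

    # Hypothetical example: held-out ground truth {B, D}; the model's
    # top-5 ranking places the hits at positions 2 and 4.
    print(ndcg_at_k(["A", "B", "C", "D", "E"], {"B", "D"}, k=5))  # ~0.65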