Full metadata record
DC Field | Value | Language
dc.contributor.author | Ding, Qinxu | en_US
dc.contributor.author | Liu, Yong | en_US
dc.contributor.author | Miao, Chunyan | en_US
dc.contributor.author | Cheng, Fei | en_US
dc.contributor.author | Tang, Haihong | en_US
dc.identifier.citation | Ding, Q., Liu, Y., Miao, C., Cheng, F. & Tang, H. (2021). A hybrid bandit framework for diversified recommendation. Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 35, 4036-4044. | en_US
dc.description.abstract | Interactive recommender systems involve users in the recommendation procedure by receiving timely user feedback to update the recommendation policy, and are therefore widely used in real application scenarios. Previous interactive recommendation methods primarily focus on learning users' personalized preferences for the relevance properties of an item set, while users' personalized preferences for the diversity properties of an item set are usually ignored. To overcome this problem, we propose the Linear Modular Dispersion Bandit (LMDB) framework, an online learning setting for optimizing a combination of modular functions and dispersion functions. Specifically, LMDB employs modular functions to model the relevance properties of each item, and dispersion functions to describe the diversity properties of an item set. Moreover, we develop a learning algorithm, called Linear Modular Dispersion Hybrid (LMDH), to solve the LMDB problem, and derive a gap-free bound on its n-step regret. Extensive experiments on real datasets demonstrate the effectiveness of the proposed LMDB framework in balancing recommendation accuracy and diversity. | en_US
dc.description.sponsorship | AI Singapore | en_US
dc.description.sponsorship | Ministry of Health (MOH) | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.relation | NRF-NRFI05-2019-0002 | en_US
dc.rights | © 2021 Association for the Advancement of Artificial Intelligence. All Rights Reserved. This paper was published in the Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) and is made available with permission of the Association for the Advancement of Artificial Intelligence. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.subject | Engineering::Computer science and engineering::Information systems::Information storage and retrieval | en_US
dc.title | A hybrid bandit framework for diversified recommendation | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) | en_US
dc.contributor.research | Alibaba-NTU Singapore Joint Research Institute | en_US
dc.contributor.research | Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY) | en_US
dc.description.version | Accepted version | en_US
dc.subject.keywords | Linear Modular Dispersion Bandit | en_US
dc.subject.keywords | Interactive Recommender Systems | en_US
dc.citation.conferencelocation | Virtual Conference | en_US
dc.description.acknowledgement | This research is supported, in part, by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI) (Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore. This research is also supported, in part, by the National Research Foundation, Prime Minister’s Office, Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-003) and under its NRF Investigatorship Programme (NRFI Award No. NRF-NRFI05-2019-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore. This research is also supported, in part, by the Singapore Ministry of Health under its National Innovation Challenge on Active and Confident Ageing (NIC Project No. MOH/NIC/COG04/2017 and MOH/NIC/HAIG03/2017). | en_US
item.fulltext | With Fulltext | -
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Size | Format
A_Hybrid_Bandit_Framework_for_Diversified_Recommendation.pdf | 361.07 kB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.