Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/163625
Full metadata record
DC Field | Value | Language
dc.contributor.author | Li, Hui | en_US
dc.contributor.author | Xu, Mengting | en_US
dc.contributor.author | Bhowmick, Sourav S. | en_US
dc.contributor.author | Rayhan, Joty Shafiq | en_US
dc.contributor.author | Sun, Changsheng | en_US
dc.contributor.author | Cui, Jiangtao | en_US
dc.date.accessioned | 2022-12-13T01:33:54Z | -
dc.date.available | 2022-12-13T01:33:54Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Li, H., Xu, M., Bhowmick, S. S., Rayhan, J. S., Sun, C. & Cui, J. (2022). PIANO: influence maximization meets deep reinforcement learning. IEEE Transactions on Computational Social Systems, 1-13. https://dx.doi.org/10.1109/TCSS.2022.3164667 | en_US
dc.identifier.issn | 2329-924X | en_US
dc.identifier.uri | https://hdl.handle.net/10356/163625 | -
dc.description.abstract | Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention in the literature. The aim of IM, which is NP-hard, is to select a set of k users, known as seed users, who can influence the most individuals in the social network. The state-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. As the number of required samples has recently been proven to be lower bounded by a particular threshold, which presets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is difficult to improve further without sacrificing efficiency. In this article, we present an orthogonal and novel paradigm that addresses the IM problem by leveraging deep reinforcement learning (RL) to estimate the expected influence. In particular, we present a novel framework called deeP reInforcement leArning-based iNfluence maximizatiOn (PIANO) that incorporates network embedding and RL techniques to address this problem. To make it practical, we further present PIANO-E and PIANO@⟨d⟩, both of which can be applied directly to answer IM without training the model from scratch. An experimental study on real-world networks demonstrates that PIANO achieves the best performance in terms of both efficiency and influence spread quality compared with state-of-the-art classical solutions. We also demonstrate that the learned parametric models generalize well across different networks. In addition, we provide a pool of pretrained PIANO models so that any IM task can be addressed by directly applying a model from the pool, without training over the targeted network. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | IEEE Transactions on Computational Social Systems | en_US
dc.rights | © 2022 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | PIANO: influence maximization meets deep reinforcement learning | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.identifier.doi | 10.1109/TCSS.2022.3164667 | -
dc.identifier.scopus | 2-s2.0-85132537548 | -
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 13 | en_US
dc.subject.keywords | Deep Reinforcement Learning | en_US
dc.subject.keywords | Graph Embedding | en_US
dc.description.acknowledgement | This work was supported by the National Natural Science Foundation of China under Grant 61972309. | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
Appears in Collections: SCSE Journal Articles
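
Background sketch: the abstract above contrasts PIANO with classical sampling-based IM solutions, which estimate a seed set's expected influence by averaging over sampled diffusion paths. The following is a minimal, illustrative Python sketch of that classical baseline (greedy seed selection under the independent cascade model), not the PIANO algorithm itself; the toy graph, propagation probability p, and sample counts are assumptions chosen purely for demonstration.

# Hedged sketch: greedy influence maximization with Monte Carlo sampling
# under the independent cascade (IC) model. Illustrates the classical
# sampling-based baseline the abstract refers to, not PIANO.
import random
from collections import defaultdict

def simulate_ic(graph, seeds, p=0.1):
    """One sampled diffusion: each live edge fires independently with prob. p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.1, samples=1000):
    """Estimate expected influence by averaging over sampled diffusion paths."""
    return sum(simulate_ic(graph, seeds, p) for _ in range(samples)) / samples

def greedy_im(graph, k, p=0.1, samples=1000):
    """Greedily add the node with the largest estimated marginal spread gain."""
    seeds = set()
    for _ in range(k):
        base = expected_spread(graph, seeds, p, samples) if seeds else 0.0
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = expected_spread(graph, seeds | {v}, p, samples) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Toy usage on a small directed graph given as adjacency lists.
graph = defaultdict(list, {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []})
print(greedy_im(graph, k=2, p=0.2, samples=200))

With enough samples, expected_spread converges to the true expected influence; tightening that estimate requires more sampled diffusion paths, which is exactly the accuracy/efficiency tradeoff the abstract says PIANO sidesteps by learning to estimate influence instead.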

SCOPUS™ Citations: 7 (updated on Mar 25, 2024)
Web of Science™ Citations: 5 (updated on Oct 30, 2023)
Page view(s): 74 (updated on Mar 29, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.