Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/103301
Title: Which channel to ask my question? : personalized customer service request stream routing using deep reinforcement learning
Authors: Liu, Zining
Long, Chong
Lu, Xiaolu
Hu, Zehong
Zhang, Jie
Wang, Yafang
Keywords: Personalized Customer Service
Engineering::Computer science and engineering
Deep Reinforcement Learning
Issue Date: 2019
Source: Liu, Z., Long, C., Lu, X., Hu, Z., Zhang, J., & Wang, Y. (2019). Which channel to ask my question? : personalized customer service request stream routing using deep reinforcement learning. IEEE Access, 7, 107744-107756. doi:10.1109/ACCESS.2019.2932047
Series/Report no.: IEEE Access
Abstract: Customer service is critical to all companies, as it is directly connected to brand reputation. Because they serve a great number of customers, e-commerce companies often employ multiple communication channels, for example, Chatbot and Hotline, to answer customers' questions. On the one hand, each channel has limited capacity to respond to customers' requests; on the other hand, customers have different preferences over these channels. Current production systems are mainly built on business rules that only loosely account for the tradeoff between channel resources and customers' satisfaction. To achieve the optimal tradeoff, we propose a new framework based on deep reinforcement learning that directly takes both the channel resources and a user model into account. In addition to the framework, we also propose a new deep-reinforcement-learning-based routing method: double dueling deep Q-learning with prioritized experience replay (PER-DoDDQN). We evaluate the proposed framework and method using both synthetic data and real customer service logs from a large financial technology company. We show that our deep-reinforcement-learning-based framework is superior to the existing production system. Moreover, our proposed PER-DoDDQN outperforms the other deep Q-learning variants in practice, yielding a better routing plan. These observations suggest that our method can find the tradeoff at which both channel resources and customers' satisfaction are optimized.
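For context on the method named in the abstract, the sketch below illustrates, in PyTorch, the three ingredients that PER-DoDDQN combines: a dueling Q-network, a double-Q learning target, and proportional prioritized experience replay. This is a minimal illustration only, not the authors' implementation; the class names, network sizes, and hyperparameters are hypothetical, and the paper's request-routing environment is not reproduced here.

# Minimal sketch of the components behind PER-DoDDQN (dueling DQN + double-Q target +
# proportional prioritized replay). All sizes and names are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_channels: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                # state value V(s)
        self.advantage = nn.Linear(hidden, n_channels)   # per-channel advantage A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Proportional prioritized replay: sample transitions with probability ~ priority^alpha."""
    def __init__(self, capacity: int = 10000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition, priority: float = 1.0):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size: int):
        p = np.array(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        return idx, [self.buffer[i] for i in idx]

    def update(self, idx, td_errors):
        # New priorities are the absolute TD errors (plus a small constant to keep them positive).
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + 1e-6

def double_q_target(online: DuelingQNet, target: DuelingQNet,
                    reward: torch.Tensor, next_state: torch.Tensor,
                    done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double Q-learning target: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

In the routing setting of the paper, the action space would correspond to the available service channels (e.g., Chatbot or Hotline) and the state to features of the incoming request and current channel load, but those specifics are assumptions for the sake of this sketch.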
URI: https://hdl.handle.net/10356/103301
http://hdl.handle.net/10220/49964
DOI: http://dx.doi.org/10.1109/ACCESS.2019.2932047
Rights: © 2019 IEEE. This journal is 100% open access, which means that all content is freely available without charge to users or their institutions. All articles accepted after 12 June 2019 are published under a CC BY 4.0 license*, and the author retains copyright. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful purpose, as long as proper attribution is given.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Journal Articles

Files in This Item:
08784156.pdf (982.31 kB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.