Contact Complexity in Customer Service (2402.15655v1)
Abstract: Customers who reach out for customer service support may face a range of issues that vary in complexity. Routing high-complexity contacts to junior agents can lead to multiple transfers or repeated contacts, while directing low-complexity contacts to senior agents can strain their capacity to assist customers who truly need expert help. To tackle this, a machine learning model that accurately predicts the complexity of customer issues is highly desirable. However, defining the complexity of a contact is difficult because complexity is a highly abstract concept. While consensus-based data annotation by experienced agents is a possible solution, it is time-consuming and costly. To overcome these challenges, we have developed a novel machine learning approach to defining contact complexity. Instead of relying on human annotation, we trained an AI expert model to mimic the behavior of agents and evaluate each contact's complexity based on how the AI expert responds: if the AI expert is uncertain or lacks the skills to comprehend the contact transcript, the contact is considered high-complexity. On the collected data, our method has proven reliable, scalable, and cost-effective.
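The abstract's core idea, scoring a contact by how uncertain the AI expert model is about its response, can be sketched minimally. This assumes uncertainty is measured as the normalized entropy of the expert's predicted-response distribution (one common choice; the paper does not specify its exact uncertainty measure), and all function names here are illustrative:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def complexity_score(logits):
    """Normalized entropy of the expert model's predicted-response
    distribution over candidate actions/responses.

    Near 1.0: the expert is uncertain -> treat as high complexity.
    Near 0.0: the expert is confident -> treat as low complexity.
    """
    probs = softmax(logits)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(probs))  # scale to [0, 1]

# A confident expert (one dominant logit) yields a score near 0;
# flat logits (maximum uncertainty) yield a score of exactly 1.0.
print(complexity_score([8.0, 0.5, 0.2]))  # near 0: low complexity
print(complexity_score([1.0, 1.0, 1.0]))  # 1.0: high complexity
```

A threshold on this score (or on a calibrated version of it) could then route high-scoring contacts to senior agents and the rest to junior agents.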