Predicting and Explaining Customer Response to Upselling in Telecommunications: A Malaysian Case Study

Railey Shahril Abdullah - Petronas Digital Sdn Bhd, Kuala Lumpur, 50088, Malaysia
Nur Shaheera Shastera Nulizairos - Universiti Teknologi MARA, Shah Alam, 40450, Malaysia
Nor Hapiza Mohd Ariffin - Sohar University, Oman
Deden Witarsyah - Telkom University, Bandung, Indonesia
Ruhaila Maskat - Universiti Teknologi MARA, Shah Alam, Selangor, Malaysia


DOI: http://dx.doi.org/10.62527/joiv.8.3-2.2823

Abstract


This research explores the predictive capabilities of XGBoost (XGB) and Random Forest (RF) models for customer upsell responses, emphasizing the use of Explainable Artificial Intelligence (XAI) techniques to gain insight into the models' behavior. Both models were initially trained without hyperparameter tuning and later optimized using 5-fold cross-validation. While RF consistently achieved high accuracy (0.99), XGB exhibited lower accuracy (0.85) yet demonstrated superior precision and recall. Post-tuning, XGB maintained its competitive edge despite a slight decrease in ROC-AUC score (0.76 before tuning, 0.75 after, versus RF's 0.67 and 0.72), indicating greater proficiency in classifying positive cases. XAI techniques complemented XGB's predictions, identifying inactive duration in days, race (Chinese), total communication count, age, and active period in days as the most significant predictors; lesser predictive value was attributed to race (Indian), gender (female), and region (northern). While the feature importance plot provided a broad overview, it did not show how specific attribute values relate to predictions. To address this, a summary violin plot was employed to illustrate how each feature's impact varies with its actual values. Results indicated that longer inactivity periods pushed predictions toward rejection, while non-Chinese ethnicity, higher communication frequency, and younger age were associated with positive outcomes. Dependence plots further elucidated these relationships, showing that older non-Chinese customers, as well as those with shorter inactive periods and frequent communication, were more likely to accept offers. Local explanations using SHAP force plots and LIME offered deeper insight into individual predictions. Overall, the study underscores the complementary use of XAI techniques to understand a model's predictions.
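The paper does not publish its code or data, so the following is a minimal sketch of the pipeline the abstract describes: baseline XGB and RF models, 5-fold cross-validated tuning, evaluation with accuracy/precision/recall/ROC-AUC, and global plus local explanations via SHAP (bar, violin, dependence, and force plots) and LIME. The dataset here is synthetic, the column names (e.g., inactive_duration_days, race_chinese) only echo the predictors named in the abstract, and the hyperparameter grid is illustrative; it uses the scikit-learn, xgboost, shap, and lime libraries.

import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the proprietary telco dataset; column names are
# hypothetical and only mirror the predictors named in the abstract.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "inactive_duration_days": rng.integers(0, 365, n),
    "total_communication_count": rng.integers(0, 50, n),
    "age": rng.integers(18, 80, n),
    "active_period_days": rng.integers(30, 3650, n),
    "race_chinese": rng.integers(0, 2, n),
})
df["responded"] = (
    (df["inactive_duration_days"] < 90) & (df["total_communication_count"] > 10)
).astype(int)

X, y = df.drop(columns=["responded"]), df["responded"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Baseline models without hyperparameter tuning.
rf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
xgb = XGBClassifier(eval_metric="logloss", random_state=42).fit(X_train, y_train)

# Hyperparameter tuning with 5-fold cross-validation (illustrative grid only).
grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=42),
    param_grid={"max_depth": [3, 5, 7], "n_estimators": [100, 300]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_train, y_train)
xgb_tuned = grid.best_estimator_

# Accuracy, precision, recall, and ROC-AUC, the metrics the abstract reports.
print(classification_report(y_test, xgb_tuned.predict(X_test)))
print("ROC-AUC:", roc_auc_score(y_test, xgb_tuned.predict_proba(X_test)[:, 1]))

# Global explanations: SHAP feature importance, summary violin plot,
# and a dependence plot for a single predictor.
explainer = shap.TreeExplainer(xgb_tuned)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, plot_type="bar")     # feature importance
shap.summary_plot(shap_values, X_test, plot_type="violin")  # value vs. impact
shap.dependence_plot("inactive_duration_days", shap_values, X_test)

# Local explanations for one customer: SHAP force plot and LIME.
i = 0
shap.force_plot(
    explainer.expected_value, shap_values[i], X_test.iloc[i], matplotlib=True
)
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["declined", "accepted"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[i].values, xgb_tuned.predict_proba, num_features=5
)
print(lime_exp.as_list())

Under these assumptions, the SHAP violin and dependence plots reproduce the kind of value-versus-impact reading discussed in the abstract, while the force plot and LIME output explain a single customer's prediction.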

Keywords


Explainable Artificial Intelligence (XAI); Predictive analytics; Upselling response; Shapley Additive Explanations (SHAP); Local Interpretable Model-agnostic Explanations (LIME)

