EmoStory: Emotion Prediction and Mapping in Narrative Stories

Seng-Wei Too - Auronex Sdn Bhd, Kuala Lumpur, Malaysia
John See - Heriot-Watt University Malaysia, Putrajaya, Malaysia
Albert Quek - Multimedia University, Cyberjaya, 63100, Malaysia.
Hui-Ngo Goh - Multimedia University, Cyberjaya, 63100, Malaysia.


DOI: http://dx.doi.org/10.30630/joiv.7.3-2.2335


A well-designed story is built upon a sequence of plots and events. Each event has its purpose in piquing the audience's interest in the plot; thus, understanding the flow of emotions within a story is vital to its success. A story is usually built up through dramatic changes in emotion and mood to create resonance with the audience. The scarcity of prior work in this area warrants exploring several aspects of the emotional analysis of stories. In this paper, we propose an encoder-decoder framework to perform sentence-level emotion recognition of narrative stories in both dimensional and categorical aspects, achieving MAE=0.0846 and 54% accuracy (8-class), respectively, on the EmoTales dataset, with a reasonably good level of generalization to an untrained dataset. The first use of attention and multi-head attention mechanisms for emotion representation mapping (ERM) yields state-of-the-art performance in certain settings. We further present the preliminary idea of EmoStory, a concept that seamlessly predicts both dimensional and categorical spaces in an efficient manner, made possible with ERM. This methodology is useful when only one of the two aspects is available. In the future, these techniques could be extended to model the personality or emotional state of characters in stories, which could benefit the affective assessment of experiences and the creation of emotive avatars and virtual worlds.
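To illustrate the general idea of emotion representation mapping between the two spaces, the sketch below maps an 8-class categorical emotion distribution to a dimensional valence-arousal-dominance (VAD) point using attention-style weights over per-class VAD anchors. This is a minimal illustration of the mapping concept, not the paper's trained model: the class labels, anchor values, and the `temperature` parameter are all hypothetical choices for the example.

```python
import numpy as np

# Hypothetical 8-class label set and per-class VAD anchors
# (valence, arousal, dominance in [0, 1]); illustrative values only,
# NOT the parameters learned in the paper.
EMOTIONS = ["anger", "disgust", "fear", "joy",
            "sadness", "surprise", "trust", "neutral"]
VAD_ANCHORS = np.array([
    [0.17, 0.86, 0.65],  # anger
    [0.18, 0.60, 0.40],  # disgust
    [0.10, 0.82, 0.20],  # fear
    [0.95, 0.70, 0.68],  # joy
    [0.08, 0.30, 0.25],  # sadness
    [0.60, 0.90, 0.40],  # surprise
    [0.80, 0.45, 0.60],  # trust
    [0.50, 0.20, 0.50],  # neutral
])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def erm_categorical_to_vad(cat_probs, temperature=1.0):
    """Map a categorical emotion distribution to a VAD point:
    the class probabilities act as (re-sharpened) attention weights
    over the per-class VAD anchor vectors."""
    weights = softmax(np.log(cat_probs + 1e-9) / temperature)
    return weights @ VAD_ANCHORS  # -> (valence, arousal, dominance)

# A sentence classified as mostly "joy" with some "surprise":
probs = np.array([0.02, 0.01, 0.02, 0.70, 0.02, 0.18, 0.03, 0.02])
vad = erm_categorical_to_vad(probs)
```

With `temperature=1.0` the weights reduce to the class probabilities themselves, so the mapping is a probability-weighted average of the anchors; lowering the temperature sharpens the weights toward the dominant class, loosely mirroring how an attention head can learn to emphasize the most relevant labels.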


Keywords: Deep Learning; Affective Computing; Natural Language Processing







Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

JOIV : International Journal on Informatics Visualization
ISSN 2549-9610  (print) | 2549-9904 (online)
Organized by Society of Visual Informatics, and Institute of Visual Informatics - UKM and Soft Computing and Data Mining Centre - UTHM
W : http://joiv.org
E : joiv@pnp.ac.id, hidra@pnp.ac.id, rahmat@pnp.ac.id

