Inf. Process. Manag. | 2019

Integrating character networks for extracting narratives from multimodal data

Abstract


This study aims to integrate the diverse data within narrative multimedia (i.e., artworks that contain stories and are distributed through multimedia) into a unified character network (i.e., a social network among the characters that appear in a story). By combining multiple data sources (e.g., text, video, and audio), we attempt to enhance the accuracy and semantic richness of existing character networks, which confine themselves to a single data source. To merge the various data, we propose story synchronization for (i) improving the accuracy of the data extracted from narrative multimedia and (ii) integrating the data into the unified character network. Story synchronization consists of three main steps: synchronizing (i) scenes, (ii) characters, and (iii) character networks. First, we synchronize the dialogues in the text and audio to discover the speakers and times of dialogues. This enables us to segment scenes using the time periods in which dialogues (in the text and audio) and characters (in the video) do not co-occur. Through this scene segmentation, we can discretize the stories in a narrative work. By comparing the occurrences of dialogues and characters in each scene, we synchronize the identities of characters across the text and video (e.g., characters' names and faces). Thereby, we can more accurately estimate the participants and times of conversations between characters (i.e., sets of connected dialogues). Based on these conversations, the existing character networks are refined and integrated into the unified character network. Finally, we verify the efficacy of the proposed methods using real-world movies, which are among the most accessible and popular forms of narrative multimedia.
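The following is a minimal, illustrative Python sketch of the two ideas the abstract outlines: segmenting scenes at gaps where no dialogue or character occurrence is observed, and weighting character-network edges by co-participation within the resulting segments. The function names, the single merged occurrence stream, and the 10-second gap threshold are assumptions made for illustration; they are not the authors' implementation.

# Sketch of gap-based scene segmentation and co-occurrence
# network construction, under the assumptions stated above.
from collections import defaultdict
from itertools import combinations

def segment_scenes(events, gap_threshold=10.0):
    # Split a time-ordered list of (timestamp, character) occurrence
    # events into scenes wherever no dialogue or appearance occurs for
    # longer than gap_threshold seconds (an assumed boundary rule).
    scenes, current, last_t = [], [], None
    for t, char in sorted(events):
        if last_t is not None and t - last_t > gap_threshold:
            scenes.append(current)
            current = []
        current.append((t, char))
        last_t = t
    if current:
        scenes.append(current)
    return scenes

def build_character_network(scenes):
    # Weight an edge between two characters by the number of scenes
    # (standing in for conversations here) in which both appear.
    weights = defaultdict(int)
    for scene in scenes:
        chars = {c for _, c in scene}
        for a, b in combinations(sorted(chars), 2):
            weights[(a, b)] += 1
    return dict(weights)

# Toy example: occurrences as (time in seconds, character name).
events = [(1.0, "Alice"), (3.5, "Bob"), (5.0, "Alice"),
          (40.0, "Bob"), (42.0, "Carol")]
network = build_character_network(segment_scenes(events))
print(network)  # {('Alice', 'Bob'): 1, ('Bob', 'Carol'): 1}

In the paper's full method, scene boundaries are derived by jointly considering dialogues in the text and audio and character appearances in the video, and edges are refined using estimated conversations rather than raw scene co-occurrence; the sketch compresses both signals into a single occurrence stream for brevity.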

Volume 56
Pages 1894-1923
DOI 10.1016/j.ipm.2019.02.005
Language English
Journal Inf. Process. Manag.
