Publication


Featured research published by Seung-Bo Park.


Sensors | 2009

Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

Kwang Cheol Shin; Seung-Bo Park; Geun-Sik Jo

In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. Because Mobile RFID readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without manual parameters by adopting a dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments in a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average compared with Colorwave, a representative TDMA based anti-collision algorithm.
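
As an illustration of the idea (not the paper's exact algorithm), the following Python sketch shows a reader that picks a random slot per frame and doubles or halves its frame size on collisions and successes; the class, parameters and simulation loop are hypothetical.

```python
# Hypothetical sketch of dynamic frame size adjustment for TDMA readers.
import random

class MobileReader:
    def __init__(self, reader_id, min_frame=4, max_frame=64):
        self.reader_id = reader_id
        self.frame_size = min_frame
        self.min_frame = min_frame
        self.max_frame = max_frame

    def choose_slot(self):
        # Pick one transmission slot uniformly within the current frame.
        return random.randrange(self.frame_size)

    def on_collision(self):
        # Collision observed: enlarge the frame to spread readers out.
        self.frame_size = min(self.frame_size * 2, self.max_frame)

    def on_success(self):
        # Successful round: shrink the frame to reduce idle slots.
        self.frame_size = max(self.frame_size // 2, self.min_frame)

def simulate_round(readers):
    """One TDMA round: readers that chose the same slot collide and back off."""
    slots = {}
    for r in readers:
        slots.setdefault(r.choose_slot(), []).append(r)
    successes = 0
    for contenders in slots.values():
        if len(contenders) == 1:
            contenders[0].on_success()
            successes += 1
        else:
            for r in contenders:
                r.on_collision()
    return successes

readers = [MobileReader(i) for i in range(10)]
print(sum(simulate_round(readers) for _ in range(100)), "successful transmissions")
```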


Multimedia Tools and Applications | 2012

Social network analysis in a movie using character-net

Seung-Bo Park; Kyeong-Jin Oh; Geun-Sik Jo

There have been various approaches to analyzing movie stories using social networks. Social network analysis is an effective means to extract semantic information from movies, and movie analysis through the social relationships among characters can support various types of information retrieval better than audio-visual feature analysis. The relationships among characters form the main structure of the story; therefore, through social network analysis of the characters, story information such as the major roles and the corresponding communities can be determined. Most movie stories progress through their characters, and the scriptwriter or director narrates the story and the relationships among characters using character dialogs. A dialog has a direction and a time, both of which supply information, so dialogs are better suited than co-appearance for constructing social networks of characters. Additionally, from social networks built on dialogs, we can extract accurate story information such as the classification of major, minor and extra roles, community clustering, and sequence detection. To achieve this, we propose a Character-net that represents the relationships between characters using dialogs, and a method that extracts sequences by clustering communities composed of characters. Our experiments show that the proposed method can efficiently detect sequences.
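
The dialog-based construction can be pictured with a small Python sketch, assuming (speaker, listener) pairs have already been extracted from the script; the function names and sample dialogs are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: a directed, weighted character graph from dialog pairs.
from collections import defaultdict

def build_character_net(dialogs):
    """dialogs: iterable of (speaker, listener) pairs, one per utterance."""
    weights = defaultdict(int)          # (speaker, listener) -> utterance count
    for speaker, listener in dialogs:
        weights[(speaker, listener)] += 1
    return weights

def rank_characters(weights):
    """Score each character by total dialog sent plus received."""
    score = defaultdict(int)
    for (speaker, listener), w in weights.items():
        score[speaker] += w
        score[listener] += w
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical dialog pairs extracted from a script.
dialogs = [("Alice", "Bob"), ("Bob", "Alice"), ("Alice", "Carol"), ("Alice", "Bob")]
net = build_character_net(dialogs)
print(rank_characters(net))   # major roles tend to rank first
```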


Multimedia Tools and Applications | 2013

Emotion-based character clustering for managing story-based contents: a cinemetric analysis

Jason J. Jung; Eun-Soon You; Seung-Bo Park

Stories in digital content (e.g., movies) are usually developed through many kinds of relationships among the characters. In order to efficiently manage such content, we exploit a social network (called Character-net) extracted from the stories. Since scripts are composed of several elements (i.e., scene headings, character names, dialogs, actions, etc.), we focus on analyzing interactions (e.g., dialogs) among the characters to build such a social network. Most importantly, the relationships between minor and major characters can be abstracted and clustered over similar scenes. In this paper, we therefore propose a novel method that clusters characters by their emotional similarity. If a minor character has an emotion vector similar to the main character's, the minor character can be classified as a tritagonist who helps the main character; otherwise, the minor character may be clustered into another group and regarded as an antagonist. We also demonstrate the efficiency of the proposed method experimentally.
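
A minimal sketch of the emotional-similarity idea, assuming each character is summarized by an emotion vector; the threshold and the sample vectors are hypothetical, not the paper's data.

```python
# Hypothetical sketch: tritagonist vs. antagonist by emotion-vector similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def classify_minor(minor_vec, protagonist_vec, threshold=0.5):
    """Similar emotional trajectory -> tritagonist, otherwise antagonist."""
    return "tritagonist" if cosine(minor_vec, protagonist_vec) >= threshold else "antagonist"

# Hypothetical emotion vectors (e.g., joy, anger, sadness, fear).
protagonist = [0.7, 0.1, 0.1, 0.1]
sidekick    = [0.6, 0.2, 0.1, 0.1]
villain     = [0.1, 0.8, 0.0, 0.1]
print(classify_minor(sidekick, protagonist))  # tritagonist
print(classify_minor(villain, protagonist))   # antagonist
```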


asian conference on intelligent information and database systems | 2011

Automatic emotion annotation of movie dialogue using WordNet

Seung-Bo Park; Eunsoon Yoo; Hyunsik Kim; Geun-Sik Jo

With the increasing interest in multimedia annotation, emotion annotation is being recognized as an essential resource that can be applied for a variety of purposes, including video information retrieval and dialogue systems. Here we introduce an automatic emotion annotation schema for dialogues, which could be used for the retrieval of specific scenes in film. In contrast to previous work, we apply a new approach using the hypernym/hyponym relations and synonyms of WordNet, which enables us to organize a novel set of emotional concepts and to automatically detect whether a specific emotional word is associated with a specified emotional concept by measuring the conceptual distance between them.
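
The conceptual-distance idea can be sketched with NLTK's WordNet interface, assuming a small set of anchor synsets stands in for the paper's emotional concepts; the anchors and the similarity cut-off below are hypothetical.

```python
# Hypothetical sketch: map a word to an emotion concept via WordNet distance.
# Requires: pip install nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

EMOTION_CONCEPTS = {          # hypothetical anchor synsets per emotion concept
    "joy": wn.synset("joy.n.01"),
    "anger": wn.synset("anger.n.01"),
    "fear": wn.synset("fear.n.01"),
    "sadness": wn.synset("sadness.n.01"),
}

def annotate_emotion(word, min_similarity=0.2):
    """Return the closest emotion concept, or None if the word is too distant."""
    best_label, best_score = None, 0.0
    for synset in wn.synsets(word, pos=wn.NOUN):
        for label, anchor in EMOTION_CONCEPTS.items():
            score = synset.path_similarity(anchor) or 0.0
            if score > best_score:
                best_label, best_score = label, score
    return best_label if best_score >= min_similarity else None

print(annotate_emotion("delight"))   # expected: joy
print(annotate_emotion("table"))     # expected: None (not an emotion word)
```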


IEICE Transactions on Information and Systems | 2005

Efficient Web Browsing with Semantic Annotation: A Case Study of Product Images in E-Commerce Sites

Jason J. Jung; Kee Sung Lee; Seung-Bo Park; Geun-Sik Jo

Web browsing follows a depth-first searching scheme, so finding relevant information on the Web can be very tedious. In this paper, we propose a personal browsing assistant system based on user intention modeling. Before the user issues an explicit request, the system analyzes resources prefetched from hyperlinked Web pages and compares them with the estimated user intention, helping the user decide which Web page should be requested next. A more important problem is the semantic heterogeneity between Web spaces, which makes locally annotated resources harder to understand. We apply semantic annotation, a transcoding procedure based on a global ontology, so that local metadata can be semantically enriched and efficiently compared. As a test bed for our experiment, we organized three online clothing stores whose images are annotated with semantically heterogeneous metadata and simulated virtual customers navigating these cyberspaces. Following the predefined preferences of the customer models, the virtual customers conducted comparison shopping. We show that this form of browsing support is reasonable and evaluate its performance by measuring the total size of the browsed hyperspace.
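
A rough sketch of the intention-matching step, assuming prefetched pages have already been transcoded into weights over a shared ontology; the scoring function and sample URLs are hypothetical.

```python
# Hypothetical sketch: rank prefetched, ontology-annotated pages by how well
# they match an estimated user intention, to suggest the next link to follow.
def intention_score(page_annotations, user_intention):
    """Both arguments are dicts mapping global-ontology concepts to weights."""
    return sum(w * user_intention.get(concept, 0.0)
               for concept, w in page_annotations.items())

def recommend_next(prefetched_pages, user_intention):
    """prefetched_pages: dict of URL -> concept-weight annotations."""
    return max(prefetched_pages,
               key=lambda url: intention_score(prefetched_pages[url], user_intention))

# Hypothetical annotations already transcoded to a shared (global) ontology.
pages = {
    "http://storeA.example/coat42":  {"Coat": 1.0, "Wool": 0.8},
    "http://storeB.example/shirt07": {"Shirt": 1.0, "Cotton": 0.6},
}
intention = {"Coat": 0.9, "Wool": 0.5}
print(recommend_next(pages, intention))  # the coat page scores higher
```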


Multimedia Tools and Applications | 2014

Affective social network--happiness inducing social media platform

Hyun-Jun Kim; Seung-Bo Park; Geun-Sik Jo

We propose a social network that takes human emotion into account, namely the Affective Social Network. Our approach first builds a user's emotion profile in terms of personality, mood and emotion by analysing the user's activities in the social network. It then builds an Emotional Relationship Matrix (ERM), which represents the depth of the emotional relationship between users based on their emotion profiles. On this basis, more elaborate services that reflect the user's current emotional state can be provided. By considering emotional aspects of a social network, we can effectively determine which users or media content will be most effective at inducing a desired emotional state. Our experiments verify that messages containing emotional words and users' relationships in a social network are significantly correlated.
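
One way to picture an Emotional Relationship Matrix, assuming each message's emotion is reduced to a single score; the update rule below is a hypothetical illustration, not the paper's formulation.

```python
# Hypothetical sketch: deepen a tie whenever two users exchange messages with
# similar emotional polarity.
from collections import defaultdict

class AffectiveNetwork:
    def __init__(self):
        self.erm = defaultdict(float)   # (user_a, user_b) -> relationship depth

    def record_interaction(self, sender, receiver, sender_emotion, receiver_emotion):
        # Emotionally aligned exchanges strengthen the tie more than mismatched ones.
        alignment = 1.0 - abs(sender_emotion - receiver_emotion)
        key = tuple(sorted((sender, receiver)))
        self.erm[key] += alignment

    def depth(self, user_a, user_b):
        return self.erm[tuple(sorted((user_a, user_b)))]

net = AffectiveNetwork()
net.record_interaction("amy", "ben", sender_emotion=0.8, receiver_emotion=0.7)
net.record_interaction("amy", "ben", sender_emotion=0.2, receiver_emotion=0.9)
print(round(net.depth("amy", "ben"), 2))  # 0.9 + 0.3 = 1.2
```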


international symposium on multimedia | 2010

Exploiting Script-Subtitles Alignment to Scene Boundary Detection in Movie

Seung-Bo Park; Heung-Nam Kim; Hyunsik Kim; Geun-Sik Jo

A movie, a typical form of multimedia data, includes textual information such as subtitles and annotation text as well as audio-visual information. In general, a movie's story progresses through the dialogues of its characters, and these dialogues are produced from a pre-written script that contains detailed descriptions of all the information related to the movie. The script consists of scene headings, character names, dialogues, actions, and so on, and rich information related to the movie can be extracted easily and accurately from each of its components. However, since the script does not contain time information, additional processing is required to synchronize the script with the movie. The subtitles are usually made from the script's dialogue and carry time information indicating when text is displayed on screen as the characters speak. Therefore, synchronization between the script and the movie can be accomplished through an alignment between the script's dialogue and its subtitles. In this paper, we propose a method of scene boundary detection that exploits script-subtitles alignment. By synchronizing the script with the movie, we identify scene boundaries at the semantic level. In addition, we build a thesaurus to enhance alignment performance. Experimental results with real movies show that the proposed method can successfully not only align scripts with their subtitles but also detect scene boundaries.
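
A minimal sketch of the alignment idea, assuming subtitles arrive as (start time, text) pairs and the script is already split into scenes; the similarity measure (difflib) and sample lines are illustrative, not the authors' method.

```python
# Hypothetical sketch: align script dialog with timed subtitles by string
# similarity, then take each scene's first aligned dialog as a boundary.
from difflib import SequenceMatcher

def best_subtitle(dialog, subtitles):
    """subtitles: list of (start_time_seconds, text)."""
    return max(subtitles,
               key=lambda sub: SequenceMatcher(None, dialog.lower(), sub[1].lower()).ratio())

def scene_boundaries(script_scenes, subtitles):
    """script_scenes: list of scenes, each a list of dialog strings."""
    boundaries = []
    for scene in script_scenes:
        if scene:
            start_time, _ = best_subtitle(scene[0], subtitles)
            boundaries.append(start_time)
    return boundaries

subtitles = [(12.0, "Where were you last night?"),
             (95.5, "Pack your things, we leave at dawn.")]
scenes = [["Where were you last night?", "Out."],
          ["Pack your things. We leave at dawn."]]
print(scene_boundaries(scenes, subtitles))  # [12.0, 95.5]
```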


2008 IEEE International Workshop on Semantic Computing and Applications | 2008

Automatic Subtitles Localization through Speaker Identification in Multimedia System

Seung-Bo Park; Kyung-Jin Oh; Heung-Nam Kim; Geun-Sik Jo

With the increasing popularity of online video, efficient captioning and the display of captioned text (subtitles) have also become accessibility issues. In most cases, however, subtitles are shown in a separate area below the screen, and as a result some viewers lose condensed information about the contents of the video. To improve readability and visibility for viewers, in this paper we present a framework for displaying synchronized text around the speaker in a video. The proposed approach first identifies speakers using face detection technologies and subsequently determines a subtitle region. In addition, we adopt DFXP, the interoperable timed text format of the W3C, to support interchange with existing legacy systems. To achieve smooth playback of multimedia presentations such as SMIL and DFXP, a prototype system, MoNaPlayer, has been implemented. Our case studies show that the proposed system is feasible for several multimedia applications.
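
A rough sketch of the placement step using OpenCV's stock Haar-cascade face detector, assuming the first detected face is the speaker; the paper's actual speaker identification and DFXP handling are omitted.

```python
# Hypothetical sketch: draw the subtitle just below a detected face instead of
# at the bottom of the frame.
# Requires: pip install opencv-python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def draw_subtitle_near_speaker(frame, subtitle):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Fallback: conventional placement at the bottom of the frame.
        x, y = 20, frame.shape[0] - 20
    else:
        # Place the text just under the first (assumed speaking) face.
        fx, fy, fw, fh = faces[0]
        x, y = fx, fy + fh + 25
    cv2.putText(frame, subtitle, (x, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    return frame

# Usage: frame = cv2.imread("shot.jpg")
#        cv2.imwrite("out.jpg", draw_subtitle_near_speaker(frame, "Hello"))
```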


international conference on it convergence and security, icitcs | 2012

Potential Emotion Word in Movie Dialog

Seung-Bo Park; Eun-Soon You; Jason J. Jung

Word-level emotion analysis is the basic step in recognizing emotions. Emotion words that express emotion in dialogs are classified into two classes: direct and potential emotion words. A direct emotion word clearly represents an emotion, whereas a potential emotion word may represent a specific emotion depending on context. Unlike direct emotion words, potential emotion words are hard to extract and identify. In this paper, we propose a WordNet-based method that extracts and identifies potential emotion words as well as direct emotion words. Potential emotion words are extracted by measuring lexical affinity, and we consider the sense distance in order to minimize variation of meaning. In addition, we suggest a maximum sense distance that limits the search space and extracts the best potential emotion words.
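
The maximum-sense-distance idea can be sketched with NLTK's WordNet interface; the anchor senses and the distance cut-off below are hypothetical, and the lexical-affinity step is omitted.

```python
# Hypothetical sketch: treat a word as a "potential" emotion word when one of
# its WordNet senses lies within a maximum sense distance of a direct emotion
# word's sense.
# Requires: pip install nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

DIRECT_EMOTION_SENSES = [wn.synset("anger.n.01"), wn.synset("joy.n.01")]
MAX_SENSE_DISTANCE = 4     # hypothetical cut-off on the WordNet path length

def is_potential_emotion_word(word):
    for synset in wn.synsets(word, pos=wn.NOUN):
        for emotion_sense in DIRECT_EMOTION_SENSES:
            distance = synset.shortest_path_distance(emotion_sense)
            if distance is not None and distance <= MAX_SENSE_DISTANCE:
                return True
    return False

print(is_potential_emotion_word("fury"))    # expected: True (close to anger)
print(is_potential_emotion_word("bicycle")) # expected: False (too distant)
```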


web intelligence | 2009

Character-Net: Character Network Analysis from Video

Seung-Bo Park; Yoo-Won Kim; Mohammed Nazim Uddin; Geun-Sik Jo

Managing video content for searching and summarizing has become a challenging task. Extracting semantics from video scenes enables information to be presented in a more understandable manner, but finding the semantics between video contexts is difficult, and much recent research has focused on this issue. Most videos, such as TV serials and commercial movies, are character-centric. Therefore, the context and relationships between characters need to be organized systematically to analyze the video, which makes it necessary to identify the contextual relationships between characters in each scene and in the video as a whole. We propose Character-Net, a network structure that finds characters in a group of shots, extracts the speaker and listeners in each scene, represents them with character-based graphs, and derives the relationships between all characters by accumulating the character-based graphs over the video. In this paper, we describe how to build Character-Net. Experimental results show that Character-Net is an effective methodology for extracting the major characters in videos.
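
A minimal sketch of the accumulation step, assuming per-scene speaker-to-listener graphs have already been extracted; the data and scoring rule are hypothetical.

```python
# Hypothetical sketch: accumulate per-scene speaker->listener graphs into a
# video-level Character-Net and pick major characters by weighted degree.
from collections import Counter, defaultdict

def accumulate(scene_graphs):
    """scene_graphs: list of dicts mapping (speaker, listener) -> utterance count."""
    video_graph = defaultdict(int)
    for graph in scene_graphs:
        for edge, count in graph.items():
            video_graph[edge] += count
    return video_graph

def major_characters(video_graph, top_k=2):
    degree = Counter()
    for (speaker, listener), count in video_graph.items():
        degree[speaker] += count
        degree[listener] += count
    return [name for name, _ in degree.most_common(top_k)]

scene1 = {("Han", "Leia"): 3, ("Leia", "Han"): 2}
scene2 = {("Han", "Chewbacca"): 1, ("Leia", "Han"): 4}
net = accumulate([scene1, scene2])
print(major_characters(net))  # ['Han', 'Leia']
```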
