
Publication


Featured research published by Hyun-seok Min.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Near-Duplicate Video Clip Detection Using Model-Free Semantic Concept Detection and Adaptive Semantic Distance Measurement

Hyun-seok Min; Jae Young Choi; Wesley De Neve; Yong Man Ro

Motivated by the observation that content transformations tend to preserve the semantic information conveyed by video clips, this paper introduces a novel technique for near-duplicate video clip (NDVC) detection, leveraging model-free semantic concept detection and adaptive semantic distance measurement. In particular, model-free semantic concept detection is realized by taking advantage of the collective knowledge in an image folksonomy (which is an unstructured collection of user-contributed images and tags), facilitating the use of an unrestricted concept vocabulary. Adaptive semantic distance measurement is realized by means of the signature quadratic form distance (SQFD), making it possible to flexibly measure the similarity between video shots that contain a varying number of semantic concepts, and where these semantic concepts may also differ in terms of relevance and nature. Experimental results obtained for the MIRFLICKR-25000 image set (used as a source of collective knowledge) and the TRECVID 2009 video set (used to create query and reference video clips) demonstrate that model-free semantic concept detection and SQFD can be successfully used for the purpose of identifying NDVCs.
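
The SQFD component is compact enough to sketch. Below is a minimal NumPy illustration of the distance between two variable-length concept signatures; it is not the authors' implementation, and the Gaussian ground-similarity kernel with parameter alpha is a common choice assumed here.

```python
import numpy as np

def sqfd(feats1, w1, feats2, w2, alpha=1.0):
    """Signature Quadratic Form Distance between two signatures.

    A signature is a set of representative feature vectors (stand-ins
    here for semantic concept features) with one weight per vector, so
    the two signatures may contain different numbers of concepts.
    """
    feats = np.vstack([feats1, feats2])
    w = np.concatenate([np.asarray(w1), -np.asarray(w2)])  # negate the second signature
    # Ground similarity: Gaussian of the pairwise Euclidean distances.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    a = np.exp(-alpha * d ** 2)
    return float(np.sqrt(max(w @ a @ w, 0.0)))

# Two signatures of different lengths compare directly.
rng = np.random.default_rng(0)
s1, s2 = rng.random((4, 8)), rng.random((6, 8))
print(sqfd(s1, np.full(4, 0.25), s2, np.full(6, 1 / 6)))
```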


International Conference on Multimedia and Expo | 2012

Video Copy Detection Using Inclined Video Tomography and Bag-of-Visual-Words

Hyun-seok Min; Se Min Kim; Wesley De Neve; Yong Man Ro

Techniques for video fingerprinting are helpful in managing vast libraries of video clips. Recent advances have shown that video tomography and Bag-of-Visual-Words (BoVW) can be successfully used for the purpose of video fingerprinting. In this paper, we introduce a novel video signature (i.e., a novel video fingerprint) that takes advantage of both video tomography and BoVW. Specifically, the proposed video signature is created by first extracting inclined tomography images from the video content, and by subsequently applying the BoVW approach to the inclined tomography images obtained. The key to our approach is that we make the angle of inclination of the tomography images dependent on the amount of motion in the video content. That way, the proposed video signature is able to capture both spatial and temporal information. Experimental results obtained for the publicly available TRECVID 2009 video set indicate that video copy detection by means of the proposed video signature is robust against spatial and temporal transformations.
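
As a rough illustration of the core idea, the sketch below extracts a tomography image whose sampling row drifts through the frame at a motion-dependent angle; the specific motion measure (mean frame difference) and the slice geometry are assumptions, since the abstract does not spell them out. A BoVW descriptor would then be computed over the resulting images.

```python
import numpy as np

def inclined_tomography(frames, angle_deg):
    """Extract one inclined tomography image from a video volume.

    frames: (T, H, W) grayscale video; angle_deg: inclination of the
    sampling line in the (t, y) plane, where 0 degrees reduces to a
    conventional horizontal slice. Column t of the output is row
    rows[t] of frame t, so the W x T image mixes space and time.
    """
    t_len, h, _ = frames.shape
    drift = np.tan(np.radians(angle_deg)) * np.arange(t_len)
    rows = np.clip((h // 2 + drift).astype(int), 0, h - 1)
    return np.stack([frames[t, rows[t], :] for t in range(t_len)], axis=1)

def motion_adaptive_angle(frames, max_angle=45.0):
    """Map mean absolute frame difference to an inclination angle; a
    simple stand-in for the paper's (unspecified) motion measure."""
    motion = np.abs(np.diff(frames.astype(float), axis=0)).mean()
    return max_angle * min(motion / 255.0, 1.0)

frames = np.random.default_rng(1).integers(0, 256, (60, 120, 160))
print(inclined_tomography(frames, motion_adaptive_angle(frames)).shape)  # (160, 60)
```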


International Symposium on Multimedia | 2009

Near-Duplicate Video Detection Using Temporal Patterns of Semantic Concepts

Hyun-seok Min; Jae Young Choi; Wesley De Neve; Yong Man Ro

Methods for video copy detection are typically based on the use of low-level visual features. However, low-level features may vary significantly for near-duplicates, which are video sequences that have been the subject of spatial or temporal modifications. As such, the use of low-level visual features may be inadequate for detecting near-duplicates. In this paper, we present a new video copy detection method that aims to identify near-duplicates for a given query video sequence. More specifically, the proposed method is based on identifying semantic concepts along the temporal axis of a particular video sequence, resulting in the construction of a so-called semantic video signature. The semantic video signature is then used for the purpose of similarity measurement. The main advantage of the proposed method lies in the fact that the presence of semantic concepts is highly robust to spatial and temporal video transformations. Our experimental results show that the use of a semantic video signature allows for the efficient and effective detection of near-duplicates.
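
A minimal sketch of the signature idea, assuming a bank of per-concept detectors with a score-in-[0, 1] interface (a hypothetical stand-in for trained detectors) and a simple sliding-window Hamming distance for matching:

```python
import numpy as np

def semantic_signature(shot_features, detectors, threshold=0.5):
    """One row per shot, one column per concept: 1 if the concept is
    judged present in the shot. `detectors` is a list of callables
    mapping a shot feature vector to a score in [0, 1]."""
    scores = np.array([[det(f) for det in detectors] for f in shot_features])
    return (scores >= threshold).astype(np.uint8)

def signature_distance(sig_query, sig_ref):
    """Slide the query signature along the reference and return the
    smallest mean Hamming distance over all alignments."""
    t_q = len(sig_query)
    best = 1.0
    for start in range(len(sig_ref) - t_q + 1):
        best = min(best, float(np.mean(sig_query != sig_ref[start:start + t_q])))
    return best

rng = np.random.default_rng(2)
feats = rng.random((20, 5))
detectors = [lambda f, i=i: f[i] for i in range(5)]   # toy detectors
sig = semantic_signature(feats, detectors)
print(signature_distance(sig[5:10], sig))             # 0.0: exact sub-match
```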


International Conference on Image Processing | 2009

Semantic annotation of personal video content using an image folksonomy

Hyun-seok Min; Jae Young Choi; Wesley De Neve; Yong Man Ro; Konstantinos N. Plataniotis

The increasing popularity of user-generated content (UGC) requires effective annotation techniques in order to facilitate precise content search and retrieval. In this paper, we propose a new approach for the semantic annotation of personal video content, taking advantage of user-contributed tags available in an image folksonomy. Video shots and folksonomy images are first represented by a semantic vector. Next, the semantic vectors are used to measure the semantic similarity between each video shot and the folksonomy images. Tags assigned to semantically similar folksonomy images are then used to annotate the video shots. To verify the effectiveness of the proposed annotation method, experiments were performed with video sequences retrieved from YouTube and images downloaded from Flickr. Our experimental results demonstrate that the proposed method is able to successfully annotate personal video content with user-contributed tags retrieved from an image folksonomy. In addition, the size of our tag vocabulary is significantly larger than the size of the tag vocabulary used by conventional annotation methods.
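
The tag-transfer step can be illustrated with a short sketch: cosine similarity between semantic vectors selects the nearest folksonomy images, and their user-contributed tags are ranked by vote. The neighborhood size k and the voting rule are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from collections import Counter

def annotate_shot(shot_vec, image_vecs, image_tags, k=10, num_tags=5):
    """Transfer tags from the k folksonomy images whose semantic
    vectors are most similar (by cosine) to the shot's semantic vector.

    shot_vec: (d,) semantic vector of one video shot.
    image_vecs: (n, d) semantic vectors of the folksonomy images.
    image_tags: list of n tag lists (user-contributed tags per image).
    """
    sims = image_vecs @ shot_vec / (
        np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(shot_vec) + 1e-12)
    neighbors = np.argsort(sims)[::-1][:k]
    # Rank candidate tags by how many neighbors carry them.
    votes = Counter(tag for i in neighbors for tag in image_tags[i])
    return [tag for tag, _ in votes.most_common(num_tags)]
```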


International Conference on Multimedia and Expo | 2010

Towards using semantic features for near-duplicate video detection

Hyun-seok Min; Wesley De Neve; Yong Man Ro

An increasing number of near-duplicate video clips (NDVCs) can be found on websites for video sharing. These NDVCs often infringe copyright or clutter search results. Consequently, a high need exists for techniques that allow identifying NDVCs. NDVC detection techniques represent a video clip with a unique set of features. Conventional video signatures typically make use of low-level visual features (e.g., color histograms, local interest points). However, low-level visual features are sensitive to transformations of the video content. In this paper, given the observation that transformations preserve the semantic information in the video content, we study the use of semantic features for the purpose of identifying NDVCs. Experimental results obtained for the MUSCLE-VCD-2007 dataset indicate that semantic features have a high level of robustness against transformations and different keyframe selection strategies. In addition, when relying on the temporal variation of semantic features, semantic video signatures are characterized by a high degree of uniqueness, even when a vocabulary with a low number of semantic concepts is in use (for a query video clip that is sufficiently long).


International Conference on Information Systems | 2009

Malicious content filtering based on semantic features

Semin Kim; Hyun-seok Min; Jaehyun Jeon; Yong Man Ro; Seung-Wan Han

This paper proposes a method for filtering malicious content using semantic features. Conventional content-based approaches use low-level features, such as color and texture, to filter malicious content, but detection is difficult because of the semantic gap between low-level features and global concepts. In this paper, global concepts are divided into several semantic features, which are then used to classify the global concept of malicious content. We design the semantic features and construct a semantic classifier. In our experiments, we evaluate filtering performance by comparing low-level features with semantic features. The results show that the proposed method performs better than a method using only low-level features.
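
A generic sketch of such a two-stage pipeline, assuming SVM classifiers throughout (the abstract does not name the classifiers used):

```python
import numpy as np
from sklearn.svm import SVC

class SemanticFilter:
    """Stage 1: one binary classifier per semantic feature, trained on
    low-level features. Stage 2: a global classifier over the vector of
    semantic scores. SVMs here are an assumption, not the paper's
    stated choice."""

    def __init__(self, num_semantic):
        self.semantic_clfs = [SVC(probability=True) for _ in range(num_semantic)]
        self.global_clf = SVC(probability=True)

    def fit(self, X, semantic_labels, global_labels):
        # X: (n, d) low-level features; semantic_labels: (n, k) binary.
        for j, clf in enumerate(self.semantic_clfs):
            clf.fit(X, semantic_labels[:, j])
        self.global_clf.fit(self._scores(X), global_labels)
        return self

    def _scores(self, X):
        # Per-concept presence probabilities form the semantic feature vector.
        return np.column_stack(
            [clf.predict_proba(X)[:, 1] for clf in self.semantic_clfs])

    def predict(self, X):
        return self.global_clf.predict(self._scores(X))
```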


International Conference on Digital Forensics | 2012

Face verification using color sparse representation

Wook Jin Shin; Seung Ho Lee; Hyun-seok Min; Hosik Sohn; Yong Man Ro

This paper proposes an effective method for face verification using color sparse representation. In the proposed method, sparse representations are separately applied to multiple color bands of face images. The complementary residuals obtained from the multiple color face images are merged by means of score-level fusion, yielding improved discrimination capability for face verification. Experimental results using two public face databases (CMU Multi-PIE and Color FERET) showed that the proposed face verification method is highly robust under challenging conditions, compared to the conventional methods using grayscale sparse representation.
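
The sketch below illustrates sparse-representation scoring per color band followed by score-level fusion, in the general style of sparse representation classification (SRC); the OMP solver, the residual normalisation, and the acceptance threshold tau are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def class_residuals(probe, dictionary, labels, n_nonzero=20):
    """SRC-style scoring for one color band: sparse-code the probe over
    a gallery dictionary, then measure the reconstruction residual of
    each subject's atoms.

    probe: (d,) face feature vector for one color band.
    dictionary: (d, n) matrix whose columns are gallery face vectors.
    labels: (n,) subject id of each column.
    """
    omp = OrthogonalMatchingPursuit(
        n_nonzero_coefs=min(n_nonzero, dictionary.shape[1]))
    omp.fit(dictionary, probe)
    x = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)          # keep class-c coefficients
        residuals[c] = np.linalg.norm(probe - dictionary @ xc)
    return residuals

def verify(probe_bands, dict_bands, labels, claimed_id, tau=0.5):
    """Score-level fusion across color bands: average the claimed
    subject's normalised residual over the bands and accept when the
    fused score falls below tau (the decision rule is an assumption)."""
    score = 0.0
    for probe, dic in zip(probe_bands, dict_bands):
        res = class_residuals(probe, dic, labels)
        score += res[claimed_id] / (sum(res.values()) + 1e-12)
    return score / len(probe_bands) < tau
```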


Signal Processing: Image Communication | 2011

Bimodal fusion of low-level visual features and high-level semantic features for near-duplicate video clip detection

Hyun-seok Min; Jae Young Choi; Wesley De Neve; Yong Man Ro

The detection of near-duplicate video clips (NDVCs) is an area of current research interest and intense development. Most NDVC detection methods represent video clips with a unique set of low-level visual features, typically describing color or texture information. However, low-level visual features are sensitive to transformations of the video content. Given the observation that transformations tend to preserve the semantic information conveyed by the video content, we propose a novel approach for identifying NDVCs, making use of both low-level visual features (that is, MPEG-7 visual features) and high-level semantic features (that is, 32 semantic concepts detected using trained classifiers). Experimental results obtained for the publicly available MUSCLE-VCD-2007 and TRECVID 2008 video sets show that bimodal fusion of visual and semantic features facilitates robust NDVC detection. In particular, the proposed method is able to identify NDVCs with a low missed detection rate (3% on average) and a low false alarm rate (2% on average). In addition, the combined use of visual and semantic features outperforms the separate use of either of them in terms of NDVC detection effectiveness. Further, we demonstrate that the effectiveness of the proposed method is on par with or better than the effectiveness of three state-of-the-art NDVC detection methods either making use of temporal ordinal measurement, features computed using the Scale-Invariant Feature Transform (SIFT), or bag-of-visual-words (BoVW). We also show that the influence of the effectiveness of semantic concept detection on the effectiveness of NDVC detection is limited, as long as the mean average precision (MAP) of the semantic concept detectors used is higher than 0.3. Finally, we illustrate that the computational complexity of our NDVC detection method is competitive with the computational complexity of the three aforementioned NDVC detection methods.
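
The fusion step itself can be sketched in a few lines: distances from each modality are min-max normalised over the reference set and combined with a convex weight. The equal default weight and the Euclidean per-modality distances are assumptions; the abstract does not give the actual fusion rule or weights.

```python
import numpy as np

def ndvc_rank(query_vis, query_sem, ref_vis, ref_sem, w=0.5):
    """Rank reference clips by a fused visual+semantic distance.

    query_vis: (dv,) low-level descriptor of the query clip.
    query_sem: (ds,) semantic concept scores (e.g., 32 detector outputs).
    ref_vis: (n, dv) and ref_sem: (n, ds) for the reference set.
    Each modality is min-max normalised over the reference set so that
    neither dominates the convex combination by scale alone.
    """
    def minmax(d):
        span = d.max() - d.min()
        return (d - d.min()) / span if span > 0 else np.zeros_like(d)

    d_vis = minmax(np.linalg.norm(ref_vis - query_vis, axis=1))
    d_sem = minmax(np.linalg.norm(ref_sem - query_sem, axis=1))
    return np.argsort(w * d_vis + (1.0 - w) * d_sem)   # best match first
```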


International Conference on Image Processing | 2010

Exploiting collective knowledge in an image folksonomy for semantic-based near-duplicate video detection

Hyun-seok Min; Wesley De Neve; Yong Man Ro

An increasing number of duplicates and near-duplicates can be found on websites for video sharing. These duplicates and near-duplicates often infringe copyright or clutter search results. Consequently, a high need exists for techniques that allow identifying duplicates and near-duplicates. In this paper, we propose a semantic-based approach towards the task of identifying near-duplicates. Our approach makes use of semantic video signatures that are constructed by detecting semantic concepts along the temporal axis of video sequences. Specifically, we make use of an image folksonomy (i.e., a set of user-contributed images annotated with user-supplied tags) to detect semantic concepts in video sequences, making it possible to exploit an unrestricted concept vocabulary. Comparative experiments using the MUSCLE-VCD-2007 dataset and folksonomy images retrieved from Flickr show that our approach is successful in identifying near-duplicates.


International Conference on Multimedia and Expo | 2011

Leveraging an image folksonomy and the Signature Quadratic Form Distance for semantic-based detection of near-duplicate video clips

Hyun-seok Min; Jae Young Choi; Wesley De Neve; Yong Man Ro

Being able to detect near-duplicate video clips (NDVCs) is a prerequisite for a plethora of multimedia applications. Given the observation that content transformations tend to preserve semantic information, techniques for NDVC detection may benefit from the use of a semantic approach. This paper discusses how an image folksonomy (i.e., community-contributed images and metadata) and the Signature Quadratic Form Distance (SQFD) can be leveraged for the purpose of identifying NDVCs. Experimental results obtained for the MIRFLICKR-25000 image set and the TRECVID 2009 video set indicate that an image folksonomy and SQFD can be successfully used for detecting NDVCs. In addition, our findings show that model-free NDVC detection (i.e., NDVC detection using an image folksonomy) has a higher semantic coverage than model-based NDVC detection (i.e., NDVC detection using the VIREO-374 semantic concept models).

Collaboration


Dive into Hyun-seok Min's collaborations.

Top Co-Authors

Seung-Wan Han
Electronics and Telecommunications Research Institute

Dae Hwan Hwang
Electronics and Telecommunications Research Institute

Eul-Gyoon Lim
Electronics and Telecommunications Research Institute

H. J. Shin
Pohang University of Science and Technology