Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haoran Yi is active.

Publication


Featured research published by Haoran Yi.


Pattern Recognition Letters | 2005

A new motion histogram to index motion content in video segments

Haoran Yi; Deepu Rajan; Liang-Tien Chia

A new motion feature for video indexing is proposed in this paper. The motion content of the video at the pixel level is represented as a Pixel Change Ratio Map (PCRM). The PCRM enables us to capture the intensity of motion in a video sequence. It also indicates the spatial location and size of the moving object. The proposed motion feature is the motion histogram, which is a non-uniformly quantized histogram of the PCRM. We demonstrate the usefulness of the motion histogram with three applications, viz., video retrieval, video clustering and video classification.
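The PCRM and its non-uniform histogram can be sketched as follows. This is an illustrative reading of the abstract, not the paper's method: the change-ratio definition, the intensity threshold, and the bin edges are all assumptions.

```python
# Illustrative sketch of a Pixel Change Ratio Map (PCRM) and a
# non-uniformly quantized motion histogram. Threshold and bin edges
# are assumptions, not the paper's parameters.
from bisect import bisect_right

def pixel_change_ratio_map(frames, threshold=10):
    """For each pixel, the fraction of frame transitions whose intensity
    change exceeds `threshold` (grayscale frames as 2-D lists)."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames) - 1
    pcrm = [[0.0] * w for _ in range(h)]
    for a, b in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(a[y][x] - b[y][x]) > threshold:
                    pcrm[y][x] += 1.0 / n
    return pcrm

def motion_histogram(pcrm, edges=(0.1, 0.25, 0.5, 0.75)):
    """Non-uniform quantization: finer bins at low change ratios."""
    hist = [0] * (len(edges) + 1)
    for row in pcrm:
        for v in row:
            hist[bisect_right(edges, v)] += 1
    return hist
```

Pixels that change in many frame transitions land in the high-ratio bins, so the histogram summarizes both how much of the frame moves and how intensely.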


International Conference on Multimedia and Expo | 2005

Adaptive hierarchical multi-class SVM classifier for texture-based image classification

Song Liu; Haoran Yi; Liang-Tien Chia; Deepu Rajan

In this paper, we present a new classification scheme based on support vector machines (SVM) and a new texture feature, called the texture correlogram, for high-level image classification. Originally, the SVM classifier was designed to solve only binary classification problems. In order to deal with multiple classes, we present a new method to dynamically build up a hierarchical structure from the training dataset. The texture correlogram is designed to capture spatial distribution information. Experimental results demonstrate that the proposed classification scheme and texture feature are effective for high-level image classification tasks, and that the proposed scheme is more efficient than the other schemes while achieving almost the same classification accuracy. Another advantage of the proposed scheme is that the underlying hierarchical structure of the SVM classification tree manifests the interclass relationships among the different classes.
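Dynamically building a class hierarchy from training data can be sketched as below. This is only a sketch of the idea: a nearest-centroid rule stands in for the binary SVM at each node, and the farthest-pair splitting heuristic is an assumption, not the paper's algorithm.

```python
# Sketch: build a binary class hierarchy from training data, then
# classify by descending it. A nearest-centroid rule stands in for
# the per-node binary SVM; the split heuristic is an assumption.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    return tuple(sum(c) / len(c) for c in zip(*points))

def build_tree(classes):
    """classes: {label: [feature vectors]} -> nested binary tree."""
    if len(classes) == 1:
        return next(iter(classes))           # leaf: a single class label
    cents = {k: centroid(v) for k, v in classes.items()}
    labels = list(cents)
    # seed the two branches with the farthest-apart class centroids
    a, b = max(((i, j) for i in labels for j in labels if i != j),
               key=lambda p: dist(cents[p[0]], cents[p[1]]))
    left = {k: v for k, v in classes.items()
            if dist(cents[k], cents[a]) <= dist(cents[k], cents[b])}
    right = {k: v for k, v in classes.items() if k not in left}
    return (build_tree(left), build_tree(right),
            centroid([p for v in left.values() for p in v]),
            centroid([p for v in right.values() for p in v]))

def classify(tree, x):
    while isinstance(tree, tuple):
        l, r, cl, cr = tree
        tree = l if dist(x, cl) <= dist(x, cr) else r
    return tree
```

A multi-class decision then takes only O(log k) binary decisions instead of the k(k-1)/2 of a one-vs-one scheme, which is where the efficiency gain comes from.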


Advances in Multimedia | 2004

Semantic video indexing and summarization using subtitles

Haoran Yi; Deepu Rajan; Liang-Tien Chia

How to build a semantic index for multimedia data is an important and challenging problem for multimedia information systems. In this paper, we present a novel approach to building a semantic video index for digital videos by analyzing the subtitle files of DVD/DivX videos. The proposed approach consists of three stages, viz., script extraction, script partition and script vector representation. First, the scripts are extracted from the subtitle files available in the DVD/DivX videos. Then, the extracted scripts are partitioned into segments. Finally, the partitioned script segments are converted into a tf-idf vector representation, which acts as the semantic index. The efficiency of the semantic index is demonstrated through video retrieval and summarization applications. Experimental results demonstrate that the proposed approach is very promising.
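The third stage, turning script segments into tf-idf vectors that can be matched by cosine similarity, can be sketched in a few lines. Tokenization and the exact weighting scheme here are assumptions; the paper's preprocessing may differ.

```python
# Sketch of tf-idf vectors over partitioned script segments, with
# cosine similarity for retrieval. Tokenization (whitespace, lowercase)
# and the weighting formula are assumptions.
import math
from collections import Counter

def tfidf_vectors(segments):
    """segments: list of strings -> list of {term: tf-idf weight} dicts."""
    docs = [Counter(s.lower().split()) for s in segments]
    n = len(docs)
    df = Counter(t for d in docs for t in d)          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: (c / sum(d.values())) * idf[t] for t, c in d.items()}
            for d in docs]

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0.0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0
```

A text query is vectorized the same way, and the segments with the highest cosine scores point back to the corresponding video intervals.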


Information Systems | 2006

A motion-based scene tree for browsing and retrieval of compressed videos

Haoran Yi; Deepu Rajan; Liang-Tien Chia

This paper describes a fully automatic content-based approach for browsing and retrieval of MPEG-2 compressed video. The first step of the approach is the detection of shot boundaries based on motion vectors available from the compressed video stream. The next step involves the construction of a scene tree from the shots obtained earlier. The scene tree is shown to capture some semantic information as well as to provide a construct for hierarchical browsing of compressed videos. Finally, we build a new model for video similarity based on global as well as local motion associated with each node in the scene tree. To this end, we propose new approaches to camera motion and object motion estimation. The experimental results demonstrate that the integration of the above techniques results in an efficient framework for browsing and searching large video databases.
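The first step described above can be caricatured as thresholding the frame-to-frame change in aggregate motion-vector energy. This is only an illustration of the idea: real MPEG-2 bitstream parsing, the scene-tree construction, and the similarity model are omitted, and the threshold is an assumption.

```python
# Sketch of shot-boundary detection from per-frame motion-vector
# energy (sum of motion-vector magnitudes). The threshold is an
# assumption; the paper's actual detector works on the MPEG-2 stream.

def shot_boundaries(mv_energy, threshold=5.0):
    """Return frame indices where motion energy jumps abruptly."""
    return [i for i in range(1, len(mv_energy))
            if abs(mv_energy[i] - mv_energy[i - 1]) > threshold]
```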


Multimedia Tools and Applications | 2005

Automatic Generation of MPEG-7 Compliant XML Document for Motion Trajectory Descriptor in Sports Video

Haoran Yi; Deepu Rajan; Liang-Tien Chia

The MPEG-7 standard is a step towards standardizing the description of multimedia content so that quick and efficient identification of relevant content can be facilitated, together with efficient management of information. The description definition language (DDL) is a schema language to represent valid MPEG-7 descriptors and description schemes. MPEG-7 instances are XML documents that conform to a particular MPEG-7 schema, as expressed in the DDL and that describe audiovisual content. In this paper, we pick one of the visual descriptors related to motion in a video sequence, viz., motion trajectory. It describes the displacements of objects in time, where an object is defined as a spatiotemporal region or set of spatiotemporal regions. We present a method of automatically extracting trajectories from video sequences and generating an XML document that conforms to the MPEG-7 schema. We use sports videos in particular, because the trajectories are very random and the robustness of our algorithm can be demonstrated.
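The XML-generation step can be illustrated with Python's xml.etree.ElementTree. The element names below are a simplified, hypothetical rendering of a motion-trajectory description and do not follow the exact MPEG-7 DDL schema.

```python
# Sketch: serialize an extracted trajectory as an MPEG-7-style XML
# document. Element names are simplified illustrations, not the
# normative MPEG-7 MotionTrajectory schema.
import xml.etree.ElementTree as ET

def trajectory_to_xml(points):
    """points: list of (t, x, y) samples of one object's trajectory."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Descriptor", type="MotionTrajectoryType")
    for t, x, y in points:
        kp = ET.SubElement(desc, "KeyPoint")
        ET.SubElement(kp, "MediaTime").text = str(t)
        ET.SubElement(kp, "Coord").text = f"{x} {y}"
    return ET.tostring(root, encoding="unicode")
```

Validating the generated document against the real schema (expressed in the DDL) would be a separate step with an XML Schema validator.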


Image and Vision Computing | 2006

A motion-based scene tree for compressed video content management

Haoran Yi; Deepu Rajan; Liang-Tien Chia

This paper describes a fully automatic content-based approach for browsing and retrieval of MPEG-2 compressed video. The first step of the approach is the detection of shot boundaries based on motion vectors available from the compressed video stream. The next step involves the construction of a scene tree from the shots obtained earlier. The scene tree is shown to capture some semantic information as well as provide a construct for hierarchical browsing of compressed videos. Finally, we build a new model for video similarity based on global as well as local motion associated with each node in the scene tree. To this end, we propose new approaches to camera motion and object motion estimation. The experimental results demonstrate that the integration of the above techniques results in an efficient framework for browsing and searching large video databases.


International Conference on Acoustics, Speech, and Signal Processing | 2005

Global motion compensated key frame extraction from compressed videos

Haoran Yi; Deepu Rajan; Liang-Tien Chia

A key frame extraction approach, based on change detection of DC images extracted from compressed video, is proposed in this paper. We define a simple pixel change map that captures additional information in a frame with respect to its adjacent frames. Since global motion contributes to pixel changes, falsely indicating the presence of key frames, it is compensated by adaptively filtering the pixel change map using a modified version of the least mean square (LMS) algorithm. The prediction errors thus obtained are used to subsequently select the key frames. The key frames are selected so that the cumulative prediction error is partitioned into equal amounts in each segment. The entire procedure is computationally simple and flexible. Experimental results illustrate the good performance of the proposed algorithm.
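The two key steps can be sketched as follows: an LMS adaptive predictor whose residual stands in for the motion-compensated change signal, and key-frame selection at equal shares of the cumulative error. Filter order and step size are assumptions, and the real method operates on 2-D pixel change maps rather than a 1-D signal.

```python
# Sketch: LMS prediction errors on a per-frame change signal, then
# key frames chosen so cumulative error is split into equal parts.
# Filter order and step size (mu) are assumptions.

def lms_prediction_errors(signal, order=2, mu=0.01):
    """Return |prediction error| per frame from an adaptive LMS filter."""
    w = [0.0] * order
    errs = []
    for i in range(order, len(signal)):
        x = signal[i - order:i]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = signal[i] - y
        errs.append(abs(e))
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
    return errs

def select_key_frames(errs, k):
    """Indices splitting cumulative error into k roughly equal parts."""
    total = sum(errs)
    keys, acc, target = [], 0.0, total / k
    for i, e in enumerate(errs):
        acc += e
        if acc >= target * (len(keys) + 1) and len(keys) < k:
            keys.append(i)
    return keys
```

Because smooth global motion is predictable, it produces small residuals; only genuinely new content yields large errors, which is what concentrates key frames around real changes.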


ACM Multimedia | 2004

Automatic extraction of motion trajectories in compressed sports videos

Haoran Yi; Deepu Rajan; Liang-Tien Chia

This paper presents an algorithm for automatically extracting significant motion trajectories in sports videos. Our approach consists of four stages: global motion estimation, motion blob detection, trajectory evolution and trajectory refinement. Global motion is estimated from the motion vectors in the compressed video using an iterative algorithm with robust outlier rejection. A statistical hypothesis test is carried out within the Block Rejection Map (BRM), which is the by-product of the global motion estimation, for the detection of motion blobs. Trajectory evolution is the process in which each motion blob is either appended to an existing trajectory or considered to be the beginning of a new trajectory, based on its distance to an adaptive trajectory description. Finally, the extracted motion trajectories are refined using a Kalman filter. Experimental results on both indoor and outdoor sports videos demonstrate the effectiveness and efficiency of the proposed method.
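The final refinement stage can be sketched as a constant-velocity Kalman filter smoothing one coordinate of a noisy blob trajectory. The constant-velocity model and the noise parameters are assumptions; the paper does not publish these settings here.

```python
# Sketch: constant-velocity Kalman filter on one trajectory coordinate.
# State is [position, velocity]; process/measurement noise (q, r) are
# assumptions.

def kalman_smooth(zs, q=1e-3, r=1.0):
    """zs: noisy positions -> filtered positions."""
    x = [zs[0], 0.0]                       # initial state [pos, vel]
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    out = []
    for z in zs:
        # predict with F = [[1, 1], [0, 1]] (constant velocity)
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # update with measurement z, H = [1, 0]
        k0 = P[0][0] / (P[0][0] + r)
        k1 = P[1][0] / (P[0][0] + r)
        e = z - x[0]
        x = [x[0] + k0 * e, x[1] + k1 * e]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[0])
    return out
```

Running one such filter per coordinate suppresses the jitter of per-frame blob centroids while preserving the overall path of the object.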


IEEE Transactions on Circuits and Systems for Video Technology | 2006

Dynamic Programming-Based Reverse Frame Selection for VBR Video Delivery Under Constrained Resources

Dayong Tao; Jianfei Cai; Haoran Yi; Deepu Rajan; Liang-Tien Chia; King Ngi Ngan

In this paper, we investigate optimal frame-selection algorithms based on dynamic programming for delivering stored variable bit rate (VBR) video under both bandwidth and buffer size constraints. Our objective is to find a feasible set of frames that maximizes the video's accumulated motion value without violating any constraint. It is well known that dynamic programming has high complexity. In this research, we propose to eliminate nonoptimal intermediate frame states, which effectively reduces the complexity of the dynamic programming. Moreover, we propose a reverse frame selection (RFS) algorithm, where the selection starts from the last frame and ends at the first frame. Compared with conventional dynamic programming-based forward frame selection, RFS is able to find all of the optimal results for different preloads in one round. We further extend the RFS scheme to solve the problem of frame selection for VBR channels. In particular, we first perform the RFS algorithm offline, where the complexity is modest and scalable with the aid of frame stuffing and nonoptimal state elimination. During online streaming, we only need to retrieve the optimal frame-selection path from the pregenerated offline results, and it can be applied to any VBR channel that can be modeled as piecewise-CBR channels. Experimental results show the good performance of our proposed algorithms.
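The flavor of the underlying optimization can be illustrated with a much-simplified stand-in: choosing frames to maximize total motion value under a single bit budget via a 0/1-knapsack dynamic program. This is an illustration only; the paper's RFS algorithm models bandwidth and buffer state over time and runs the recursion in reverse, which this sketch does not attempt.

```python
# Highly simplified stand-in for frame selection: maximize total
# motion value of kept frames under one bit budget (0/1 knapsack).
# The paper's actual model (buffer dynamics, reverse recursion,
# state elimination) is not reproduced here.

def select_frames(sizes, motions, budget):
    """Return (best total motion, chosen frame indices)."""
    n = len(sizes)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                 # skip frame i-1
            if sizes[i - 1] <= b:                        # or keep it
                cand = best[i - 1][b - sizes[i - 1]] + motions[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    # backtrack to recover the selected frames
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= sizes[i - 1]
    return best[n][budget], sorted(chosen)
```

Even in this toy form, the table rows are the "frame states" of the paper; pruning rows that cannot lie on any optimal path is what the state-elimination idea targets.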


advances in multimedia | 2004

Semantic analysis of basketball video using motion information

Song Liu; Haoran Yi; Liang-Tien Chia; Deepu Rajan; Syin Chan

This paper presents a new method for extracting semantic information from basketball video. Our approach consists of three stages: shot and scene boundary detection, scene classification, and semantic video analysis for event detection. The scene boundary detection algorithm is based on both visual and motion prediction information. After the shot and scene boundary detection, a set of visual and motion features is extracted from each scene or shot. The motion features, describing the total motion, camera motion and object motion within the scene, respectively, are computed from the motion vectors of the compressed video using an iterative algorithm with robust outlier rejection. Finally, the extracted features are used to differentiate offensive/defensive activities in the scenes. By analyzing the offensive/defensive activities, the positions of potential semantic events, such as fouls and goals, are located. Experimental results demonstrate the effectiveness of the proposed method.

Collaboration


Dive into Haoran Yi's collaborations.

Top Co-Authors

Deepu Rajan, Nanyang Technological University
Liang-Tien Chia, Nanyang Technological University
Song Liu, Nanyang Technological University
Dayong Tao, Nanyang Technological University
Jianfei Cai, Nanyang Technological University
Syin Chan, Nanyang Technological University
King Ngi Ngan, The Chinese University of Hong Kong