
Publication


Featured research published by Junfeng Jiang.


Computer Vision and Pattern Recognition | 2010

A new player-enabled rapid video navigation method using temporal quantization and repeated weighted boosting search

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we present a new temporal quantization-based method using repeated weighted boosting search (RWBS) to navigate video content non-uniformly. In particular, we formulate rapid video navigation as a generic sampling problem. We present a video temporal density function (VTDF) based on inter-frame mutual information to describe the time density of video activities. A new VTDF-based temporal quantization method using RWBS is then applied to find the best quanta and partition in the time domain. The video frames that are the nearest neighbors to the quanta in the quantization codebook are sampled to navigate the video. A video player implemented on top of the proposed method navigates all sampled frames in its intelligent fast-forward mode, demonstrating the feasibility of the method in practice. Experimental results show that the proposed method is effective in capturing the important semantic information of the video during rapid navigation.
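The abstract does not give the VTDF formula, but since it is built on inter-frame mutual information, the density estimate can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' code: the function names, the histogram-based mutual-information estimator, and the choice of mapping low mutual information (rapid change) to high activity density are all assumptions.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Mutual information between the gray-level distributions of two frames,
    estimated from a joint histogram (an assumed estimator)."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    joint /= joint.sum()
    px = joint.sum(axis=1)          # marginal of frame_a
    py = joint.sum(axis=0)          # marginal of frame_b
    nz = joint > 0                  # avoid log(0)
    return float(np.sum(joint[nz] *
                        np.log(joint[nz] / (px[:, None] * py[None, :])[nz])))

def vtdf(frames):
    """Video time density function sketch: low inter-frame mutual information
    (rapid change) is taken to mean high activity density."""
    mi = np.array([mutual_information(frames[i], frames[i + 1])
                   for i in range(len(frames) - 1)])
    density = mi.max() - mi + 1e-9  # invert: more change -> more density
    return density / density.sum()  # normalize to a pdf over time
```

With such a density in hand, the temporal quantization step amounts to placing the quanta where the density mass concentrates, which RWBS then refines.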


International Conference on Acoustics, Speech, and Signal Processing | 2011

A new video similarity measure model based on video time density function and dynamic programming

Junfeng Jiang; Xiao-Ping Zhang; Alexander C. Loui

In this paper, we propose a novel video similarity measure model using the video time density function (VTDF) and dynamic programming. First, we employ the VTDF to describe the density of video activities in the time domain by calculating inter-frame mutual information. Second, a temporal partition solution is applied to divide each video sequence into equal-sized temporal segments. Third, a new VTDF-based, correlation-driven similarity measure is calculated between pairs of temporal segments. Fourth, dynamic programming is used to find the optimal non-linear mapping between the two video sequences. A new normalized similarity measure function, combining visual characteristics and temporal information, then evaluates the semantic similarity of the two video sequences. Experimental results show that the proposed measurement model is effective in exploring the semantic similarity of video sequences.
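The dynamic-programming step resembles dynamic time warping over a segment-to-segment correlation matrix. The sketch below is an assumption about the recursion and normalization, which the abstract does not specify; the function names are hypothetical.

```python
import numpy as np

def segment_correlation(seg_a, seg_b):
    """Pearson correlation between two segment feature vectors."""
    a = seg_a - seg_a.mean()
    b = seg_b - seg_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def align_segments(segs_a, segs_b):
    """DTW-style dynamic program: find the monotone non-linear mapping
    between two segment sequences that maximizes accumulated correlation."""
    n, m = len(segs_a), len(segs_b)
    sim = np.array([[segment_correlation(a, b) for b in segs_b] for a in segs_a])
    dp = np.full((n + 1, m + 1), -np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # extend the best of the three predecessor alignments
            dp[i, j] = sim[i - 1, j - 1] + max(dp[i - 1, j - 1],
                                               dp[i - 1, j], dp[i, j - 1])
    return dp[n, m] / (n + m)   # crude length normalization (assumed)
```

A sequence aligned against itself scores higher than against a dissimilar one, which is the behavior a similarity measure of this kind needs.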


Multimedia Signal Processing | 2010

Gaussian mixture vector quantization-based video summarization using independent component analysis

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we propose a new Gaussian mixture vector quantization (GMVQ)-based method to summarize video content. In particular, to explore the semantic characteristics of video data, we first present a new feature extraction method using independent component analysis (ICA) and color histogram differences to build a compact 3D feature space. A new GMVQ method is then developed to find the optimized quantization codebook, whose optimal size is determined by the Bayesian information criterion (BIC). The video frames that are the nearest neighbors to the quanta in the GMVQ codebook are sampled to summarize the video content, and a kD-tree-based nearest-neighbor search strategy accelerates the search. Experimental results show that our method is computationally efficient and practically effective for building a content-based video summarization system.
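The BIC-selected codebook plus kD-tree lookup can be sketched with off-the-shelf components. This is a minimal sketch, not the paper's GMVQ formulation: a plain scikit-learn Gaussian mixture stands in for GMVQ, and the component means are used as the codebook, which is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.mixture import GaussianMixture

def summarize(features, max_components=6, random_state=0):
    """Pick the mixture size by minimum BIC, use the component means as the
    quantization codebook, and return the index of the frame nearest to each
    codeword via a kD-tree nearest-neighbor search."""
    best = min(
        (GaussianMixture(n_components=k, random_state=random_state).fit(features)
         for k in range(1, max_components + 1)),
        key=lambda g: g.bic(features))
    _, idx = cKDTree(features).query(best.means_)  # nearest frame per codeword
    return sorted({int(i) for i in idx})
```

On well-separated frame clusters, BIC recovers the cluster count and the summary picks one representative frame per cluster.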


International Conference on Acoustics, Speech, and Signal Processing | 2011

Video thumbnail extraction using video time density function and independent component analysis mixture model

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we propose a new vector quantization method to create video thumbnails. In particular, we first employ the video time density function (VTDF) to explore the temporal characteristics of the video data. A VTDF-based temporal quantization is then applied to segment the whole video in the time domain, with the optimal number of segments obtained by a temporal mean square error (TMSE)-based criterion. We apply independent component analysis (ICA) to each temporal segment for feature extraction and build a compact 2D feature space. An ICA mixture-based vector quantization method is developed to explore the spatial characteristics of the video data, with the optimal number of ICA mixture components determined by the Bayesian information criterion (BIC). The video frames that are the nearest neighbors to the quantization codebook are sampled to generate the video thumbnails. Experimental results show that our method is computationally efficient and practically effective in creating video thumbnails.
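The per-segment ICA feature extraction can be sketched with scikit-learn's FastICA. This is an illustrative stand-in, not the authors' pipeline; the function name and the choice of FastICA as the ICA algorithm are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_features(frames, dims=2, random_state=0):
    """Project flattened frames of one temporal segment into a compact
    low-dimensional ICA subspace (the '2D feature space' of the abstract).

    frames: (n_frames, n_pixels) matrix of flattened grayscale frames.
    Returns an (n_frames, dims) feature matrix."""
    ica = FastICA(n_components=dims, random_state=random_state)
    return ica.fit_transform(np.asarray(frames, dtype=float))
```

Each segment's frames then live in a 2D space where the mixture-based vector quantization and BIC model selection of the abstract can operate cheaply.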


International Conference on Intelligent Computing | 2010

A new hierarchical key frame tree-based video representation method using independent component analysis

Junfeng Jiang; Xiao-Ping Zhang

Key frame-based video representation is a procedure that summarizes video content by mapping the entire video stream to a few representative frames. However, existing methods are either computationally expensive when extracting key frames above the shot level or ineffective at laying out the key frames sequentially. To overcome these shortcomings, we present a new hierarchical key frame tree-based video representation technique that models the video content hierarchically. Concretely, by projecting video frames from an illumination-invariant raw feature space into a low-dimensional independent component analysis (ICA) subspace, each video frame is represented by a compact two-dimensional feature vector. A new kD-tree-based method is then employed to extract the key frames at the shot level, and a hierarchical agglomerative clustering-based method organizes the key frames hierarchically. Experimental results show that the proposed method is computationally efficient at modeling semantic video content hierarchically.
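The hierarchical organization of shot-level key frames can be sketched with SciPy's agglomerative clustering. This is a minimal sketch under assumed details: the linkage method, the "nearest to centroid" representative rule, and the function name are all choices the abstract does not specify.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def keyframe_hierarchy(keyframe_features, n_levels=3):
    """Organize shot-level key frames into a hierarchy via agglomerative
    clustering; level k keeps at most k representative frames."""
    Z = linkage(keyframe_features, method="average")
    levels = {}
    for k in range(1, n_levels + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        reps = []
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            centroid = keyframe_features[members].mean(axis=0)
            dist = np.linalg.norm(keyframe_features[members] - centroid, axis=1)
            # representative = key frame closest to its cluster's mean
            reps.append(int(members[np.argmin(dist)]))
        levels[k] = sorted(reps)
    return levels   # level k -> representative key-frame indices
```

Coarse levels give a one-glance overview; descending the tree progressively reveals more key frames, which is the browsing behavior a hierarchical representation is meant to support.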


Automated Information Extraction in Media Production | 2010

A novel video thumbnail extraction method using spatiotemporal vector quantization

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we propose a new spatiotemporal vector quantization method to create video thumbnails. In particular, we present a novel video data modeling tool, the video time density function (VTDF), to explore the temporal characteristics of video content. A VTDF-based temporal quantization is applied to segment the video data in the time domain, with the optimal number of segments obtained by a temporal mean square error (TMSE)-based criterion. For each segment, we first use independent component analysis (ICA) to build a compact 2D feature space. A Gaussian mixture-based vector quantization method is then employed to explore the spatial characteristics of each segment, with the optimal number of Gaussian components determined by the Bayesian information criterion (BIC). The video frames that are the nearest neighbors to the quantization codebook are extracted to abstract the whole segment. Experimental results show that our method is computationally efficient and practically effective in creating content-based video thumbnails.


2012 International Conference on Computing, Networking and Communications (ICNC) | 2012

Trends and opportunities in consumer video content navigation and analysis

Junfeng Jiang; Xiao-Ping Zhang

In recent years, digital videos have become available at an ever-increasing rate. It has never been easier for ordinary people to record, edit, deliver, and publish their own home-made digital videos over the Internet. However, the increasing availability of digital video has not been accompanied by an increase in its accessibility: the abundance of video data makes it increasingly difficult for users to manage and navigate their video collections. In this paper, we first review existing methodologies and technologies in video content analysis, addressing the trends and opportunities in consumer video content navigation and analysis. We then introduce a novel video content analysis framework using the video time density function (VTDF) to tackle the problems in consumer video processing.


International Conference on Multimedia and Expo | 2011

A content-based video fast-forward playback method using video time density function and rate distortion theory

Junfeng Jiang; Xiao-Ping Zhang; Alexander C. Loui

In this paper, we propose a new video summary method using the video time density function (VTDF) and rate-distortion theory. The whole system has two main modules, processing and playing. In the processing module, we first apply the VTDF to describe the temporal dynamics of the video data. A VTDF-based temporal quantization method is then developed to find the best quanta and partition in the time domain, and the optimal quanta are used to extract the representative video frames. A temporal mean square error (TMSE), derived from rate-distortion theory, evaluates the quantization performance. In the playing module, we develop a video player that plays only the sampled frames in its intelligent fast-forward mode. The video player allows users to perform fast-forward playback based on the semantic video content, which demonstrates the feasibility of the proposed method in practice.


ACM Multimedia | 2011

A smart video player with content-based fast-forward playback

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we develop a video player that allows users to perform fast-forward playback based on the semantic video content. The whole system has two modules, processing and playing. In the processing module, we first present a video time density function (VTDF) to describe the temporal dynamics of the video data. A VTDF-based temporal quantization method is then developed to find the best quanta and partition in the time domain, and the optimal quanta are used to extract key frames. The optimal number of key frames is determined by a temporal mean square error (TMSE)-based criterion. In the playing module, we combine the key frame sequence with a set of parameters and feed them into a triangle-based transition function to generate the sampled frames in a non-uniform way. The video player then plays all sampled frames in its intelligent fast-forward mode for a given fast-forward speed factor. The implementation of the video player demonstrates the feasibility of the proposed method in practice.


Proceedings of the 2010 ACM Workshop on Social, Adaptive and Personalized Multimedia Interaction and Access | 2010

A content-based rapid video playback method using motion-based video time density function and temporal quantization

Junfeng Jiang; Xiao-Ping Zhang

In this paper, we propose a new content-based rapid video playback method using a motion-based video time density function (MVTDF) and temporal quantization. In particular, we formulate rapid video playback as a generic sampling problem. We present a novel MVTDF that uses inter-frame mutual information at the pixel level to describe the time density of video motion activities. An MVTDF-based temporal quantization method is then employed to find the best quanta and partition in the time domain. The video frames that are the nearest neighbors to the quanta in the quantization codebook are sampled to navigate the video in a non-uniform way. By selecting the most salient set of frames, the technique is integrated into a video player for variable-rate rapid video playback that preserves content. The implementation of the video player demonstrates the feasibility of the proposed method in practice. Experimental results show that the proposed method is effective in capturing the important semantic information of the video data during rapid playback.
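Several of the abstracts above cast playback and navigation as "a generic sampling problem" over a temporal density. One simple way to realize non-uniform sampling from such a density is the inverse-CDF rule sketched below; the equal-mass placement is an assumption standing in for the quantization the papers describe, and the function name is hypothetical.

```python
import numpy as np

def sample_by_density(density, n_samples):
    """Non-uniform temporal sampling: place n_samples frame indices so that
    each carries (roughly) equal probability mass under the given density.
    High-activity regions therefore receive more samples."""
    cdf = np.cumsum(density) / np.sum(density)
    targets = (np.arange(n_samples) + 0.5) / n_samples  # equal-mass quantiles
    return np.searchsorted(cdf, targets)                 # frame indices
```

During fast-forward playback, the player would then show only these frames, spending proportionally more time where the density says the motion activity is.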
