
Publications


Featured research published by Tianqiang Liu.


International Conference on Multimedia and Expo | 2008

Directional correlation analysis of local Haar binary pattern for text detection

Rongrong Ji; Pengfei Xu; Hongxun Yao; Zhen Zhang; Xiaoshuai Sun; Tianqiang Liu

Two main restrictions exist in state-of-the-art text detection algorithms: (1) illumination variance and (2) text-background contrast variance. This paper presents a robust text characterization approach based on the local Haar binary pattern (LHBP) to address these problems. Based on LHBP, a coarse-to-fine detection framework is presented to precisely locate text lines in scene images. First, a threshold-restricted local binary pattern is extracted from the high-frequency coefficients of the pyramid Haar wavelet; it preserves and normalizes inconsistent text-background contrasts while filtering out gradual illumination variations. Subsequently, we propose a directional correlation analysis (DCA) approach that filters non-directional LHBP regions to locate candidate text regions. Finally, using the LHBP histogram, an SVM-based post-classification step refines the detection results. Experimental results on ICDAR 2003 demonstrate the effectiveness and robustness of the proposed method.
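The LHBP feature described above can be sketched roughly as follows. This is a minimal illustration, assuming a single-level Haar decomposition, an 8-neighbourhood, and an arbitrary threshold value; the paper's exact parameters are not given here.

```python
import numpy as np

def haar_highpass(img):
    """One level of the 2D Haar wavelet: return the three high-frequency
    bands (horizontal, vertical, diagonal detail). Assumes even dimensions."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return lh, hl, hh

def lhbp(band, thresh=2.0):
    """Threshold-restricted LBP over one wavelet band: a neighbour
    contributes a bit only when it exceeds the centre by `thresh`."""
    h, w = band.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    centre = band[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = band[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((nb - centre > thresh).astype(np.uint8) << bit)
    return out
```

The threshold makes the code robust to small, gradual intensity changes, which is the property the abstract attributes to LHBP.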


Multimedia Information Retrieval | 2008

Cross-media manifold learning for image retrieval & annotation

Xianming Liu; Rongrong Ji; Hongxun Yao; Pengfei Xu; Xiaoshuai Sun; Tianqiang Liu

Fusion of visual content with textual information is effective for both content-based and keyword-based image retrieval. However, the performance of visual-textual fusion is greatly affected by noise and redundancy in both the text (such as surrounding text in HTML pages) and the visual data (such as intra-class diversity). This paper presents a manifold-based cross-media optimization scheme that achieves visual-textual fusion within a unified framework. A cross-media manifold co-training mechanism between a keyword-based metric space and a vision-based metric space is proposed to infer the best dual-space fusion by minimizing a manifold-based visual-textual energy criterion. We present isomorphic manifold learning to map the annotation effect in the image visual space onto the keyword semantic space by manifold shrinkage, and we demonstrate its correctness and convergence mathematically. Retrieval can be performed with either keywords or sample images, on the keyword-based and vision-based metric spaces respectively, where simple distance classifiers suffice. Two groups of experiments are conducted: the first, on the Corel 5000 image database, validates effectiveness against state-of-the-art generalized manifold-ranking-based image retrieval and SVM baselines; the second, on a real-world Flickr dataset of over 6,000 images, tests effectiveness in a real-world application. The promising results show that our model attains a significant improvement over state-of-the-art algorithms.
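The manifold-ranking baseline mentioned above propagates relevance scores from labeled seeds over an affinity graph. A minimal sketch, assuming a symmetrically normalized affinity matrix and a connected graph (the alpha value and iteration count are illustrative, not the paper's):

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.5, iters=50):
    """Iterative manifold ranking: F <- alpha * S @ F + (1 - alpha) * y,
    where S is the symmetrically normalised affinity matrix and y marks
    the query seeds. Assumes every node has at least one edge."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))   # D^{-1/2} W D^{-1/2}
    F = y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * y
    return F
```

On a chain graph seeded at one end, scores decay monotonically with graph distance from the seed, which is the behaviour retrieval relies on.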


Multimedia Information Retrieval | 2008

Place retrieval with graph-based place-view model

Xiaoshuai Sun; Rongrong Ji; Hongxun Yao; Pengfei Xu; Tianqiang Liu; Xianming Liu

Places in movies and sitcoms can indicate higher-level semantic cues about story scenarios and actor relations. This paper presents a novel unsupervised framework for efficient place retrieval in movies and sitcoms. We leverage face detection to filter out close-up frames from the video dataset, and adopt saliency map analysis to separate background places from foreground actions. We then extract a pyramid-based spatial-encoding correlogram from shot key frames for robust place representation. To describe varying place appearances effectively, we cluster key frames and model the inter-cluster membership of identical places via inside-shot association. A hierarchical normalized cut is then applied over the association graph to differentiate physical places within videos and to obtain their multi-view representation as a tree structure. For efficient place matching in a large-scale database, inverted indexing is applied to the hierarchical graph structure, on which an approximate nearest neighbor search is proposed to greatly accelerate the search process. Experimental results on a database of over 36 hours of the sitcom Friends demonstrate the effectiveness, efficiency, and semantic revealing ability of our framework.
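The inverted-indexing idea above, searching only the index lists whose cluster centroid is close to the query instead of scanning the whole database, can be sketched as follows. The class name, centroid count, and `probes` parameter are hypothetical, not from the paper.

```python
from collections import defaultdict
import numpy as np

class InvertedIndex:
    """Cluster-based inverted index: each database vector is filed under
    its nearest centroid; a query is compared only against vectors that
    share the query's nearest centroid(s), avoiding an exhaustive scan."""

    def __init__(self, centroids):
        self.centroids = np.asarray(centroids, dtype=float)
        self.lists = defaultdict(list)   # centroid id -> [(vec_id, vec), ...]

    def _nearest(self, v, k=1):
        d = np.linalg.norm(self.centroids - v, axis=1)
        return np.argsort(d)[:k]

    def add(self, vec_id, v):
        c = self._nearest(np.asarray(v, dtype=float))[0]
        self.lists[c].append((vec_id, np.asarray(v, dtype=float)))

    def search(self, q, probes=1):
        """Approximate nearest neighbour: probe the closest index lists."""
        q = np.asarray(q, dtype=float)
        best_id, best_d = None, float("inf")
        for c in self._nearest(q, probes):
            for vec_id, v in self.lists[c]:
                d = np.linalg.norm(v - q)
                if d < best_d:
                    best_id, best_d = vec_id, d
        return best_id
```

Raising `probes` trades speed for recall, which is the usual knob in this style of approximate search.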


International Conference on Image Analysis and Recognition | 2008

Text Particles Multi-band Fusion for Robust Text Detection

Pengfei Xu; Rongrong Ji; Hongxun Yao; Xiaoshuai Sun; Tianqiang Liu; Xianming Liu

Text in images and videos usually carries important information for visual content understanding and retrieval. Two main restrictions exist in state-of-the-art text detection algorithms: weak contrast and text-background variance. This paper presents a robust text detection method based on text particle (TP) multi-band fusion to solve these problems. First, text particles are generated from the local binary pattern of pyramid Haar wavelet coefficients in YUV color space, which preserves and normalizes text-background contrasts while extracting multi-band information. Second, candidate text regions are generated via density-based multi-band fusion of text particles, and LHBP histogram analysis is used to remove non-text regions. Our TP-based detection framework can robustly locate text regions regardless of diverse sizes, colors, rotations, illuminations, and text-background contrasts. Experimental results on ICDAR 2003, compared with existing methods, demonstrate the robustness and effectiveness of the proposed method.
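The density-based fusion step above pools particle detections from several bands and keeps only locations where they agree. A minimal sketch; the grid cell size and density threshold are illustrative assumptions:

```python
def fuse_particles(bands, cell=16, min_density=3):
    """Pool text-particle coordinates detected independently in several
    bands (e.g. Y, U, V), accumulate them on a coarse grid, and keep the
    grid cells whose pooled particle count reaches `min_density`."""
    grid = {}
    for particles in bands:                 # one list of (x, y) per band
        for x, y in particles:
            key = (int(x) // cell, int(y) // cell)
            grid[key] = grid.get(key, 0) + 1
    return [key for key, count in grid.items() if count >= min_density]
```

Isolated detections that appear in only one band fall below the threshold and are discarded, which is how multi-band agreement suppresses false positives.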


Pacific Rim Conference on Multimedia | 2008

Vision-Based Semi-supervised Homecare with Spatial Constraint

Tianqiang Liu; Hongxun Yao; Rongrong Ji; Yan Liu; Xianming Liu; Xiaoshuai Sun; Pengfei Xu; Zhen Zhang

Vision-based homecare systems are receiving increasing research interest owing to their efficiency, portability, and low cost. This paper presents a vision-based semi-supervised homecare system that automatically monitors exceptional behaviors of self-helpless persons in a home environment. First, our framework tracks the behavior of the surveilled individual using dynamic conditional random field tracker fusion, from which we extract a motion descriptor by Fourier curve fitting to model behavior routines for exception detection. Second, we propose a spatial field constraint strategy to assist SVM-based exceptional-action decisions with a Bayesian inference model. Finally, a novel semi-supervised learning mechanism is presented to overcome the exhaustive labeling required in previous works. Experiments over a home environment video dataset with five normal and two exceptional behavior categories show the advantage of our proposed system compared with previous works.
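Semi-supervised learning of the kind mentioned above is often realized by self-training: confidently classified unlabeled samples are folded into the labeled pool each round. This sketch uses a nearest-centroid classifier as a simple stand-in for the paper's SVM, and the margin threshold is an illustrative assumption:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=5, margin=0.5):
    """Self-training loop: in each round, classify unlabeled samples with
    nearest-centroid distances and absorb the confident ones (best class
    beats the runner-up by `margin`) into the labeled pool."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    pool = list(range(len(X_unlab)))
    for _ in range(rounds):
        centroids = {c: X_lab[y_lab == c].mean(axis=0) for c in np.unique(y_lab)}
        newly = []
        for i in pool:
            dists = {c: np.linalg.norm(X_unlab[i] - m) for c, m in centroids.items()}
            ranked = sorted(dists, key=dists.get)
            if len(ranked) == 1 or dists[ranked[1]] - dists[ranked[0]] > margin:
                X_lab = np.vstack([X_lab, X_unlab[i]])
                y_lab = np.append(y_lab, ranked[0])
                newly.append(i)
        pool = [i for i in pool if i not in newly]
        if not newly:      # nothing confident left to absorb
            break
    return X_lab, y_lab
```

Each round the decision boundary is re-estimated from the grown pool, so easy samples labeled early help classify harder ones later.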


Visual Communications and Image Processing | 2010

3D silhouette tracking with occlusion inference

Wenkai Li; Hongxun Yao; Rongrong Ji; Tianqiang Liu; Debin Zhao

Robustly tracking moving objects in image sequences is challenging because of occlusions, and previous methods did not exploit depth information sufficiently. Based on multiple-camera scenes, we propose a 3D silhouette tracking framework that resolves occlusions and recovers object appearances in 3D space, enhancing tracking effectiveness. In the framework, 2D object silhouettes are initially obtained with the Snake model. A voxel space carving procedure is then introduced to simultaneously generate the occlusion model and the visual hulls of the objects. Next, we adopt a particle filter to select the valuable parts of the occlusion model and combine them with the initial object silhouettes to generate updated visual hulls. Finally, the updated visual hulls of the objects are re-projected to each view to obtain their final contours. Experiments on the public LAB and SCULPTURE datasets validate the feasibility and effectiveness of our framework.
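The particle filter adopted above follows the standard predict-weight-resample cycle. A minimal 1D bootstrap version, assuming a random-walk motion model (the state space and likelihood in the paper are of course richer):

```python
import numpy as np

def particle_filter_step(particles, weights, observe, motion_std=0.5, rng=None):
    """One predict-weight-resample cycle of a bootstrap particle filter:
    diffuse particles under a random-walk motion model, reweight them by
    the observation likelihood `observe`, then resample in proportion to
    weight and reset to uniform weights."""
    rng = rng or np.random.default_rng(0)
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = weights * observe(particles)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After a few cycles the particle cloud concentrates around the state the likelihood favors, which is what lets the filter pick out the "valuable" hypotheses.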


ACM Multimedia | 2008

Attention-driven action retrieval with DTW-based 3D descriptor matching

Rongrong Ji; Xiaoshuai Sun; Hongxun Yao; Pengfei Xu; Tianqiang Liu; Xianming Liu
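The DTW-based matching in the title refers to dynamic time warping, which aligns two sequences of descriptors under a monotone warping path. A minimal scalar-sequence version (the paper's 3D descriptors and attention weighting are not detailed here, so the distance function is a placeholder):

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-time-warping distance between two sequences of
    (here scalar) descriptors: fill the cumulative-cost table where each
    cell extends the cheapest of its three predecessors."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],       # insertion
                D[i][j - 1],       # deletion
                D[i - 1][j - 1],   # match
            )
    return D[n][m]
```

Because repetitions along either sequence cost nothing when the values match, DTW is insensitive to differences in action speed, which makes it a natural choice for action retrieval.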


Archive | 2010

Method for acquiring video scene relating value and video rapid browsing and searching method applying the method

Rongrong Ji; Hongxun Yao; Xiaoshuai Sun; Tianqiang Liu; Xianming Liu; Pengfei Xu


Archive | 2009

Method for detecting natural scene image words

Hongxun Yao; Pengfei Xu; Rongrong Ji; Xiaoshuai Sun; Tianqiang Liu; Xianming Liu


Archive | 2009

Method for acquiring action classification by combining with spacing restriction information

Hongxun Yao; Tianqiang Liu; Rongrong Ji; Xiaoshuai Sun

Collaboration

Tianqiang Liu's top co-authors.

Top Co-Authors

- Hongxun Yao (Harbin Institute of Technology)
- Xiaoshuai Sun (Harbin Institute of Technology)
- Pengfei Xu (Harbin Institute of Technology)
- Xianming Liu (Harbin Institute of Technology)
- Zhen Zhang (Harbin Institute of Technology)
- Debin Zhao (Harbin Institute of Technology)
- Wenkai Li (Harbin Institute of Technology)
- Yan Liu (Harbin Institute of Technology)