Publication


Featured research published by Longyin Wen.


European Conference on Computer Vision | 2016

The Visual Object Tracking VOT2014 Challenge Results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Luka Cehovin; Georg Nebehay; Tomas Vojir; Gustavo Fernández; Alan Lukezic; Aleksandar Dimitriev; Alfredo Petrosino; Amir Saffari; Bo Li; Bohyung Han; CherKeng Heng; Christophe Garcia; Dominik Pangersic; Gustav Häger; Fahad Shahbaz Khan; Franci Oven; Horst Bischof; Hyeonseob Nam; Jianke Zhu; Jijia Li; Jin Young Choi; Jin-Woo Choi; João F. Henriques; Joost van de Weijer; Jorge Batista; Karel Lebeda

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here are the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
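
Since the dataset is labeled per-frame with visual attributes, attribute-specific comparison reduces to masking per-frame scores. A minimal sketch of that idea, with invented overlap scores and attribute flags; the names below are illustrative, not the VOT toolkit's API:

```python
import numpy as np

# Per-frame overlap (IoU) scores for one tracker on one sequence, plus
# boolean per-frame attribute annotations as described in the abstract.
overlap = np.array([0.71, 0.65, 0.00, 0.42, 0.80, 0.77])
attributes = {
    "occlusion":           np.array([0, 0, 1, 1, 0, 0], dtype=bool),
    "illumination_change": np.array([0, 1, 1, 0, 0, 0], dtype=bool),
    "camera_motion":       np.array([0, 0, 0, 0, 1, 1], dtype=bool),
}

# Mean overlap restricted to frames where each attribute is active,
# which is the per-attribute comparison that per-frame labeling enables.
for name, mask in attributes.items():
    score = overlap[mask].mean() if mask.any() else float("nan")
    print(f"{name}: {score:.3f}")
```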


Computer Vision and Pattern Recognition | 2014

The Fastest Deformable Part Model for Object Detection

Junjie Yan; Zhen Lei; Longyin Wen; Stan Z. Li

This paper addresses the speed bottleneck of the deformable part model (DPM) while maintaining detection accuracy on challenging datasets. Three prohibitive steps in the cascade version of DPM are accelerated: 2D correlation between the root filter and the feature map, cascade part pruning, and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be computed as a more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low-rank filter in a discriminative manner. For cascade part pruning, a neighborhood-aware cascade is proposed to capture the dependence between neighborhood regions for aggressive pruning. Instead of computing part scores explicitly, hypotheses can be pruned by the scores of their neighborhoods under a first-order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix indexing operations. Extensive experiments show that (a) the proposed method is 4 times faster than the previously fastest DPM method with similar accuracy on Pascal VOC, and (b) it achieves state-of-the-art accuracy on pedestrian and face detection tasks at frame rate.
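
The low-rank trick is concrete enough to sketch: if the root filter factors as a sum of rank-1 terms, valid-mode 2D correlation decomposes into row-wise and column-wise 1D passes, costing O(r(h+w)) per output pixel instead of O(hw). A minimal NumPy illustration under invented filter factors; the paper's discriminative proximal-gradient learning of those factors is omitted:

```python
import numpy as np

def separable_corr2d(F, u, v):
    """Valid-mode 2D correlation with the rank-1 filter u v^T, done as a
    1D pass over rows (with v) then a 1D pass over columns (with u)."""
    H, W = F.shape
    h, w = u.size, v.size
    rows = np.stack([F[:, j:j + w] @ v for j in range(W - w + 1)], axis=1)
    return np.stack([rows[i:i + h, :].T @ u for i in range(H - h + 1)], axis=0)

def lowrank_corr2d(F, U, V):
    """Correlation with a rank-r filter sum_k u_k v_k^T as a sum of r
    separable correlations: O(r*(h+w)) per pixel instead of O(h*w)."""
    return sum(separable_corr2d(F, U[:, k], V[:, k]) for k in range(U.shape[1]))

# Sanity check against direct 2D correlation with the full filter.
rng = np.random.default_rng(0)
F = rng.standard_normal((32, 40))
U, V = rng.standard_normal((6, 2)), rng.standard_normal((7, 2))
K = U @ V.T  # a 6x7 rank-2 "root filter"
direct = np.array([[np.sum(F[i:i + 6, j:j + 7] * K)
                    for j in range(40 - 7 + 1)] for i in range(32 - 6 + 1)])
assert np.allclose(direct, lowrank_corr2d(F, U, V))
```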


Computer Vision and Pattern Recognition | 2014

Multiple Target Tracking Based on Undirected Hierarchical Relation Hypergraph

Longyin Wen; Wenbo Li; Junjie Yan; Zhen Lei; Dong Yi; Stan Z. Li

Multi-target tracking is an interesting but challenging task in the computer vision field. Most previous data-association-based methods consider only the relationships (e.g. appearance and motion pattern similarities) between detections within a limited local temporal window, which makes it difficult to handle long-term occlusion and to distinguish spatially close targets with similar appearance in crowded scenes. In this paper, a novel data association approach based on an undirected hierarchical relation hypergraph is proposed, which formulates the tracking task as a hierarchical dense-neighborhood search on a dynamically constructed undirected affinity graph. The relationships between detections across the spatio-temporal domain are considered in a high-order way, which makes the tracker robust to spatially close targets with similar appearance. Meanwhile, the hierarchical design of the optimization process makes our tracker more robust to long-term occlusion. Extensive experiments on various challenging datasets (i.e. the PETS2009 and ParkingLot datasets), including both low- and high-density sequences, demonstrate that the proposed method performs favorably against state-of-the-art methods.
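
The core primitive here, finding dense neighborhoods on a detection affinity graph, can be illustrated with the standard greedy peeling heuristic for densest subgraph (Charikar's 2-approximation). This is a hedged stand-in for the paper's hierarchical optimizer, assuming a symmetric nonnegative affinity matrix with zero diagonal:

```python
import numpy as np

def densest_subgraph(A):
    """Greedy peeling: repeatedly drop the node with the smallest degree
    and keep the densest intermediate set (density = edge weight / |S|).
    A: symmetric nonnegative affinity over detections, zero diagonal."""
    alive = set(range(A.shape[0]))
    deg = A.sum(axis=1).astype(float)         # degree within the alive set
    best_density, best_set = -1.0, set()
    while alive:
        density = sum(deg[i] for i in alive) / (2 * len(alive))
        if density > best_density:
            best_density, best_set = density, set(alive)
        i = min(alive, key=lambda k: deg[k])  # weakest detection
        alive.remove(i)
        for j in alive:
            deg[j] -= A[i, j]                 # keep degrees consistent
    return best_set

# Toy affinity over 5 detections: the first three form a dense group
# (e.g. similar appearance, consistent motion); the rest are loose.
A = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.7, 0.0, 0.1],
              [0.8, 0.7, 0.0, 0.1, 0.0],
              [0.1, 0.0, 0.1, 0.0, 0.2],
              [0.0, 0.1, 0.0, 0.2, 0.0]])
print(densest_subgraph(A))  # -> {0, 1, 2}
```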


IEEE Transactions on Image Processing | 2014

Robust Deformable and Occluded Object Tracking With Dynamic Graph

Zhaowei Cai; Longyin Wen; Zhen Lei; Nuno Vasconcelos; Stan Z. Li

While some effort has been devoted to handling deformation and occlusion in visual tracking, they remain great challenges. In this paper, a dynamic graph-based tracker (DGT) is proposed to address these two challenges in a unified framework. In the dynamic target graph, nodes are local parts of the target encoding appearance information, and edges are interactions between nodes encoding inner geometric structure information. This graph representation provides much more information for tracking in the presence of deformation and occlusion. Target tracking is then formulated as tracking this dynamic undirected graph, which is in turn a matching problem between the target graph and a candidate graph. The local parts within the candidate graph are separated from the background with a Markov random field, and spectral clustering is used to solve the graph matching. The final target state is determined through a weighted voting procedure according to the reliability of each part correspondence, and refined with recourse to a foreground/background segmentation. An effective online updating mechanism is proposed to update the model, allowing DGT to robustly adapt to variations of target structure. Experimental results show improved performance over several state-of-the-art trackers in various challenging scenarios.
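
The matching step can be sketched with the classic spectral relaxation of graph matching (in the spirit of Leordeanu and Hebert): score candidate part correspondences by the principal eigenvector of a pairwise affinity matrix, then discretize greedily under one-to-one constraints. A stand-in for the paper's spectral solver; the candidates and affinity values below are made up:

```python
import numpy as np

# Candidate correspondences (target_part, candidate_part); M[a, b] scores
# how geometrically compatible two correspondences are (invented values).
cands = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
M = np.array([[1.0, 0.0, 0.0, 0.9, 0.8],
              [0.0, 1.0, 0.3, 0.0, 0.2],
              [0.0, 0.3, 1.0, 0.0, 0.1],
              [0.9, 0.0, 0.0, 1.0, 0.7],
              [0.8, 0.2, 0.1, 0.7, 1.0]])

# Principal eigenvector by power iteration: soft confidence per candidate.
x = np.ones(len(cands)) / np.sqrt(len(cands))
for _ in range(200):
    x = M @ x
    x /= np.linalg.norm(x)

# Greedy discretization: accept the best-scoring candidate, then rule out
# any candidate that reuses either of its parts (one-to-one matching).
chosen, used_t, used_c = [], set(), set()
for a in np.argsort(-x):
    t, c = cands[a]
    if t not in used_t and c not in used_c:
        chosen.append(cands[a])
        used_t.add(t)
        used_c.add(c)
print(chosen)  # e.g. [(0, 0), (1, 1), (2, 2)]
```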


Computer Vision and Pattern Recognition | 2015

JOTS: Joint Online Tracking and Segmentation

Longyin Wen; Dawei Du; Zhen Lei; Stan Z. Li; Ming-Hsuan Yang

We present a novel Joint Online Tracking and Segmentation (JOTS) algorithm which integrates multi-part tracking and segmentation into a unified energy optimization framework for the video segmentation task. Multi-part segmentation is posed as a pixel-level label assignment task regularized by the estimated part models, and tracking is formulated as estimating the part models based on the pixel labels, which in turn refines the segmentation. Multi-part tracking and segmentation are carried out iteratively to minimize the proposed objective function with a RANSAC-style approach. Extensive experiments on the SegTrack and SegTrack v2 databases demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
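
The alternation at the heart of the method, assigning pixels to part models and then re-estimating the models from the assignment, has the same shape as Lloyd-style alternating minimization. A toy version with mean-feature part models; it shares only the loop structure with the paper, not its energy or RANSAC moves:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "pixels": 2-D features drawn around three true part centers.
true_parts = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.0]])
pixels = np.concatenate([c + rng.standard_normal((50, 2)) for c in true_parts])

# Initialize part models from random pixels, then alternate the two steps.
models = pixels[rng.choice(len(pixels), 3, replace=False)]
for _ in range(10):
    # Step 1: pixel-level label assignment given the current part models.
    dists = np.linalg.norm(pixels[:, None, :] - models[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Step 2: re-estimate each part model from the pixels assigned to it
    # (keep the old model if a part lost all of its pixels).
    models = np.array([pixels[labels == k].mean(axis=0) if (labels == k).any()
                       else models[k] for k in range(3)])
print(np.round(models, 2))  # lands near the three true part centers
```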


European Conference on Computer Vision | 2012

Online spatio-temporal structural context learning for visual tracking

Longyin Wen; Zhaowei Cai; Zhen Lei; Dong Yi; Stan Z. Li

Visual tracking is a challenging problem because the target frequently changes its appearance, moves randomly, and becomes occluded by other objects in unconstrained environments. The state changes of the target are temporally and spatially continuous; therefore, in this paper, a robust Spatio-Temporal structural context based Tracker (STT) is presented to complete the tracking task in unconstrained environments. The temporal context captures the historical appearance of the target to prevent the tracker from drifting to the background during long-term tracking. The spatial context model integrates contributors, key-points automatically discovered around the target, to build a supporting field. The supporting field provides much more information than the appearance of the target itself, so the location of the target can be predicted more precisely. Extensive experiments on various challenging databases demonstrate the superiority of our proposed tracker over other state-of-the-art trackers.
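
The supporting-field idea, context key-points that remember their offset to the target and vote for its next location, fits in a few lines. A toy sketch over invented contributors; the paper's full model also carries the temporal appearance term:

```python
import numpy as np

# Contributors: key-points discovered around the target. Each stores its
# current position, its remembered offset to the target center, and a
# reliability weight reflecting how consistently it has voted so far.
positions = np.array([[10., 12.], [18., 9.], [14., 20.], [40., 40.]])
offsets   = np.array([[ 5.,  3.], [-3., 6.], [ 1., -5.], [ 2.,  2.]])
weights   = np.array([0.9, 0.8, 0.7, 0.1])   # the last one is unreliable

# Each contributor casts a vote for the target center; the supporting
# field is the weighted consensus of those votes, so a single unreliable
# contributor barely shifts the prediction.
votes = positions + offsets
center = (weights[:, None] * votes).sum(axis=0) / weights.sum()
print(np.round(center, 2))  # close to the consensus at (15, 15)
```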


IEEE Transactions on Image Processing | 2014

Robust Online Learned Spatio-Temporal Context Model for Visual Tracking

Longyin Wen; Zhaowei Cai; Zhen Lei; Dong Yi; Stan Z. Li

Visual tracking is an important but challenging problem in the computer vision field. In the real world, the appearances of the target and its surroundings change continuously over space and time, which provides effective information for tracking the target robustly. However, previous works have not paid enough attention to this spatio-temporal appearance information. In this paper, a robust tracker based on a spatio-temporal context model is presented to complete the tracking task in unconstrained environments. The tracker is constructed from temporal and spatial appearance context models. The temporal appearance context model captures the historical appearance of the target to prevent the tracker from drifting to the background during long-term tracking. The spatial appearance context model integrates contributors to build a supporting field. The contributors are patches, of the same size as the target, centered at key-points automatically discovered around the target. The constructed supporting field provides much more information than the appearance of the target itself and thus ensures the robustness of the tracker in complex environments. Extensive experiments on various challenging databases validate the superiority of our tracker over other state-of-the-art trackers.
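
The temporal side, a memory of historical target appearance that anchors the tracker against drift, can be caricatured as a small template pool scored by the best match over history. Purely illustrative; the paper's temporal model is not template matching:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two flattened patches."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class TemporalMemory:
    """Keep the last few confident target templates; a candidate patch is
    scored by its best agreement with any remembered appearance, so a
    single bad frame cannot drag the whole model toward the background."""
    def __init__(self, size=10):
        self.templates, self.size = [], size

    def score(self, patch):
        return max((ncc(patch, t) for t in self.templates), default=0.0)

    def update(self, patch, confidence, threshold=0.6):
        if confidence >= threshold:        # absorb only confident frames
            self.templates.append(patch.copy())
            self.templates = self.templates[-self.size:]

mem = TemporalMemory()
mem.update(np.eye(4), confidence=0.9)
print(round(mem.score(np.eye(4)), 3))  # 1.0: matches the stored template
```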


Asian Conference on Computer Vision | 2012

Structured visual tracking with dynamic graph

Zhaowei Cai; Longyin Wen; Jianwei Yang; Zhen Lei; Stan Z. Li

Structure information has been increasingly incorporated into the computer vision field, but only a few tracking methods have exploited the inner structure of the target. In this paper, we introduce a dynamic graph with the pairwise Markov property to model the structural information between the inner parts of the target. Target tracking is viewed as tracking a dynamic undirected graph whose nodes are the target parts and whose edges are the interactions between parts. The target parts within the graph that await matching are separated from the background with graph cut, and a spectral matching technique is exploited to accomplish the graph tracking. With the help of an intuitive updating mechanism, our dynamic graph can robustly adapt to variations of the target structure. Experimental results demonstrate that our structured tracker outperforms several state-of-the-art trackers under occlusion and structural deformation.
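
The graph-cut separation of parts from background is an instance of binary MRF labeling by s-t min-cut. A generic sketch of that construction using networkx, with invented unary costs and a chain of pairwise smoothness terms (not the paper's exact energy):

```python
import networkx as nx

# Binary MRF over four candidate parts: unary costs say how target-like
# or background-like each part looks; pairwise capacities encourage
# neighboring parts to take the same label (all values invented).
target_cost = {0: 0.2, 1: 0.3, 2: 0.9, 3: 0.8}  # cost of labeling "target"
bg_cost     = {0: 0.8, 1: 0.7, 2: 0.1, 3: 0.2}  # cost of labeling "background"
neighbors = [(0, 1, 0.5), (1, 2, 0.5), (2, 3, 0.5)]

G = nx.DiGraph()
for p in target_cost:
    G.add_edge("s", p, capacity=bg_cost[p])      # cut when p is background
    G.add_edge(p, "t", capacity=target_cost[p])  # cut when p is target
for u, v, w in neighbors:
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

# The min cut realizes the minimum-energy labeling: nodes left on the
# source side are the parts labeled as target.
cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
print(sorted(p for p in source_side if p != "s"))  # -> [0, 1]
```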


IEEE Transactions on Image Processing | 2016

Online Deformable Object Tracking Based on Structure-Aware Hyper-Graph

Dawei Du; Honggang Qi; Wenbo Li; Longyin Wen; Qingming Huang; Siwei Lyu

Recent advances in online visual tracking focus on designing part-based models to handle the deformation and occlusion challenges. However, previous methods usually consider only pairwise structural dependencies between target parts in two consecutive frames, rather than higher-order constraints over multiple frames, making them less effective under large deformation and occlusion. This paper describes a new and efficient method for online deformable object tracking. Unlike most existing methods, it exploits higher-order structural dependencies between different parts of the tracking target over multiple consecutive frames. We construct a structure-aware hyper-graph to capture such higher-order dependencies, and solve the tracking problem by searching for dense subgraphs on it. Furthermore, we also describe a new evaluation dataset for online deformable object tracking (the Deform-SOT dataset), which includes 50 challenging, fully annotated sequences that represent realistic tracking challenges such as large deformations and severe occlusions. Experimental results show that the proposed method considerably improves on state-of-the-art tracking methods.
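
What a higher-order dependency buys is easy to make concrete: a hyperedge over a triple of part hypotheses can be weighted by how well the triangle they form keeps its shape across frames, something no single pairwise edge can express. A toy weighting function under that assumption, not the paper's construction:

```python
import numpy as np

def hyperedge_weight(tri_prev, tri_cur, sigma=1.0):
    """Weight a 3-node hyperedge by how well the triangle formed by three
    part hypotheses keeps its side lengths across frames: a pairwise edge
    sees only one distance at a time, a hyperedge sees the whole shape."""
    def sides(tri):
        return np.array([np.linalg.norm(tri[i] - tri[j])
                         for i, j in [(0, 1), (1, 2), (0, 2)]])
    distortion = np.abs(sides(tri_prev) - sides(tri_cur)).sum()
    return float(np.exp(-distortion / sigma))

tri_prev = np.array([[0., 0.], [4., 0.], [2., 3.]])
rigid    = tri_prev + np.array([1., 1.])          # same shape, translated
deformed = np.array([[0., 0.], [7., 0.], [2., 3.]])
print(hyperedge_weight(tri_prev, rigid))     # ~1.0: structure preserved
print(hyperedge_weight(tri_prev, deformed))  # small: structure broken
```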


International Conference on Computer Vision | 2015

Category-Blind Human Action Recognition: A Practical Recognition System

Wenbo Li; Longyin Wen; Mooi Choo Chuah; Siwei Lyu

Existing human action recognition systems for 3D sequences obtained from depth cameras are designed to cope with only one action category, either single-person actions or two-person interactions, and are difficult to extend to scenarios where both categories co-exist. In this paper, we propose the category-blind human action recognition method (CHARM), which recognizes a human action without making assumptions about its category. In CHARM, we represent a human action class (either a single-person action or a two-person interaction) as a co-occurrence of motion primitives. We then classify an action instance by matching its motion-primitive co-occurrence patterns to each class representation, a matching task formulated as a maximum clique problem. We conduct extensive evaluations of CHARM on three datasets covering single-person actions, two-person interactions, and their mixtures. Experimental results show that CHARM performs favorably compared with several state-of-the-art methods for single-person actions and two-person interactions, without making explicit assumptions about action category.
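
The matching-as-maximum-clique formulation can be sketched with a plain Bron-Kerbosch enumeration over a small compatibility graph. A hedged stand-in with a made-up graph; CHARM's graphs and affinities are richer:

```python
# Nodes are candidate matches between observed motion-primitive
# co-occurrences and those in a class model; edges join mutually
# compatible matches, so a maximum clique is the largest mutually
# consistent matching (toy structure, invented for illustration).
adj = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2},
}

def bron_kerbosch(R, P, X, cliques):
    """Enumerate all maximal cliques (basic Bron-Kerbosch, no pivoting)."""
    if not P and not X:
        cliques.append(R)
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], cliques)
        P = P - {v}
        X = X | {v}

cliques = []
bron_kerbosch(set(), set(adj), set(), cliques)
print(max(cliques, key=len))  # the maximum clique: {0, 1, 2}
```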

Collaboration


Dive into Longyin Wen's collaborations.

Top Co-Authors

Zhen Lei (Chinese Academy of Sciences)
Stan Z. Li (Chinese Academy of Sciences)
Siwei Lyu (State University of New York System)
Zhaowei Cai (University of California)
Dawei Du (Chinese Academy of Sciences)
Honggang Qi (Chinese Academy of Sciences)
Dong Yi (Chinese Academy of Sciences)