Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Junjie Yan is active.

Publication


Featured research published by Junjie Yan.


european conference on computer vision | 2014

Salient Color Names for Person Re-identification

Yang Yang; Jimei Yang; Junjie Yan; Shengcai Liao; Dong Yi; Stan Z. Li

Color naming, which relates colors with color names, can help people with a semantic analysis of images in many computer vision applications. In this paper, we propose a novel salient color names based color descriptor (SCNCD) to describe colors. SCNCD utilizes salient color names to guarantee that a higher probability is assigned to the color name nearer to the color. Based on SCNCD, color distributions over color names in different color spaces are then obtained and fused to generate a feature representation. Moreover, the effect of background information is analyzed and exploited for person re-identification. With a simple metric learning method, the proposed approach outperforms the previous state of the art (without user feedback optimization) on two challenging datasets (VIPeR and PRID 450S). More importantly, the proposed feature can be computed very quickly if the SCNCD of each color is precomputed in advance.
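The core assignment step can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: the palette values, the Gaussian weighting, and the `k`/`sigma` parameters are all assumptions, chosen only to show the idea of giving nonzero probability to the few nearest ("salient") color names, with nearer names weighted higher.

```python
import numpy as np

# Illustrative 16-entry color-name palette in RGB (the paper's actual palette
# and probability model differ; this is only a sketch).
PALETTE = np.array([
    [0, 0, 0], [0, 0, 255], [165, 42, 42], [128, 128, 128],
    [0, 255, 0], [255, 165, 0], [255, 192, 203], [128, 0, 128],
    [255, 0, 0], [255, 255, 255], [255, 255, 0], [0, 255, 255],
    [128, 0, 0], [0, 128, 0], [0, 0, 128], [128, 128, 0],
], dtype=float)

def salient_color_name_probs(rgb, k=3, sigma=50.0):
    """Assign a probability distribution over color names to one RGB pixel.
    Only the k nearest ("salient") names get nonzero mass, and nearer names
    get higher probability."""
    d = np.linalg.norm(PALETTE - np.asarray(rgb, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    w = np.zeros(len(PALETTE))
    w[nearest] = np.exp(-d[nearest] ** 2 / (2 * sigma ** 2))
    return w / w.sum()

# A near-red pixel puts most of its mass on the "red" palette entry.
p = salient_color_name_probs([250, 10, 10])
```

Pixel histograms built from such distributions (rather than hard single-name assignments) are what get pooled and fused across color spaces in the descriptor.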


international conference on biometrics | 2012

A face antispoofing database with diverse attacks

Zhiwei Zhang; Junjie Yan; Sifei Liu; Zhen Lei; Dong Yi; Stan Z. Li

Face antispoofing has attracted intensive attention, aiming to assure the reliability of face biometrics. We notice that most current face antispoofing databases focus on data with little variation, which may limit the generalization performance of trained models, since potential attacks in the real world are probably more complex. In this paper we release a face antispoofing database which covers a diverse range of potential attack variations. Specifically, the database contains 50 genuine subjects, and fake faces are made from high-quality recordings of the genuine faces. Three imaging qualities are considered, namely low quality, normal quality and high quality. Three fake-face attacks are implemented: warped photo attack, cut photo attack and video attack. Each subject therefore has 12 videos (3 genuine and 9 fake), and the final database contains 600 video clips. A test protocol is provided, consisting of 7 scenarios for a thorough evaluation from all possible aspects. A baseline algorithm is also given for comparison, which explores the high-frequency information in the facial region to determine liveness. We hope this database can serve as an evaluation platform for future research in the literature.


computer vision and pattern recognition | 2014

The Fastest Deformable Part Model for Object Detection

Junjie Yan; Zhen Lei; Longyin Wen; Stan Z. Li

This paper solves the speed bottleneck of the deformable part model (DPM), while maintaining detection accuracy on challenging datasets. Three prohibitive steps in the cascade version of DPM are accelerated: 2D correlation between the root filter and the feature map, cascade part pruning, and HOG feature extraction. For 2D correlation, the root filter is constrained to be low-rank, so that the 2D correlation can be computed as a more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low-rank filter in a discriminative manner. For cascade part pruning, a neighborhood-aware cascade is proposed to capture the dependence among neighborhood regions for aggressive pruning. Instead of explicitly computing part scores, hypotheses can be pruned by the scores of their neighborhoods under a first-order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix indexing operations. Extensive experiments show that (a) the proposed method is 4 times faster than the previously fastest DPM method with similar accuracy on Pascal VOC, and (b) it achieves state-of-the-art accuracy on pedestrian and face detection tasks at frame-rate speed.
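The low-rank trick for the root-filter correlation can be shown directly: a rank-r filter written as a sum of outer products, sum_i u_i v_i^T, turns one expensive 2D correlation into r pairs of cheap 1D correlations. The sketch below (plain NumPy; `corr2d` and `corr2d_lowrank` are hypothetical helper names, not the paper's code) verifies the equivalence numerically:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def corr2d(x, k):
    """Dense 2D cross-correlation of image x with filter k ('valid' mode)."""
    windows = sliding_window_view(x, k.shape)           # (H', W', kh, kw)
    return np.einsum('ijkl,kl->ij', windows, k)

def corr2d_lowrank(x, us, vs):
    """The same correlation for a rank-r filter sum_i outer(u_i, v_i),
    computed as r pairs of cheaper 1D correlations (rows, then columns)."""
    out = 0.0
    for u, v in zip(us, vs):
        rows = sliding_window_view(x, len(v), axis=1) @ v          # 1D corr along rows
        out = out + sliding_window_view(rows, len(u), axis=0) @ u  # then along columns
    return out
```

For a kh x kw filter on an image with N output positions, the dense version costs O(N * kh * kw) while the rank-r version costs O(N * r * (kh + kw)), which is where the speedup for small r comes from.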


international conference on computer vision | 2015

Convolutional Channel Features

Bin Yang; Junjie Yan; Zhen Lei; Stan Z. Li

Deep learning methods are powerful tools but often suffer from expensive computation and limited flexibility. An alternative is to combine light-weight models with deep representations. Although successful cases exist in several visual problems, a unified framework has been absent. In this paper, we revisit two widely used approaches in computer vision, namely filtered channel features and Convolutional Neural Networks (CNN), and absorb merits from both by proposing an integrated method called Convolutional Channel Features (CCF). CCF transfers low-level features from pre-trained CNN models to feed a boosting forest model. By combining CNN features with a boosting forest, CCF benefits from richer feature representation capacity compared with channel features, as well as lower computation and storage cost compared with end-to-end CNN methods. We show that CCF serves as a good way of tailoring pre-trained CNN models to diverse tasks without fine-tuning the whole network for each task, achieving state-of-the-art performance in pedestrian detection, face detection, edge detection and object proposal generation.
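The CCF recipe, frozen low-level convolutional features feeding a boosting forest, can be sketched with a toy stand-in: a fixed random convolution layer plays the role of the pre-trained CNN, and scikit-learn's `GradientBoostingClassifier` stands in for the boosting forest. All data, filters, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def conv_features(patches, filters):
    """Stand-in for frozen low-level CNN features: one valid convolution
    per filter, then ReLU and global average pooling."""
    n, h, w = patches.shape
    kh, kw = filters.shape[1:]
    feats = np.empty((n, len(filters)))
    for fi, f in enumerate(filters):
        resp = np.zeros((n, h - kh + 1, w - kw + 1))
        for p in range(kh):
            for q in range(kw):
                resp += f[p, q] * patches[:, p:p + h - kh + 1, q:q + w - kw + 1]
        feats[:, fi] = np.maximum(resp, 0).mean(axis=(1, 2))
    return feats

# Toy data: "positives" are bright patches, "negatives" are zero-mean noise.
X_pos = rng.normal(1.0, 0.5, (60, 12, 12))
X_neg = rng.normal(0.0, 0.5, (60, 12, 12))
patches = np.concatenate([X_pos, X_neg])
labels = np.array([1] * 60 + [0] * 60)

filters = rng.standard_normal((8, 3, 3))   # frozen "pre-trained" filters
F = conv_features(patches, filters)

# The boosting forest is trained on the fixed features; the CNN is never updated.
clf = GradientBoostingClassifier(n_estimators=50).fit(F, labels)
```

The design point this illustrates: only the cheap forest is task-specific, so the same frozen feature extractor can be reused across tasks without fine-tuning.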


computer vision and pattern recognition | 2015

High-fidelity Pose and Expression Normalization for face recognition in the wild

Xiangyu Zhu; Zhen Lei; Junjie Yan; Dong Yi; Stan Z. Li

Pose and expression normalization is a crucial step in recovering the canonical view of faces under arbitrary conditions, so as to improve face recognition performance. An ideal normalization method should be automatic, database-independent and high-fidelity, preserving face appearance with few artifacts and little information loss. However, most normalization methods fail to satisfy one or more of these goals. In this paper, we propose a High-fidelity Pose and Expression Normalization (HPEN) method based on a 3D Morphable Model (3DMM) which can automatically generate a natural face image in frontal pose and neutral expression. Specifically, we first make a landmark marching assumption to describe the non-correspondence between 2D and 3D landmarks caused by pose variations, and propose a pose-adaptive 3DMM fitting algorithm. Second, we mesh the whole image into a 3D object and eliminate the pose and expression variations using an identity-preserving 3D transformation. Finally, we propose an inpainting method based on Poisson editing to fill the invisible region caused by self-occlusion. Extensive experiments on Multi-PIE and LFW demonstrate that the proposed method significantly improves face recognition performance and outperforms state-of-the-art methods in both constrained and unconstrained environments.


computer vision and pattern recognition | 2013

Robust Multi-resolution Pedestrian Detection in Traffic Scenes

Junjie Yan; Xucong Zhang; Zhen Lei; Shengcai Liao; Stan Z. Li

The serious performance decline with decreasing resolution is the major bottleneck of current pedestrian detection techniques. In this paper, we treat pedestrian detection at different resolutions as different but related problems, and propose a multi-task model to jointly consider their commonness and differences. The model contains resolution-aware transformations that map pedestrians at different resolutions to a common space, where a shared detector is constructed to distinguish pedestrians from background. For model learning, we present a coordinate descent procedure that iteratively learns the resolution-aware transformations and a deformable part model (DPM) based detector. In traffic scenes there are many false positives located around vehicles; we therefore further build a context model to suppress them based on the pedestrian-vehicle relationship. The context model can be learned automatically even when vehicle annotations are not available. Our method reduces the mean miss rate to 60% for pedestrians taller than 30 pixels on the Caltech Pedestrian Benchmark, noticeably outperforming the previous state of the art (71%).
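The scoring side of the multi-task model can be sketched as: each resolution has its own transformation into a common feature space, where one shared detector is applied. Everything below (the dimensions, the random matrices, the `detection_score` helper) is a hypothetical illustration of that structure, not the paper's learned model, in which the transformations and detector are learned jointly by coordinate descent.

```python
import numpy as np

rng = np.random.default_rng(1)

D_COMMON = 32                            # shared feature space (hypothetical size)
FEATURE_DIMS = {"low": 16, "high": 64}   # per-resolution feature dimensions

# Resolution-aware transformations into the common space (random stand-ins).
T = {res: rng.standard_normal((D_COMMON, d)) / np.sqrt(d)
     for res, d in FEATURE_DIMS.items()}

# A single detector shared across all resolutions.
w = rng.standard_normal(D_COMMON)

def detection_score(features, resolution):
    """Map resolution-specific features to the common space, then apply
    the shared detector: score = w . (T_res @ x)."""
    return float(w @ (T[resolution] @ features))

s_lo = detection_score(rng.standard_normal(16), "low")
s_hi = detection_score(rng.standard_normal(64), "high")
```

The point of the factorization is that training data from every resolution updates the same `w`, while each `T[res]` absorbs only what is specific to that resolution.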


international joint conference on biometrics | 2011

Competition on counter measures to 2-D facial spoofing attacks

Murali Mohan Chakka; André Anjos; Sébastien Marcel; Roberto Tronci; Daniele Muntoni; Gianluca Fadda; Maurizio Pili; Nicola Sirena; Gabriele Murgia; Marco Ristori; Fabio Roli; Junjie Yan; Dong Yi; Zhen Lei; Zhiwei Zhang; Stan Z. Li; William Robson Schwartz; Anderson Rocha; Helio Pedrini; Javier Lorenzo-Navarro; Modesto Castrillón-Santana; Jukka Määttä; Abdenour Hadid; Matti Pietikäinen

Spoofing identities using photographs is one of the most common techniques for attacking 2-D face recognition systems. There seem to be no comparative studies of different techniques using the same protocols and data. The motivation behind this competition is to compare the performance of different state-of-the-art algorithms on the same database using a unique evaluation method. Six teams from universities around the world participated in the contest. Using one or more techniques from motion analysis, texture analysis and liveness detection appears to be the common trend in this competition. Most of the algorithms are able to clearly separate spoof attempts from real accesses. The results suggest that more complex attacks should be investigated.


international conference on computer vision | 2013

Learn to Combine Multiple Hypotheses for Accurate Face Alignment

Junjie Yan; Zhen Lei; Dong Yi; Stan Z. Li

In this paper, we present the details of our method for the 300 Faces in-the-Wild (300-W) challenge. We build our method on the cascade regression framework, in which a series of regressors progressively refine the shape initialized by a face detector. In the cascade regression, we use the HOG feature in a multi-scale manner: large pose variation is handled in early stages by HOG features at a large scale, and the shape is then refined in later stages with HOG features at a small scale. We observe that the performance of the cascade regression method decreases when the initialization provided by the face detector is not accurate enough (for faces with large appearance variations, face detection is still a challenging problem). To handle this problem, we propose to generate multiple hypotheses, and then learn to rank or combine these hypotheses to get the final result. The parameters of both learn-to-rank and learn-to-combine can be learned in a structural SVM framework. Despite its simplicity, our method achieves state-of-the-art performance on LFPW, and dramatically outperforms the baseline AAM on the 300-W challenge.


computer vision and pattern recognition | 2014

Multiple Target Tracking Based on Undirected Hierarchical Relation Hypergraph

Longyin Wen; Wenbo Li; Junjie Yan; Zhen Lei; Dong Yi; Stan Z. Li

Multi-target tracking is an interesting but challenging task in the field of computer vision. Most previous data-association-based methods merely consider the relationships (e.g. appearance and motion pattern similarities) between detections in a local, limited temporal domain, which makes it difficult for them to handle long-term occlusion and to distinguish spatially close targets with similar appearance in crowded scenes. In this paper, a novel data association approach based on an undirected hierarchical relation hypergraph is proposed, which formulates the tracking task as a hierarchical dense-neighborhood search problem on a dynamically constructed undirected affinity graph. The relationships between detections across the spatiotemporal domain are considered in a high-order way, which makes the tracker robust to spatially close targets with similar appearance. Meanwhile, the hierarchical design of the optimization process makes our tracker more robust to long-term occlusion. Extensive experiments on various challenging datasets (e.g. the PETS2009 and ParkingLot datasets), including both low- and high-density sequences, demonstrate that the proposed method performs favorably against state-of-the-art methods.


Image and Vision Computing | 2014

Face detection by structural models

Junjie Yan; Xuzong Zhang; Zhen Lei; Stan Z. Li

Despite the successes of the last two decades, state-of-the-art face detectors still have problems dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part-based structural model to capture them explicitly. The model uses part subtypes to handle local appearance variations, such as a closed or open mouth, and part deformation to capture global appearance variations, such as pose and expression. In detection, a candidate window is fitted to the structural model to infer the part locations and part subtypes, and the detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost face detection performance. We present a phrase-based representation for body detection, and propose a structural context model to jointly encode the outputs of the face and body detectors. Benefiting from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, in wide comparisons with commercial and academic methods.
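The fitting step, inferring part locations and scoring the fitted configuration, can be sketched as a star-structured model in which each part picks the location maximizing its filter response minus a quadratic deformation cost. The helper names and the deformation weight below are illustrative assumptions; the paper's hierarchical model additionally includes part subtypes and the body context model, omitted here.

```python
import numpy as np

def fit_part(resp, anchor, defo=0.1):
    """Place one part: search its response map for the location that
    maximizes filter response minus a quadratic deformation cost
    relative to the part's anchor position."""
    h, w = resp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cost = defo * ((ys - anchor[0]) ** 2 + (xs - anchor[1]) ** 2)
    score_map = resp - cost
    loc = np.unravel_index(np.argmax(score_map), score_map.shape)
    return score_map[loc], loc

def window_score(root_resp, part_resps, anchors):
    """Score a candidate window: root response plus the best placement
    score of every part (a star-structured configuration)."""
    total = float(root_resp)
    locs = []
    for resp, anchor in zip(part_resps, anchors):
        s, loc = fit_part(resp, anchor)
        total += s
        locs.append(loc)
    return total, locs

# A part whose response peaks away from its anchor is pulled toward the peak
# only as far as the deformation cost allows.
resp = np.zeros((9, 9))
resp[2, 3] = 5.0
score, loc = fit_part(resp, anchor=(4, 4))
```

With a small deformation weight, the part above lands on the response peak at (2, 3) rather than its anchor, which is exactly how the fitted configuration absorbs pose and expression variation.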

Collaboration


Dive into Junjie Yan's collaborations.

Top Co-Authors

Stan Z. Li | Chinese Academy of Sciences
Zhen Lei | Chinese Academy of Sciences
Xiaogang Wang | The Chinese University of Hong Kong
Dong Yi | Chinese Academy of Sciences
Bin Yang | Chinese Academy of Sciences
Hongsheng Li | The Chinese University of Hong Kong
Yu Liu | The Chinese University of Hong Kong
Jing Shao | The Chinese University of Hong Kong
Shuai Yi | The Chinese University of Hong Kong