Hanjiang Lai
Sun Yat-sen University
Publications
Featured research published by Hanjiang Lai.
computer vision and pattern recognition | 2015
Hanjiang Lai; Yan Pan; Ye Liu; Shuicheng Yan
Similarity-preserving hashing is a widely used method for nearest-neighbour search in large-scale image retrieval tasks. In most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by a separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hash codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers that produces effective intermediate image features; 2) a divide-and-encode module that divides the intermediate features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to capture the constraint that a query image is more similar to a second image than to a third. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over state-of-the-art supervised and unsupervised hashing methods.
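To make building blocks 2) and 3) more concrete, here is a minimal, hypothetical PyTorch-style sketch of a divide-and-encode module and a triplet ranking loss of the kind described above. It is not the authors' code; the class and function names, dimensions, and margin value are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation) of the divide-and-encode
# module and triplet ranking loss described in the abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivideAndEncode(nn.Module):
    """Split an intermediate feature vector into `num_bits` slices; each slice is
    projected to a single value and squashed into [0, 1] as one relaxed hash bit."""
    def __init__(self, feat_dim: int, num_bits: int):
        super().__init__()
        assert feat_dim % num_bits == 0
        self.slice_dim = feat_dim // num_bits
        # one small projection per bit, applied only to its own feature slice
        self.projections = nn.ModuleList(
            [nn.Linear(self.slice_dim, 1) for _ in range(num_bits)]
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        slices = torch.split(features, self.slice_dim, dim=1)
        bits = [torch.sigmoid(proj(s)) for proj, s in zip(self.projections, slices)]
        return torch.cat(bits, dim=1)  # relaxed binary codes in [0, 1]

def triplet_ranking_loss(query, similar, dissimilar, margin: float = 1.0):
    """Encourage the query code to be closer to the similar image's code than to
    the dissimilar image's code by at least `margin`."""
    d_pos = (query - similar).pow(2).sum(dim=1)
    d_neg = (query - dissimilar).pow(2).sum(dim=1)
    return F.relu(margin + d_pos - d_neg).mean()
```

At retrieval time, the relaxed codes would typically be thresholded (e.g., at 0.5) to obtain binary hash bits.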
european conference on computer vision | 2016
Shengtao Xiao; Jiashi Feng; Junliang Xing; Hanjiang Lai; Shuicheng Yan; Ashraf A. Kassim
In this work, we introduce a novel Recurrent Attentive-Refinement (RAR) network for facial landmark detection under unconstrained conditions, where facial occlusions and/or pose variations pose significant challenges. RAR follows the cascaded-regression pipeline, which refines landmark locations progressively. However, instead of updating all landmark locations together, RAR refines them sequentially at each recurrent stage. In this way, more reliable landmark points are refined earlier and help to infer the locations of more challenging landmarks that may be occluded or lie in extreme poses. RAR can thus effectively control detection errors from those challenging landmarks and improve overall performance even in the presence of heavy occlusions and/or extreme conditions. To determine the sequence of landmarks, RAR employs an attentive-refinement mechanism built on two models: an attention LSTM (A-LSTM) and a refinement LSTM (R-LSTM). At each recurrent stage, the A-LSTM implicitly identifies a reliable landmark as the attention center. Following the sequence of attention centers, the R-LSTM sequentially refines the landmarks near or correlated with each attention center and produces the final detection results. To further enhance robustness, instead of using the mean shape for initialization, RAR adaptively selects an initialization from a pool of shape centers clustered from all training shapes. As an end-to-end trainable model, RAR demonstrates superior performance in detecting challenging landmarks in comprehensive experiments and establishes new state-of-the-art results on the 300-W, COFW, and AFLW benchmark datasets.
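As a rough illustration of the attend-then-refine recurrence described above, the following is a heavily simplified, hypothetical PyTorch sketch. The feature pooling, soft attention update, and layer sizes are assumptions and do not reproduce the authors' A-LSTM/R-LSTM design, which commits to hard attention centers and refines landmark subsets.

```python
# Heavily simplified, hypothetical sketch of a recurrent attend-then-refine loop.
import torch
import torch.nn as nn

class RecurrentAttentiveRefinement(nn.Module):
    def __init__(self, feat_dim: int, num_landmarks: int, hidden: int = 256, stages: int = 4):
        super().__init__()
        self.stages = stages
        self.num_landmarks = num_landmarks
        self.a_lstm = nn.LSTMCell(feat_dim, hidden)                  # picks an attention center
        self.attend = nn.Linear(hidden, num_landmarks)               # scores over landmarks
        self.r_lstm = nn.LSTMCell(feat_dim + num_landmarks, hidden)  # refines locations
        self.offset = nn.Linear(hidden, num_landmarks * 2)           # (dx, dy) per landmark

    def forward(self, image_feat: torch.Tensor, init_shape: torch.Tensor):
        """image_feat: (B, feat_dim) pooled features; init_shape: (B, num_landmarks, 2),
        an initialization selected from clustered training shapes (not the mean shape)."""
        B = image_feat.size(0)
        shape = init_shape
        h_a = image_feat.new_zeros(B, self.a_lstm.hidden_size)
        c_a = image_feat.new_zeros(B, self.a_lstm.hidden_size)
        h_r = image_feat.new_zeros(B, self.r_lstm.hidden_size)
        c_r = image_feat.new_zeros(B, self.r_lstm.hidden_size)
        for _ in range(self.stages):
            # A-LSTM: soft attention over landmarks, standing in for the reliable center
            h_a, c_a = self.a_lstm(image_feat, (h_a, c_a))
            attention = torch.softmax(self.attend(h_a), dim=1)       # (B, num_landmarks)
            # R-LSTM: refine landmarks near / correlated with the attention center
            h_r, c_r = self.r_lstm(torch.cat([image_feat, attention], dim=1), (h_r, c_r))
            offsets = self.offset(h_r).view(B, self.num_landmarks, 2)
            shape = shape + attention.unsqueeze(-1) * offsets        # attention-weighted update
        return shape
```

The soft attention weighting here only conveys the control flow; the actual model operates on local appearance features around each landmark and refines them in a hard, sequential order.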
IEEE Transactions on Computers | 2013
Hanjiang Lai; Yan Pan; Cong Liu; Liang Lin; Jie Wu
Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few nonzero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing $\ell_1$ constraint. Since the $\ell_1$ constraint is nondifferentiable, the critical issue arising here is how to efficiently solve the optimization problem. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most $O(\frac{1}{\epsilon})$ iterations, the proposed algorithm can guarantee the obtainment of an $\epsilon$-accurate solution. (A generic sketch of such an $\ell_1$-constrained ranking objective appears below, after the remaining publication entries.)
IEEE Transactions on Image Processing | 2016
Hanjiang Lai; Pan Yan; Xiangbo Shu; Yunchao Wei; Shuicheng Yan
international conference on computer vision | 2015
Xiangbo Shu; Jinhui Tang; Hanjiang Lai; Luoqi Liu; Shuicheng Yan
IEEE Transactions on Neural Networks | 2013
Hanjiang Lai; Yan Pan; Yong Tang; Rong Yu
IEEE Transactions on Circuits and Systems for Video Technology | 2018
Hanjiang Lai; Shengtao Xiao; Yan Pan; Zhen Cui; Jiashi Feng; Chunyan Xu; Jian Yin; Shuicheng Yan
european conference on computer vision | 2014
Hanjiang Lai; Yan Pan; Canyi Lu; Yong Tang; Shuicheng Yan
computer supported cooperative work in design | 2011
Hanjiang Lai; Yong Tang; Hai-Xia Luo; Yan Pan
Pattern Recognition | 2016
Xiangbo Shu; Jinhui Tang; Hanjiang Lai; Zhiheng Niu; Shuicheng Yan
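For the sparse learning-to-rank abstract above (IEEE Transactions on Computers, 2013), the following is a generic, hypothetical sketch of an $\ell_1$-constrained pairwise ranking objective solved by projected subgradient descent with Euclidean projection onto the $\ell_1$ ball. It illustrates the constraint only, not the paper's primal-dual algorithm or its $O(\frac{1}{\epsilon})$ rate; all function names and hyper-parameters are illustrative assumptions.

```python
# Generic illustration of an l1-constrained pairwise ranking objective
# (not the paper's primal-dual method).
import numpy as np

def project_onto_l1_ball(w: np.ndarray, radius: float) -> np.ndarray:
    """Euclidean projection of w onto {v : ||v||_1 <= radius} via the
    standard sorting-based procedure."""
    if np.abs(w).sum() <= radius:
        return w
    u = np.sort(np.abs(w))[::-1]                 # |w| sorted in decreasing order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(w) + 1) > (css - radius))[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def sparse_pairwise_rank(X, pairs, radius=10.0, lr=0.01, iters=500):
    """X: (n, d) feature matrix; pairs: list of (i, j) with doc i ranked above doc j.
    Minimizes a pairwise hinge loss on score differences subject to ||w||_1 <= radius."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = np.zeros_like(w)
        for i, j in pairs:
            margin = X[i] @ w - X[j] @ w
            if margin < 1.0:                     # hinge: max(0, 1 - (s_i - s_j))
                grad -= X[i] - X[j]
        w = project_onto_l1_ball(w - lr * grad / max(len(pairs), 1), radius)
    return w
```

The projection step is what enforces sparsity: with a small radius, most coordinates of the learned ranking model are driven exactly to zero.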