Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Zhaoyang Lu is active.

Publication


Featured research published by Zhaoyang Lu.


International Journal of Advanced Robotic Systems | 2014

Fast Aerial Video Stitching

Jing Li; Tao Yang; Jingyi Yu; Zhaoyang Lu; Ping Lu; Xia Jia; Wenjie Chen

The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on an ordinary personal computer. To achieve this, after a careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter that fuses the UAV motion information into the feature matching. The proposed filter removes the majority of feature correspondence outliers and increases the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitched image for aerial video stitching tasks.
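To illustrate the core matching step described above, the following sketch (not the authors' implementation) pairs FAST corners with ORB binary descriptors in OpenCV and filters correspondences with a simple motion window; the predicted inter-frame shift, detector threshold, and window size are assumptions standing in for the paper's spatial and temporal coherent filter.

```python
# Illustrative sketch: FAST corners + binary descriptors, with a motion-window
# filter that keeps only correspondences consistent with the predicted UAV motion.
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=25)   # threshold is an assumption
orb = cv2.ORB_create()                                 # used only to compute binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_with_motion_filter(img_prev, img_curr, predicted_shift, window=30.0):
    """Match FAST keypoints between consecutive frames, keeping only pairs whose
    displacement stays within `window` pixels of the predicted inter-frame shift
    (e.g. estimated from the previous frame pair or the UAV motion model)."""
    kp1 = fast.detect(img_prev, None)
    kp2 = fast.detect(img_curr, None)
    kp1, des1 = orb.compute(img_prev, kp1)
    kp2, des2 = orb.compute(img_curr, kp2)
    matches = matcher.match(des1, des2)

    kept = []
    for m in matches:
        p1 = np.array(kp1[m.queryIdx].pt)
        p2 = np.array(kp2[m.trainIdx].pt)
        if np.linalg.norm((p2 - p1) - predicted_shift) < window:
            kept.append(m)
    return kp1, kp2, kept
```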


Complexity | 2018

Transferable Feature Representation for Visible-to-Infrared Cross-Dataset Human Action Recognition

Yang Liu; Zhaoyang Lu; Jing Li; Chao Yao; Yanzi Deng

Recently, infrared human action recognition has attracted increasing attention because infrared imaging has many advantages over visible light, such as robustness to illumination changes and shadows. However, infrared action data remain limited, which degrades the performance of infrared action recognition. Motivated by the idea of transfer learning, an infrared human action recognition framework using auxiliary data from visible light is proposed to solve the problem of limited infrared action data. In the proposed framework, we first construct a novel Cross-Dataset Feature Alignment and Generalization (CDFAG) framework to map the infrared data and visible light data into a common feature space, where Kernel Manifold Alignment (KEMA) and a dual aligned-to-generalized encoders (AGE) model are employed to represent the features. Then, a support vector machine (SVM) is trained using both the infrared and visible light data and is able to classify features derived from infrared data. The proposed method is evaluated on InfAR, a publicly available infrared human action dataset. To build up the auxiliary data, we also construct a new visible light action dataset, XD145. Experimental results show that the proposed method achieves state-of-the-art performance compared with several transfer learning and domain adaptation methods.
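As a rough illustration of only the last stage of this pipeline, the sketch below trains a standard SVM on features from both domains after they have (hypothetically) been projected into a common space by KEMA and the AGE encoders; the random arrays merely stand in for real InfAR/XD145 features.

```python
# Minimal sketch of the classification stage: train one SVM on visible-light
# plus infrared features in an assumed common feature space, test on infrared.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vis_feats, vis_labels = rng.normal(size=(200, 128)), rng.integers(0, 5, 200)
ir_feats, ir_labels = rng.normal(size=(60, 128)), rng.integers(0, 5, 60)

# Split the scarce infrared data into a small training portion and a test portion.
ir_train, ir_test = ir_feats[:30], ir_feats[30:]
ir_train_y, ir_test_y = ir_labels[:30], ir_labels[30:]

# Train on the union of both domains in the (assumed) aligned feature space.
clf = SVC(kernel="linear")
clf.fit(np.vstack([vis_feats, ir_train]), np.concatenate([vis_labels, ir_train_y]))

# Classify held-out infrared features.
print("IR accuracy:", clf.score(ir_test, ir_test_y))
```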


IEEE Access | 2017

A Novel Visual-Vocabulary-Translator-Based Cross-Domain Image Matching

Jing Li; Congcong Li; Tao Yang; Zhaoyang Lu

Cross-domain image matching, which investigates the problem of searching for images across different visual domains such as photos, sketches, or paintings, has attracted intensive attention in computer vision due to its widespread applications. Unlike intra-domain matching, cross-domain images differ greatly in their visual characteristics, which causes most existing approaches to fail. However, the great difference between cross-domain images is much like the gap between English and Chinese, two languages that are linked by an English-Chinese translation dictionary. Inspired by this idea, in this paper we propose a novel visual vocabulary translator for cross-domain image matching. This translator consists of two main modules: one is a pair of vocabulary trees, which can be regarded as the codebooks of their respective domains, and the other is an index file built from cross-domain image pairs. Through such a translator, a feature from one visual domain can be translated into another. The proposed algorithm is extensively evaluated on two kinds of cross-domain matching tasks, i.e., photo-to-sketch matching and photo-to-painting matching. Experimental results demonstrate the effectiveness and efficiency of the visual vocabulary translator. By employing this translator, the proposed algorithm achieves satisfactory performance in different matching systems. Furthermore, our work shows great potential across multiple visual domains.
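The following hedged sketch captures the translator idea with flat k-means codebooks standing in for the paper's vocabulary trees: each domain gets its own codebook, and a co-occurrence table accumulated over matched photo/sketch pairs maps a visual word in one vocabulary to its most frequently co-occurring word in the other. All function and variable names here are illustrative.

```python
# Sketch of a cross-domain "visual word translator" built from paired images.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_translator(photo_descs, sketch_descs, paired_photo, paired_sketch, k=256):
    """photo_descs / sketch_descs: (N, d) descriptor pools used to learn each codebook.
    paired_photo / paired_sketch: lists of descriptor arrays from corresponding
    cross-domain image pairs, used to fill the translation table."""
    cb_photo = MiniBatchKMeans(n_clusters=k, random_state=0).fit(photo_descs)
    cb_sketch = MiniBatchKMeans(n_clusters=k, random_state=0).fit(sketch_descs)

    cooc = np.zeros((k, k))
    for dp, ds in zip(paired_photo, paired_sketch):
        wp = np.unique(cb_photo.predict(dp))    # photo-domain visual words in this pair
        ws = np.unique(cb_sketch.predict(ds))   # sketch-domain visual words in this pair
        cooc[np.ix_(wp, ws)] += 1               # words co-occurring in the same image pair

    translate = cooc.argmax(axis=1)             # photo word -> most co-occurring sketch word
    return cb_photo, cb_sketch, translate
```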


Wireless Personal Communications | 2018

Improving Deep Learning Feature with Facial Texture Feature for Face Recognition

Yunfei Li; Zhaoyang Lu; Jing Li; Yanzi Deng

Face recognition in real-world conditions is a challenging problem due to variations in illumination, background, pose, etc. Recently, deep learning based face recognition algorithms have been able to learn effective face features and obtain impressive performance. However, such algorithms rely entirely on learned features and ignore the useful experience accumulated in hand-crafted features, which have been studied over a long period. Therefore, a face recognition method based on facial-texture-feature-aided deep learning features (FTFA-DLF) is proposed in this paper. The proposed FTFA-DLF combines the benefits of deep learning and hand-crafted features. In the proposed FTFA-DLF method, the hand-crafted features are texture features extracted from the eye, nose, and mouth regions. These hand-crafted features are then used to aid the deep learning features by adding both into the objective function layer, which adaptively adjusts the deep learning features so that they can better cooperate with the hand-crafted features and yield better face recognition performance. Experimental results show that the proposed face recognition algorithm achieves an accuracy of 97.02% on the LFW face database.
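A minimal sketch of the feature-fusion idea, assuming the deep feature vector comes from any pretrained face network and the eye/nose/mouth boxes are given: uniform LBP histograms are computed on those patches and concatenated with the deep feature. This does not reproduce the paper's objective-function layer.

```python
# Sketch: concatenate hand-crafted LBP texture histograms from facial regions
# with a deep feature vector (placeholder region boxes and deep features).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP histogram of a grayscale facial patch."""
    lbp = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def fused_face_feature(gray_face, deep_feature, regions):
    """regions: dict of (y0, y1, x0, x1) boxes for the eyes, nose, and mouth (assumed given)."""
    texture = [lbp_histogram(gray_face[y0:y1, x0:x1])
               for (y0, y1, x0, x1) in regions.values()]
    return np.concatenate([deep_feature] + texture)
```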


Sensors | 2017

Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model

Jing Li; Fangbing Zhang; Lisong Wei; Tao Yang; Zhaoyang Lu

Pedestrian detection is among the most frequently used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of the three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is used to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, qualitative and quantitative comparisons show that our work outperforms classical background subtraction approaches and a recent RGB-D method, and achieves performance comparable to a state-of-the-art deep learning pedestrian detector at a much lower hardware cost.
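The conceptual sketch below approximates the voxel-based foreground reasoning with a plain occupancy grid: 3D points triangulated from the stereo pair are voxelized, a slowly updated background grid is maintained, and voxels that are occupied now but rarely occupied historically are flagged as foreground. Grid extents, voxel size, and learning rate are assumptions; this is not the paper's voxel surface model or its free update policy.

```python
# Sketch: per-frame voxel occupancy vs. a slowly adapting background grid.
import numpy as np

GRID = (100, 100, 50)   # x, y, z voxels covering the monitored volume (assumed)
VOXEL_SIZE = 0.1        # metres per voxel (assumed)

def update_and_segment(points_xyz, background, alpha=0.02, fg_thresh=0.2):
    """points_xyz: (N, 3) scene points from stereo triangulation, assumed to lie in
    the positive octant of the grid. background: running occupancy grid of shape GRID."""
    occupancy = np.zeros(GRID)
    idx = np.clip((points_xyz / VOXEL_SIZE).astype(int), 0, np.array(GRID) - 1)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0

    background = (1 - alpha) * background + alpha * occupancy   # slow background update
    foreground = (occupancy > 0) & (background < fg_thresh)     # occupied now, not in background
    return foreground, background

# Usage per frame: initialise background = np.zeros(GRID) once, then call
# fg, background = update_and_segment(points, background) for every stereo frame.
```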


Chinese Conference on Biometric Recognition | 2016

Combining Multiple Features for Cross-Domain Face Sketch Recognition

Yang Liu; Jing Li; Zhaoyang Lu; Tao Yang; ZiJian Liu

Cross-domain face sketch recognition plays an important role in biometrics research and industry. In this paper, we propose a novel algorithm combining an intra-modality method, the Eigentransformation, with two inter-modality methods based on modality-invariant features, namely the Multiscale Local Binary Pattern (MLBP) and the Histogram of Averaged Orientation Gradients (HAOG). Meanwhile, a sum-score fusion of min-max normalized scores is applied to fuse these recognition outputs. Experimental results on the CUFS (Chinese University of Hong Kong (CUHK) Face Sketch Database) and CUFSF (CUHK Face Sketch FERET Database) datasets reveal that the intra-modality and inter-modality methods provide complementary information and that fusing them yields better performance.
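The fusion rule described above is simple enough to state directly: each method's raw matching scores are min-max normalized to [0, 1] and then summed per gallery candidate, as in the following sketch (the score values are made up).

```python
# Sum-score fusion of min-max normalized matching scores from several methods.
import numpy as np

def min_max(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def sum_score_fusion(score_lists):
    """score_lists: one score vector per method (e.g. Eigentransformation, MLBP, HAOG)
    over the same gallery; returns the fused score per gallery identity."""
    return np.sum([min_max(s) for s in score_lists], axis=0)

# Example: three methods scoring five gallery faces for one probe sketch.
fused = sum_score_fusion([[0.2, 0.9, 0.4, 0.1, 0.5],
                          [10.0, 42.0, 30.0, 5.0, 18.0],
                          [1.2, 3.4, 2.2, 0.8, 1.5]])
print("best match:", int(np.argmax(fused)))
```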


IEEE Signal Processing Letters | 2018

Global Temporal Representation Based CNNs for Infrared Action Recognition

Yang Liu; Zhaoyang Lu; Jing Li; Tao Yang; Chao Yao


IEEE Transactions on Circuits and Systems for Video Technology | 2018

Hierarchically Learned View-Invariant Representations for Cross-View Action Recognition

Yang Liu; Zhaoyang Lu; Jing Li; Tao Yang


IEEE Access | 2018

Cross-Domain Co-Occurring Feature for Visible-Infrared Image Matching

Jing Li; Congcong Li; Tao Yang; Zhaoyang Lu


IEEE Conference Proceedings | 2016

Online Real-Time Image Retrieval Based on a Large-Scale Vocabulary Tree

Shiwei Han; Jing Li; Tao Yang; Zhaoyang Lu; Fangbing Zhang; Lisong Wei

Collaboration


Dive into Zhaoyang Lu's collaboration.

Top Co-Authors

Tao Yang

Northwestern Polytechnical University

Chao Yao

Northwestern Polytechnical University