

Publication


Featured research published by Qiulei Dong.


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Pointwise Motion Image (PMI): A Novel Motion Representation and Its Applications to Abnormality Detection and Behavior Recognition

Qiulei Dong; Yihong Wu; Zhanyi Hu

In this paper, we propose a novel motion representation and apply it to abnormality detection and behavior recognition. At first, pointwise correspondences for the foreground in two consecutive video frames are established by performing a salient-region-based pointwise matching algorithm. Then, based on the established pointwise correspondences, a pointwise motion image (PMI) for each frame is built up to represent the motion status of the foreground. The PMI is more suitable for video analysis as it encapsulates a variety of motion information such as pointwise motion speed, pointwise motion orientation, pointwise motion duration, as well as the global shape of the foreground. In addition, it represents all of these pieces of information by a color image in the HSV space, by which many popular techniques in the image processing field can be straightforwardly adopted. By combining the PMI and AdaBoost, a method for abnormality detection and behavior recognition is proposed. The proposed method is shown to possess a high discriminative ability and is capable of dealing with local motion, global motion, and similar motions with different speeds. Experiments including a comparison with two existing methods demonstrate the effectiveness of the proposed representation in abnormality detection and behavior recognition.
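The HSV encoding described above can be sketched as follows. The channel assignment (orientation to hue, normalized speed to saturation, normalized duration to value) is an illustrative assumption, not the paper's exact specification:

```python
import colorsys

def pmi_pixel(orientation_deg, speed, duration, max_speed, max_duration):
    # Encode one foreground pixel's motion status as a colour:
    # orientation -> hue, normalised speed -> saturation,
    # normalised motion duration -> value (hypothetical mapping).
    h = (orientation_deg % 360.0) / 360.0
    s = min(speed / max_speed, 1.0)
    v = min(duration / max_duration, 1.0)
    return colorsys.hsv_to_rgb(h, s, v)  # RGB triple for display

# a fast, long-lasting rightward motion maps to a saturated, bright colour
r, g, b = pmi_pixel(0.0, 4.0, 10.0, max_speed=5.0, max_duration=10.0)
```

Because the PMI is an ordinary color image, standard image-processing operations (filtering, region statistics, boosting on color features) apply directly, which is the point the abstract makes.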


Asian Conference on Computer Vision | 2006

Gesture recognition using quadratic curves

Qiulei Dong; Yihong Wu; Zhanyi Hu

This paper presents a novel method for human gesture recognition based on quadratic curves. First, the face and hands in the images are extracted by skin color, and their center points are tracked with a modified Greedy Exchange algorithm. Then, in each trajectory, the center points are fitted to a quadratic curve and 6 invariants of this quadratic curve are computed. From these, a gesture feature vector composed of 6n such invariants is constructed, where n is the number of trajectories in the gesture. Lastly, gesture models are learned from the feature vectors of gesture samples, and an input gesture is recognized by comparing its feature vector with those of the gesture models. The computational cost of this method is low because the gesture duration does not need to be considered and only simple curvilinear integrals and matrix computations are involved. Experiments on hip-hop dance show that our method can achieve a recognition rate as high as 97.65% on a database of 16 different gestures, each performed by 8 different people 8 times.
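The curve-fitting step above can be sketched as a least-squares fit; note this is a simplified stand-in (the paper fits a general quadratic curve and derives 6 invariants from it, while this sketch fits a parabola via the normal equations):

```python
def fit_quadratic(points):
    # Least-squares fit of y = a*x^2 + b*x + c to 2-D trajectory points,
    # solved through the 3x3 normal equations in (c, b, a).
    s = [0.0] * 5                      # sums of x^0 .. x^4
    t = [0.0] * 3                      # sums of y*x^0 .. y*x^2
    for x, y in points:
        xp = 1.0
        for k in range(5):
            s[k] += xp
            if k < 3:
                t[k] += y * xp
            xp *= x
    M = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = t[:]
    for i in range(3):                 # Gaussian elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            rhs[r] -= f * rhs[i]
    sol = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):         # back substitution
        sol[i] = (rhs[i] - sum(M[i][j] * sol[j]
                               for j in range(i + 1, 3))) / M[i][i]
    c, b, a = sol
    return a, b, c

# exact samples from y = 2x^2 - 3x + 1 should recover (a, b, c)
pts = [(x, 2 * x * x - 3 * x + 1) for x in (-2, -1, 0, 1, 2, 3)]
a, b, c = fit_quadratic(pts)
```

In the paper, invariants of the fitted curve, rather than the raw coefficients, form the feature vector, which makes the descriptor insensitive to where and how fast the gesture is performed.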


IEEE Transactions on Image Processing | 2012

Weighted Similarity-Invariant Linear Algorithm for Camera Calibration With Rotating 1-D Objects

Kunfeng Shi; Qiulei Dong; Fuchao Wu

In this paper, a weighted similarity-invariant linear algorithm for camera calibration with rotating 1-D objects is proposed. First, we propose a new estimation method for computing the relative depth of the free endpoint on the 1-D object and prove its robustness against noise compared with those used in previous literature. The introduced estimator is invariant to image similarity transforms, resulting in a similarity-invariant linear calibration algorithm which is slightly more accurate than the well-known normalized linear algorithm. Then, we use the reciprocals of the standard deviations of the estimated relative depths from different images as the weights on the constraint equations of the similarity-invariant linear calibration algorithm, and propose a weighted similarity-invariant linear calibration algorithm with higher accuracy. Experimental results on synthetic data as well as on real image data show the effectiveness of our proposed algorithm.
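The weighting scheme described above amounts to scaling each constraint equation by the reciprocal of its image's depth-estimate standard deviation. A schematic sketch, with generic linear constraints standing in for the actual calibration constraints from the rotating 1-D object:

```python
def weight_constraints(rows, rhs, sigmas):
    # Scale each constraint equation by w = 1/sigma, where sigma is the
    # standard deviation of the estimated relative depths from that image,
    # so that reliable images contribute more to the linear system.
    weighted_rows, weighted_rhs = [], []
    for row, b, sigma in zip(rows, rhs, sigmas):
        w = 1.0 / sigma
        weighted_rows.append([w * a for a in row])
        weighted_rhs.append(w * b)
    return weighted_rows, weighted_rhs

# toy 2-unknown system: the first image is trusted more (smaller sigma)
rows = [[1.0, 2.0], [3.0, 4.0]]
rhs = [5.0, 6.0]
wr, wb = weight_constraints(rows, rhs, sigmas=[0.5, 2.0])
```

Solving the weighted system in the least-squares sense then yields the weighted similarity-invariant estimate.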


International Conference on Image Processing | 2016

Pursuing face identity from view-specific representation to view-invariant representation

Ting Zhang; Qiulei Dong; Zhanyi Hu

How to learn view-invariant facial representations is an important task for view-invariant face recognition. Recent work [1] discovered that the brain of the macaque monkey has a face-processing network in which some neurons are view-specific. Motivated by this discovery, this paper proposes a deep convolutional learning model for face recognition, which explicitly enforces this view-specific mechanism for learning view-invariant facial representations. The proposed model consists of two concatenated modules: the first is a convolutional neural network (CNN) that learns the viewing pose corresponding to the input face image; the second consists of multiple CNNs, each of which learns the corresponding frontal image of an image under a specific viewing pose. This method has a low computational cost and can be well trained with a relatively small number of samples. The experimental results on the Multi-PIE dataset demonstrate the effectiveness of our proposed convolutional model in comparison with three state-of-the-art methods.
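The two-module structure can be sketched as a pose-conditioned dispatch; `pose_net` and the entries of `frontalizers` stand in for trained CNNs (hypothetical callables, not the paper's actual networks):

```python
def recover_frontal(image, pose_net, frontalizers):
    # Module 1: predict the viewing pose of the input face image.
    # Module 2: hand the image to the pose-specific network that maps
    # images under that pose to their frontal view.
    pose = pose_net(image)
    return frontalizers[pose](image)

# toy stand-ins: "pose" is the sign of the pixel sum, and each
# "frontalizer" is a trivial per-pose transform
toy_pose = lambda img: "left" if sum(img) < 0 else "right"
toy_front = {"left": lambda img: [-v for v in img],
             "right": lambda img: list(img)}
out = recover_frontal([-1, -2], toy_pose, toy_front)
```

Splitting the problem per pose is what lets each sub-network be small and trainable on relatively few samples.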


International Conference on Pattern Recognition | 2006

Gesture Segmentation from a Video Sequence Using Greedy Similarity Measure

Qiulei Dong; Yihong Wu; Zhanyi Hu

We propose a novel greedy-similarity-measure method to segment long spatio-temporal video sequences. First, a principal curve of the motion region along the frames of a video sequence is constructed to represent the trajectory. Then, HMMs are applied to model the constructed principal curves of the trajectories of predefined gestures. For a long input video sequence, a greedy similarity measure is established to automatically segment it into gestures along with gesture recognition, where true breakpoints of its principal curve are found by maximizing the joint probability of two successive candidate segments conditioned on the gesture models obtained from the HMMs. The method is flexible, highly accurate, and robust to noise due to the exploitation of principal curves, the combination of two successive candidate segments, and the simultaneous recognition. Experiments, including a comparison with two established methods, demonstrate the effectiveness of the proposed method.
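The breakpoint search over two successive candidate segments can be sketched as follows; `log_likelihood` is a hypothetical helper standing in for the HMM log-likelihood of a segment under its best-matching gesture model:

```python
def best_breakpoint(sequence, log_likelihood, min_len=1):
    # Choose the breakpoint t maximising the joint score of the two
    # successive candidate segments sequence[:t] and sequence[t:].
    best_t, best_val = None, float("-inf")
    for t in range(min_len, len(sequence) - min_len + 1):
        val = log_likelihood(sequence[:t]) + log_likelihood(sequence[t:])
        if val > best_val:
            best_t, best_val = t, val
    return best_t, best_val

# toy score: homogeneous segments are likely single gestures
toy_score = lambda seg: -(max(seg) - min(seg))
t, _ = best_breakpoint([1, 1, 1, 9, 9, 9], toy_score)
```

In the greedy scheme, the sequence is then advanced past the chosen breakpoint, the first segment's best model gives the recognized gesture, and the search repeats on the remainder.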


Applied Biochemistry and Biotechnology | 2014

Effects of Exogenous Methyl Jasmonate on the Biosynthesis of Shikonin Derivatives in Callus Tissues of Arnebia euchroma

He Hao; Caiyan Lei; Qiulei Dong; Yalin Shen; Jianting Chi; Hechun Ye; Hong Wang

The shikonin derivatives, accumulated in the roots of Arnebia euchroma (Boraginaceae), showed antibacterial, anti-inflammatory, and anti-tumor activities. To explore their possible biosynthesis regulation mechanism, this paper investigated the effects of exogenous methyl jasmonate (MJ) on the biosynthesis of shikonin derivatives in callus cultures of A. euchroma. The main results include: Under MJ treatment, the growth of A. euchroma callus cultures was not inhibited, but the expression level of both the genes involved in the biosynthesis of shikonin derivatives and their precursors and the genes responsible for intracellular localization of shikonin derivatives increased significantly in the Red Strain (shikonin derivatives high-producing strain). The quantitative analysis showed that six out of the seven naphthoquinone compounds under investigation increased their contents in the MJ-treated Red Strain, and in particular, the bioactive component acetylshikonin nearly doubled its content in the MJ-treated Red Strain. In addition, it was also observed that the metabolic profiling of naphthoquinone compounds changed significantly after MJ treatment, and the MJ-treated and MJ-untreated strains clearly formed distinct clusters in the score plot of PLS-DA. Our results provide some new insights into the regulation mechanism of the biosynthesis of shikonin derivatives and a possible way to increase the production of naphthoquinone compounds in A. euchroma callus cultures in the future.


International Journal of Advanced Robotic Systems | 2016

Fast depth extraction from a single image

Lei He; Qiulei Dong; Guanghui Wang

Predicting depth from a single image is an important problem for understanding the 3-D geometry of a scene. Recently, nonparametric depth sampling (DepthTransfer) has shown great potential in solving this problem; its two key components are a Scale Invariant Feature Transform (SIFT) flow–based depth warping between the input image and its retrieved similar images, and a pixel-wise depth fusion from all warped depth maps. In addition to the inherently heavy computational load of the SIFT flow computation, even under a coarse-to-fine scheme, the fusion reliability is also low due to the low discriminative power of the pixel-wise description. This article aims at solving these two problems. First, a novel sparse SIFT flow algorithm is proposed to reduce the complexity from subquadratic to sublinear. Then, a reweighting technique is introduced where the variance of the SIFT flow descriptor is computed at every pixel and used for reweighting the data term in the conditional Markov random fields. Our proposed depth transfer method is tested on the Make3D Range Image Data and NYU Depth Dataset V2. It is shown that, with comparable depth estimation accuracy, our method is 2–3 times faster than DepthTransfer.
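The per-pixel reweighting can be sketched as below. The direction of the weighting (down-weighting the data term where descriptor variance is high, i.e. where matches are unreliable) is one plausible reading of the abstract, not a confirmed detail, and the MRF itself is omitted:

```python
def reweight_data_term(data_cost, descriptor_var, eps=1e-6):
    # Scale each pixel's data cost by the inverse of its descriptor
    # variance, so unreliable matches contribute less to the energy.
    return [[c / (v + eps) for c, v in zip(row_c, row_v)]
            for row_c, row_v in zip(data_cost, descriptor_var)]

# toy 2x2 cost map with per-pixel descriptor variances
cost = [[4.0, 4.0], [4.0, 4.0]]
var = [[1.0, 3.0], [0.5, 1.0]]
w = reweight_data_term(cost, var)
```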


Science in China Series F: Information Sciences | 2012

Automatic real-time SLAM relocalization based on a hierarchical bipartite graph model

Qiulei Dong; Zhaopeng Gu; Zhanyi Hu

The need to increase the robustness of a real-time monocular SLAM system raises the important problem of relocalization; namely, how to automatically recover a SLAM system after tracking failures. We address this problem by proposing a real-time relocalization algorithm based on a hierarchical bipartite graph model. When the SLAM system is lost, we use the latter model to find sufficient correspondences between the detected image and stored map features, thus achieving efficient, real-time relocalization. The model accounts for both temporal and spatial constraints. Experimental results on both synthetic and real data support the effectiveness of the proposed algorithm.
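At its core, relocalization needs correspondences between detected image features and stored map features. A toy one-to-one matcher is sketched below as a stand-in for the hierarchical bipartite-graph matching (the hierarchy and the temporal/spatial constraints from the paper are omitted, and 1-D "descriptors" keep the sketch short):

```python
def match_features(detected, stored, max_dist=0.5):
    # Greedily match each detected feature to its nearest unused stored
    # feature by descriptor distance, rejecting matches beyond max_dist.
    pairs, used = [], set()
    for i, d in enumerate(detected):
        best_j, best = None, max_dist
        for j, s in enumerate(stored):
            if j in used:
                continue
            dist = abs(d - s)
            if dist < best:
                best_j, best = j, dist
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs

m = match_features([0.1, 0.9], [0.92, 0.12])
```

Once enough such correspondences are found, the camera pose can be re-estimated and tracking resumed.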


Science in China Series F: Information Sciences | 2018

Learning stratified 3D reconstruction

Qiulei Dong; Mao Shu; Hainan Cui; Huarong Xu; Zhanyi Hu

Stratified 3D reconstruction, a layer-by-layer 3D reconstruction that is upgraded from projective to affine, and then to the final metric reconstruction, is a well-known 3D reconstruction method in computer vision. It is also a key supporting technology for various well-known applications, such as street view, Smart3D, and oblique photogrammetry. Generally speaking, the existing computer vision methods in the literature can be roughly classified into either geometry-based approaches for spatial vision or learning-based approaches for object vision. Although deep learning has demonstrated tremendous success in object vision in recent years, learning 3D scene reconstruction from multiple images remains rare, if not nonexistent, except for work on depth learning from single images. This study explores the feasibility of learning stratified 3D reconstruction from putative point correspondences across images, and assesses whether it can be as robust to matching outliers as the traditional geometry-based methods are. In this study, a special parsimonious neural network is designed for the learning. Our results show that it is indeed possible to learn a stratified 3D reconstruction from noisy image point correspondences, and the learned reconstruction results appear satisfactory, although they are still not on a par with the state of the art in the structure-from-motion community, largely due to the lack of an explicit robust outlier detector such as random sample consensus (RANSAC). To the best of our knowledge, our study is the first attempt in the literature to learn 3D scene reconstruction from multiple images. Our results also show that how to implicitly or explicitly integrate an outlier detector into learning methods is a key problem to solve in order to learn 3D scene structures comparable to those produced by the current geometry-based state of the art. Otherwise, any significant advancement in learning 3D structures from multiple images seems difficult, if not impossible. Moreover, we even speculate that deep learning might be, by nature, unsuitable for learning 3D structure from multiple images or, more generally, for solving spatial vision problems.


IEEE Signal Processing Letters | 2017

Two-Stream Deep Correlation Network for Frontal Face Recovery

Ting Zhang; Qiulei Dong; Ming Tang; Zhanyi Hu

Pose and textural variations are two dominant factors affecting the performance of face recognition. It is widely believed that generating the corresponding frontal face from a face image of an arbitrary pose is an effective step toward improving recognition performance. In the literature, however, the frontal face is generally recovered by exploring only textural characteristics. In this letter, we propose a two-stream deep correlation network, which incorporates both geometric and textural features for frontal face recovery. Given a face image under an arbitrary pose as input, geometric and textural characteristics are first extracted from two separate streams. The extracted characteristics are then fused through the proposed multiplicative patch correlation layer. These two steps are integrated into one network for end-to-end training and prediction, which is demonstrated to be effective in comparison with state-of-the-art methods on the benchmark datasets.
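The multiplicative fusion can be sketched as an elementwise product of corresponding patches from the two streams, summed per patch. This is an illustrative reading of a "multiplicative patch correlation" operation; the exact layer in the letter may differ:

```python
def multiplicative_patch_correlation(geo, tex, patch=2):
    # Fuse a geometric and a textural feature map (2-D lists of equal
    # size): for each non-overlapping patch, multiply corresponding
    # entries elementwise and sum, giving one correlation value per patch.
    h, w = len(geo), len(geo[0])
    out = []
    for i in range(0, h - patch + 1, patch):
        row = []
        for j in range(0, w - patch + 1, patch):
            s = sum(geo[i + di][j + dj] * tex[i + di][j + dj]
                    for di in range(patch) for dj in range(patch))
            row.append(s)
        out.append(row)
    return out

# toy 2x2 feature maps fused into a single patch response
g = [[1, 2], [3, 4]]
t = [[2, 0], [1, 1]]
f = multiplicative_patch_correlation(g, t, patch=2)
```

Because the operation is differentiable, such a layer can sit between the two streams and the decoder and be trained end to end with the rest of the network.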

Collaboration


Dive into Qiulei Dong's collaborations.

Top Co-Authors

Zhanyi Hu (Chinese Academy of Sciences)
Yihong Wu (Chinese Academy of Sciences)
Hong Wang (Chinese Academy of Sciences)
Ting Zhang (Chinese Academy of Sciences)
Huarong Xu (Xiamen University of Technology)
Fuchao Wu (Chinese Academy of Sciences)
Kunfeng Shi (Chinese Academy of Sciences)
Mao Shu (Chinese Academy of Sciences)
Zhaopeng Gu (Chinese Academy of Sciences)
Bo Liu (Chinese Academy of Sciences)