
Publication


Featured research published by Yue Deng.


IEEE Transactions on Neural Networks | 2013

Low-Rank Structure Learning via Nonconvex Heuristic Recovery

Yue Deng; Qionghai Dai; Risheng Liu; Zengke Zhang; Sanqing Hu

In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Different from traditional approaches, which directly utilize convex norms to measure sparseness, our method introduces more reasonable nonconvex measurements to enhance sparsity in both the intrinsic low-rank structure and the sparse corruptions. We respectively introduce how to combine the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the proposed optimization is no longer convex, it can still be effectively solved by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by its convex surrogate, so that the nonconvex problem falls into the general framework of reweighted approaches. We prove that the MM-type algorithm converges to a stationary point after successive iterations. The proposed model is applied to solve two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than convex-norm-based methods for both data with higher rank and data with denser corruptions.
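The reweighted MM scheme sketched in this abstract can be illustrated in a few lines. The code below is a minimal illustration, not the paper's implementation: it assumes a simple denoising objective `0.5*||X - M||_F^2 + tau * sum_i log(sigma_i(X) + eps)`, whose MM surrogate at each step is a weighted nuclear norm with a closed-form weighted singular-value shrinkage.

```python
import numpy as np

def logsum_lowrank(M, tau=0.5, eps=1e-2, n_iter=10):
    """MM iteration for a log-sum low-rank denoising objective.
    Each step linearizes log(sigma + eps) around the current iterate,
    giving weights 1/(sigma + eps), then solves the resulting weighted
    nuclear-norm proximal step by weighted singular-value shrinkage."""
    X = M.copy()
    for _ in range(n_iter):
        # weights from the current iterate (derivative of the log-sum term)
        w = 1.0 / (np.linalg.svd(X, compute_uv=False) + eps)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s = np.maximum(s - tau * w, 0.0)   # reweighted shrinkage
        X = (U * s) @ Vt
    return X
```

Because the weights grow as singular values shrink, small (noise) singular values are suppressed aggressively while large (structure) ones are barely touched, which is the intuition behind the log-sum heuristic.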


IEEE Transactions on Image Processing | 2011

Graph Laplace for Occluded Face Completion and Recognition

Yue Deng; Qionghai Dai; Zengke Zhang

This paper proposes a spectral-graph-based algorithm for face image repairing, which can improve recognition performance on occluded faces. The face completion algorithm includes three main procedures: 1) sparse representation for partially occluded face classification; 2) image-based data mining; and 3) graph Laplace (GL) for face image completion. The novel part of the proposed framework is GL, named after graphical models and the Laplace equation, which achieves high-quality repair of damaged or occluded faces. The relationship between GL and the traditional Poisson equation is proven. We apply our face repairing algorithm to produce completed faces and use face recognition to evaluate its performance. Experimental results verify the effectiveness of the GL method for occluded face completion.
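The Laplace-equation view of completion can be illustrated with plain harmonic inpainting: solve the discrete Laplace equation on the occluded region with the known pixels as boundary conditions. This is only the underlying PDE idea, not the paper's full GL pipeline, and the sketch assumes the occluded region does not touch the image border (the wrap-around of `np.roll` is then harmless).

```python
import numpy as np

def harmonic_inpaint(img, mask, n_iter=2000):
    """Fill the masked (occluded) pixels by solving the discrete Laplace
    equation: iterate until every unknown pixel equals the mean of its
    4 neighbours. Known pixels act as Dirichlet boundary conditions."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()                      # crude initial guess
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                          # update unknowns only
    return out
```

On a smoothly varying face region this produces a seamless fill; the paper's GL additionally exploits mined exemplar data rather than boundary values alone.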


IEEE Journal of Selected Topics in Signal Processing | 2012

Noisy Depth Maps Fusion for Multiview Stereo Via Matrix Completion

Yue Deng; Yebin Liu; Qionghai Dai; Zengke Zhang; Yao Wang

This paper introduces a general framework to fuse noisy point clouds from multiview images of the same object. We solve this classical vision problem using a newly emerging signal processing technique known as matrix completion. Within this framework, we construct an initial incomplete matrix from the point clouds observed by all the cameras, with points invisible to any camera denoted as unknown entries. Observed points corresponding to the same object point are placed in the same row. When properly completed, the recovered matrix should have rank one, since all the columns describe the same object. Therefore, an intuitive approach to completing the matrix is to minimize its rank subject to consistency with the observed entries. To improve fusion accuracy, we propose a general noisy matrix completion method called log-sum penalty completion (LPC), which is particularly effective at removing outliers. Based on the majorization-minimization (MM) algorithm, the nonconvex LPC problem is effectively solved by a sequence of convex optimizations. Experimental results on both point cloud fusion and MVS reconstruction verify the effectiveness of the proposed framework and the LPC algorithm.
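The rank-one completion step can be sketched with a "hard-impute" style iteration: alternately fit the best rank-1 approximation by SVD and overwrite only the missing entries with it. `rank1_complete` is a hypothetical helper for illustration, not the paper's LPC algorithm (no log-sum penalty, no outlier handling).

```python
import numpy as np

def rank1_complete(M, n_iter=200):
    """Rank-one matrix completion by alternating imputation:
    missing entries are NaN; each iteration refits the best rank-1
    approximation of the current filled matrix and re-imputes only
    the unobserved entries from it."""
    obs = ~np.isnan(M)
    X = np.where(obs, M, np.nanmean(M))          # initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        R1 = s[0] * np.outer(U[:, 0], Vt[0])     # best rank-1 fit
        X = np.where(obs, M, R1)                 # keep observed entries fixed
    return R1
```

In the fusion setting, each row of the completed rank-one matrix then yields a single fused estimate of the corresponding object point.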


IEEE Transactions on Fuzzy Systems | 2017

A Hierarchical Fused Fuzzy Deep Neural Network for Data Classification

Yue Deng; Zhiquan Ren; Youyong Kong; Feng Bao; Qionghai Dai

Deep learning (DL) is an emerging and powerful paradigm that allows large-scale task-driven feature learning from big data. However, typical DL is a fully deterministic model that sheds no light on reducing data uncertainty. In this paper, we show how to introduce concepts from fuzzy learning into DL to overcome the shortcomings of fixed representations. The bulk of the proposed fuzzy system is a hierarchical deep neural network that derives information from both fuzzy and neural representations. The knowledge learnt from these two views is then fused to form the final data representation to be classified. The effectiveness of the model is verified on three practical tasks, image categorization, high-frequency financial data prediction, and brain MRI segmentation, all of which contain high levels of uncertainty in the raw data. The fuzzy DL paradigm greatly outperforms other nonfuzzy and shallow learning approaches on these tasks.
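The two-view fusion idea can be sketched as a single forward pass: a fuzzy view (here, Gaussian membership degrees, a common fuzzification choice assumed for illustration) and a neural view are computed in parallel, concatenated, and mapped to class scores. All weight shapes and names below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def fuzzy_features(x, centers, sigma=1.0):
    """Gaussian membership degrees of every input dimension with respect
    to a shared set of fuzzy-set centers (the 'fuzzy view' of the data)."""
    d = x[:, :, None] - centers[None, None, :]       # (batch, dim, n_sets)
    return np.exp(-0.5 * (d / sigma) ** 2).reshape(len(x), -1)

def fused_forward(x, centers, W_neural, W_fuzzy, W_out):
    """Forward pass of a minimal fused network: neural and fuzzy
    representations are computed separately, concatenated, then mapped
    to class scores by a fusion layer."""
    neural = np.tanh(x @ W_neural)                   # neural representation
    fuzzy = fuzzy_features(x, centers) @ W_fuzzy     # fuzzy representation
    fused = np.concatenate([neural, fuzzy], axis=1)  # fuse the two views
    return fused @ W_out                             # class scores
```

Training (backpropagating through both branches jointly) is omitted; the sketch only shows how the hierarchical fusion composes.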


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Visual Words Assignment Via Information-Theoretic Manifold Embedding

Yue Deng; Yipeng Li; Yanjun Qian; Xiangyang Ji; Qionghai Dai

Codebook-based learning provides a flexible way to extract the contents of an image in a data-driven manner for visual recognition. One central task in such frameworks is codeword assignment, which allocates local image descriptors to the most similar codewords in the dictionary to generate a histogram for categorization. Nevertheless, existing assignment approaches, e.g., the nearest-neighbor strategy (hard assignment) and Gaussian similarity (soft assignment), suffer from two problems: 1) an overly strong Euclidean assumption and 2) neglect of the label information of the local descriptors. To address these two challenges, we propose a graph assignment method with maximal mutual information (GAMI) regularization. GAMI harnesses the manifold structure to better reveal the relationships among a massive number of local features via a nonlinear graph metric. Meanwhile, the mutual information of descriptor-label pairs is optimized in the embedding space to enhance the discriminant property of the selected codewords. Following this objective, two optimization models, inexact-GAMI and exact-GAMI, are respectively proposed in this paper. The inexact model can be solved efficiently with a closed-form solution. The stricter exact-GAMI nonparametrically estimates the entropy of descriptor-label pairs in the embedding space and thus leads to a relatively complicated but still tractable optimization. The effectiveness of the GAMI models is verified on both public datasets and our own.
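The two baseline assignment strategies that GAMI improves upon can be written out directly. This is a minimal sketch of those baselines (not of GAMI itself), assuming squared-Euclidean distances and a Gaussian kernel with an illustrative bandwidth parameter `beta`.

```python
import numpy as np

def hard_assign(desc, codebook):
    """Nearest-neighbour (hard) assignment: each descriptor votes for
    exactly one codeword; the normalized counts form the histogram."""
    d2 = ((desc[:, None, :] - codebook[None]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def soft_assign(desc, codebook, beta=1.0):
    """Gaussian-similarity (soft) assignment: each descriptor spreads
    its vote over all codewords in proportion to exp(-beta * distance^2)."""
    d2 = ((desc[:, None, :] - codebook[None]) ** 2).sum(-1)
    w = np.exp(-beta * d2)
    w /= w.sum(1, keepdims=True)
    return w.sum(0) / len(desc)
```

Both rely purely on Euclidean geometry and ignore descriptor labels, which is exactly the pair of limitations the abstract identifies.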


IEEE Transactions on Neural Networks | 2017

Deep Direct Reinforcement Learning for Financial Signal Representation and Trading

Yue Deng; Feng Bao; Youyong Kong; Zhiquan Ren; Qionghai Dai

Can we train a computer to beat experienced traders at financial asset trading? In this paper, we address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biologically motivated learning concepts: deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. The RL module then interacts with the deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both deep and recurrent structures. Hence, we propose a task-aware backpropagation-through-time method to cope with the vanishing-gradient issue in deep training. The robustness of the neural system is verified on both stock and commodity futures markets under broad testing conditions.
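The reward the RL module accumulates can be illustrated by the standard direct-reinforcement profit function: position times the next price return, minus a transaction cost on every position change. This is a generic sketch of that textbook objective, not the paper's exact reward; the cost parameter is an illustrative assumption.

```python
import numpy as np

def trading_reward(positions, returns, cost=1e-4):
    """Cumulative trading profit of a position sequence.
    positions: held position at each step (e.g. -1 short, 0 flat, 1 long)
    returns:   price return realized over each step
    cost:      transaction cost charged per unit of position change."""
    positions = np.asarray(positions, float)
    returns = np.asarray(returns, float)
    turnover = np.abs(np.diff(positions, prepend=0.0))  # position changes
    return float(np.sum(positions * returns - cost * turnover))
```

A policy network would be trained to output `positions` that maximize this quantity, with gradients flowing back through time.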


Computer Vision and Image Understanding | 2012

Commute time guided transformation for feature extraction

Yue Deng; Qionghai Dai; Ruiping Wang; Zengke Zhang

This paper presents a random-walk-based feature extraction method called commute time guided transformation (CTG) in the graph embedding framework. The paper contributes to the field in two respects. First, it introduces the use of a robust probability metric, the commute time (CT), to extract visual features for face recognition in a manifold way. Second, it designs the CTG optimization to find linear orthogonal projections that implicitly preserve the commute time of high-dimensional data in a low-dimensional subspace. Compared with previous CT embedding algorithms, the proposed CTG is graph-independent. Existing CT embedding methods are graph-dependent and can only embed the data on the training graph into the subspace; in contrast, the CTG paradigm can project out-of-sample data into the same embedding space as the training graph. Moreover, CTG projections are robust to the graph topology, achieving good recognition performance in spite of different initial graph structures. Owing to these properties, when applied to face recognition, the proposed CTG method outperforms other state-of-the-art algorithms on benchmark datasets. In particular, it is highly efficient and effective at recognizing faces with noise.
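The commute time itself, the metric CTG preserves, has a well-known closed form via the Moore-Penrose pseudoinverse of the graph Laplacian: CT(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij). A small sketch of computing it (the metric, not the CTG projection):

```python
import numpy as np

def commute_times(W):
    """All-pairs commute times of a weighted undirected graph.
    W is the symmetric adjacency/weight matrix; the result uses
    CT(i,j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), with L+ the
    pseudoinverse of the graph Laplacian L = D - W."""
    d = W.sum(1)                    # node degrees
    L = np.diag(d) - W              # graph Laplacian
    Lp = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse
    vol = d.sum()                   # graph volume (2m for unweighted graphs)
    diag = np.diag(Lp)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)
```

Equivalently, CT(i, j) equals the graph volume times the effective resistance between i and j, which is why it is robust to individual spurious edges.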


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Free-Viewpoint Video of Human Actors Using Multiple Handheld Kinects

Genzhi Ye; Yebin Liu; Yue Deng; Nils Hasler; Xiangyang Ji; Qionghai Dai; Christian Theobalt

We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of the RGBZ data from all cameras as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic backgrounds, and it succeeds even if the cameras are moving.


IEEE Transactions on Image Processing | 2014

Joint Non-Gaussian Denoising and Superresolving of Raw High Frame Rate Videos

Jinli Suo; Yue Deng; Liheng Bian; Qionghai Dai

High-frame-rate cameras capture sharp videos of highly dynamic scenes by trading off signal-to-noise ratio and image resolution, so combined super-resolving and denoising is crucial for enhancing high-speed videos and extending their applications. The solution is nontrivial because the two deteriorations co-occur during capture and the noise depends nonlinearly on signal strength. To handle this problem, we propose conducting noise separation and super-resolution under a unified optimization framework, which models both the spatiotemporal priors of high-quality videos and the signal-dependent noise. Mathematically, we align the frames along the temporal axis and pursue the solution under three criteria: 1) the sharp noise-free image stack is low rank, with some missing pixels denoting occlusions; 2) the noise follows a given nonlinear noise model; and 3) the recovered sharp image can be reconstructed well with sparse coefficients and an overcomplete dictionary learned from high-quality natural images. Computationally, we obtain the final result by solving a convex optimization using modern local linearization techniques. In the experiments, we validate the proposed approach on both synthetic and real captured data.
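Criterion 3, reconstructing a patch from sparse coefficients over an overcomplete dictionary, can be illustrated with generic orthogonal matching pursuit. This is a textbook sketch, not the paper's solver; the dictionary here is random for illustration, whereas the paper learns it from high-quality natural images.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit): repeatedly
    pick the dictionary atom most correlated with the residual, then
    refit all selected coefficients jointly by least squares."""
    resid, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))   # best new atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef                       # update residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

The recovered sparse code `x` then reconstructs the signal as `D @ x`, the role sparse coefficients play in the paper's third criterion.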


PLOS ONE | 2013

Differences Help Recognition: A Probabilistic Interpretation

Yue Deng; Yanyu Zhao; Yebin Liu; Qionghai Dai

This paper presents a computational model that addresses a prominent psychological behavior humans use to recognize images. The basic pursuit of our method can be summarized as follows: differences among multiple images help visual recognition. Generally speaking, we propose a statistical framework to distinguish which image features capture sufficient category information and which are common features shared across multiple classes. Mathematically, the whole formulation is cast as a generative probabilistic model, with a discriminative functionality incorporated into the model to interpret the differences among the various kinds of images. The full Bayesian formulation is solved in an expectation-maximization paradigm. After finding the discriminative patterns among different images, we design an image categorization algorithm to interpret how these differences help visual recognition within the bag-of-features framework. The proposed method is verified on a variety of image categorization tasks, including outdoor scene images, indoor scene images, and airborne SAR images from different perspectives.
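The expectation-maximization paradigm the paper relies on can be illustrated on the simplest possible case, a two-component 1-D Gaussian mixture. This generic textbook sketch is not the paper's generative model; it only shows the E-step/M-step alternation.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Vanilla EM for a two-component 1-D Gaussian mixture.
    E-step: posterior responsibility of each component for each point.
    M-step: responsibility-weighted maximum-likelihood updates of the
    means, variances and mixing weights."""
    mu = np.array([x.min(), x.max()])            # spread-out initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step
        n = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
        pi = n / len(x)
    return mu, var, pi
```

In the paper the same alternation runs over a richer latent structure (discriminative vs. shared feature patterns) instead of two Gaussians.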

Collaboration


Dive into Yue Deng's collaborations.

Top Co-Authors

Ruiping Wang

Chinese Academy of Sciences
