Network


Zhenni Li's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhenni Li is active.

Publication


Featured research published by Zhenni Li.


Neural Computation | 2015

A fast algorithm for learning overcomplete dictionary for sparse representation based on proximal operators

Zhenni Li; Shuxue Ding; Yujie Li

We present a fast, efficient algorithm for learning an overcomplete dictionary for the sparse representation of signals. The whole problem is considered as the minimization of an approximation error function with a coherence penalty on the dictionary atoms and a sparsity regularization on the coefficient matrix. Because the problem is nonconvex and nonsmooth, this minimization cannot be solved efficiently by an ordinary optimization method. We propose a decomposition scheme and an alternating optimization that turn the problem into a set of piecewise quadratic, univariate subproblems, each over a single vector variable: either one dictionary atom or one coefficient vector. Although the subproblems are still nonsmooth, they become much simpler, and we can find closed-form solutions by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to a problem with an incoherence term, and obtaining the optimal dictionary atoms in closed form with a proximal operator technique, have not previously been studied. The main advantages of the proposed algorithm are that, as suggested by our analysis and simulation study, it has lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, in real applications it shows good performance and significant reductions in computational time.
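
To make the alternating scheme concrete, here is a minimal sketch, not the authors' published code, of this style of single-vector update: the residual is formed with one atom's contribution removed, the coefficient row is updated in closed form through a proximal operator (the ℓ1 prox, i.e. soft thresholding, stands in for the paper's regularizer, and the coherence penalty is omitted), and the atom is refit in closed form and renormalized. All function names are illustrative.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1: closed-form shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def alternating_sweep(Y, D, X, lam):
    # One sweep of single-vector updates over atoms and coefficient rows.
    # Atoms are kept unit-norm, so each quadratic subproblem has unit
    # curvature and the prox step is exact.
    for k in range(D.shape[1]):
        # Residual with atom k's contribution removed.
        R = Y - D @ X + np.outer(D[:, k], X[k, :])
        # Coefficient row k: least-squares fit followed by the l1 prox.
        X[k, :] = soft_threshold(D[:, k] @ R, lam)
        # Dictionary atom k: closed-form least-squares direction, renormalized.
        if np.linalg.norm(X[k, :]) > 0:
            d = R @ X[k, :]
            D[:, k] = d / np.linalg.norm(d)
    return D, X
```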


Digital Signal Processing | 2016

Dictionary learning with the cosparse analysis model based on summation of blocked determinants as the sparseness measure

Yujie Li; Shuxue Ding; Zhenni Li

Dictionary learning is crucially important for the sparse representation of signals. Most existing methods are based on the so-called synthesis model, in which the dictionary is column redundant. This paper addresses dictionary learning and sparse representation with the so-called analysis model, in which multiplying the signal by the analysis dictionary leads to a sparse outcome. Though the analysis model has been studied in the literature, it has not yet been investigated in the context of dictionary learning for nonnegative signal representation, and algorithms designed for general signals prove insufficient when applied to nonnegative signals. In this paper, for more efficient dictionary learning, we propose a novel cost function termed the summation of blocked determinants measure of sparseness (SBDMS). Based on this measure, a new analysis sparse model is derived, and an iterative sparseness-maximization scheme is proposed to solve it. In this scheme, the analysis sparse representation problem is cast as row-by-row optimizations with respect to the analysis dictionary, and the quadratic programming (QP) technique is then used to optimize each row. We thereby obtain an algorithm for dictionary learning and sparse representation for nonnegative signals. Numerical experiments on recovery of the analysis dictionary show the effectiveness of the proposed method.
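
As an illustration of the row-by-row structure described above, here is a hedged sketch. The SBDMS-derived per-row cost is not reproduced; a generic quadratic objective with placeholder terms Q and B stands in for it, and a unit-norm equality constraint keeps rows away from the zero solution. Function names are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def update_row(Q, b, w0):
    # Placeholder per-row QP: min_w 0.5 w^T Q w - b^T w  s.t. ||w||_2 = 1.
    cons = {"type": "eq", "fun": lambda w: w @ w - 1.0}
    obj = lambda w: 0.5 * w @ Q @ w - b @ w
    return minimize(obj, w0, constraints=cons).x

def row_by_row_sweep(Omega, Q, B):
    # Fix all rows of the analysis dictionary Omega except one, update
    # that row by the QP above, and sweep over all rows in turn.
    for i in range(Omega.shape[0]):
        Omega[i] = update_row(Q, B[i], Omega[i])
    return Omega
```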


International Conference on Digital Signal Processing | 2014

A dictionary-learning algorithm for the analysis sparse model with a determinant-type of sparsity measure

Yujie Li; Shuxue Ding; Zhenni Li

Dictionary learning for the sparse representation of signals has been successfully applied in signal processing. Most of the existing methods are based on the synthesis model, in which the dictionary is overcomplete. This paper addresses dictionary learning and sparse representation with the so-called analysis model, in which multiplying the signal by the analysis dictionary leads to a sparse outcome. Though the analysis model has been studied in the literature, it has not yet been investigated in the context of nonnegative signal representation, which is not a trivial problem. In this paper, we propose to learn an analysis dictionary from signals using a determinant-type sparsity measure, adopting the Euclidean distance as the error measure in the formulation. On this basis, we present a new algorithm for dictionary learning and sparse representation. Numerical experiments on recovery of the analysis dictionary show the effectiveness of the proposed method.
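
To illustrate why a determinant can act as a sparsity measure for nonnegative data, here is a small hedged example; the paper's exact formulation may differ. Rows of the coefficient matrix are ℓ2-normalized, and the determinant of their Gram matrix equals 1 exactly when the rows have disjoint supports, i.e. when the representation is maximally sparse, and is smaller otherwise.

```python
import numpy as np

def determinant_sparseness(X, eps=1e-12):
    # Determinant of the Gram matrix of row-normalized nonnegative X.
    norms = np.linalg.norm(X, axis=1, keepdims=True) + eps
    Xn = X / norms
    return np.linalg.det(Xn @ Xn.T)

# Disjoint supports give 1.0; overlapping supports give a smaller value.
sparse = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
dense = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
print(determinant_sparseness(sparse))  # ~1.0
print(determinant_sparseness(dense))   # 0.75
```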


Abstract and Applied Analysis | 2013

Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent

Zunyi Tang; Shuxue Ding; Zhenni Li; Linlin Jiang

Sparse representation of signals via an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since nonnegativity of both the signals and the dictionary is required in some applications, for example multispectral data analysis, conventional dictionary learning methods with nonnegativity simply imposed may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such cases. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case for parallel processing, we develop a parallel coordinate descent dictionary learning (PCDDL) algorithm, which iteratively alternates between two optimization problems: learning the dictionary and estimating the coefficients that construct the signals. Numerical experiments demonstrate that the proposed algorithm performs better than the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison. Moreover, its computational cost is remarkably lower than that of the compared algorithms.
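
The following is a minimal sketch, not the published PCDDL code, of coordinate descent for sparsity-constrained NMF, assuming for illustration the objective 0.5*||Y - DX||_F^2 + lam*||X||_1 with nonnegative D and X. Each coordinate has a closed-form update, and the update for one row of X (or one atom of D) is applied to all of its entries at once, which reflects the parallel aspect the abstract refers to.

```python
import numpy as np

def pcd_nmf_sweep(Y, D, X, lam):
    # One sweep of row-parallel coordinate updates for X, then for D.
    G = D.T @ D                       # Gram matrix of atoms
    H = D.T @ Y
    for k in range(X.shape[0]):       # row k of X, all columns at once
        grad_k = H[k] - G[k] @ X      # negative gradient of the fit term
        X[k] = np.maximum(0.0, X[k] + (grad_k - lam) / (G[k, k] + 1e-12))
    E = Y @ X.T
    F = X @ X.T
    for k in range(D.shape[1]):       # atom k of D, all entries at once
        d = D[:, k] + (E[:, k] - D @ F[:, k]) / (F[k, k] + 1e-12)
        d = np.maximum(0.0, d)
        D[:, k] = d / (np.linalg.norm(d) + 1e-12)
    return D, X
```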


2013 IEEE International Conference on Cybernetics (CYBCO) | 2013

Dictionary learning by nonnegative matrix factorization with ℓ1/2-norm sparsity constraint

Zhenni Li; Zunyi Tang; Shuxue Ding

In this paper, we propose an overcomplete, nonnegative dictionary learning method for the sparse representation of signals, based on nonnegative matrix factorization (NMF) with the ℓ1/2-norm as the sparsity constraint. By introducing the ℓ1/2-norm as the sparsity constraint in NMF, we show that the problem can be cast as sequential optimization problems over quadratic and quartic functions. Each quadratic optimization problem can be solved easily, since it has a unique closed-form solution. Each quartic optimization problem can be reduced to solving a cubic equation, which is handled efficiently by the Cardano formula, with one of the roots selected according to a rule. To implement this nonnegative dictionary learning, we develop an algorithm employing a coordinate-wise descent strategy, i.e., coordinate-wise descent based nonnegative dictionary learning (CDNDL). Numerical experiments show that the proposed algorithm performs better than nonnegative K-SVD (NN-KSVD) and the two other compared algorithms.
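
As a worked example of the quartic-to-cubic reduction, consider the scalar subproblem of minimizing 0.5*c*(x - z)^2 + lam*sqrt(x) over x >= 0, which is the form an ℓ1/2 penalty induces per coordinate (the constants c, z, lam here are illustrative, not the paper's notation). Substituting t = sqrt(x) turns the stationarity condition into the cubic 2c*t^3 - 2c*z*t + lam = 0; in the sketch below, np.roots stands in for the Cardano formula, and the selection rule is simply to take the feasible root with the lowest objective, compared against x = 0.

```python
import numpy as np

def half_penalty_min(z, c, lam):
    # Minimize 0.5*c*(x - z)**2 + lam*sqrt(x) over x >= 0.
    f = lambda x: 0.5 * c * (x - z) ** 2 + lam * np.sqrt(x)
    # Roots of the cubic in t = sqrt(x): 2c t^3 + 0 t^2 - 2cz t + lam = 0.
    roots = np.roots([2.0 * c, 0.0, -2.0 * c * z, lam])
    candidates = [0.0]
    for t in roots:
        if abs(t.imag) < 1e-10 and t.real > 0:
            candidates.append(t.real ** 2)
    return min(candidates, key=f)

print(half_penalty_min(z=1.0, c=1.0, lam=0.1))  # shrunken toward 0
```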


Neural Networks | 2018

Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer

Zhenni Li; Shuxue Ding; Yujie Li; Zuyuan Yang; Shengli Xie; Wuhui Chen

Recently, analysis dictionary learning has received increasing attention. In analysis dictionary learning, it is an open problem to obtain strongly sparsity-promoting solutions efficiently while avoiding trivial solutions of the dictionary. In this paper, to obtain strongly sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. Recent work on ℓ1∕2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform the resulting complex nonconvex optimization into a number of one-dimensional minimization problems, whose closed-form solutions can then be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that trivial solutions are avoided while the intrinsic properties of the dictionary are captured. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning not only obtains strongly sparsity-promoting solutions efficiently, but also learns a more accurate dictionary than state-of-the-art algorithms in terms of dictionary recovery and image processing.
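
For intuition about updating a dictionary directly on the manifold defined by the orthonormality constraint, here is a minimal sketch, not the authors' implementation, of one Riemannian gradient step on the Stiefel manifold {W : W^T W = I}: the Euclidean gradient is projected onto the tangent space, a step is taken, and a QR-based retraction maps the iterate back onto the manifold, so degenerate (trivial) dictionaries are excluded by construction.

```python
import numpy as np

def stiefel_step(W, euclid_grad, step):
    # Tangent-space projection: G - W * sym(W^T G).
    WtG = W.T @ euclid_grad
    riem_grad = euclid_grad - W @ (WtG + WtG.T) / 2.0
    # Retraction: Q factor of a thin QR of the updated point.
    Q, R = np.linalg.qr(W - step * riem_grad)
    # Fix QR's sign ambiguity so the retraction is continuous.
    return Q * np.sign(np.diag(R))
```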


International Journal of Machine Learning and Cybernetics | 2018

Dictionary learning with the ℓ1/2-regularizer and the coherence penalty and its convergence analysis

Zhenni Li; Takafumi Hayashi; Shuxue Ding; Yujie Li


International Conference on Digital Signal Processing | 2015

Dictionary learning with log-regularizer for sparse representation

Zhenni Li; Shuxue Ding; Yujie Li


International Conference on Signal and Information Processing | 2014

Improving dictionary learning using the Itakura-Saito divergence

Zhenni Li; Shuxue Ding; Yujie Li; Zunyi Tang; Wuhui Chen


Neurocomputing | 2017

Analysis dictionary learning using block coordinate descent framework with proximal operators

Zhenni Li; Shuxue Ding; Takafumi Hayashi; Yujie Li

Collaboration


Dive into Zhenni Li's collaboration.

Top Co-Authors

Xiang Li
Xi'an Jiaotong University

Shengli Xie
Guangdong University of Technology

Zuyuan Yang
Guangdong University of Technology