Luca Zanni
University of Modena and Reggio Emilia
Publication
Featured research published by Luca Zanni.
Inverse Problems | 2009
Silvia Bonettini; Riccardo Zanella; Luca Zanni
A class of scaled gradient projection methods for optimization problems with simple constraints is considered. These iterative algorithms can be useful in variational approaches to image deblurring that lead to the minimization of convex nonlinear functions subject to non-negativity constraints and, in some cases, to an additional flux conservation constraint. A special gradient projection method is introduced that exploits effective scaling strategies and steplength updating rules, appropriately designed for improving the convergence rate. We give convergence results for this scheme and we evaluate its effectiveness by means of an extensive computational study on the minimization problems arising from the maximum likelihood approach to image deblurring. Comparisons with the standard expectation maximization algorithm and with other iterative regularization schemes are also reported to show the computational gain provided by the proposed method.
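As a rough illustration of the kind of scheme the abstract describes, the sketch below applies scaled gradient projection steps to maximum-likelihood Poisson deblurring, i.e. minimizing the Kullback-Leibler divergence KL(y, Hx) under non-negativity. The dense blur matrix H, the fixed steplength, and the Richardson-Lucy-type diagonal scaling are illustrative assumptions; the published method additionally uses Barzilai-Borwein steplengths and a line search, which are omitted here.

```python
import numpy as np

def kl_gradient(H, x, y, eps=1e-12):
    """Gradient of KL(y, Hx) with respect to x:  H^T (1 - y / (H x))."""
    Hx = H @ x
    return H.T @ (1.0 - y / np.maximum(Hx, eps))

def sgp_deblur(H, y, x0, n_iter=200, alpha=1.0, eps=1e-12):
    """Scaled gradient projection with the diagonal scaling D = diag(x / H^T 1)
    and projection onto the non-negative orthant.  With alpha = 1 this reduces
    to the classical Richardson-Lucy/EM update."""
    x = x0.astype(float).copy()
    ht1 = H.T @ np.ones(H.shape[0])              # column sums of H
    for _ in range(n_iter):
        g = kl_gradient(H, x, y, eps)
        d = x / np.maximum(ht1, eps)             # diagonal scaling
        x = np.maximum(x - alpha * d * g, 0.0)   # scaled step + projection
    return x

# Tiny synthetic example (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
H = rng.random((30, 20)); H /= H.sum(axis=0)     # column-normalized "blur"
x_true = rng.random(20)
y = rng.poisson(50.0 * (H @ x_true)).astype(float)
x_rec = sgp_deblur(H, y, x0=np.ones(20))
```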
Parallel Computing | 2003
Gaetano Zanghirati; Luca Zanni
This work is concerned with the solution of the convex quadratic programming problem arising in training the learning machines known as support vector machines. The problem is subject to box constraints and to a single linear equality constraint; it is dense and, for many practical applications, it becomes a large-scale problem. Thus, approaches based on explicit storage of the matrix of the quadratic form are not practicable. Here we present an easily parallelizable approach based on a decomposition technique that splits the problem into a sequence of smaller quadratic programming subproblems. These subproblems are solved by a variable projection method that is well suited to a parallel implementation and is very effective in the case of Gaussian support vector machines. Performance results are presented on well-known large-scale test problems, in scalar and parallel environments. The numerical results show that the approach is comparable on scalar machines with a widely used technique and can achieve good efficiency and scalability on a multiprocessor system.
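For orientation, the sketch below shows the feasible-set structure these subproblems share: a box intersected with a single hyperplane, onto which the projection can be computed by bisection on the multiplier of the equality constraint. The function names, the constant steplength, and the bisection scheme are illustrative assumptions, not the variable projection method of the paper, which uses more sophisticated steplength rules.

```python
import numpy as np

def project_box_hyperplane(z, b, r, lo, hi, tol=1e-10, max_bisect=200):
    """Projection of z onto {x : lo <= x <= hi, b @ x = r} by bisection on the
    Lagrange multiplier of the equality constraint (feasible set assumed
    non-empty)."""
    x_of = lambda lam: np.clip(z - lam * b, lo, hi)
    lam_lo, lam_hi = -1.0, 1.0
    while b @ x_of(lam_lo) < r:        # phi(lam) = b @ x(lam) is non-increasing
        lam_lo *= 2.0
    while b @ x_of(lam_hi) > r:
        lam_hi *= 2.0
    for _ in range(max_bisect):
        lam = 0.5 * (lam_lo + lam_hi)
        if b @ x_of(lam) > r:
            lam_lo = lam
        else:
            lam_hi = lam
        if lam_hi - lam_lo < tol:
            break
    return x_of(0.5 * (lam_lo + lam_hi))

def gradient_projection_qp(Q, c, b, r, lo, hi, x0, n_iter=500):
    """Plain projected-gradient sketch for min 0.5 x'Qx + c'x over the box and
    the single equality constraint (constant steplength 1/||Q||_2)."""
    x = x0.astype(float).copy()
    alpha = 1.0 / np.linalg.norm(Q, 2)
    for _ in range(n_iter):
        x = project_box_hyperplane(x - alpha * (Q @ x + c), b, r, lo, hi)
    return x
```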
Inverse Problems | 2009
Riccardo Zanella; Patrizia Boccacci; Luca Zanni; M. Bertero
Several methods based on different image models have been proposed and developed for image denoising. Some of them, such as total variation (TV) and wavelet thresholding, are based on the assumption of additive Gaussian noise. Recently the TV approach has been extended to the case of Poisson noise, a model describing the effect of photon counting in applications such as emission tomography, microscopy and astronomy. For the removal of this kind of noise we consider an approach based on a constrained optimization problem, with an objective function describing TV and other edge-preserving regularizations of the Kullback–Leibler divergence. We introduce a new discrepancy principle for the choice of the regularization parameter, which is justified by the statistical properties of the Poisson noise. For solving the optimization problem we propose a particular form of a general scaled gradient projection (SGP) method, recently introduced for image deblurring. We derive the form of the scaling from a decomposition of the gradient of the regularization functional into a positive and a negative part. The beneficial effect of the scaling is proved by means of numerical simulations, showing that the performance of the proposed form of SGP is superior to that of the most efficient gradient projection methods. An extended numerical analysis of the dependence of the solution on the regularization parameter is also performed to test the effectiveness of the proposed discrepancy principle.
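A minimal sketch of the scaling idea described above, for the pure denoising case (identity imaging operator): write the regularizer's gradient as a difference of non-negative parts, V_R - U_R, and collect the positive parts in the diagonal scaling. The function names and the unit steplength are assumptions for illustration; the actual splittings for TV and the other edge-preserving regularizations are derived in the paper.

```python
import numpy as np

def sgp_denoise_step(x, y, beta, V_R, U_R, alpha=1.0, eps=1e-12):
    """One scaled gradient projection step for min KL(y, x) + beta * R(x),
    x >= 0, where grad R(x) = V_R(x) - U_R(x) with V_R, U_R >= 0 (the splitting
    is problem-specific and supplied by the caller).  The scaling gathers the
    positive parts of the gradient: D = diag(x / (1 + beta * V_R(x)))."""
    grad = (1.0 - y / np.maximum(x, eps)) + beta * (V_R(x) - U_R(x))
    scale = x / (1.0 + beta * V_R(x))
    return np.maximum(x - alpha * scale * grad, 0.0)

# With beta = 0 and alpha = 1 a single step returns y, the maximum-likelihood
# estimate for pure Poisson denoising, which is a quick sanity check:
y = np.array([3.0, 0.0, 7.0])
x = sgp_denoise_step(np.ones(3), y, beta=0.0,
                     V_R=lambda x: np.zeros_like(x),
                     U_R=lambda x: np.zeros_like(x))
assert np.allclose(x, y)
```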
Optimization Methods & Software | 2005
Thomas Serafini; Gaetano Zanghirati; Luca Zanni
Gradient projection methods based on the Barzilai–Borwein spectral steplength choices are considered for quadratic programming (QP) problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. For both approaches, the behavior of different combinations of the two spectral steplengths is investigated. A new adaptive steplength alternating rule is proposed, which becomes the basis for a generalized version of the variable projection method (GVPM). Convergence results are given for the proposed approach and its effectiveness is shown by means of an extensive computational study on several test problems, including the special quadratic programs arising in training support vector machines (SVMs). Finally, the GVPM behavior as inner QP solver in decomposition techniques for large-scale SVMs is also evaluated.
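The two spectral steplengths discussed above can be written down compactly; the sketch below computes both Barzilai-Borwein values from one pair of iterate and gradient differences and applies a simple fixed-threshold switch between them. The threshold tau and the switching rule are placeholders: the adaptive alternating rule proposed in the paper is more elaborate.

```python
import numpy as np

def bb_steplengths(s, t, eps=1e-12):
    """Barzilai-Borwein spectral steplengths from s = x_k - x_{k-1} and
    t = g_k - g_{k-1}:  alpha_BB1 = (s's)/(s't) >= alpha_BB2 = (s't)/(t't)."""
    st = s @ t
    return (s @ s) / max(st, eps), st / max(t @ t, eps)

def switch_rule(a1, a2, tau=0.5):
    """Fixed-threshold alternation: take the shorter step alpha_BB2 when the
    two rules disagree strongly (ratio below tau), the longer alpha_BB1
    otherwise."""
    return a2 if a2 / a1 < tau else a1
```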
Inverse Problems | 2010
M. Bertero; Patrizia Boccacci; G. Talenti; Riccardo Zanella; Luca Zanni
In applications of imaging science, such as emission tomography, fluorescence microscopy and optical/infrared astronomy, image intensity is measured via the counting of incident particles (photons, γ-rays, etc). Fluctuations in the emission-counting process can be described by modeling the data as realizations of Poisson random variables (Poisson data). A maximum-likelihood approach for image reconstruction from Poisson data was proposed in the mid-1980s. Since the consequent maximization problem is, in general, ill-conditioned, various kinds of regularizations were introduced in the framework of the so-called Bayesian paradigm. A modification of the well-known Tikhonov regularization strategy results in the data-fidelity function being a generalized Kullback–Leibler divergence. Then a relevant issue is to find rules for selecting a proper value of the regularization parameter. In this paper we propose a criterion, nicknamed discrepancy principle for Poisson data, that applies to both denoising and deblurring problems and fits quite naturally the statistical properties of the data. The main purpose of the paper is to establish conditions, on the data and the imaging matrix, ensuring that the proposed criterion does actually provide a unique value of the regularization parameter for various classes of regularization functions. A few numerical experiments are performed to demonstrate its effectiveness. More extensive numerical analysis and comparison with other proposed criteria will be the object of future work.
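To make the criterion concrete, the following sketch computes the generalized Kullback-Leibler discrepancy and searches for the regularization parameter at which it equals one. The placeholder solve_reg(lam), the log-scale bisection, and the assumption that the discrepancy is monotone in lambda are illustrative and not part of the paper's statement.

```python
import numpy as np

def kl_divergence(y, m, eps=1e-12):
    """Generalized Kullback-Leibler divergence KL(y, m) = sum y*log(y/m) + m - y
    (terms with y = 0 contribute only m)."""
    m = np.maximum(m, eps)
    logs = np.where(y > 0, y * np.log(np.maximum(y, eps) / m), 0.0)
    return np.sum(logs + m - y)

def poisson_discrepancy(y, model):
    """Discrepancy function D(lambda) = 2 * KL(y, model) / n; the principle
    selects lambda such that D(lambda) is approximately 1."""
    return 2.0 * kl_divergence(y, model) / y.size

def select_lambda(y, solve_reg, lam_lo=1e-6, lam_hi=1e2, n_bisect=40):
    """Log-scale bisection for D(lambda) = 1.  `solve_reg(lam)` stands for any
    regularized reconstruction routine returning the predicted data H @ x_lam
    (+ background); it is a placeholder, and D is assumed increasing in lambda."""
    for _ in range(n_bisect):
        lam = np.sqrt(lam_lo * lam_hi)
        if poisson_discrepancy(y, solve_reg(lam)) < 1.0:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.sqrt(lam_lo * lam_hi)
```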
Astronomy and Astrophysics | 2012
Marco Prato; Roberto Cavicchioli; Luca Zanni; Patrizia Boccacci; M. Bertero
Context. The Richardson-Lucy method is the most popular deconvolution method in astronomy because it preserves the number of counts and the non-negativity of the original object. Regularization is, in general, obtained by an early stopping of Richardson-Lucy iterations. In the case of point-wise objects such as binaries or open star clusters, iterations can be pushed to convergence. However, it is well-known that Richardson-Lucy is an inefficient method. In most cases and, in particular, for low noise levels, acceptable solutions are obtained at the cost of hundreds or thousands of iterations, and thus several approaches to accelerating Richardson-Lucy have been proposed. They are mainly based on the fact that Richardson-Lucy is a scaled gradient method for the minimization of the Kullback-Leibler divergence, or Csiszar I-divergence, which represents the data-fidelity function in the case of Poisson noise. In this framework, a line search along the descent direction is considered for reducing the number of iterations. Aims. A general optimization method, referred to as the scaled gradient projection method, has been proposed for the constrained minimization of continuously differentiable convex functions. It is applicable to the non-negative minimization of the Kullback-Leibler divergence. If the scaling suggested by Richardson-Lucy is used in this method, then it provides a considerable increase in the efficiency of Richardson-Lucy. Therefore the aim of this paper is to apply the scaled gradient projection method to a number of imaging problems in astronomy such as single image deconvolution, multiple image deconvolution, and boundary effect correction. Methods. Deconvolution methods are proposed by applying the scaled gradient projection method to the minimization of the Kullback-Leibler divergence for the imaging problems mentioned above, and the corresponding algorithms are derived and implemented in the Interactive Data Language (IDL). For all the algorithms, several stopping rules are introduced, including one based on a recently proposed discrepancy principle for Poisson data. To attempt to achieve a further increase in efficiency, we also consider an implementation on graphics processing units. Results. The proposed algorithms are tested on simulated images. The acceleration of scaled gradient projection methods achieved with respect to the corresponding Richardson-Lucy methods strongly depends on both the problem and the specific object to be reconstructed, and in our simulations the improvement achieved ranges from about a factor of 4 to more than 30. Moreover, speed-ups of up to two orders of magnitude have been observed when passing from the serial to the parallel implementations of the algorithms. The codes are available upon request.
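For reference, the classical Richardson-Lucy iteration that the scaled gradient projection method accelerates can be sketched as below for a periodic (FFT-based) blur model; the centered, unit-sum PSF of the same shape as the image is an assumption of this sketch, and the SGP steplength and projection machinery of the paper are not reproduced.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=100, eps=1e-12):
    """Classical Richardson-Lucy for y ~ Poisson(H x), with H a periodic
    convolution by `psf` (same shape as y, centered, summing to one)."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf))
    blur = lambda x: np.fft.irfft2(np.fft.rfft2(x) * otf, s=y.shape)
    blur_t = lambda x: np.fft.irfft2(np.fft.rfft2(x) * np.conj(otf), s=y.shape)
    x = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_iter):
        x *= blur_t(y / np.maximum(blur(x), eps))   # multiplicative RL update
    return x
```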
Scientific Reports | 2013
Riccardo Zanella; Gaetano Zanghirati; Roberto Cavicchioli; Luca Zanni; Patrizia Boccacci; M. Bertero; Giuseppe Vicidomini
Although deconvolution can improve the quality of images from any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most commonly used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors ranging from about 25 to 690 with respect to the conventional algorithms. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Furthermore, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction-resolution recordings.
Optimization Methods & Software | 2005
Thomas Serafini; Luca Zanni
This work deals with special decomposition techniques for the large quadratic program arising in training support vector machines. These approaches split the problem into a sequence of quadratic programming (QP) subproblems which can be solved by recently proposed, efficient gradient projection methods. Owing to their ability to decompose the problem into much larger subproblems than standard decomposition packages do, these techniques show promising performance and are well suited for parallelization. Here, we discuss a crucial aspect for their effectiveness: the selection of the working set, that is, the index set of the variables to be optimized at each step through the QP subproblem. We analyze the most popular working set selections and develop a new selection strategy that improves the convergence rate of decomposition schemes based on large working sets. The effectiveness of the proposed strategy within the gradient projection-based decomposition techniques is shown by numerical experiments on large benchmark problems, both in serial and in parallel environments.
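As a point of comparison for the selection strategies discussed above, the sketch below implements a generic maximal-violation working set selection for the SVM dual, picking half of the q indices from the "up" index set and half from the "down" index set according to the KKT violation scores. This is only the standard baseline rule; the strategy proposed in the paper refines the choice for large working sets.

```python
import numpy as np

def select_working_set(alpha, grad, y, C, q):
    """Generic maximal-violation selection for the SVM dual
    min 0.5 a'Qa - e'a,  0 <= a <= C,  y'a = 0, with grad = Q @ alpha - 1.
    Returns up to q indices, taken from the most KKT-violating variables."""
    v = -y * grad                                            # violation score
    up = np.where(((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0)))[0]
    down = np.where(((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0)))[0]
    top = up[np.argsort(-v[up])][: q // 2]       # largest scores in the up set
    bottom = down[np.argsort(v[down])][: q // 2] # smallest scores in the down set
    return np.union1d(top, bottom)
```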
Computational Management Science | 2006
Luca Zanni
In this paper we propose some improvements to a recent decomposition technique for the large quadratic program arising in training support vector machines. As with standard decomposition approaches, the technique we consider is based on the idea of optimizing, at each iteration, a subset of the variables through the solution of a quadratic programming subproblem. The innovative features of this approach consist in using a very effective gradient projection method for the inner subproblems and a special rule for selecting the variables to be optimized at each step. These features make it possible to obtain promising performance by decomposing the problem into a few large subproblems instead of the many small subproblems usually employed by other decomposition schemes. We improve this technique by introducing a new inner solver and a simple strategy for reducing the computational cost of each iteration. We evaluate the effectiveness of these improvements by solving large-scale benchmark problems and by comparison with a widely used decomposition package.
Journal of Optimization Theory and Applications | 2000
Valeria Ruggiero; Luca Zanni
In this paper, we propose a modified projection-type method for solving strictly convex quadratic programs. This iterative scheme essentially requires the solution of an easy quadratic programming subproblem and a matrix-vector multiplication at each iteration. The main feature of the method consists in updating the Hessian matrix of the subproblems by means of a convenient scaling parameter. The convergence of the scheme is obtained by introducing a correction formula for the solution of the subproblems and very weak conditions on the scaling parameter. A practical, inexpensive updating rule for the scaling parameter is suggested. Numerical experiments allow this approach to be compared with some classical projection-type methods and its effectiveness as a solver of large, very sparse quadratic programs to be evaluated.
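The basic splitting idea behind projection-type methods for a box-constrained strictly convex QP can be sketched as follows: replace the Hessian by a scaled diagonal at each iteration, so the subproblem becomes separable and is solved by clipping. The diagonal choice, the fixed scaling parameter rho, and the box constraints are assumptions of this sketch; the paper's method handles more general simple constraints and adds the correction formula and the automatic update of the scaling parameter.

```python
import numpy as np

def projection_type_qp(G, c, lo, hi, x0, rho=1.0, n_iter=1000):
    """Basic projection/splitting iteration for min 0.5 x'Gx + c'x, lo <= x <= hi:
    each step solves the separable subproblem with Hessian D = diag(G)/rho in
    closed form.  It converges when 2*D - G is positive definite (e.g. rho = 1
    and G symmetric diagonally dominant); otherwise a smaller rho is needed."""
    x = x0.astype(float).copy()
    d = np.diag(G) / rho
    for _ in range(n_iter):
        g = G @ x + c                        # one matrix-vector product per step
        x = np.clip(x - g / d, lo, hi)       # closed-form diagonal subproblem
    return x
```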