Ivan Gerace
University of Perugia
Publications
Featured research published by Ivan Gerace.
CVGIP: Graphical Models and Image Processing | 1994
Luigi Bedini; Ivan Gerace; Anna Tonazzini
Abstract: The most common approach for incorporating discontinuities in visual reconstruction problems makes use of Bayesian techniques, based on Markov random field models, coupled with stochastic relaxation and simulated annealing. Despite their convergence properties and flexibility in exploiting a priori knowledge on physical and geometric features of discontinuities, stochastic relaxation algorithms often present insurmountable computational complexity. Recently, considerable attention has been given to suboptimal deterministic algorithms, which can provide solutions with much lower computational costs. These algorithms consider the discontinuities implicitly rather than explicitly and have been mostly derived when there are no interactions between two or more discontinuities in the image model. In this paper we propose an algorithm that allows for interacting discontinuities, in order to exploit the constraint that discontinuities must be connected and thin. The algorithm, called E-GNC, can be considered an extension of the graduated nonconvexity (GNC), first proposed by Blake and Zisserman for noninteracting discontinuities. When applied to the problem of image reconstruction from sparse and noisy data, the method is shown to give satisfactory results with a low number of iterations.
IEEE Transactions on Image Processing | 2010
Anna Tonazzini; Ivan Gerace; Francesca Martinelli
In this paper, we apply Bayesian blind source separation (BSS) from noisy convolutive mixtures to jointly separate and restore source images degraded through unknown blur operators and then linearly mixed. This problem arises in several image processing applications, among which are some interesting instances of degraded document analysis. In particular, the convolutive mixture model is proposed for describing multiple views of documents affected by the overlapping of two or more text patterns. We consider two different models: the interchannel model, where the data represent multispectral views of a single-sided document, and the intrachannel model, where the data are given by two sets of multispectral views of the recto and verso side of a document page. In both cases, the aim of the analysis is not only to recover clean maps of the main foreground text, but also to enhance and extract other document features, such as faint or masked patterns. We adopt Bayesian estimation for all the unknowns and describe the typical local correlation within the individual source images through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. This a priori information is particularly suitable for the kind of content depicted in the images treated, i.e., homogeneous text on a homogeneous background, and, as such, is capable of stabilizing the ill-posed inverse problem considered. The method is validated through numerical and real experiments that are representative of various real scenarios.
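The convolutive mixture model underlying both the interchannel and intrachannel settings can be sketched numerically. The code below illustrates only the forward (data-generation) model, not the authors' Bayesian separation method; the function names and the use of periodic (circular) convolution via the FFT are simplifying assumptions made for brevity.

```python
import numpy as np

def conv2_circ(img, ker):
    """Circular 2-D convolution via the FFT (periodic boundary; sketch only)."""
    K = np.fft.fft2(ker, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def convolutive_mixture(sources, kernels, noise_std=0.0, seed=0):
    """Convolutive mixing model: x_i = sum_j h[i][j] * s_j + n_i.

    `sources` is a list of 2-D arrays, `kernels[i][j]` is the blur
    relating source j to observation i, and n_i is optional white noise.
    """
    rng = np.random.default_rng(seed)
    observations = []
    for row in kernels:
        x = sum(conv2_circ(s, h) for s, h in zip(sources, row))
        observations.append(x + noise_std * rng.standard_normal(x.shape))
    return observations
```

With delta kernels the model reduces to an instantaneous linear mixture, which is the special case treated in the authors' earlier separation work.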
Pattern Recognition Letters | 1994
Luigi Bedini; Ivan Gerace; Anna Tonazzini
Abstract: Image reconstruction is formulated as the problem of minimizing a non-convex functional F(f) in which the smoothness stabilizer implicitly refers to a continuous-valued line process. Typical functionals proposed in the literature are considered. The minimum of F(f) is computed using a GNC algorithm that employs a sequence F^(p)(f) of approximating functionals for F(f), to be minimized in turn by gradient descent techniques. Simulation results show that GNC algorithms are computationally more efficient than simulated annealing algorithms, even when the latter are implemented in a simplified form. A comparison between the performance of these functionals and that of a functional that refers to an implicit binary line process is also carried out; this shows that assuming a continuous-valued line process gives a better reconstruction of the smooth, planar or quadratic regions of the image, even with first-order models.
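A minimal numerical sketch may help fix the GNC idea: a family of smooth potentials interpolates from a nearly quadratic (convex-like) form to an edge-preserving, saturating one, and each stage is minimized by gradient descent warm-started from the previous one. The specific family g_p(t) = αt²/(t² + p), the parameter values, and the 1-D setting below are illustrative assumptions, not the functionals studied in the paper.

```python
import numpy as np

def gnc_restore(y, lam=1.0, alpha=1.0,
                p_schedule=(10.0, 1.0, 0.1, 0.01), iters=400):
    """GNC-style restoration of a 1-D signal y (illustrative sketch).

    Minimizes F_p(f) = ||f - y||^2 + lam * sum_i g_p(f[i+1] - f[i])
    with g_p(t) = alpha * t^2 / (t^2 + p): nearly quadratic for large p,
    saturating at alpha (hence edge-preserving) as p -> 0.
    """
    f = y.astype(float).copy()
    for p in p_schedule:
        step = 1.0 / (2.0 + 8.0 * lam * alpha / p)  # curvature bound of F_p
        for _ in range(iters):
            d = np.diff(f)                              # f[i+1] - f[i]
            gp = 2.0 * alpha * p * d / (d**2 + p)**2    # g_p'(d)
            grad = 2.0 * (f - y)
            grad[:-1] -= lam * gp                       # d/df[i] of g_p(d)
            grad[1:] += lam * gp                        # d/df[i+1] of g_p(d)
            f -= step * grad
    return f
```

On a noisy step signal, the schedule flattens the homogeneous regions while leaving the jump intact, which is the behaviour the truncated-quadratic potential is meant to induce.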
international conference on image processing | 2003
Ivan Gerace; R. Panolfi; Patrizia Pucci
The blind restoration problem is a well-known ill-posed problem. Both regularization and Bayesian approaches define the solution as the minimum of an appropriate energy function. The energy function has to take into account that the ideal image presents some discontinuities, and that close discontinuities should not be parallel. In this paper we propose a new estimate of the blur. We define as the solution the blur that, when treated as known in the restoration of an image, yields the image that best fits our a priori knowledge of the ideal image. We call this estimate IMAP (indirect maximum a priori). To compute the IMAP estimate we use an SA (simulated annealing) algorithm, in which the restoration of the image is performed by the CATILED (convex approximation technique for interacting line element deblurring) algorithm. The experimental results show the effectiveness of the IMAP estimate.
international conference on independent component analysis and signal separation | 2004
Ivan Gerace; Francesco Cricco; Anna Tonazzini
In this paper we consider the problem of separating autocorrelated source images from linear mixtures with unknown coefficients, even in the presence of significant noise. Assuming the statistical independence of the sources, we formulate the problem in a Bayesian estimation framework, and describe local correlation within the individual source images through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. Based on an extension of the Maximum Likelihood approach to ICA, we derive an algorithm for recovering the mixing matrix that makes the estimated sources fit the known properties of the original sources. Preliminary experimental results on synthetic mixtures show that significant robustness against noise, both stationary and non-stationary, can be achieved even with generic autocorrelation models.
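As a point of comparison for the Maximum-Likelihood ICA setting mentioned above, the noiseless instantaneous-mixing case can be handled by a generic ICA algorithm. The sketch below is a plain FastICA with a tanh nonlinearity and symmetric decorrelation, not the authors' MRF-regularized Bayesian method; all names and parameters are illustrative.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Tiny FastICA sketch: X has one mixture per row, one sample per column."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten the data so the mixing becomes an orthogonal rotation.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n, T = Z.shape
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G**2
        # Fixed-point update: w <- E[z g(w'z)] - E[g'(w'z)] w  (per row)
        W = (G @ Z.T) / T - np.diag(Gp.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W')^{-1/2} W via the SVD.
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z  # estimated sources, up to permutation and sign
```

Sources are recovered only up to permutation and scaling, the usual ICA indeterminacies; the paper's contribution is precisely the Gibbs priors that make the estimate robust when noise is added to the mixtures.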
Pattern Recognition Letters | 1995
Luigi Bedini; Ivan Gerace; Anna Tonazzini
Abstract: Image restoration is formulated as the problem of minimizing a non-convex cost function E(f, l) in which a binary self-interacting line process is introduced. Each line element is then approximated by a sigmoidal function of the local intensity gradient, which depends on a parameter T, thus obtaining a sequence of functions F_T(f) converging to a function F(f) that implicitly refers to the line process. In the case of a non-interacting line process, the function F(f) coincides with the one derived for the weak membrane problem. The minimum of F(f) is computed through a GNC-type algorithm which minimizes the various F_T(f) in sequence using gradient descent techniques. When generalized to the case of self-interacting line elements, the method is flexible in introducing any kind of constraint on the configurations of the discontinuity field. Simulation results highlight that the method improves the quality of the reconstruction when constraints on the line process are introduced, without any increase in computational cost with respect to the case where there are no self-interactions between lines.
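The sigmoidal approximation can be illustrated numerically: with a binary line variable, the weak-membrane potential over a local gradient t is min(λt², α); replacing the hard threshold by a sigmoid in t² with temperature T gives a smooth family converging to it as T → 0. The particular sigmoid used below is an assumed illustrative choice, not necessarily the exact family used in the paper.

```python
import numpy as np

def line_sigmoid(t, lam=1.0, alpha=1.0, T=0.1):
    """Sigmoidal relaxation of the binary line variable:
    l is near 1 where lam*t^2 > alpha (a discontinuity is 'on')."""
    z = np.clip((alpha - lam * t**2) / T, -50.0, 50.0)  # avoid exp overflow
    return 1.0 / (1.0 + np.exp(z))

def penalty_T(t, lam=1.0, alpha=1.0, T=0.1):
    """Smooth approximation of the weak-membrane potential min(lam*t^2, alpha):
    the quadratic term is paid where l ~ 0, the fixed cost alpha where l ~ 1."""
    l = line_sigmoid(t, lam, alpha, T)
    return lam * t**2 * (1.0 - l) + alpha * l
```

Minimizing the resulting F_T(f) for a decreasing schedule of T values is then directly analogous to the GNC-type scheme described in the abstract.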
Mathematical Structures in Computer Science | 2008
Ivan Gerace; Federico Greco
The Symmetric Circulant Travelling Salesman Problem asks for the minimum cost tour in a symmetric circulant matrix. The computational complexity of this problem is not known – only upper and lower bounds have been determined. This paper provides a characterisation of the two-stripe case. Instances where the minimum cost of a tour is equal to either the upper or lower bound are recognised. A new construction providing a tour is proposed for the remaining instances, and this leads to a new upper bound that is closer than the previous one.
Micron | 1995
Ivan Gerace; Luigi Bedini; Anna Tonazzini; Paolo Gualtieri
Abstract: The low light intensity of fluorescence has always been a serious limitation to its routine use in biology and biomedicine. Since average users in digital microscopy usually possess a commercial TV camera, which is not always a sophisticated or very expensive one, procedures for software image restoration can represent a valid alternative for studying 'in vivo' biological events. In this paper we present a novel algorithm for edge-preserving image restoration, and show the results of its application to a fluorescence test image. The quality of the restored image is comparable to that of an image acquired by a high-quality camera.
Archive | 2008
Federico Greco; Ivan Gerace
An n × n matrix D = d[i, j] is said to be circulant if the entries d[i, j] satisfying (j − i) = k mod n, for some k, have the same value (for a survey of circulant matrix properties, see Davis (1979)). A directed (respectively, undirected) graph is circulant if its adjacency matrix is circulant (respectively, symmetric and circulant). Similarly, a weighted graph is circulant if its weighted adjacency matrix is circulant. In recent years, it has often been investigated whether a graph problem becomes easier when restricted to circulant graphs. For example, the Maximum Clique problem and the Minimum Graph Coloring problem remain NP-hard, and not approximable within a constant factor, when the general instance is forced to be a circulant undirected graph, as shown by Codenotti et al. (1998). On the other hand, Muzychuk (2004) has proved that the Graph Isomorphism problem restricted to circulant undirected graphs is in P, while the general case is, probably, harder. It is still an open question whether the Directed Hamiltonian Circuit problem, restricted to circulant (directed) graphs, remains NP-hard. A solution in some special cases has been found by Garfinkel (1977), Fan Yang et al. (1997), and Bogdanowicz (2005). The Hamiltonian Circuit problem, instead, admits a polynomial time algorithm on circulant undirected graphs, as shown by Burkard and Sandholzer (1991). This leads to a polynomial time algorithm for the Bottleneck Traveling Salesman Problem on symmetric circulant matrices. Finally, in Gilmore et al. (1985) it is shown that the Shortest Hamiltonian Path problem is polynomial time solvable on circulant matrices, while the general case is NP-hard. The positive results contained in Burkard and Sandholzer (1991) and in Gilmore et al. (1985) have encouraged research on the Symmetric Circulant Traveling Salesman Problem, that is, the Sum Traveling Salesman Problem restricted to symmetric circulant matrices.
In this chapter we deal with this problem, called SCTSP for short. In §1–§3 the problem is introduced and the notation is fixed. In §4–§6 an overview is given of the results of the last 16 years. First, an upper bound (§4.1), a lower bound (§4.2), and a polynomial time 2-approximation algorithm for the general case of SCTSP (§4.3) are discussed. No better result concerning the computational complexity of SCTSP is known. Second, some sufficient theorems solving particular cases of SCTSP are presented (§5). Finally, §6 is devoted to a recently introduced subcase of SCTSP. §7 completes the chapter by presenting open problems, remarks, and future developments.
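The circulant-matrix definition above is easy to make concrete: entry (i, j) depends only on (j − i) mod n, so the whole matrix is determined by its first row, and symmetry reduces to a condition on that row. The helper names below are hypothetical, used only for illustration.

```python
import numpy as np

def circulant(first_row):
    """Build the n x n circulant matrix D with d[i, j] = c[(j - i) mod n],
    where c is the given first row."""
    c = np.asarray(first_row)
    n = c.size
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

def is_symmetric_circulant(D):
    """A circulant matrix is symmetric iff its first row satisfies
    c[k] == c[(n - k) mod n] for every k."""
    c = D[0]
    n = c.size
    return all(c[k] == c[(n - k) % n] for k in range(n))
```

SCTSP instances are exactly the symmetric case: the cost of travelling between cities i and j depends only on the "stripe" (j − i) mod n they belong to.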
international conference on image processing | 2004
Anna Tonazzini; Ivan Gerace; Francesco Cricco
We consider the problem of extracting clean images from noisy mixtures of images degraded by blur operators. This special case of source separation arises, for instance, when analyzing document images showing bleed-through or show-through. We propose to jointly perform demixing and deblurring by augmenting blind source separation with a step of image restoration. Within the independent component analysis (ICA) approach, i.e. assuming the statistical independence of the sources, we adopt a Bayesian formulation where the priors on the ideal images are given in the form of Markov random field (MRF), and a MAP estimation is employed for the joint recovery of the mixing matrix and the images. We show that taking into account the blur model and a proper image model improves the separation process and makes it more robust against noise. Preliminary results on synthetic examples of documents exhibiting bleed-through are provided, considering edge-preserving priors that are suitable to describe text images.