Luigi Bedini
Istituto di Scienza e Tecnologie dell'Informazione
Publications
Featured research published by Luigi Bedini.
Monthly Notices of the Royal Astronomical Society | 2002
D. Maino; A. Farusi; C. Baccigalupi; F. Perrotta; A. J. Banday; Luigi Bedini; C. Burigana; G. De Zotti; K. M. Górski; Emanuele Salerno
We present a new, fast algorithm for the separation of astrophysical components superposed in maps of the sky. The algorithm, based on the Independent Component Analysis (ICA) technique, is aimed at recovering both the spatial pattern and the frequency scalings of the emissions from statistically independent astrophysical processes, present along the line of sight, from multi-frequency observations, without any a priori assumption on the properties of the components to be separated, except that all of them, but at most one, must have non-Gaussian distributions. The analysis starts from very simple toy models of the sky emission in order to assess the quality of the reconstruction when the inputs are well known and controlled. In particular, we study how the results depend on whether the separation is conducted on or off the Galactic plane, showing that optimal separation is achieved for sky regions where the components are smoothly distributed. We then move to more realistic applications on simulated observations of the microwave sky with angular resolution and instrumental noise at the mean nominal levels for the Planck satellite. We consider several Planck observation channels containing the most important known diffuse signals: the Cosmic Microwave Background (CMB), Galactic synchrotron, dust and free-free emissions. A method for calibrating the reconstructed maps of each component at each frequency has been devised. The spatial patterns of all the components have been recovered on all scales probed by the instrument. In particular, the CMB angular power spectrum is recovered at the per cent level up to ℓmax ≃ 2000. Frequency scalings and normalizations have been recovered with better than 1% precision for all the components at frequencies and in sky regions where their signal-to-noise ratio is ≳ 1.5; the error increases to the ∼10% level for signal-to-noise ratios ≃ 1.
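As a rough illustration of the linear-mixture separation described above (and not the specific fast ICA implementation developed in the paper), the following sketch separates toy component maps from noisy multi-frequency mixtures with scikit-learn's FastICA. The component distributions, the number of channels, and the mixing coefficients are all illustrative assumptions.

```python
# Minimal sketch: blind separation of toy sky components with FastICA.
# All sources, channels and mixing coefficients below are made up for illustration.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pix = 128 * 128                          # one flat-sky patch, flattened

cmb  = rng.standard_normal(n_pix)          # Gaussian stand-in (at most one Gaussian source allowed)
dust = rng.exponential(1.0, n_pix)         # non-Gaussian foreground stand-ins
sync = rng.laplace(0.0, 1.0, n_pix)
S = np.vstack([cmb, dust, sync])           # (n_components, n_pixels)

A = np.array([[1.0, 0.3, 1.5],             # hypothetical frequency scalings,
              [1.0, 0.8, 0.9],             # one row per observation channel
              [1.0, 2.0, 0.4],
              [1.0, 3.5, 0.2]])
X = A @ S + 0.01 * rng.standard_normal((4, n_pix))   # noisy observed channels

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X.T).T           # recovered maps, up to order and scale
A_hat = ica.mixing_                        # estimated frequency scalings (columns)
print(S_hat.shape, A_hat.shape)
```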
International Journal on Document Analysis and Recognition | 2007
Anna Tonazzini; Emanuele Salerno; Luigi Bedini
Ancient documents are usually degraded by the presence of strong background artifacts. These are often caused by the so-called bleed-through effect, a pattern that interferes with the main text due to the seeping of ink from the reverse side. A similar effect, called show-through and due to the imperfect opacity of the paper, may appear in scans of even modern, well-preserved documents. These degradations must be removed to improve human or automatic readability. For this purpose, when a color scan of the document is available, we have shown that a simplified linear pattern-overlapping model allows us to use very fast blind source separation techniques. This approach, however, cannot be applied to grayscale scans. This is a serious limitation, since many collections in our libraries and archives are now only available as grayscale scans or microfilms. We propose here a new model for bleed-through in grayscale document images, based on the availability of the recto and verso pages, and show that blind source separation can be successfully applied in this case too. Some experiments with real ancient documents are presented and described.
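A minimal sketch of how such a recto/verso separation could look in practice, assuming the two grayscale scans are already registered and using scikit-learn's FastICA as a stand-in for the blind source separation step; the file names are hypothetical.

```python
# Minimal sketch: two registered grayscale scans (recto and mirrored verso) treated
# as linear mixtures of the main text and the bleed-through pattern.
import numpy as np
from PIL import Image
from sklearn.decomposition import FastICA

recto = np.asarray(Image.open("recto.png").convert("L"), dtype=float)   # hypothetical files
verso = np.asarray(Image.open("verso.png").convert("L"), dtype=float)
verso = np.fliplr(verso)                          # mirror the verso so the two sides align

X = np.vstack([recto.ravel(), verso.ravel()])     # 2 observations x n_pixels
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X.T).T                      # main-text / bleed-through estimates

for i, s in enumerate(S):
    out = 255 * (s - s.min()) / (s.max() - s.min() + 1e-9)
    Image.fromarray(out.reshape(recto.shape).astype(np.uint8)).save(f"separated_{i}.png")
```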
International Journal on Document Analysis and Recognition | 2004
Anna Tonazzini; Luigi Bedini; Emanuele Salerno
We propose a novel approach to restoring digital document images, with the aim of improving text legibility and OCR performance. These are often compromised by the presence of artifacts in the background, derived from many kinds of degradations, such as spots, underwritings, and show-through or bleed-through effects. So far, background removal techniques have been based on local, adaptive filters and morphological-structural operators to cope with frequent low-contrast situations. For the specific problem of bleed-through/show-through, most work has been based on the comparison between the front and back pages. This, however, requires a preliminary registration of the two images. Our approach is based on viewing the problem as one of separating overlapped texts and then reformulating it as a blind source separation problem, approached through independent component analysis techniques. These methods have the advantage that no models are required for the background. In addition, we use the spectral components of the image at different bands, so that there is no need for registration. Examples of bleed-through cancellation and recovery of underwriting from palimpsests are provided.
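To make the channel-based idea concrete, here is a minimal sketch that feeds the R, G and B planes of a single color scan to FastICA as three "views" of the overlapped patterns, so no recto/verso registration is needed. The file name and the choice of FastICA are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: ICA on the three color channels of one scan.
import numpy as np
from PIL import Image
from sklearn.decomposition import FastICA

rgb = np.asarray(Image.open("page_color.png").convert("RGB"), dtype=float)  # hypothetical file
h, w, _ = rgb.shape
X = rgb.reshape(-1, 3).T                     # three spectral observations x n_pixels

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X.T).T                 # e.g. main text, interfering pattern, background

for i, s in enumerate(S):
    out = 255 * (s - s.min()) / (s.max() - s.min() + 1e-9)
    Image.fromarray(out.reshape(h, w).astype(np.uint8)).save(f"component_{i}.png")
```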
IEEE Transactions on Image Processing | 2006
Anna Tonazzini; Luigi Bedini; Emanuele Salerno
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
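The sketch below is not the paper's EM/mean-field procedure and omits the edge variables; it only illustrates, on 1-D synthetic signals, the general shape of MAP-style alternating estimation: gradient steps on the sources under a quadratic smoothness (MRF-like) prior, followed by a least-squares update of the mixing matrix. All sizes and the regularization weight are illustrative assumptions.

```python
# Simplified sketch of Bayesian-flavoured blind separation with a smoothness prior.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_src, n_ch = 512, 2, 3

t = np.linspace(0.0, 1.0, n_pix)            # synthetic autocorrelated sources
S_true = np.vstack([np.sin(2 * np.pi * 3 * t), np.sign(np.sin(2 * np.pi * 5 * t))])
A_true = rng.standard_normal((n_ch, n_src))
X = A_true @ S_true + 0.05 * rng.standard_normal((n_ch, n_pix))

D = np.eye(n_pix, k=1)[:-1] - np.eye(n_pix)[:-1]   # first-difference operator (prior term)
lam, step = 0.5, 1e-2

A = rng.standard_normal((n_ch, n_src))
S = rng.standard_normal((n_src, n_pix))
for _ in range(200):
    # Source step: gradient descent on ||X - A S||^2 + lam * ||S D^T||^2
    for _ in range(5):
        grad = A.T @ (A @ S - X) + lam * (S @ D.T) @ D
        S -= step * grad
    # Mixing step: least squares, with columns renormalized to fix the scale ambiguity
    A = X @ S.T @ np.linalg.inv(S @ S.T + 1e-6 * np.eye(n_src))
    norms = np.linalg.norm(A, axis=0)
    A, S = A / norms, S * norms[:, None]

print("relative residual:", np.linalg.norm(X - A @ S) / np.linalg.norm(X))
```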
EURASIP Journal on Advances in Signal Processing | 2005
Luigi Bedini; D. Herranz; Emanuele Salerno; C. Baccigalupi; E. E. Kuruoǧlu; Anna Tonazzini
This paper proposes a new strategy to separate astrophysical sources that are mutually correlated. This strategy is based on second-order statistics and exploits prior information about the possible structure of the mixing matrix. Unlike ICA blind separation approaches, where the sources are assumed to be mutually independent and no prior knowledge is assumed about the mixing matrix, our strategy allows the independence assumption to be relaxed and performs the separation of even significantly correlated sources. Besides the mixing matrix, our strategy is also capable of evaluating the source covariance functions at several lags. Moreover, once the mixing parameters have been identified, a simple deconvolution can be used to estimate the probability density functions of the source processes. To benchmark our algorithm, we used a database that simulates the one expected from the instruments that will operate onboard ESA's Planck Surveyor satellite to measure the CMB anisotropies all over the celestial sphere.
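A rough sketch of the second-order flavour of this kind of approach (not the paper's exact estimator): sample covariance matrices of the observations at a few lags are matched, by nonlinear least squares, against A(θ) R_s(τ) A(θ)ᵀ, with the mixing matrix constrained by a simple two-parameter structure. The source signals (deliberately cross-correlated), the parametrization of A, and the chosen lags are all illustrative assumptions.

```python
# Sketch: estimating a parametrized mixing matrix from lagged covariance matrices.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n = 20000
s1 = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")               # colored source
s2 = 0.3 * s1 + np.convolve(rng.standard_normal(n), np.ones(15) / 15, mode="same")  # correlated with s1
S = np.vstack([s1, s2])

def mixing(theta):
    # Hypothetical two-parameter column structure standing in for prior knowledge of A
    return np.array([[1.0, 1.0],
                     [theta[0], theta[1]],
                     [theta[0] ** 2, theta[1] ** 2]])

X = mixing(np.array([0.7, 2.0])) @ S + 0.01 * rng.standard_normal((3, n))

lags = [1, 2, 5, 10]
def cov_lag(Y, tau):
    return (Y[:, :-tau] @ Y[:, tau:].T) / (Y.shape[1] - tau)
R_x = [cov_lag(X, tau) for tau in lags]

def residuals(p):
    theta, Rs = p[:2], p[2:].reshape(len(lags), 2, 2)
    A = mixing(theta)
    return np.concatenate([(R - A @ Rs_t @ A.T).ravel() for R, Rs_t in zip(R_x, Rs)])

A0 = mixing(np.array([0.5, 1.5]))                   # crude initial guess for the parameters
Rs0 = np.stack([np.linalg.pinv(A0) @ R @ np.linalg.pinv(A0).T for R in R_x])
fit = least_squares(residuals, np.concatenate([[0.5, 1.5], Rs0.ravel()]))
print("estimated mixing parameters:", fit.x[:2].round(3))   # true values were 0.7 and 2.0
```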
Monthly Notices of the Royal Astronomical Society | 2006
A. Bonaldi; Luigi Bedini; Emanuele Salerno; C. Baccigalupi; G. De Zotti
We present the first tests of a new method, the correlated component analysis (CCA), based on second-order statistics, to estimate the mixing matrix, a key ingredient for separating the astrophysical foregrounds superimposed on the Cosmic Microwave Background (CMB). In the present application, the mixing matrix is parametrized in terms of the spectral indices of the Galactic synchrotron and thermal dust emissions, while the free-free spectral index is prescribed by basic physics and is thus assumed to be known. We consider simulated observations of the microwave sky with angular resolution and white stationary noise at the nominal levels for the Planck satellite, and realistic foreground emissions with a position-dependent synchrotron spectral index. We work with two sets of Planck frequency channels: the low-frequency set, from 30 to 143 GHz, complemented with the Haslam 408 MHz map, and the high-frequency set, from 217 to 545 GHz. The concentration of intense free-free emission on the Galactic plane introduces a steep dependence of the spectral index of the global Galactic emission on Galactic latitude close to the Galactic equator. This feature makes it difficult for the CCA to recover the synchrotron spectral index in this region, given the limited angular resolution of Planck, especially at low frequencies. Cutting a narrow strip around the Galactic equator (|b| < 3°), however, allows us to overcome this problem. We show that, once this strip is removed, the CCA allows an effective foreground subtraction, with residual uncertainties inducing a minor contribution to the errors on the recovered CMB power spectrum.
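The central ingredient here is the parametrization of the mixing matrix in terms of spectral indices. A minimal sketch of such a parametrization is below; the pure power-law dust scaling, the reference frequency, the channel list, the index values, and the omission of thermodynamic-to-antenna unit conversions are simplifications made only for illustration.

```python
# Sketch: a mixing matrix parametrized by spectral indices (heavily simplified).
import numpy as np

def mixing_matrix(freqs_ghz, beta_sync, beta_dust, beta_ff=-2.14, nu0=30.0):
    """Columns: CMB, synchrotron, free-free, dust; rows: observation channels."""
    nu = np.asarray(freqs_ghz, dtype=float) / nu0
    cmb  = np.ones_like(nu)          # CMB column, constant by construction here
    sync = nu ** beta_sync           # synchrotron power law (free parameter)
    ff   = nu ** beta_ff             # free-free, spectral index assumed known
    dust = nu ** beta_dust           # dust, simplified to a pure power law
    return np.column_stack([cmb, sync, ff, dust])

# Low-frequency channel set with illustrative spectral-index values
A = mixing_matrix([30, 44, 70, 100, 143], beta_sync=-3.0, beta_dust=1.7)
print(A.round(4))
```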
CVGIP: Graphical Models and Image Processing | 1994
Luigi Bedini; Ivan Gerace; Anna Tonazzini
The most common approach for incorporating discontinuities in visual reconstruction problems makes use of Bayesian techniques, based on Markov random field models, coupled with stochastic relaxation and simulated annealing. Despite their convergence properties and flexibility in exploiting a priori knowledge of the physical and geometric features of discontinuities, stochastic relaxation algorithms often present insurmountable computational complexity. Recently, considerable attention has been given to suboptimal deterministic algorithms, which can provide solutions with much lower computational costs. These algorithms consider the discontinuities implicitly rather than explicitly and have mostly been derived for the case where there are no interactions between two or more discontinuities in the image model. In this paper we propose an algorithm that allows for interacting discontinuities, in order to exploit the constraint that discontinuities must be connected and thin. The algorithm, called E-GNC, can be considered an extension of graduated nonconvexity (GNC), first proposed by Blake and Zisserman for noninteracting discontinuities. When applied to the problem of image reconstruction from sparse and noisy data, the method is shown to give satisfactory results with a low number of iterations.
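As a toy illustration of the GNC idea (the classical weak-string case with non-interacting line elements, not the E-GNC extension proposed in the paper), the sketch below reconstructs a noisy 1-D step by minimizing a truncated-quadratic energy through a sequence of progressively less convex relaxations. The parameter values and the continuation schedule are illustrative.

```python
# Toy GNC: weak-string reconstruction of a 1-D step, by continuation on the
# relaxed penalty's concave curvature c (small c ~ convex, large c ~ truncated quadratic).
import numpy as np

lam2, alpha = 4.0, 1.0      # smoothness weight (lambda^2) and line penalty

def g_prime(t, c):
    """Derivative of the relaxed penalty: quadratic near 0, concave cap of curvature c, then flat."""
    q = np.sqrt(alpha / (lam2 + 2.0 * lam2 ** 2 / c))   # end of the quadratic part
    r = q * (1.0 + 2.0 * lam2 / c)                      # start of the flat (alpha) part
    a = np.abs(t)
    return np.where(a < q, 2.0 * lam2 * t,
                    np.where(a < r, c * (r - a) * np.sign(t), 0.0))

rng = np.random.default_rng(3)
d = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
u = d.copy()

for c in np.geomspace(0.25, 64.0, 10):        # graduated schedule: near-convex -> non-convex
    for _ in range(300):                      # gradient descent at fixed c
        gp = g_prime(np.diff(u), c)
        grad = 2.0 * (u - d)
        grad[:-1] -= gp                       # each difference term pulls on its two endpoints
        grad[1:] += gp
        u -= 0.04 * grad

print("detected discontinuity near index:", int(np.argmax(np.abs(np.diff(u)))))
```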
Neural Networks | 2003
Ercan E. Kuruoglu; Luigi Bedini; Maria Teresa Paratore; Emanuele Salerno; Anna Tonazzini
A microwave sky map results from a combination of signals from various astrophysical sources, such as cosmic microwave background radiation, synchrotron radiation and galactic dust radiation. To derive information about these sources, one needs to separate them from the measured maps on different frequency channels. Our insufficient knowledge of the weights to be given to the individual signals at different frequencies makes this a difficult task. Recent work on the problem led to only limited success due to ignoring the noise and to the lack of a suitable statistical model for the sources. In this paper, we derive the statistical distribution of some source realizations and check the appropriateness of a Gaussian mixture model for them. A source separation technique, namely independent factor analysis, has been suggested recently in the literature for Gaussian mixture sources in the presence of noise. This technique employs a three-layered neural network architecture which allows a simple, hierarchical treatment of the problem. We modify the algorithm proposed in the literature to accommodate space-varying noise and test its performance on simulated astrophysical maps. We also compare the performance of an expectation-maximization and a simulated annealing learning algorithm in estimating the mixing matrix and the source model parameters. The problem with expectation-maximization is that it does not ensure global optimization, so the choice of the starting point is a critical task. Indeed, we did not succeed in reaching good solutions for random initializations of the algorithm. Conversely, our experiments with simulated annealing yielded initialization-independent results. The mixing matrix and the means and coefficients of the source model were estimated with good accuracy, while some of the variances of the components in the mixture model were not estimated satisfactorily.
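A small, self-contained sketch of the source-model step mentioned above: fitting a Gaussian mixture to the amplitude distribution of a (here synthetic, heavy-tailed) source, as a stand-in for checking how well a mixture-of-Gaussians prior matches a real component map. The data and the number of mixture components are illustrative assumptions.

```python
# Sketch: checking a Gaussian-mixture source model on a heavy-tailed stand-in signal.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
source = rng.laplace(0.0, 1.0, 50000).reshape(-1, 1)   # synthetic heavy-tailed "map"

gmm = GaussianMixture(n_components=3, random_state=0).fit(source)
print("weights:", gmm.weights_.round(3))
print("means:  ", gmm.means_.ravel().round(3))
print("stds:   ", np.sqrt(gmm.covariances_.ravel()).round(3))
print("mean log-likelihood:", round(gmm.score(source), 3))
```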
Image and Vision Computing | 1992
Luigi Bedini; Anna Tonazzini
Recently, methods which allow discontinuities to be taken into account have been investigated for solving visual reconstruction problems. These methods, both deterministic and probabilistic, present formidable computational costs, due to the complexity of the algorithms used and the dimension of the problems treated. To reduce execution times, new computational implementations based on parallel architectures such as neural networks have been proposed. In this paper the edge-preserving restoration of piecewise smooth images is formulated in terms of a probabilistic approach, and a MAP estimation algorithm is proposed which could be implemented on a hybrid neural network. We adopt a model for the image consisting of two coupled MRFs, one representing the intensity and the other the discontinuities, in such a way as to introduce prior probabilistic knowledge about global and local features. According to an annealing schedule, the solution is obtained iteratively by means of a sequence in which deterministic steps alternate with probabilistic ones. The algorithm is suitable for implementation on a hybrid architecture made up of a grid of digital processors interacting with a linear neural network which supports most of the computational costs.
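A toy 1-D version of the mixed-annealing idea, with no claim of matching the paper's coupled-MRF image model or its neural-network implementation: deterministic gradient updates of the intensity alternate with Gibbs sampling of a binary line process, while the temperature is lowered. Parameters and the schedule are illustrative.

```python
# Toy mixed annealing: intensity u (deterministic steps) + binary line process l (Gibbs steps).
import numpy as np

rng = np.random.default_rng(4)
lam2, alpha = 4.0, 1.0
d = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)

u = d.copy()                      # intensity field
l = np.zeros(d.size - 1)          # line variables between adjacent pixels (0 = no edge)

for T in np.geomspace(1.0, 0.01, 40):            # annealing schedule
    for _ in range(20):                          # deterministic step: intensity given lines
        diff = np.diff(u)
        gp = 2.0 * lam2 * (1.0 - l) * diff       # derivative of lam2 * (1 - l) * diff^2
        grad = 2.0 * (u - d)
        grad[:-1] -= gp
        grad[1:] += gp
        u -= 0.04 * grad
    # Probabilistic step: sample each line variable from its conditional at temperature T
    diff = np.diff(u)
    p_on = 1.0 / (1.0 + np.exp((alpha - lam2 * diff ** 2) / T))
    l = (rng.random(l.size) < p_on).astype(float)

print("line elements switched on at indices:", np.flatnonzero(l))
```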
International Journal on Document Analysis and Recognition | 2003
Anna Tonazzini; Stefano Vezzosi; Luigi Bedini
This paper proposes an integrated system for the processing and analysis of highly degraded printed documents for the purpose of recognizing text characters. As a case study, ancient printed texts are considered. The system comprises various blocks operating sequentially. Starting with a single page of the document, the background noise is reduced by wavelet-based decomposition and filtering; the text lines are detected, extracted, and segmented by a simple and fast adaptive thresholding into blobs corresponding to characters; and the various blobs are analyzed by a feedforward multilayer neural network trained with a back-propagation algorithm. For each character, the probability associated with the recognition is then used as a discriminating parameter that determines the automatic activation of a feedback process, leading the system back to a block for refining the segmentation. This block acts only on the small portions of the text where the recognition cannot be relied on, and makes use of blind deconvolution and MRF-based segmentation techniques, whose high complexity is greatly reduced when applied to a few subimages of small size. The experimental results highlight that the proposed system performs a very precise segmentation of the characters and then a highly effective recognition of even strongly degraded texts.
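A compressed sketch of such a pipeline, using generic scikit-image and scikit-learn building blocks rather than the specific wavelet decomposition, text-line detection, network architecture and MRF-based refinement described in the paper. The file names, the pre-trained classifier, the 16x16 glyph normalization and all thresholds are hypothetical.

```python
# Sketch: denoise -> adaptive threshold -> blob extraction -> classify -> flag low-confidence blobs.
import numpy as np
from joblib import load
from skimage import io, measure, transform
from skimage.filters import threshold_local
from skimage.restoration import denoise_wavelet

page = io.imread("ancient_page.png", as_gray=True)   # hypothetical scanned page
clf = load("char_classifier.joblib")                 # hypothetical pre-trained MLP classifier

clean = denoise_wavelet(page)                                          # 1. background/noise reduction
binary = clean < threshold_local(clean, block_size=35, offset=0.02)    # 2. adaptive thresholding
labels = measure.label(binary)                                         # 3. candidate character blobs

low_confidence = []
for region in measure.regionprops(labels):
    if region.area < 20:                                        # skip tiny specks
        continue
    minr, minc, maxr, maxc = region.bbox
    glyph = transform.resize(clean[minr:maxr, minc:maxc], (16, 16)).ravel()
    proba = clf.predict_proba([glyph])[0]                       # 4. character recognition
    if proba.max() < 0.8:                                       # 5. unreliable: feed back for
        low_confidence.append(region.bbox)                      #    refined segmentation

print(f"{len(low_confidence)} blobs flagged for the refinement stage")
```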