Publication


Featured research published by Luisa D'Amore.


International Conference on Image Analysis and Processing | 1999

The Wiener filter and regularization methods for image restoration problems

Almerico Murli; Luisa D'Amore; V. De Simone

Discretization of image restoration problems often leads to a discrete inverse ill-posed problem: the discretized operator is so badly conditioned that it can effectively be considered underdetermined. In this case one should single out the solution nearest to the desired one, and the usual way to do this is to regularize the problem. In this paper we focus on the computational aspects of the Wiener filter within the framework of regularization methods. The emphasis is on its reliability and efficiency, both of which become more and more important as the size and complexity of real problems grow and the demand for advanced real-time processing increases.
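As an illustration of the Wiener filter viewed as a regularization method, here is a minimal Python/NumPy sketch of Fourier-domain Wiener deblurring; the box-blur kernel and the scalar noise-to-signal parameter K are hypothetical stand-ins for the quantities the paper estimates from the data, and K plays the role of the regularization parameter.

import numpy as np

def wiener_deblur(blurred, psf, K=1e-3):
    # Wiener filter in the Fourier domain:
    #   F_hat = conj(H) / (|H|^2 + K) * G
    # where H is the transfer function of the blur and G the spectrum
    # of the observed image; K damps frequencies where H is small.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Usage on synthetic data: blur a random "image", then restore it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:3, :3] = 1.0 / 9.0   # 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf)
print("relative error:", np.linalg.norm(restored - img) / np.linalg.norm(img))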


ACM Transactions on Mathematical Software | 1999

Algorithm 796: a Fortran software package for the numerical inversion of the Laplace transform based on a Fourier series method

Luisa D'Amore; Giuliano Laccetti; Almerico Murli

A software package for the numerical inversion of a Laplace transform function is described. Besides function values of F(z) for complex and real z, the user has only to provide the numerical value of the Laplace convergence abscissa σ0 or, failing this, an upper bound on this quantity, together with the required accuracy in the computed value of the inverse transform. The method implemented is based on a Fourier series expansion of the inverse transform, and it is especially suitable when the inverse Laplace transform is sectionally continuous.
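A minimal sketch of the Fourier-series inversion idea, assuming only values of F(z) and the convergence abscissa: the damping heuristic sigma = sigma0 + 9.2/t and the plain truncated sum below are illustrative simplifications, whereas the actual Algorithm 796 adds automatic error control and series acceleration.

import numpy as np

def invert_laplace(F, t, sigma0, N=5000):
    # Place the Bromwich line at sigma = sigma0 + 9.2/t, so that the
    # discretization (aliasing) error is roughly exp(-18.4) ~ 1e-8, and
    # discretize the inversion integral with step pi/T, T = t:
    #   f(t) ~ (exp(sigma*t)/T) * [ F(sigma)/2
    #          + sum_k Re( F(sigma + i*k*pi/T) * exp(i*k*pi*t/T) ) ]
    T = t
    sigma = sigma0 + 9.2 / t
    k = np.arange(1, N + 1)
    z = sigma + 1j * k * np.pi / T
    series = np.real(F(z) * np.exp(1j * k * np.pi * t / T)).sum()
    return np.exp(sigma * t) / T * (F(sigma) / 2.0 + series)

# F(z) = 1/(z+1) is the transform of f(t) = exp(-t); abscissa sigma0 = -1.
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(lambda z: 1.0 / (z + 1.0), t, sigma0=-1.0), np.exp(-t))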


Parallel Computing | 2006

Towards a parallel component for imaging in PETSc programming environment: a case study in 3-D echocardiography

Luisa Carracciuolo; Luisa D'Amore; Almerico Murli

A key issue in designing application codes able to effectively take advantage of the available high-performance computing resources is dealing with the complexity of software management and development. This is even more true for imaging applications. This work is the first piece of a rather wide mosaic, aimed at the construction of a PSE (Problem Solving Environment) oriented to imaging applications. We discuss computational efforts towards the development of a distributed software environment enabling the transparent use of high-performance computers and storage systems for the denoising of 3-D echocardiographic sequences via nonlinear diffusion filtering. More precisely, we describe a component-based approach for the development of an integrated software environment relying on the Portable, Extensible Toolkit for Scientific Computation (PETSc). Our approach uses a distributed memory model where we hide, within the PETSc parallel objects, the details of internode communications, while intranode communications are handled at a higher level. We report on experiments with an in vivo 3-D echocardiographic sequence obtained by means of a rotational acquisition technique using a Tomtec Imaging system.
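The paper treats denoising of 3-D sequences by nonlinear diffusion inside PETSc; as a rough, purely illustrative stand-in, the following NumPy sketch applies one classical nonlinear diffusion filter (Perona-Malik, explicit time stepping, 2-D, periodic boundaries). The kappa and dt values are arbitrary choices, not the paper's.

import numpy as np

def nonlinear_diffusion(u, n_iter=50, kappa=0.1, dt=0.2):
    # Explicit scheme for u_t = div( g(|grad u|) grad u ) with the
    # edge-stopping diffusivity g(s) = 1 / (1 + (s/kappa)^2):
    # smooth within homogeneous regions, preserve edges.
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic wrap-around)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Usage: attenuate synthetic speckle on a piecewise-constant image.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)
print(np.abs(nonlinear_diffusion(noisy) - img).mean(), np.abs(noisy - img).mean())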


International Parallel and Distributed Processing Symposium | 2003

MedIGrid: a medical imaging application for computational Grids

M. Bertero; Paola Bonetto; Luisa Carracciuolo; Luisa D'Amore; A. R. Formiconi; Mario Rosario Guarracino; Giuliano Laccetti; Almerico Murli; Gennaro Oliva

In recent decades, medical diagnosis has come to rely heavily on digital imaging. As a consequence, huge amounts of data produced by modern medical instruments need to be processed, organized, and visualized within a suitable response time. Many efforts have been devoted to the development of digital Picture Archiving and Communication Systems (PACS), which archive and distribute image information across a hospital and provide Web access to avoid the expensive deployment of a large number of such systems. On the other hand, this approach does not solve the problems related to the increasing demand for high-performance computing and storage facilities, which cannot be placed within a hospital. In this work we describe MedIGrid, an application that enables nuclear physicians to transparently use high-performance computers and storage systems for PET/SPECT (Positron Emission Tomography/Single Photon Emission Computed Tomography) image processing, management, visualization, and analysis. MedIGrid is the result of the joint efforts of a group of researchers committed to the development of a distributed application to test and deploy new reconstruction methods in clinical environments. The outcomes of this work include a set of platform-independent software tools to read medical images, control the execution of computing-intensive tomographic algorithms, and explore the reconstructed tomographic volumes. In the paper we describe how the collaboration among different research groups has contributed to the integration of the application into a single framework, and we discuss the results of our work.


Parallel Processing and Applied Mathematics | 2011

Deconvolution of 3D fluorescence microscopy images using graphics processing units

Luisa D'Amore; Livia Marcellino; Valeria Mele; Diego Romano

We consider the deconvolution of 3D fluorescence microscopy RGB images, describing the benefits of tackling medical imaging problems on modern graphics processing units (GPUs), inexpensive parallel processing devices available in many up-to-date personal computers. We found that the execution time of the CUDA version is about two orders of magnitude lower than that of the sequential algorithm. The experiments also prompt some reflections on the best configuration for the CUDA-based algorithm: we note the need to model GPU architectures and their characteristics in order to better describe the performance of GPU algorithms and what can be expected of them.
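As a sketch of what such a port looks like algorithmically, here is one classical deconvolution iteration (Richardson-Lucy) written with NumPy FFTs; this is not necessarily the paper's exact method, and since it is pure array arithmetic, swapping numpy for a GPU array library such as CuPy is essentially all a CUDA port requires.

import numpy as np

def fftconv(a, b, shape):
    return np.real(np.fft.ifft2(np.fft.fft2(a, s=shape) * np.fft.fft2(b, s=shape)))

def fftcorr(a, b, shape):
    # correlation = convolution with the adjoint (conjugate spectrum) of b
    return np.real(np.fft.ifft2(np.fft.fft2(a, s=shape) * np.conj(np.fft.fft2(b, s=shape))))

def richardson_lucy(observed, psf, n_iter=30):
    # Multiplicative fixed-point iteration for Poisson-noise deconvolution:
    #   est <- est * ( (observed / (est conv psf)) correlated with psf )
    shape = observed.shape
    est = np.full(shape, observed.mean())
    for _ in range(n_iter):
        blurred = np.maximum(fftconv(est, psf, shape), 1e-12)
        est *= fftcorr(observed / blurred, psf, shape)
    return est

# Usage on synthetic data: blur an image with a small box PSF, then restore it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.full((3, 3), 1.0 / 9.0)
restored = richardson_lucy(fftconv(img, psf, img.shape), psf)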


Inverse Problems | 2007

Numerical regularization of a real inversion formula based on the Laplace transform's eigenfunction expansion of the inverse function

Almerico Murli; S. Cuomo; Luisa D'Amore; Ardelio Galletti

We describe the numerical approximation of the inverse Laplace function based on the Laplace transform's eigenfunction expansion of the inverse function, in the real case. The error analysis allows us to introduce a regularization technique involving computable upper bounds on the amplification factors of the local errors introduced by the computational process. A regularized solution is defined as one obtained within the maximum attainable accuracy. Moreover, the regularization parameter, which in this case coincides with the truncation parameter of the eigenfunction expansion, is computed dynamically by the algorithm itself so as to minimize the global error bound.
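The principle, truncating the expansion where the bound on the total error (decreasing truncation error plus growing amplified rounding error) is smallest, can be sketched in a few lines; the two model bounds below are invented for illustration and are not the paper's actual estimates.

import numpy as np

def choose_truncation(tau, rho, N_max=200):
    # tau(N): bound on the truncation error of the N-term expansion
    # rho(N): bound on the amplified rounding/data error after N terms
    # The regularized truncation parameter minimizes their sum.
    bounds = [tau(N) + rho(N) for N in range(1, N_max + 1)]
    return int(np.argmin(bounds)) + 1

# Toy model: geometrically decaying tail vs. exponentially growing amplification.
eps = 1e-16
N = choose_truncation(lambda N: 0.5 ** N, lambda N: eps * 10.0 ** (N / 10.0))
print("selected truncation parameter:", N)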


International Conference on Computational Science | 2002

Total Variation Regularization for Edge Preserving 3D SPECT Imaging in High Performance Computing Environments

Laura Antonelli; Luisa Carracciuolo; Marco Ceccarelli; Luisa D'Amore; Almerico Murli

Clinical diagnosis environments often require processed data to be available in real time. Unfortunately, reconstruction times are prohibitive on conventional computers, nor does the adoption of expensive parallel computers seem a viable solution. Here, we focus on the development of mathematical software on high-performance architectures for Total Variation based regularized reconstruction of 3D SPECT images. The software exploits the low cost of Beowulf parallel architectures.
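To make the Total Variation idea concrete, here is a minimal NumPy sketch for the simplest special case, TV-regularized denoising (the tomographic projection operator of the SPECT problem replaced by the identity); lam, beta, the step size, and the iteration count are illustrative values.

import numpy as np

def tv_denoise(noisy, lam=0.1, beta=0.1, n_iter=200, step=0.1):
    # Gradient descent on  J(x) = 0.5*||x - noisy||^2 + lam * TV_beta(x),
    # with smoothed total variation TV_beta(x) = sum sqrt(|grad x|^2 + beta^2),
    # which penalizes oscillations while preserving sharp edges.
    x = noisy.copy()
    for _ in range(n_iter):
        gx = np.roll(x, -1, axis=0) - x          # forward differences
        gy = np.roll(x, -1, axis=1) - x
        mag = np.sqrt(gx ** 2 + gy ** 2 + beta ** 2)
        px, py = gx / mag, gy / mag
        # the negative gradient of TV is the divergence of (px, py)
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        x -= step * ((x - noisy) - lam * div)
    return x

# Usage: denoise a noisy piecewise-constant image while keeping its edges.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
rec = tv_denoise(img + 0.1 * np.random.default_rng(0).standard_normal(img.shape))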


Journal of Scientific Computing | 2014

A Scalable Approach for Variational Data Assimilation

Luisa D'Amore; Rossella Arcucci; Luisa Carracciuolo; Almerico Murli

Data assimilation (DA) is a methodology for combining mathematical models simulating complex systems (the background knowledge) and measurements (the reality, or observational data) in order to improve the estimate of the system state (the forecast). DA is an inverse, ill-posed problem that typically involves huge amounts of data, making it large and computationally expensive. Here we focus on scalable methods that make DA applications feasible for huge numbers of background data and observations. We present a scalable, highly parallel algorithm for solving variational DA, provide a mathematical formalization of this approach, and study the performance of the resulting algorithm.
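For readers unfamiliar with variational DA, the functional being minimized can be sketched as follows; all dimensions, operators, and covariances below are hypothetical toy choices, and the scalable decomposition that is the paper's actual contribution is not reproduced here.

import numpy as np
from scipy.optimize import minimize

n, m = 50, 20                          # hypothetical state and observation sizes
rng = np.random.default_rng(0)
xb = rng.standard_normal(n)            # background state
H = rng.standard_normal((m, n))        # (linear) observation operator
y = H @ rng.standard_normal(n)         # observations
B_inv = np.eye(n)                      # inverse background error covariance
R_inv = np.eye(m)                      # inverse observation error covariance

def cost(x):
    # 3D-Var functional:
    # J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (Hx-y)' R^-1 (Hx-y)
    db = x - xb
    do = H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def grad(x):
    return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))

xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x   # analysis state
print("J(xb) =", cost(xb), " J(xa) =", cost(xa))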


International Journal of Computer Mathematics | 2015

Towards a parallel component in a GPU–CUDA environment: a case study with the L-BFGS Harwell routine

Luisa D'Amore; Giuliano Laccetti; Diego Romano; Giuseppe Scotti; Almerico Murli

Modern graphics processing units (GPUs) have been at the leading edge of increasing parallelism over the last 10 years. This fact has encouraged the use of GPUs in a broader range of applications, in which developers are required to leverage this technology with new programming models that ease the task of writing programs able to run efficiently on GPUs. In this paper, we discuss the main guidelines for assisting the developer in porting sequential scientific code to modern GPUs. These guidelines were derived by porting L-BFGS, the limited-memory BFGS algorithm for large-scale optimization, available as Harwell routine VA15. The specific interest in the L-BFGS algorithm arises from the fact that it is the computational module with the longest running time in an oceanographic data assimilation application on which some of the authors are working.
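As an accessible stand-in for the Fortran Harwell routine VA15, SciPy exposes the same limited-memory BFGS idea; the sketch below minimizes a standard large-scale test function, with the maxcor option playing the role of the number of stored correction pairs.

import numpy as np
from scipy.optimize import minimize

# Extended Rosenbrock function in n dimensions, a classic large-scale test.
def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

x0 = np.full(1000, -1.0)
res = minimize(rosenbrock, x0, jac=rosenbrock_grad, method="L-BFGS-B",
               options={"maxcor": 10})   # keep 10 correction pairs in memory
print("minimum:", res.fun, "iterations:", res.nit)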


International Journal of Biomedical Imaging | 2011

Numerical solution of diffusion models in biomedical imaging on multicore processors

Luisa D'Amore; Daniela Casaburi; Livia Marcellino; Almerico Murli

In this paper, we consider nonlinear partial differential equations (PDEs) of diffusion/advection type underlying most problems in image analysis. As a case study, we address the segmentation of medical structures. We perform a comparative study of numerical algorithms arising from the semi-implicit and the fully implicit discretization schemes. The comparison criteria take into account both the accuracy and the efficiency of the algorithms. As measures of accuracy, we consider the Hausdorff distance and the residuals of the numerical solvers, while as measures of efficiency we consider convergence history, execution time, speedup, and parallel efficiency. This analysis is carried out in a multicore-based parallel computing environment.
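A minimal sketch of the semi-implicit idea the comparison hinges on, reduced to 1-D nonlinear diffusion with fixed endpoints: the diffusivity is frozen at the old time level, so each step costs one tridiagonal linear solve and carries no stability restriction on dt, whereas a fully implicit scheme would re-evaluate the diffusivity at the new level and require a nonlinear (e.g. Newton) iteration per step. All parameter values are illustrative.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def semi_implicit_step(u, dt, kappa=0.1, h=1.0):
    # One step for u_t = ( g(u_x) u_x )_x with g(s) = 1/(1 + (s/kappa)^2);
    # g is evaluated at the OLD time level, so the system is linear in u_new.
    ux = np.diff(u) / h                           # gradients at cell faces
    g = 1.0 / (1.0 + (ux / kappa) ** 2)           # frozen diffusivity
    n = u.size
    r = dt / h ** 2
    main = np.ones(n)
    lower = np.zeros(n - 1)
    upper = np.zeros(n - 1)
    main[1:-1] = 1.0 + r * (g[:-1] + g[1:])       # interior rows
    lower[:-1] = -r * g[:-1]                      # coefficient of u_{i-1}
    upper[1:] = -r * g[1:]                        # coefficient of u_{i+1}
    A = diags([lower, main, upper], [-1, 0, 1], format="csc")
    return spsolve(A, u)                          # endpoint rows keep u fixed

# Usage: smooth a noisy step profile with a few large time steps.
rng = np.random.default_rng(0)
u = np.where(np.arange(100) < 50, 0.0, 1.0) + 0.05 * rng.standard_normal(100)
for _ in range(5):
    u = semi_implicit_step(u, dt=5.0)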

Collaboration


Dive into Luisa D'Amore's collaborations.

Top Co-Authors

Almerico Murli, University of Naples Federico II
Giuliano Laccetti, University of Naples Federico II
Valeria Mele, University of Naples Federico II
Livia Marcellino, University of Naples Federico II
Daniela Casaburi, University of Naples Federico II
Rossella Arcucci, Central Maine Community College
Rosanna Campagna, University of Naples Federico II
Ardelio Galletti, University of Naples Federico II
Laura Antonelli, University of Naples Federico II