
Publications


Featured research published by Paolo Carnevali.


IBM Journal of Research and Development | 1985

Image processing by simulated annealing

Paolo Carnevali; Lattanzio Coletti; Stefano Patarnello

It is shown that simulated annealing, a statistical mechanics method recently proposed as a tool in solving complex optimization problems, can be used in problems arising in image processing. The problems examined are the estimation of the parameters necessary to describe a geometrical pattern corrupted by noise, the smoothing of bi-level images, and the process of halftoning a continuous-level image. The analogy between the system to be optimized and an equivalent physical system, whose ground state is sought, is put forward by showing that some of these problems are formally equivalent to ground state problems for two-dimensional Ising spin systems. In the case of low signal-to-noise ratios (particularly in image smoothing), the methods proposed here give better results than those obtained with standard techniques.
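The bi-level smoothing problem described in this abstract maps naturally onto an Ising-type ground-state search. A minimal Python sketch of that idea follows; the energy weights, cooling schedule, and parameter names are illustrative assumptions, not the paper's actual settings:

```python
import math
import random

def smooth_binary_image(noisy, beta_data=1.0, j_coupling=1.0,
                        t_start=3.0, t_end=0.1, sweeps=200, seed=0):
    """Simulated-annealing smoothing of a bi-level image.

    Each pixel s[i][j] in {-1, +1} is coupled to its noisy observation
    (data fidelity) and to its 4-neighbours (smoothness), an Ising-type
    energy whose low-temperature states are smooth images.
    """
    rng = random.Random(seed)
    h, w = len(noisy), len(noisy[0])
    s = [row[:] for row in noisy]  # start from the noisy image

    def local_energy(i, j, val):
        e = -beta_data * val * noisy[i][j]  # fidelity to observation
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                e -= j_coupling * val * s[ni][nj]  # neighbour agreement
        return e

    for k in range(sweeps):
        # geometric cooling schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (k / max(1, sweeps - 1))
        for i in range(h):
            for j in range(w):
                flipped = -s[i][j]
                d_e = local_energy(i, j, flipped) - local_energy(i, j, s[i][j])
                # Metropolis acceptance rule
                if d_e <= 0 or rng.random() < math.exp(-d_e / t):
                    s[i][j] = flipped
    return s
```

At low signal-to-noise ratios one would raise `j_coupling` relative to `beta_data`, trusting neighbourhood structure over individual noisy pixels.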


IBM Journal of Research and Development | 1986

Microtasking on IBM multiprocessors

Paolo Carnevali; Piero Sguazzero; Vittorio Zecca

The demand for very high computing performance has become increasingly common in many scientific and engineering environments. In addition to vector processing, parallel computing is now considered a useful way to enhance performance. However, parallel computing tends to be unpopular among users because, with presently available technology and software, it requires explicit programmer intervention to exploit architectural parallelism. This intervention can be minor in some cases, but it often requires a non-negligible amount of program restructuring, or even a reformulation of some of the algorithms used. In addition, it makes program debugging considerably more difficult. Tools for interprocedural program analysis, able to analyze large FORTRAN programs composed of many subroutines at a high level and to perform automatic restructuring, could reduce this burden, so that high-level parallelism could also be exploited automatically.
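The "explicit programmer intervention" the abstract refers to can be illustrated in modern terms: a loop of independent iterations is restructured into tasks handed to a worker pool. Python stands in here for the FORTRAN microtasking of the paper; all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def body(x):
    # stand-in for the work done by one iteration of a compute loop
    return x * x

def serial_loop(xs):
    """The original sequential loop."""
    return [body(x) for x in xs]

def parallel_loop(xs, workers=4):
    """The restructured loop: independent iterations become tasks.

    This is the kind of manual rewrite the abstract describes. For
    CPU-bound numerical work a process pool (or a language without a
    global interpreter lock) would be needed to see real speedup.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, xs))  # preserves iteration order
```

The rewrite is small here, but for real codes the loop body often carries dependences that force exactly the algorithmic reformulation and debugging burden the abstract describes.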


Proceedings of the NATO Advanced Research Workshop on Neural computers | 1989

Learning networks of neurons with Boolean logic

Stefano Patarnello; Paolo Carnevali

Through a training procedure based on simulated annealing, Boolean networks can ‘learn’ to perform specific tasks. As an example, a network implementing a binary adder has been obtained after a training procedure based on a small number of examples of binary addition, thus showing a generalization capability. Depending on problem complexity, network size, and number of examples used in the training, different learning regimes occur. For small networks an exact analysis of the statistical mechanics of the system shows that learning takes place as a phase transition. The ‘simplicity’ of a problem can be related to its entropy. Simple problems are those that are thermodynamically favored.
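The training procedure sketched in this abstract can be made concrete with a toy annealer over the wiring of a feed-forward network of NAND gates, with the training error as the energy. The gate count, move set, and schedule below are illustrative choices, not the authors' exact setup:

```python
import math
import random

def train_boolean_network(examples, n_gates=12, n_inputs=2,
                          t_start=2.0, t_end=0.02, steps=4000, seed=1):
    """Anneal the wiring of a feed-forward NAND network.

    `examples` is a list of (input_bits, output_bit) pairs; the cost
    being minimized is the number of training examples the last gate
    gets wrong.
    """
    rng = random.Random(seed)
    # gate g reads two signals among {inputs, earlier gates}
    wiring = [(rng.randrange(n_inputs + g), rng.randrange(n_inputs + g))
              for g in range(n_gates)]

    def evaluate(wires, bits):
        signals = list(bits)
        for a, b in wires:
            signals.append(1 - (signals[a] & signals[b]))  # NAND
        return signals[-1]

    def cost(wires):
        return sum(evaluate(wires, x) != y for x, y in examples)

    current = cost(wiring)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / (steps - 1))
        g = rng.randrange(n_gates)
        old = wiring[g]
        # move: rewire one input of one gate to an earlier signal
        if rng.random() < 0.5:
            wiring[g] = (rng.randrange(n_inputs + g), old[1])
        else:
            wiring[g] = (old[0], rng.randrange(n_inputs + g))
        new = cost(wiring)
        if new <= current or rng.random() < math.exp((current - new) / t):
            current = new
        else:
            wiring[g] = old  # reject the move
    return wiring, current
```

Training on the four input/output pairs of XOR, for example, exercises the generalization-free limit; the regimes the abstract describes appear when the example set is a small subset of the full truth table.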


Parallel Computing | 1988

Timing results of some internal sorting algorithms on the IBM 3090

Paolo Carnevali

The information contained in a paper by Rönsch and Strauss on the performance of sorting algorithms on vector computers is complemented with performance data for sorting on the IBM 3090. For scalar sorting, the IBM 3090 turns out to be faster than all the machines considered by Rönsch and Strauss. Due to this high scalar speed, FORTRAN-coded vector sorting is never more efficient than scalar sorting on the IBM 3090, at least for the vector sorting algorithms considered. On the other hand, for optimized library routines the IBM 3090 outperforms the CRAY X-MP (but not the AMDAHL 1200) at most vector lengths.
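Timing studies of this kind reduce to a small measurement harness: generate reproducible input, sort a fresh copy per run, and keep the best of several repeats to suppress system noise. A sketch in Python (the sort and parameters are illustrative, not the paper's FORTRAN codes):

```python
import random
import timeit

def insertion_sort(a):
    """Simple O(n^2) in-place sort, a stand-in for a scalar sorting code."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

def time_sort(sort_fn, n, repeats=3, seed=0):
    """Best-of-`repeats` wall-clock time to sort n random integers.

    A fresh copy of the data is sorted on every run so each timing
    starts from the same unsorted input.
    """
    data = random.Random(seed).choices(range(10 * n), k=n)
    return min(timeit.timeit(lambda: sort_fn(list(data)), number=1)
               for _ in range(repeats))
```

Comparing `time_sort(insertion_sort, n)` against `time_sort(sorted, n)` over a range of `n` reproduces the shape of the paper's comparison: a tuned library routine dominates a straightforward hand-coded loop as the input grows.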


Archive | 1988

Boolean networks which learn to compute

Stefano Patarnello; Paolo Carnevali

Through a training procedure based on simulated annealing, Boolean networks can ‘learn’ to perform specific tasks. As an example, a network implementing a binary adder has been obtained after a training procedure based on a small number of examples of binary addition, thus showing a generalization capability. Depending on problem complexity, network size, and number of examples used in the training, different learning regimes occur. For small networks an exact analysis of the statistical mechanics of the system shows that learning takes place as a phase transition. The ‘simplicity’ of a problem can be related to its entropy. Simple problems are those that are thermodynamically favored.


Parallel Computing | 1990

A simplified model to predict the performance of FORTRAN vector loops of the IBM 3090/VF

Paolo Carnevali; Manuel Kindelan

A simple model is described, which can be used to estimate execution times of vectorized FORTRAN loops on the IBM 3090/VF. Although a number of simplifying assumptions are made in the model (especially concerning the modeling of the cache), the resulting estimates are roughly correct in many cases. In situations where this model is oversimplified or unrealistic, it can still be used as a starting point for more sophisticated performance predictions.


Neural computing architectures | 1989

Learning capabilities of Boolean networks

Stefano Patarnello; Paolo Carnevali


Mathematical Modelling and Numerical Analysis | 1989

Efficient FORTRAN implementation of the gaussian elimination and Householder reduction algorithms on the IBM 3090 vector multiprocessor

Paolo Carnevali; Giuseppe Radicati; Yves Robert; Piero Sguazzero


International Journal of Neural Systems | 1989

A neural network model to simulate a conditioning experiment

Stefano Patarnello; Paolo Carnevali


Archive | 1989

Modélisation Mathématique et Analyse Numérique

Paolo Carnevali; Giuseppe Radicati; Yves Robert; Piero Sguazzero
