
Publication


Featured research published by Massimo Tistarelli.


Archive | 2009

Advances in Biometrics

Massimo Tistarelli; Mark S. Nixon

This chapter describes the principles of operation of a new class of fingerprint sensor based on multispectral imaging (MSI). The MSI sensor captures multiple images of the finger under different illumination conditions that include different wavelengths, different illumination orientations, and different polarization conditions. The resulting data contain information about both the surface and subsurface features of the skin. These data can be processed to generate a single composite fingerprint image equivalent to that produced by a conventional fingerprint reader, but with improved performance characteristics. In particular, the MSI sensor is able to collect usable biometric images in conditions where conventional sensors fail, such as when topical contaminants, moisture, or bright ambient lights are present, or when there is poor contact between the finger and the sensor. Furthermore, the MSI data can be processed to ensure that the measured optical characteristics match those of living human skin, providing a strong means to protect against attempts to spoof the sensor.
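
The abstract describes the composite-image generation only at a high level. As an illustration of the general idea, here is a minimal Python sketch that collapses a multispectral stack into a single composite image by weighting each spectral plane by its contrast; the weighting scheme, stack shape, and plane count are assumptions for illustration, not the MSI sensor's actual processing.

```python
import numpy as np

def composite_fingerprint(stack):
    """Collapse a multispectral stack (planes, H, W) into one composite image.

    Illustrative sketch: each plane is normalized, then weighted by its
    gradient energy so that planes carrying more ridge detail contribute
    more. This is a stand-in for the sensor's actual pipeline.
    """
    stack = stack.astype(np.float64)
    mean = stack.mean(axis=(1, 2), keepdims=True)
    std = stack.std(axis=(1, 2), keepdims=True) + 1e-9
    normed = (stack - mean) / std
    gy, gx = np.gradient(normed, axis=(1, 2))
    energy = (gx ** 2 + gy ** 2).mean(axis=(1, 2))   # crude contrast measure
    weights = energy / energy.sum()
    return np.tensordot(weights, normed, axes=1)     # weighted average

# Five illumination conditions of a 256x256 finger image (random stand-in).
rng = np.random.default_rng(0)
print(composite_fingerprint(rng.random((5, 256, 256))).shape)  # (256, 256)
```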


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

On the advantages of polar and log-polar mapping for direct estimation of time-to-impact from optical flow

Massimo Tistarelli; Giulio Sandini

The application of an anthropomorphic retina-like visual sensor and the advantages of polar and log-polar mapping for visual navigation are investigated. It is demonstrated that the motion equations that relate the egomotion and/or the motion of the objects in the scene to the optical flow are considerably simplified if the velocity is represented in a polar or log-polar coordinate system, as opposed to a Cartesian representation. The analysis is conducted for tracking egomotion but is then generalized to arbitrary sensor and object motion. The main result stems from the abundance of equations that can be written directly to relate the polar or log-polar optical flow to the time-to-impact. Experiments performed on images acquired from real scenes are presented.
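
The main result admits a compact statement: writing xi = log(r) for the image radius r, pure translation along the optical axis gives xi_dot = r_dot / r = 1 / tau, so the time-to-impact tau is the reciprocal of the log-polar radial flow, independent of image position. A minimal sketch with a toy numerical check (the expansion model and values are assumptions):

```python
import numpy as np

def time_to_impact_logpolar(xi_dot):
    """Time-to-impact from log-polar radial flow.

    With xi = log(r), pure translation along the optical axis gives
    xi_dot = (dr/dt) / r = 1 / tau, so the time-to-impact is simply the
    reciprocal of the radial log-polar velocity, the same at every pixel.
    """
    return 1.0 / xi_dot

# Toy check: a point at radius r expanding as r(t) = r0 / (1 - t/tau)
# has xi_dot = 1/tau at t = 0.
tau_true = 2.5          # seconds until contact (assumed)
r0, dt = 40.0, 1e-3     # initial radius (px) and time step (s)
r1 = r0 / (1.0 - dt / tau_true)
xi_dot = (np.log(r1) - np.log(r0)) / dt
print(time_to_impact_logpolar(xi_dot))  # ~2.5
```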


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1990

Active tracking strategy for monocular depth inference over multiple frames

Giulio Sandini; Massimo Tistarelli

The extraction of depth information from a sequence of images is investigated. An algorithm that exploits the constraint imposed by active motion of the camera is described. Within this framework, in order to facilitate measurement of the navigation parameters, a constrained egomotion strategy was adopted in which the position of the fixation point is stabilized during the navigation (in an anthropomorphic fashion). This constraint reduces the dimensionality of the parameter space without increasing the complexity of the equations. A further distinctive point is the use of two sampling rates: the faster (related to the computation of the instantaneous optical flow) is fast enough to allow the local operator to sense the passing edge (or, in other words, to allow the tracking of moving contour points), while the slower (used to perform the triangulation procedure necessary to derive depth) is slow enough to provide a sufficiently large baseline for triangulation. Experimental results on real image sequences are presented.
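
A minimal sketch of the two-sampling-rate idea, simplified to a lateral camera translation instead of the paper's fixation-stabilized egomotion: the fast rate tracks a contour point frame to frame, while the slow rate triangulates once over the full accumulated baseline. All numbers are illustrative assumptions.

```python
def depth_by_triangulation(x_first, x_last, baseline, focal):
    """Triangulate depth from two widely separated frames.

    Sketch under an assumed lateral translation: tracking at the fast rate
    supplies the matched coordinates x_first and x_last (pixels); the slow
    rate triangulates over the accumulated baseline (meters), so the
    disparity is well above the tracking noise.
    """
    disparity = x_first - x_last
    return baseline * focal / disparity

# Fast rate: track a point over 30 frames (simulated pinhole projections).
focal, step, z_true = 500.0, 0.01, 2.0            # px, m/frame, m
xs = [focal * (1.0 - i * step) / z_true for i in range(31)]
# Slow rate: one triangulation over the full 0.3 m baseline.
print(depth_by_triangulation(xs[0], xs[-1], 30 * step, focal))  # 2.0
```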


CVGIP: Image Understanding | 1992

Dynamic aspects in active vision

Massimo Tistarelli; Giulio Sandini

The term active stresses the role of motion or, generally speaking, the dynamic interaction of the observer with the environment. This concept emphasizes the relevance of determining scene properties from the temporal evolution of image features. Within the active vision approach, two different aspects are considered: the advantages of space-variant vision for dynamic visual processing and the qualitative analysis of optical flow. A space-variant sampling of the image plane has many good properties in relation to active vision. In particular, two examples are presented in which a log-polar representation is used for active vergence control and to estimate the time-to-impact during tracking egomotion. These are just two modules which could fit into a complete active vision system, but they already highlight the advantages of space-variant sensing within the active vision paradigm. In the second part of the paper the extraction of qualitative properties from the optical flow is discussed. A new methodology is proposed in which optical flows are analyzed in terms of anomalies, i.e., unexpected velocity patterns or inconsistencies with respect to some predicted dynamic feature. Two kinds of knowledge are necessary: knowledge of the dynamics of the scene (for example, the approximate motion of the camera) and knowledge of the task to be accomplished, which is analogous to (qualitatively) knowing the kind of scene one would expect to see. The first simply implies measuring some motion parameters (for example, with inertial sensors) directly on the camera, or imposing some constraints on the egomotion. The second requirement implies that the visual process is task-driven. Some examples are presented in which the method is successfully applied to robotic tasks.
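
The anomaly analysis in the second part reduces to a residual test: compare the measured flow with the flow predicted from the (approximately) known egomotion and flag large deviations. A minimal sketch; the array layout, threshold, and toy fields are assumptions:

```python
import numpy as np

def flow_anomalies(measured_flow, predicted_flow, thresh=1.0):
    """Flag optical-flow vectors that deviate from the expected pattern.

    Sketch of the qualitative analysis: predicted_flow is the velocity
    field expected from the known camera motion, measured_flow is the
    field computed from the images; both are (H, W, 2) arrays in
    pixels/frame. Residuals above `thresh` are flagged as anomalies,
    e.g. independently moving objects.
    """
    residual = np.linalg.norm(measured_flow - predicted_flow, axis=-1)
    return residual > thresh

# Toy example: uniform expected flow, one region moving differently.
H, W = 64, 64
predicted = np.tile([1.0, 0.0], (H, W, 1))
measured = predicted.copy()
measured[20:30, 20:30] = [0.0, 2.0]      # unexpected motion
print(flow_anomalies(measured, predicted).sum())  # 100 anomalous pixels
```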


IEEE Transactions on Systems, Man, and Cybernetics | 1989

3D object reconstruction using stereo and motion

Enrico Grosso; Giulio Sandini; Massimo Tistarelli

The extraction of reliable range data from images is investigated, considering, as a possible solution, the integration of different sensor modalities. Two different algorithms are used to obtain independent estimates of depth from a sequence of stereo images. The results are integrated on the basis of the uncertainty of each measure. The stereo algorithm uses a coarse-to-fine control strategy to compute disparity. An algorithm for depth-from-motion is used, exploiting the constraint imposed by active motion of the cameras. To obtain a 3D description of the objects, the motion of the cameras is purposefully controlled, in such a manner as to move around the objects in view while the gaze is directed toward a fixed point in space. This egomotion strategy, which is similar to that adopted by the human visuomotor system, allows a better exploration of partially occluded objects and simplifies the motion equations. When tested on real scenes, the algorithm demonstrated a low sensitivity to image noise, mainly due to the integration of independent measures. An experiment performed on a real scene containing several objects is presented.
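
The integration step can be illustrated with inverse-variance weighting, the minimum-variance way to combine independent Gaussian measurements; whether this matches the paper's exact uncertainty model is not stated in the abstract, so it is only a sketch:

```python
def fuse_depth(z_stereo, var_stereo, z_motion, var_motion):
    """Fuse two independent depth estimates by inverse-variance weighting.

    Sketch: each measurement is weighted by the reciprocal of its variance,
    which is optimal for independent Gaussian errors. The variances are
    assumed inputs; the paper derives uncertainties from each algorithm.
    """
    w_s, w_m = 1.0 / var_stereo, 1.0 / var_motion
    z = (w_s * z_stereo + w_m * z_motion) / (w_s + w_m)
    return z, 1.0 / (w_s + w_m)   # fused depth and its variance

# The more certain motion estimate dominates the fused value.
z, var = fuse_depth(z_stereo=2.2, var_stereo=0.09, z_motion=2.0, var_motion=0.01)
print(round(z, 3), round(var, 4))  # 2.02 0.009
```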


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995

Active/dynamic stereo vision

Enrico Grosso; Massimo Tistarelli

Visual navigation is a challenging issue in automated robot control. In many robot applications, like object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context the detection of corridors of free space along the robot trajectory is a very important capability which requires nontrivial visual processing. In most cases it is possible to take advantage of the active control of the cameras. In this paper we propose a cooperative schema in which motion and stereo vision are used to infer scene structure and determine free space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time to impact and depth scaled by the distance of the camera from the fixation point in space are considered good relative measurements, based on the viewer but centered on the environment. The need for calibrated parameters is considerably reduced by using an active control strategy. The cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data. The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.
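
One of the relative measurements named above, depth scaled by the fixation distance, can be sketched under a simplified parallel-camera disparity model (Z = b*f/d); the verging-head geometry in the paper is more involved, so treat this as an assumption-laden illustration:

```python
import numpy as np

def depth_scaled_by_fixation(disparity, fixation_disparity):
    """Depth relative to the fixation distance, from binocular disparity.

    Sketch under a parallel-camera model where Z = b*f/d: the ratio of the
    fixation-point disparity to the per-pixel disparity gives depth in
    units of the (unknown) fixation distance, i.e. a viewer-based but
    environment-centered relative measurement needing no calibration of b*f.
    """
    return fixation_disparity / disparity

# Pixels with half the fixation disparity lie twice as far away.
print(depth_scaled_by_fixation(np.array([10.0, 5.0, 20.0]), 10.0))  # [1. 2. 0.5]
```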


Image and Vision Computing | 2000

Active vision-based face authentication

Massimo Tistarelli; Enrico Grosso

The use of biometric data for automated identity verification is one of the major challenges in secure access control systems. In this paper, several issues related to the application of active vision techniques for identity verification using facial images are discussed, and a practical system (developed within a European research project) encompassing the active vision paradigm is described. The system, originally devised for banking applications, uses a pair of active tracking cameras to fixate the face of the subject and extract space-variant images (namely “fixations”) from the most relevant facial features. These features are automatically extracted with a two-level algorithm which uses a morphological filtering stage for coarse localization, followed by adaptive template matching. A simple matching algorithm, based on a space-variant representation of facial features, is applied for identity verification and compared with a technique based on Principal Component Analysis. Several experiments on identity verification, performed on real images, are presented.
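
For the PCA baseline the paper compares against (not its space-variant matcher), a minimal eigenface-style verification sketch follows; the gallery size, component count, and acceptance threshold are illustrative assumptions:

```python
import numpy as np

def fit_pca(gallery, n_components=20):
    """Fit an eigenface basis to a gallery of flattened face images.

    Sketch of the PCA comparison technique: rows of `gallery` are
    flattened grayscale faces; the right singular vectors of the
    centered gallery serve as the eigenface basis.
    """
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]

def verify(probe, claimed_template, mean, basis, thresh=50.0):
    """Accept the identity claim if the projected distance is small."""
    return np.linalg.norm(basis @ (probe - mean) - claimed_template) < thresh

# Enrollment and a toy verification attempt on random stand-in data.
rng = np.random.default_rng(1)
gallery = rng.random((40, 32 * 32))          # 40 enrolled 32x32 faces
mean, basis = fit_pca(gallery)
template = basis @ (gallery[0] - mean)       # enrolled user's template
print(verify(gallery[0], template, mean, basis))  # True (distance is 0)
```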


International Conference on Robotics and Automation | 1990

Using camera motion to estimate range for robotic parts manipulation

David Vernon; Massimo Tistarelli

A technique is described for determining a depth map of parts in bins using optical flow derived from camera motion. Simple programmed camera motions are generated by mounting the camera on the robot end effector and directing the effector along a known path. The results achieved using two simple trajectories, where one is along the optical axis and the other is in rotation about a fixation point, are detailed. Optical flow is estimated by computing the time derivative of a sequence of images, i.e. by forming differences between two successive images and, in particular, matching between contours in images that have been generated from the zero crossings of Laplacian of Gaussian-filtered images. Once the flow field has been determined, a depth map is computed utilizing the parameters of the known camera trajectory. Empirical results are presented for a calibration object and two bins of parts; these are compared with the theoretical precision of the technique, and it is demonstrated that a ranging accuracy on the order of two parts in 100 is achievable.
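
For the first trajectory, translation along the optical axis, depth follows directly from the radial flow: r_dot = r * v_z / Z, hence Z = v_z * r / r_dot. A minimal sketch with assumed values (the paper obtains the flow from contours matched between zero crossings of LoG-filtered images):

```python
def depth_from_axial_motion(r, r_dot, v_z):
    """Depth from radial optical flow under known axial camera motion.

    Sketch of the first programmed trajectory: a camera translating along
    its optical axis at speed v_z sees a point at image radius r (pixels
    from the principal point) flow radially at r_dot = r * v_z / Z, so
    Z = v_z * r / r_dot. The flow values are assumed inputs here.
    """
    return v_z * r / r_dot

# 1 cm/frame motion; a point 100 px off-center flowing at 0.5 px/frame.
print(depth_from_axial_motion(r=100.0, r_dot=0.5, v_z=0.01))  # 2.0 (meters)
```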


Image and Vision Computing | 1990

Estimation of depth from motion using an anthropomorphic visual sensor

Massimo Tistarelli; Giulio Sandini

The application of an anthropomorphic, retina-like visual sensor for optical flow and depth estimation is presented. The main advantage, obtained with the non-uniform sampling, is considerable data reduction, while a high spatial resolution is preserved in the part of the field of view corresponding to the focus of attention. As for depth estimation, a tracking egomotion strategy is adopted which greatly simplifies the motion equations, and naturally fits with the characteristics of the retinal sensor (the displacement is smaller wherever the image resolution is higher). A quantitative error analysis is carried out, determining the uncertainty of range measurements. An experiment, performed on a real image sequence, is presented.
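
The data reduction from the non-uniform sampling can be made concrete by resampling a Cartesian image on a retina-like log-polar grid; the ring and sector counts and the nearest-neighbor interpolation are assumptions, not the actual sensor layout:

```python
import numpy as np

def log_polar_sample(img, n_rings=32, n_sectors=64, r_min=2.0):
    """Resample a Cartesian image on a retina-like log-polar grid.

    Sketch of the space-variant geometry: ring radii grow exponentially
    from r_min to the image border, so resolution is high at the center
    (focus of attention) and coarse in the periphery, giving a large data
    reduction. Nearest-neighbor sampling; grid sizes are assumptions.
    """
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1.0
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = 2.0 * np.pi * np.arange(n_sectors) / n_sectors
    ys = (cy + radii[:, None] * np.sin(thetas)).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)).round().astype(int)
    return img[ys, xs]   # shape (n_rings, n_sectors)

img = np.arange(256 * 256, dtype=float).reshape(256, 256)
print(img.size, "->", log_polar_sample(img).size)  # 65536 -> 2048 samples
```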


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Multiple constraints to compute optical flow

Massimo Tistarelli

The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features. In this paper, we consider the implications of using multiple constraints in the computational schema. In the first step, it is shown that differential constraints correspond to an implicit feature tracking. Therefore, the best results (both in terms of measurement accuracy and computational speed) are obtained by selecting and applying the constraints which are best tuned to the particular image feature under consideration. Considering multiple image points not only allows us to obtain a (locally) better estimate of the velocity field, but also to detect erroneous measurements due to discontinuities in the velocity field. Moreover, by hypothesizing a constant-acceleration motion model, the derivatives of the optical flow are also computed. Several experiments on real image sequences are presented.
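
The multiple-constraint schema can be sketched at a single image point: brightness constancy and constancy of the two spatial-gradient components each contribute one linear equation in the flow (u, v), and stacking them gives an overdetermined least-squares system whose residual flags unreliable estimates (for example, near velocity-field discontinuities). The particular constraint set below is one common choice, not necessarily the paper's exact selection:

```python
import numpy as np

def flow_from_constraints(Ix, Iy, It, Ixx, Ixy, Iyy, Ixt, Iyt):
    """Solve for (u, v) at one point from multiple differential constraints.

    Sketch: the brightness constancy equation and the constancy of both
    spatial-gradient components each give one linear equation in (u, v);
    stacking them yields an overdetermined system solved by least squares.
    The derivative values are assumed inputs (e.g. finite differences).
    """
    A = np.array([[Ix,  Iy],    # brightness constancy
                  [Ixx, Ixy],   # constancy of the x-gradient
                  [Ixy, Iyy]])  # constancy of the y-gradient
    b = -np.array([It, Ixt, Iyt])
    flow, residual, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return flow, residual   # large residuals flag unreliable estimates

# Derivatives consistent with a true flow of (u, v) = (1.0, -0.5).
u, v = 1.0, -0.5
Ix, Iy, Ixx, Ixy, Iyy = 0.8, 0.3, 0.2, 0.05, 0.4
It, Ixt, Iyt = -(Ix * u + Iy * v), -(Ixx * u + Ixy * v), -(Ixy * u + Iyy * v)
print(flow_from_constraints(Ix, Iy, It, Ixx, Ixy, Iyy, Ixt, Iyt)[0])  # [ 1.  -0.5]
```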

Collaboration


Dive into Massimo Tistarelli's collaboration.

Top Co-Authors

Giulio Sandini
Istituto Italiano di Tecnologia

Dakshina Ranjan Kisku
National Institute of Technology

Mark S. Nixon
University of Southampton

Phalguni Gupta
Indian Institute of Technology Kanpur

Ajita Rattani
University of Missouri–Kansas City