
Publication


Featured research published by Vittorio Murino.


Computer Vision and Pattern Recognition | 2010

Person re-identification by symmetry-driven accumulation of local features

Michela Farenzena; Loris Bazzani; Alessandro Perina; Vittorio Murino; Marco Cristani

In this paper, we present an appearance-based method for person re-identification. It consists of the extraction of features that model three complementary aspects of human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts and weighted appropriately by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions, and changes in pose, viewpoint, and illumination is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunches of frames for each individual. It has been tested on several public benchmark datasets (VIPeR, iLIDS, ETHZ), achieving new state-of-the-art performance.
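The symmetry principle mentioned above can be illustrated with a toy sketch: find the vertical axis of a pedestrian image that minimizes the color difference between the two mirrored halves. This is a greatly simplified stand-in for the paper's perceptual symmetry analysis; the search range and cost function here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def symmetry_axis(img):
    """Find the vertical axis x that minimizes the mean absolute
    difference between the left half and the mirrored right half
    (a toy version of the symmetry principle used in the paper)."""
    h, w = img.shape[:2]
    best_x, best_cost = w // 2, np.inf
    # search only the central band, as the axis of a pedestrian
    # silhouette is unlikely to sit near the image border
    for x in range(w // 4, 3 * w // 4):
        half = min(x, w - x)
        left = img[:, x - half:x].astype(float)
        right = img[:, x:x + half][:, ::-1].astype(float)
        cost = np.abs(left - right).mean()
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x
```

Once the axis is known, per-part color histograms can be weighted by their distance from it, so that features near the symmetry axis (usually the body core) count more than those near the noisy silhouette border.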


British Machine Vision Conference | 2011

Custom Pictorial Structures for Re-identification

Dong Seon Cheng; Marco Cristani; Michele Stoppa; Loris Bazzani; Vittorio Murino

We propose a novel methodology for re-identification, based on Pictorial Structures (PS). Whenever face or other biometric information is missing, humans recognize an individual by selectively focusing on body parts, looking for part-to-part correspondences. We take inspiration from this strategy in a re-identification context, using PS to achieve this objective. For single-image re-identification, we adopt PS to localize the parts and to extract and match their descriptors. When multiple images of a single individual are available, we propose a new algorithm to customize the fit of PS to that specific person, leading to what we call a Custom Pictorial Structure (CPS). CPS learns the appearance of an individual, improving the localization of its parts and thus obtaining more reliable visual characteristics for re-identification. It is based on the statistical learning of pixel attributes collected through spatio-temporal reasoning. The use of PS and CPS leads to state-of-the-art results on all the available public benchmarks, and opens a fresh direction for research on re-identification.
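The single-image matching step described above, once PS has localized the body parts and a descriptor has been extracted per part, reduces to comparing per-part descriptors and aggregating the distances. A minimal sketch, assuming Euclidean distance and hypothetical part names (the paper's actual descriptors and metric differ):

```python
import numpy as np

def part_based_distance(parts_a, parts_b):
    """Toy single-shot matching in the spirit of PS re-identification:
    given per-part descriptors (e.g. one color histogram per body part,
    already localized by Pictorial Structures), sum the per-part
    Euclidean distances. Part names and the metric are illustrative."""
    assert parts_a.keys() == parts_b.keys(), "descriptors must cover the same parts"
    total = 0.0
    for name in parts_a:
        a = np.asarray(parts_a[name], dtype=float)
        b = np.asarray(parts_b[name], dtype=float)
        total += float(np.linalg.norm(a - b))
    return total
```

In a gallery search, the probe is matched against every gallery identity and the smallest aggregate distance wins; CPS improves this pipeline by making the part localization, and hence each per-part descriptor, more reliable.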


Computer Graphics Forum | 2008

Sparse points matching by combining 3D mesh saliency with statistical descriptors

Umberto Castellani; Marco Cristani; Simone Fantoni; Vittorio Murino

This paper proposes a new methodology for the detection and matching of salient points over several views of an object. The process is composed of three main phases. In the first step, detection is carried out by adopting a new perceptually inspired 3D saliency measure. This measure allows the detection of a few sparse salient points that characterize distinctive portions of the surface. In the second step, a statistical learning approach is adopted to describe salient points across different views. Each salient point is modelled by a Hidden Markov Model (HMM), which is trained in an unsupervised way using contextual 3D neighborhood information, thus providing a robust and invariant point signature. Finally, in the third step, matching among points of different views is performed by evaluating a pairwise similarity measure among HMMs. An extensive and comparative experimental session has been carried out, considering real objects acquired by a 3D scanner from different points of view, where objects come from standard 3D databases. Results are promising: the detection of salient points is reliable, and the matching is robust and accurate.
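The first phase, selecting a few sparse salient points, can be sketched as a greedy top-k selection with a minimum-distance constraint. This is only an illustration of the "few sparse points" idea: the paper operates on 3D meshes with a perceptually inspired saliency, whereas here `saliency` is just a 2D score map and `min_dist` is an assumed parameter.

```python
import numpy as np

def sparse_salient_points(saliency, k=5, min_dist=3):
    """Keep the k highest-scoring points, greedily enforcing a minimum
    pairwise distance so the selection stays sparse (toy stand-in for
    the paper's 3D mesh saliency detection)."""
    # coordinates sorted by decreasing saliency score
    order = np.argsort(saliency, axis=None)[::-1]
    coords = np.dstack(np.unravel_index(order, saliency.shape))[0]
    chosen = []
    for p in coords:
        if all(np.linalg.norm(p - q) >= min_dist for q in chosen):
            chosen.append(p)
        if len(chosen) == k:
            break
    return [tuple(map(int, p)) for p in chosen]
```

The sparsity constraint matters downstream: well-separated points carry distinctive local neighborhoods, which is what makes the per-point HMM signatures discriminative when matching across views.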


Computer Vision and Image Understanding | 2013

Symmetry-driven accumulation of local features for human characterization and re-identification

Loris Bazzani; Marco Cristani; Vittorio Murino

This work proposes a method to characterize the appearance of individuals by exploiting body visual cues. The method is based on a symmetry-driven appearance-based descriptor and a matching policy that allows an individual to be recognized. The descriptor encodes three complementary visual characteristics of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. The characteristics are extracted by following symmetry and asymmetry perceptual principles, which allow meaningful body parts to be segregated and attention to be focused on the human body only, pruning out the background clutter. The descriptor handles both the case where a single image of the individual is available and the case where multiple pictures of the same identity are available, as in a tracking scenario. The descriptor is dubbed Symmetry-Driven Accumulation of Local Features (SDALF). Our approach is applied to two different scenarios: re-identification and multi-target tracking. In the former, we show the capabilities of SDALF in encoding peculiar aspects of an individual, focusing on its robustness to very low-resolution images, occlusions, pose changes, and variations in viewpoint and scene illumination. SDALF has been tested on various benchmark datasets, generally obtaining convincing performance and setting the state of the art in some cases. The latter scenario shows the benefits of using SDALF as an observation model for different trackers, improving their performance in several respects on the CAVIAR dataset.


The Plant Cell | 2012

The grapevine expression atlas reveals a deep transcriptome shift driving the entire plant into a maturation program.

Marianna Fasoli; Silvia Dal Santo; Sara Zenoni; Giovanni Battista Tornielli; Lorenzo Farina; Anita Zamboni; Andrea Porceddu; Luca Venturini; Manuele Bicego; Vittorio Murino; Alberto Ferrarini; Massimo Delledonne; Mario Pezzotti

The authors developed a comprehensive transcriptome atlas in grapevine by comparing the genes expressed in 54 diverse samples accounting for ∼91% of all known grapevine genes. Using a panel of different statistical techniques, they found that the whole plant undergoes transcriptomic reprogramming, driving it towards maturity. We developed a genome-wide transcriptomic atlas of grapevine (Vitis vinifera) based on 54 samples representing green and woody tissues and organs at different developmental stages as well as specialized tissues such as pollen and senescent leaves. Together, these samples expressed ∼91% of the predicted grapevine genes. Pollen and senescent leaves had unique transcriptomes reflecting their specialized functions and physiological status. However, microarray and RNA-seq analysis grouped all the other samples into two major classes based on maturity rather than organ identity, namely, the vegetative/green and mature/woody categories. This division represents a fundamental transcriptomic reprogramming during the maturation process and was highlighted by three statistical approaches identifying the transcriptional relationships among samples (correlation analysis), putative biomarkers (O2PLS-DA approach), and sets of strongly and consistently expressed genes that define groups (topics) of similar samples (biclustering analysis). Gene coexpression analysis indicated that the mature/woody developmental program results from the reiterative coactivation of pathways that are largely inactive in vegetative/green tissues, often involving the coregulation of clusters of neighboring genes and global regulation based on codon preference. This global transcriptomic reprogramming during maturation has not been observed in herbaceous annual species and may be a defining characteristic of perennial woody plants.


EURASIP Journal on Advances in Signal Processing | 2010

Background subtraction for automated multisensor surveillance: a comprehensive review

Marco Cristani; Michela Farenzena; Domenico Daniele Bloisi; Vittorio Murino

Background subtraction is a widely used operation in video surveillance, aimed at separating the expected scene (the background) from unexpected entities (the foreground). Several problems are related to this task, mainly due to the blurred boundaries between background and foreground definitions. Therefore, background subtraction is an open issue worth addressing from different points of view. In this paper, we propose a comprehensive review of background subtraction methods that also considers channels other than the visible optical one (such as the audio and infrared channels). In addition to the definition of novel kinds of background, the perspectives that these approaches open up are very appealing: in particular, the multisensor direction seems well suited to solve or simplify several long-standing background subtraction problems. All the reviewed methods are organized in a novel taxonomy that seamlessly encapsulates the newest approaches.
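One of the simplest schemes a review like this covers is a per-pixel single-Gaussian background model: each pixel keeps a running mean and variance, and a pixel is flagged as foreground when it deviates too far. A minimal sketch, with assumed parameter names and thresholds (not taken from the paper):

```python
import numpy as np

class RunningGaussianBG:
    """Toy per-pixel background model: one Gaussian per pixel, updated
    with an exponential running average. Parameters are illustrative."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var
        # update only where the pixel matched the background, so that
        # foreground objects do not pollute the model
        bg = ~fg
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

The review's point is that the same "expected model vs. observed signal" template carries over to other channels: an audio background can be a running spectral profile, an infrared background a running temperature map, with the same thresholded-deviation test.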


Proceedings of the IEEE | 2000

Three-dimensional image generation and processing in underwater acoustic vision

Vittorio Murino; Andrea Trucco

Underwater exploration is becoming more and more important for many applications involving physical, biological, geological, archaeological, and industrial issues. This paper surveys up-to-date advances in acoustic acquisition systems and data processing techniques, focusing especially on three-dimensional (3-D) short-range imaging for scene reconstruction and understanding. In fact, the advent of smarter and more efficient imaging systems has allowed the generation of good-quality high-resolution images and the related design of proper techniques for underwater scene understanding. The term acoustic vision is introduced to describe all data processing (especially image processing) methods devoted to the interpretation of a scene. Since acoustics is also used for medical applications, a short overview of the related systems for biomedical acoustic image formation is provided. The final goal of the paper is to establish the state of the art of the techniques and algorithms for acoustic image generation and processing, providing technical details and results for the most promising techniques, and pointing out the potential capabilities of this technology for underwater scene understanding.


Pattern Recognition Letters | 2012

Multiple-shot person re-identification by chromatic and epitomic analyses

Loris Bazzani; Marco Cristani; Alessandro Perina; Vittorio Murino

Highlights:
- We propose a novel appearance-based method for person re-identification.
- It condenses a set of frames of the same individual into an informative signature.
- It incorporates complementary global and local statistical descriptions of the human appearance.
- Semantic segmentation of objects is exploited to define a part-based descriptor.
- The resulting descriptor achieves state-of-the-art results on the considered datasets.

We propose a novel appearance-based method for person re-identification that condenses a set of frames of an individual into a highly informative signature, called the Histogram Plus Epitome (HPE). It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content via a histogram representation and on the presence of recurrent local patches via epitomic analysis. The re-identification performance of HPE is then augmented by applying it as a human part descriptor, defining a structured feature called the asymmetry-based HPE (AHPE). The matching between (A)HPEs provides optimal performance against low resolution, occlusions, and pose and illumination variations, achieving state-of-the-art results on all the considered datasets.
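The multiple-shot condensation idea, several frames of one person collapsed into one chromatic signature, can be sketched with plain intensity histograms and a Bhattacharyya similarity for matching. This covers only the "global chromatic content" half of HPE in a greatly simplified form; bin count and similarity measure are assumptions.

```python
import numpy as np

def multishot_signature(frames, bins=16):
    """Condense several frames of one person into a single signature by
    accumulating normalized intensity histograms (toy stand-in for the
    global-histogram component of HPE)."""
    hist = np.zeros(bins)
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hist += h / h.sum()
    return hist / len(frames)

def bhattacharyya(p, q):
    """Similarity between two normalized histograms: 1 = identical
    distributions, 0 = disjoint support."""
    return float(np.sum(np.sqrt(p * q)))
```

The epitome half of HPE complements this: where the histogram captures *how much* of each color is present, the epitome captures *which local patches* recur, so the two descriptions fail in different conditions.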


IEEE Transactions on Multimedia | 2007

Audio-Visual Event Recognition in Surveillance Video Sequences

Marco Cristani; Manuele Bicego; Vittorio Murino

In the automated surveillance field, automatic scene analysis and understanding systems typically consider only visual information, whereas other modalities, such as audio, are usually disregarded. This paper presents a new method able to integrate audio and visual information for scene analysis in a typical surveillance scenario, using only one camera and one monaural microphone. Visual information is analyzed by a standard visual background/foreground (BG/FG) modelling module, enhanced with a novelty detection stage and coupled with an audio BG/FG modelling scheme. These processes permit the detection of separate audio and visual patterns representing unusual unimodal events in a scene. The integration of audio and visual data is subsequently performed by exploiting the concept of synchrony between such events. The audio-visual (AV) association is carried out online and without the need for training sequences, and is based on the computation of a characteristic feature called the audio-video concurrence matrix, allowing one to detect and segment AV events, as well as to discriminate between them. Experimental tests involving classification and clustering of events show the potential of the proposed approach, also in comparison with the results obtained by employing the single modalities and without considering the synchrony issue.
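The synchrony idea can be illustrated with a toy concurrence count: for each pair of audio and video event types, count how often events of those types occur within a short time window of each other. This is a simplified stand-in for the paper's audio-video concurrence matrix; the event representation and window size are assumptions.

```python
from collections import defaultdict

def concurrence_matrix(audio_events, video_events, window=1.0):
    """Toy audio-video concurrence count. Events are (timestamp, label)
    pairs; two events are concurrent when their timestamps are within
    `window` seconds of each other."""
    counts = defaultdict(int)
    for t_a, a in audio_events:
        for t_v, v in video_events:
            if abs(t_a - t_v) <= window:
                counts[(a, v)] += 1
    return dict(counts)
```

Pairs of event types that co-occur far more often than chance are candidates for a single audio-visual event, which is what lets the system segment and discriminate AV events without any training sequences.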


British Machine Vision Conference | 2011

Social interaction discovery by statistical analysis of F-formations

Marco Cristani; Loris Bazzani; Giulia Paggetti; Andrea Fossati; Diego Tosato; Alessio Del Bue; Gloria Menegaz; Vittorio Murino

We present a novel approach for detecting social interactions in a crowded scene by employing solely visual cues. The detection of social interactions in unconstrained scenarios is a valuable and important task, especially for surveillance purposes. Our proposal is inspired by the social signaling literature; in particular, it considers the sociological notion of the F-formation. An F-formation is a set of possible configurations in space that people may assume while participating in a social interaction. Our system takes as input the positions of the people in a scene and their (head) orientations; then, employing a voting strategy based on the Hough transform, it recognizes F-formations and the individuals associated with them. Experiments on simulated and real data support our approach.
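The Hough-style voting can be sketched in a few lines: each person casts a vote for an o-space center a fixed stride ahead of them, votes are binned into accumulator cells, and cells collecting votes from two or more people suggest an F-formation. This is a deterministic toy version, the paper casts clouds of weighted samples rather than one vote per person, and the stride and cell size here are assumed values.

```python
import math
from collections import defaultdict

def f_formation_votes(people, stride=1.0, cell=0.5):
    """Toy Hough voting for F-formation o-space centres.
    `people` is a list of (x, y, heading) tuples; each person votes
    `stride` metres ahead along their heading, and votes are binned
    into a grid of `cell`-sized accumulator cells."""
    acc = defaultdict(list)
    for i, (x, y, theta) in enumerate(people):
        cx = x + stride * math.cos(theta)
        cy = y + stride * math.sin(theta)
        key = (round(cx / cell), round(cy / cell))
        acc[key].append(i)
    # cells with at least two voters suggest a social interaction
    return [ids for ids in acc.values() if len(ids) >= 2]
```

Two people standing two metres apart and facing each other vote for the same midpoint cell and are grouped, while a bystander facing elsewhere votes alone and is not.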

Collaboration

Top co-authors of Vittorio Murino:

- Diego Sona, Istituto Italiano di Tecnologia
- Alessio Del Bue, Istituto Italiano di Tecnologia
- Loris Bazzani, Istituto Italiano di Tecnologia