Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where A. Del Bue is active.

Publication


Featured research published by A. Del Bue.


Computer Vision and Pattern Recognition | 2006

Non-Rigid Metric Shape and Motion Recovery from Uncalibrated Images Using Priors

A. Del Bue; X. Lladó; Lourdes Agapito

In this paper we focus on the estimation of the 3D Euclidean shape and motion of a non-rigid object which is moving rigidly while deforming and is observed by a perspective camera. Our method exploits the often reasonable assumption that some of the points deform throughout the sequence while others remain rigid. First, we use an automatic segmentation algorithm to identify the set of rigid points, which in turn is used to estimate the internal camera calibration parameters and the overall rigid motion. Finally, we formalise the problem of non-rigid shape estimation as a constrained non-linear minimization, adding priors on the degree of deformability of each point. We perform experiments on synthetic and real data which show, first, that even when using a minimal set of rigid points it is possible to obtain reliable metric information and, second, that the shape priors help to disambiguate the contribution to the image motion caused by the deformation and the perspective distortion.
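A minimal numerical sketch of the deformability-prior idea is given below, assuming toy orthographic cameras and synthetic tracks (this is an illustration, not the paper's implementation; the names deform_weight and residuals are hypothetical): a reprojection term is combined with a per-point penalty on deviation from the mean 3D shape, so that points with a large weight are pushed towards rigidity.

import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch only: toy orthographic cameras, hypothetical variable names.
F, P = 5, 10                                   # frames, points
rng = np.random.default_rng(0)
R = np.tile(np.eye(2, 3), (F, 1, 1))           # toy orthographic projections (2x3 per frame)
X_true = rng.normal(size=(F, P, 3))            # per-frame 3D shape (deforming)
W = np.einsum('fij,fpj->fpi', R, X_true)       # observed 2D tracks, shape (F, P, 2)
deform_weight = np.full(P, 0.1)                # prior weight: a large value means "assumed rigid"

def residuals(x):
    X = x.reshape(F, P, 3)
    reproj = (np.einsum('fij,fpj->fpi', R, X) - W).ravel()
    mean_shape = X.mean(axis=0)                # average 3D shape over the sequence
    prior = (deform_weight * np.linalg.norm(X - mean_shape, axis=2)).ravel()
    return np.concatenate([reproj, prior])

sol = least_squares(residuals, rng.normal(size=F * P * 3))
print("final cost:", sol.cost)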


Image and Vision Computing | 2007

Non-rigid structure from motion using ranklet-based tracking and non-linear optimization

A. Del Bue; Fabrizio Smeraldi; Lourdes Agapito

In this paper, we address the problem of estimating the 3D structure and motion of a deformable object given a set of image features tracked automatically throughout a video sequence. Our contributions are twofold: firstly, we propose a new approach to improve motion and structure estimates using a non-linear optimization scheme, and secondly, we propose a tracking algorithm based on ranklets, a recently developed family of orientation-selective rank features. It has been shown that if the 3D deformations of an object can be modeled as a linear combination of shape bases, then both its motion and shape may be recovered using an extension of Tomasi and Kanade's factorization algorithm for affine cameras. Crucially, these new factorization methods are model-free and work purely from video in an unconstrained case: a single uncalibrated camera viewing an arbitrary 3D surface which is moving and articulating. The main drawback of existing methods is that they do not provide correct structure and motion estimates: the motion matrix has a repetitive structure which is not respected by the factorization algorithm. In this paper, we present a non-linear optimization method to refine the motion and shape estimates which minimizes the image reprojection error and imposes the correct structure onto the motion matrix by choosing an appropriate parameterization. Factorization algorithms require as input a set of feature tracks or correspondences found throughout the image sequence. The challenge here is to track the features while the object is deforming and the image appearance is therefore changing. We propose a model-free tracking algorithm based on ranklets, a multi-scale family of rank features that present an orientation selectivity pattern similar to Haar wavelets. A vector of ranklets is used to encode an appearance-based description of a neighborhood of each tracked point. Robustness is enhanced by adapting, for each point, the shape of the filters to the structure of the particular neighborhood. A stack of models is maintained for each tracked point in order to manage large appearance variations with limited drift. Our experiments on sequences of a human subject performing different facial expressions show that this tracker provides a good set of feature correspondences for the non-rigid 3D reconstruction algorithm.
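For context, the snippet below shows the classic rank-3 Tomasi-Kanade affine factorization that the non-rigid methods discussed here extend; it handles only the rigid case on synthetic centred tracks and does not reproduce the ranklet tracker or the non-linear refinement described above.

import numpy as np

# Rigid, affine illustration only: factorize synthetic tracks W = M @ S via a rank-3 SVD.
rng = np.random.default_rng(1)
F, P = 8, 30
S = rng.normal(size=(3, P))                    # rigid 3D shape (3 x P)
M = rng.normal(size=(2 * F, 3))                # stacked affine camera rows (2F x 3)
W = M @ S                                      # measurement matrix of centred 2D tracks

U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])              # recovered motion, up to an affine ambiguity
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]       # recovered shape, up to the same ambiguity
print("reconstruction error:", np.linalg.norm(W - M_hat @ S_hat))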


Computer Vision and Pattern Recognition | 2008

A factorization approach to structure from motion with shape priors

A. Del Bue

This paper presents an approach for including 3D prior models into a factorization framework for structure from motion. The proposed method computes a closed-form affine fit which mixes the information from the data and the 3D prior on the shape structure. Moreover, it is general with regard to the different classes of objects treated: rigid, articulated and deformable. The inclusion of the shape prior may aid the inference of camera motion and 3D structure components whenever the data is degenerate (i.e. nearly planar motion of the projected shape). A final non-linear optimization stage, which includes the shape priors as a quadratic cost, upgrades the affine fit to metric. Results on real and synthetic image sequences, which present predominantly degenerate motion, clearly show the improvements in the 3D reconstruction.
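As an illustration of a closed-form affine fit that mixes image data with a 3D shape prior, the snippet below fits affine motion to synthetic tracks given an assumed prior shape via regularized least squares; the ridge weight lam and the notation are assumptions for illustration, not the paper's exact formulation.

import numpy as np

# Illustrative closed-form fit: M = argmin ||W - M S_prior||^2 + lam ||M||^2.
rng = np.random.default_rng(2)
F, P = 6, 25
S_prior = rng.normal(size=(3, P))              # assumed 3D prior shape
M_true = rng.normal(size=(2 * F, 3))
W = M_true @ S_prior + 0.01 * rng.normal(size=(2 * F, P))   # noisy 2D tracks

lam = 1e-3                                     # small ridge term, useful for degenerate data
M_fit = W @ S_prior.T @ np.linalg.inv(S_prior @ S_prior.T + lam * np.eye(3))
print("motion fit error:", np.linalg.norm(M_fit - M_true))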


International Conference on Image Processing | 2002

Smart cameras with real-time video object generation

A. Del Bue; Dorin Comaniciu; Visvanathan Ramesh; C. Regazzoni

The paper presents a system for video object generation and selective encoding with applications in surveillance, mobile videophones, and the automotive industry. Object tracking and MPEG-4 compression are performed in real-time. The system belongs to a new generation of intelligent vision sensors called smart cameras, which execute autonomous vision tasks and report events and data to a remote base-station. A detection module signals the presence of an object of interest within the camera field of view, while the tracking part follows the target to generate temporal trajectories. The compression is MPEG-4 compliant and implements the simple profile of the standard, which is capable of encoding up to four video objects. At the same time, the compression is selective, maintaining a higher quality for foreground objects and a lower quality for background representation. This property contributes to bandwidth reduction while preserving the essential information of foreground objects. The system performance is demonstrated in experiments that involve objects representing faces and vehicles seen from both static and moving cameras.
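A minimal sketch of the selective-quality idea follows: blocks overlapping a hypothetical foreground mask are quantized more finely than background blocks. This illustrates only the principle of selective encoding, not the MPEG-4 simple-profile encoder used in the paper.

import numpy as np

# Illustration only: coarser quantization for background blocks, finer for foreground blocks.
rng = np.random.default_rng(3)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
fg_mask = np.zeros((64, 64), dtype=bool)
fg_mask[16:48, 16:48] = True                   # hypothetical tracked object region

def quantize_blocks(img, mask, q_fg=4, q_bg=32, block=8):
    out = np.empty_like(img)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            q = q_fg if mask[r:r + block, c:c + block].any() else q_bg
            out[r:r + block, c:c + block] = np.round(img[r:r + block, c:c + block] / q) * q
    return out

coded = quantize_blocks(frame, fg_mask)
print("mean abs error, foreground:", np.abs(coded - frame)[fg_mask].mean())
print("mean abs error, background:", np.abs(coded - frame)[~fg_mask].mean())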


Computer Vision and Pattern Recognition | 2004

Non-Rigid Structure from Motion Using Non-Parametric Tracking and Non-Linear Optimization

A. Del Bue; Fabrizio Smeraldi; Lourdes Agapito

In this paper we address the problem of estimating the 3D structure and motion of a deformable non-rigid object from a sequence of uncalibrated images. It has been recently shown that if the deformation is modelled as a linear combination of basis shapes, both the motion and the 3D structure of the object may be recovered using an extension of Tomasi and Kanade's factorization algorithm for affine cameras. The main drawback of the existing methods is that the non-rigid factorization algorithm does not provide a correct estimate of the motion: the motion matrix has a repetitive structure which is not respected by the factorization algorithm. This also affects the estimation of the 3D shape. In this paper we present a non-linear optimization method which minimizes image reprojection error and imposes the correct structure onto the motion matrix by choosing an appropriate parameterization. In addition, we propose a novel non-rigid tracking algorithm based on the use of ranklets, a multiscale family of rank features. Finally, we show that improved motion and shape estimates are obtained on a real image sequence of a person's face which is moving and changing expression.
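The "repetitive structure" of the non-rigid motion matrix mentioned above can be sketched as follows: each frame contributes its truncated 2x3 rotation replicated once per basis shape and scaled by that frame's configuration weights. The symbols follow common non-rigid factorization notation and are not necessarily the paper's.

import numpy as np

def structured_motion(rotations, weights):
    """rotations: (F, 2, 3) truncated rotations; weights: (F, K) basis weights."""
    F, K = weights.shape
    M = np.zeros((2 * F, 3 * K))
    for f in range(F):
        for k in range(K):
            # Each 2x3 block repeats the frame rotation, scaled by one configuration weight.
            M[2 * f:2 * f + 2, 3 * k:3 * k + 3] = weights[f, k] * rotations[f]
    return M

rng = np.random.default_rng(4)
F, K = 5, 3
R = np.tile(np.eye(2, 3), (F, 1, 1))           # toy truncated rotations
c = rng.normal(size=(F, K))                    # per-frame configuration weights
print(structured_motion(R, c).shape)           # (2F, 3K) block-structured motion matrix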


Archive | 2013

Anomaly Detection in Crowded Scenes: A Novel Framework Based on Swarm Optimization and Social Force Modeling

R. Raghavendra; Marco Cristani; A. Del Bue; Enver Sangineto; Vittorio Murino

This chapter presents a novel scheme for analyzing crowd behavior from visual crowded scenes. The proposed method starts from the assumption that the interaction force, as estimated by the Social Force Model (SFM), is a significant feature for analyzing crowd behavior. We build on this hypothesis by optimizing this force using Particle Swarm Optimization (PSO) to perform the advection of a particle population spread randomly over the image frames. The population of particles is drifted towards the areas of the main image motion, driven by a PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused, normal behavior of the crowd. We then use this particle advection scheme to detect both global and local anomaly events in the crowded scene. A large set of experiments is carried out on publicly available datasets, and the results show the consistently higher performance of the proposed method compared to other state-of-the-art algorithms.
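The snippet below is a generic particle swarm optimization loop on a synthetic fitness map, sketching how a minimization-driven fitness can advect a particle population towards low-cost image areas; the toy force_map stands in for the Social Force Model estimate, which is not reproduced here.

import numpy as np

# Generic PSO illustration: the synthetic force_map replaces the SFM interaction force.
rng = np.random.default_rng(5)
H, W_img = 100, 100
yy, xx = np.mgrid[0:H, 0:W_img]
force_map = (xx - 70.0) ** 2 + (yy - 30.0) ** 2   # toy fitness with a minimum at (x=70, y=30)

N, iters = 50, 40
pos = rng.uniform(0, [W_img, H], size=(N, 2))     # particle positions (x, y)
vel = np.zeros((N, 2))
pbest, pbest_val = pos.copy(), np.full(N, np.inf)
gbest, gbest_val = None, np.inf

def fitness(p):
    x = np.clip(p[:, 0], 0, W_img - 1).astype(int)
    y = np.clip(p[:, 1], 0, H - 1).astype(int)
    return force_map[y, x]

for _ in range(iters):
    val = fitness(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    if pbest_val.min() < gbest_val:
        gbest_val, gbest = pbest_val.min(), pbest[pbest_val.argmin()].copy()
    r1, r2 = rng.random((N, 2)), rng.random((N, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

print("swarm converged near:", gbest)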


Journal of Mathematical Imaging and Vision | 2016

Direct Differential Photometric Stereo Shape Recovery of Diffuse and Specular Surfaces

Silvia Tozza; Roberto Mecca; Martí Duocastella; A. Del Bue

Recovering the 3D shape of an object from shading is a challenging problem due to the complexity of modeling light propagation and surface reflections. Photometric Stereo (PS) is broadly considered a suitable approach for high-resolution shape recovery, but its applicability is restricted to a limited set of object surfaces and a controlled lighting setup. In particular, PS models generally consider reflection from objects as purely diffuse, with specularities being regarded as a nuisance that breaks down shape reconstruction. This is a serious drawback for implementing PS approaches, since most common materials have prominent specular components. In this paper, we propose a PS model that handles both diffuse and specular components and is aimed at shape recovery of generic objects; the approach is independent of the albedo values thanks to the image ratio formulation used. Notably, we show that by including specularities, it is possible to solve the PS problem for a minimal number of three images using a setup with three calibrated lights and a standard industrial camera. Even if an initial separation of diffuse and specular components is still required for each input image, experimental results on synthetic and real objects demonstrate the feasibility of our approach for shape reconstruction of complex geometries.
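A diffuse-only sketch of the image-ratio idea is shown below: ratios of Lambertian intensities cancel the unknown albedo, yielding linear constraints n · (I_j l_i - I_i l_j) = 0 on the surface normal, which three calibrated lights are enough to solve. The paper's full model also handles the specular component, which is omitted here.

import numpy as np

# Diffuse-only, single-pixel illustration of the albedo-free ratio constraints.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 1.0],
              [0.0, 0.5, 1.0]])                # three calibrated light directions
L = L / np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.2, -0.1, 1.0])
n_true = n_true / np.linalg.norm(n_true)
albedo = 0.8                                   # unknown to the solver; cancelled by the ratios
I = albedo * np.clip(L @ n_true, 0, None)      # Lambertian intensities

A = np.stack([I[1] * L[0] - I[0] * L[1],       # two independent ratio constraints
              I[2] * L[0] - I[0] * L[2]])
_, _, Vt = np.linalg.svd(A)
n_est = Vt[-1] / np.linalg.norm(Vt[-1])        # the normal spans the null space of A
n_est = n_est if n_est[2] > 0 else -n_est      # fix the sign so the normal faces the camera
print("recovered normal:", n_est, "true:", n_true)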


International Conference on Pattern Recognition | 2006

Euclidean Reconstruction of Deformable Structure Using a Perspective Camera with Varying Intrinsic Parameters

Xavier Lladó; A. Del Bue; Lourdes Agapito

In this paper we present a novel approach for the 3D Euclidean reconstruction of deformable objects observed by a perspective camera with variable intrinsic parameters. We formulate the non-rigid shape and motion estimation problem as a non-linear optimization where the objective function to be minimised is the image reprojection error. Our approach is based on the observation that often some of the points on the observed object behave rigidly, while others deform from frame to frame. We propose to use the set of rigid points to obtain an initial estimate of the camera's varying internal parameters and the overall rigid motion. The prior information that some of the points in the object are rigid can also be added to the non-linear minimization scheme in order to avoid ambiguous configurations. Results on synthetic and real data demonstrate the performance of our algorithm even when using a minimal set of rigid points and when the intrinsic camera parameters vary.
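The sketch below writes out a perspective reprojection residual with per-frame focal lengths, the kind of objective such a non-linear minimization would use; the parameterization (rotations, translations and a single focal length per frame) is an assumption for illustration and not the paper's exact one.

import numpy as np

def reprojection_residuals(X, R, t, f, obs):
    """X: (P, 3) points; R: (F, 3, 3); t: (F, 3); f: (F,) focal lengths; obs: (F, P, 2) tracks."""
    Xc = np.einsum('fij,pj->fpi', R, X) + t[:, None, :]      # points in each camera frame
    proj = f[:, None, None] * Xc[..., :2] / Xc[..., 2:3]     # perspective projection
    return (proj - obs).ravel()

rng = np.random.default_rng(7)
F, P = 4, 12
X = rng.normal(size=(P, 3)) + np.array([0.0, 0.0, 5.0])      # points in front of the cameras
R = np.tile(np.eye(3), (F, 1, 1))
t = np.zeros((F, 3))
f = np.linspace(500, 700, F)                                 # varying intrinsics across frames
Xc = np.einsum('fij,pj->fpi', R, X)
obs = f[:, None, None] * Xc[..., :2] / Xc[..., 2:3]          # synthetic noise-free observations
print("residual norm at ground truth:", np.linalg.norm(reprojection_residuals(X, R, t, f, obs)))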


International Conference on Image Processing | 2012

A joint structural and functional analysis of in-vitro neuronal networks

Simona Ullo; A. Del Bue; Alessandro Maccione; Luca Berdondini; Vittorio Murino

The acquisition, analysis and representation of experimental data describing both anatomical and functional information at the cellular level offer an innovative opportunity to investigate neuronal network processing and organization. In this paper we propose an image processing pipeline to study in-vitro neuronal networks with a joint analysis of anatomy and electrophysiology. Neuronal nuclei are detected by segmenting fluorescence images of neuronal cultures. High-resolution Multi-Electrode Array (MEA) technology is used to collect functional information on cellular electrophysiological activity. Finally, detailed maps representing both structural and functional information are obtained, providing statistics on neuron distribution and spiking activity.
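A simplified sketch of the nuclei-detection step is shown below, using intensity thresholding and connected-component labelling on a synthetic fluorescence-like image; the registration with high-resolution MEA activity described in the paper is omitted.

import numpy as np
from scipy import ndimage

# Synthetic illustration: bright Gaussian blobs stand in for fluorescent nuclei.
rng = np.random.default_rng(8)
img = rng.normal(0.1, 0.02, size=(128, 128))                 # background noise
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in [(30, 40), (70, 90), (100, 20)]:               # synthetic nuclei centres
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

mask = img > 0.5                                             # intensity threshold
labels, n_nuclei = ndimage.label(mask)                       # connected components
centroids = ndimage.center_of_mass(mask, labels, range(1, n_nuclei + 1))
print("detected nuclei:", n_nuclei)
print("centroids:", [tuple(np.round(c, 1)) for c in centroids])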


International Symposium on Safety, Security, and Rescue Robotics | 2013

Visual coverage using autonomous mobile robots for search and rescue applications

A. Del Bue; Marco Tamassia; F. Signorini; Vittorio Murino; Alessandro Farinelli

This paper focuses on visual sensing of large-scale 3D environments. Specifically, we consider a setting where a group of robots equipped with cameras must fully cover a surrounding area. To address this problem we propose a novel descriptor for visual coverage that aims at measuring the visual information of an area based on a regular discretization of the environment into voxels. Moreover, we propose an autonomous cooperative exploration approach which controls the robot movements so as to maximize information accuracy (defined based on our visual coverage descriptor) while minimizing movement costs. Finally, we define a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage) to empirically evaluate our approach. Experimental results show that the proposed method outperforms a baseline random approach and an uncoordinated one, thus being a valid solution for visual coverage in large-scale outdoor scenarios.
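A toy version of a voxel-based coverage measure follows: the environment is discretized into cells and a cell counts as covered when it falls within an assumed range and field of view of at least one camera pose. The actual descriptor and exploration planner in the paper are richer than this 2D illustration.

import numpy as np

# 2D illustration of a coverage score over a discretized environment.
voxel_size = 1.0
grid = np.stack(np.meshgrid(np.arange(0, 20, voxel_size),
                            np.arange(0, 20, voxel_size), indexing='ij'), axis=-1)
voxels = grid.reshape(-1, 2) + voxel_size / 2                # cell centres

def covered(voxels, cam_pos, cam_dir, max_range=6.0, half_fov=np.radians(45)):
    d = voxels - cam_pos
    dist = np.linalg.norm(d, axis=1)
    ang = np.arccos(np.clip(d @ cam_dir / np.maximum(dist, 1e-9), -1.0, 1.0))
    return (dist < max_range) & (ang < half_fov)             # within range and field of view

robots = [(np.array([5.0, 5.0]), np.array([1.0, 0.0])),      # hypothetical camera poses
          (np.array([14.0, 12.0]), np.array([0.0, -1.0]))]
mask = np.zeros(len(voxels), dtype=bool)
for pos, direction in robots:
    mask |= covered(voxels, pos, direction / np.linalg.norm(direction))
print("coverage score: %.2f" % mask.mean())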

Collaboration


Dive into A. Del Bue's collaborations.

Top Co-Authors

Vittorio Murino
Istituto Italiano di Tecnologia

Lourdes Agapito
University College London

Fabrizio Smeraldi
Queen Mary University of London

Alessandro Maccione
Istituto Italiano di Tecnologia

E. Muñoz
Istituto Italiano di Tecnologia

Simona Ullo
Istituto Italiano di Tecnologia