Publication


Featured research published by Massimo Camplani.


Proceedings of SPIE | 2012

Efficient spatio-temporal hole filling strategy for Kinect depth maps

Massimo Camplani; Luis Salgado

In this paper we present an efficient hole filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighboring pixels. The filter weights are selected considering three different factors: visual data, depth information and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated in order to track the reliability of the depth measurements near the hole regions. The obtained depth values are fed back iteratively into the filtering process of successive frames, so the accuracy of the depth values in the hole regions increases as new samples are acquired and filtered.
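To make the weighting scheme concrete, a minimal single-pass version of such a fill is sketched below. This is only an illustration of the general joint-bilateral idea, not the authors' implementation: the filter radius, sigma_s and sigma_c values are assumptions, and the paper's temporal-consistency map is reduced here to a generic per-pixel reliability weight.

```python
import numpy as np

def joint_bilateral_fill(depth, gray, reliability, radius=5,
                         sigma_s=3.0, sigma_c=10.0):
    """Fill zero-valued (missing) depth pixels with a weighted average of
    valid neighbors. Weights combine spatial distance, color similarity in
    the guidance image and a per-pixel reliability term (a stand-in for the
    paper's temporal-consistency map)."""
    filled = depth.astype(np.float32).copy()
    h, w = depth.shape
    ys, xs = np.where(depth == 0)  # hole pixels
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0)]
    for y, x in zip(ys, xs):
        num, den = 0.0, 0.0
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0:
                w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                dc = float(gray[ny, nx]) - float(gray[y, x])
                w_c = np.exp(-(dc * dc) / (2 * sigma_c ** 2))
                wgt = w_s * w_c * float(reliability[ny, nx])
                num += wgt * depth[ny, nx]
                den += wgt
        if den > 0:
            filled[y, x] = num / den
    return filled
```

In the iterative setting described in the abstract, the values filled in one frame would be carried forward (with updated reliability) and refined as new samples are acquired.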


International Conference on Communications | 2015

A multi-modal sensor infrastructure for healthcare in a residential environment

Przemyslaw Woznowski; Xenofon Fafoutis; Terence Song; Sion Hannuna; Massimo Camplani; Lili Tao; Adeline Paiement; Evangelos Mellios; Mo Haghighi; Ni Zhu; Geoffrey S Hilton; Dima Damen; Tilo Burghardt; Majid Mirmehdi; Robert J. Piechocki; Dritan Kaleshi; Ian J Craddock

Ambient Assisted Living (AAL) systems based on sensor technologies are seen as key enablers to an ageing society. However, most approaches in this space do not provide a truly generic ambient space - one that is not only capable of assisting people with diverse medical conditions, but can also recognise the habits of healthy inhabitants as well as those with developing medical conditions. The recognition of Activities of Daily Living (ADL) is key to the understanding and provisioning of appropriate and efficient care. However, ADL recognition is particularly difficult to achieve in multi-resident spaces, especially with single-mode (albeit carefully crafted) solutions, which only have limited capabilities. To address these limitations we propose a multi-modal system architecture for AAL remote healthcare monitoring in the home, gathering information from multiple, diverse (sensor) data sources. In this paper we report on developments made to date in various technical areas with respect to critical issues such as cost, power consumption, scalability, interoperability and privacy.


British Machine Vision Conference | 2014

Online quality assessment of human movement from skeleton data

Adeline Paiement; Lili Tao; Sion Hannuna; Massimo Camplani; Dima Damen; Majid Mirmehdi

This work addresses the challenge of analysing the quality of human movements from visual information, which has use in a broad range of applications, from diagnosis and rehabilitation to movement optimisation in sports science. Traditionally, such assessment is performed as a binary classification between normal and abnormal by comparison against normal and abnormal movement models, e.g. [5]. Since a single model of abnormal movement cannot encompass the variety of abnormalities, another class of methods compares only against a model of normal movement, e.g. [4]. We adopt this latter strategy and propose a continuous assessment of movement quality, rather than a binary classification, by quantifying the deviation from a normal model. In addition, while most methods can only analyse a movement after its completion, e.g. [6], this assessment is performed on a frame-by-frame basis in order to allow fast system response in case of an emergency, such as a fall.

Methods such as [4, 6] are specific to one type of movement, mostly due to the features used. In this work, we aim to represent a large variety of movements by exploiting full body information. We use a depth camera and a skeleton tracker [3] to obtain the positions of the main joints of the body, as seen in Fig. 1. We normalise this skeleton for the global position and orientation of the camera, and for the varying height of the subjects, e.g. using Procrustes analysis. The normalised skeletons have high dimensionality and tend to contain outliers. Thus, the dimensionality is reduced using Diffusion Maps [1], modified to include the extension that Gerber et al. [2] presented to deal with outliers in Laplacian Eigenmaps. The resulting high-level feature vector Y, obtained from the normalised skeleton at one frame, represents an individual pose and is used to build a statistical model of normal movement.

Our statistical model is made up of two components that describe the normal poses and the normal dynamics of the movement. The pose model takes the form of the probability density function (pdf) fY(y) of a random variable Y whose values y are our pose feature vectors. The pdf is learnt from all the frames of training sequences that contain normal instances of the movement, using a Parzen window estimator. The quality of a new pose yt at frame t is then assessed as its log-likelihood under the pose model, i.e. log fY(yt).
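As a rough sketch of the frame-by-frame scoring step only (a minimal illustration, assuming the dimensionality-reduced pose features Y are already computed, and using scikit-learn's Gaussian kernel density estimate in place of the paper's Parzen window estimator; the bandwidth is an assumption):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_pose_model(Y_train, bandwidth=0.1):
    """Y_train: (n_frames, d) array of reduced pose features taken from
    normal training sequences. Returns a kernel density estimate of the
    pdf of normal poses."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(Y_train)

def pose_quality(model, y_t):
    """Frame-wise quality score: log-likelihood of the pose feature y_t
    under the model of normal poses (higher means closer to normal)."""
    return float(model.score_samples(np.atleast_2d(y_t))[0])
```

A persistently low score over successive frames would then indicate a movement deviating from the learnt normal model; the dynamics component of the statistical model is not reproduced here.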


Springer US | 2017

SPHERE: A Sensor Platform for Healthcare in a Residential Environment

Pete R Woznowski; Alison Burrows; Tom Diethe; Xenofon Fafoutis; Jake Hall; Sion Hannuna; Massimo Camplani; Niall Twomey; Michal Kozlowski; Bo Tan; Ni Zhu; Atis Elsts; Antonis Vafeas; Adeline Paiement; Lili Tao; Majid Mirmehdi; Tilo Burghardt; Dima Damen; Peter A. Flach; Robert J. Piechocki; Ian J Craddock; George C. Oikonomou

It can be tempting to think about smart homes like one thinks about smart cities. On the surface, smart homes and smart cities comprise coherent systems enabled by similar sensing and interactive technologies. It can also be argued that both are broadly underpinned by shared goals of sustainable development, inclusive user engagement and improved service delivery. However, the home possesses unique characteristics that must be considered in order to develop effective smart home systems that are adopted in the real world [37].


Sensors | 2014

Foreground Segmentation in Depth Imagery Using Depth and Spatial Dynamic Models for Video Surveillance Applications

Carlos R. del-Blanco; Tomás Mantecón; Massimo Camplani; Fernando Jaureguizar; Luis Salgado; Narciso N. García

Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture of Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
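As a minimal sketch of the first stage only (a mixture-of-Gaussians background model applied directly to depth), assuming OpenCV is available; the Bayesian network with its spatial and depth dynamic models is not reproduced, and the depth scaling and MOG2 parameters are assumptions:

```python
import numpy as np
import cv2

# Mixture-of-Gaussians background model fed with depth frames only.
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                         detectShadows=False)

def depth_foreground_mask(depth_frame_mm, max_depth_mm=8000.0):
    """Scale raw depth (millimetres) to 8 bits and update the background
    model; returns a mask with 255 for foreground pixels."""
    depth_8u = np.clip(depth_frame_mm / max_depth_mm * 255.0,
                       0, 255).astype(np.uint8)
    return mog.apply(depth_8u)
```

In the paper, the output of such a per-pixel model is combined with the Bayesian network's predictions of how the foreground/background regions move in space and in depth between consecutive frames.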


IET Computer Vision | 2017

Multiple Human Tracking in RGB-D Data: A Survey

Massimo Camplani; Adeline Paiement; Majid Mirmehdi; Dima Damen; Sion Hannuna; Tilo Burghardt; Lili Tao

Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-depth devices has led to many new approaches to MHT, and many of these integrate colour and depth cues to improve each and every stage of the process. In this survey, the authors present the common processing pipeline of these methods and review their methodology based on (a) how they implement this pipeline and (b) what role depth plays within each stage of it. They identify and introduce existing, publicly available benchmark datasets and software resources that fuse colour and depth data for MHT. Finally, they present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.
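Abstracting away method-specific details, that pipeline can be pictured as a loop over detection, data association and track update, with depth available as an extra cue at each stage. The skeleton below is purely illustrative; the class and function names are assumptions, not the survey's taxonomy.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Detection:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in the colour image
    mean_depth: float                # average depth inside the box (extra cue)

@dataclass
class Track:
    track_id: int
    history: List[Detection] = field(default_factory=list)

def mht_step(tracks: List[Track], detections: List[Detection],
             associate: Callable, update: Callable) -> List[Track]:
    """One illustrative iteration: detections (possibly produced with both
    colour and depth cues) are associated to existing tracks, which are then
    updated. `associate` and `update` stand in for method-specific
    components, e.g. Hungarian assignment and a Kalman/appearance update."""
    for track, det in associate(tracks, detections):
        update(track, det)
        track.history.append(det)
    return tracks
```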


International Conference on E-Health Networking, Applications and Services | 2015

A comparative home activity monitoring study using visual and inertial sensors

Lili Tao; Tilo Burghardt; Sion Hannuna; Massimo Camplani; Adeline Paiement; Dima Damen; Majid Mirmehdi; Ian J Craddock

Monitoring actions at home can provide essential information for rehabilitation management. This paper presents a comparative study and a dataset for the fully automated, sample-accurate recognition of common home actions in the living room environment using commercial-grade, inexpensive inertial and visual sensors. We investigate the practical home use of body-worn mobile phone inertial sensors together with an Asus Xmotion RGB-Depth camera to achieve monitoring of daily living scenarios. To test this setup against realistic data, we introduce the challenging SPHERE-H130 action dataset containing 130 sequences of 13 household actions recorded in a home environment. We report automatic recognition results at maximal temporal resolution, which indicate that a vision-based approach outperforms an accelerometer-based approach using two phone-based inertial sensors by an average of 14.85% in accuracy for home actions. Further, we report improved accuracy of the vision-based approach over accelerometry on particularly challenging actions, as well as when generalising across subjects.


Journal of Real-Time Image Processing | 2016

DS-KCF: a real-time tracker for RGB-D data

Sion Hannuna; Massimo Camplani; Jake Hall; Majid Mirmehdi; Dima Damen; Tilo Burghardt; Adeline Paiement; Lili Tao

We propose an RGB-D single-object tracker, built upon the extremely fast RGB-only KCF tracker, that exploits depth information to handle scale changes, occlusions and shape changes. Despite the computational demands of the extra functionalities, we still achieve real-time performance rates of 35–43 fps in MATLAB and 187 fps in our C++ implementation. Our proposed method includes fast depth-based target object segmentation that enables (1) efficient scale change handling within the KCF core functionality in the Fourier domain, (2) the detection of occlusions by temporal analysis of the target’s depth distribution, and (3) the estimation of a target’s change of shape through the temporal evolution of its segmented silhouette. Finally, we provide an in-depth analysis of the factors affecting the throughput and precision of our proposed tracker and perform extensive comparative analysis. Both the MATLAB and C++ versions of our software are available in the public domain.
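One way to picture the occlusion test is as a check on how the depth distribution inside the target region changes over time. The sketch below is in that spirit but is not the DS-KCF algorithm itself; the margin and ratio thresholds are assumptions.

```python
import numpy as np

def occlusion_flag(target_depth_patch, object_depth_mm,
                   closer_margin_mm=150.0, occlusion_ratio=0.35):
    """Flag a likely occlusion when a large fraction of pixels in the
    current target region are markedly closer to the camera than the
    tracked object's estimated depth (all depths in millimetres)."""
    valid = target_depth_patch[target_depth_patch > 0]
    if valid.size == 0:
        return True  # no valid depth at all: treat as occluded/lost
    closer = np.count_nonzero(valid < object_depth_mm - closer_margin_mm)
    return closer / valid.size > occlusion_ratio
```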


International Conference on Image Analysis and Processing | 2017

A Benchmarking Framework for Background Subtraction in RGBD Videos

Massimo Camplani; Lucia Maddalena; Gabriel Moyá Alcover; Alfredo Petrosino; Luis Salgado

The complementary nature of color and depth synchronized information acquired by low cost RGBD sensors poses new challenges and design opportunities in several applications and research areas. Here, we focus on background subtraction for moving object detection, which is the building block for many computer vision applications, being the first relevant step for subsequent recognition, classification, and activity analysis tasks. The aim of this paper is to describe a novel benchmarking framework that we set up and made publicly available in order to evaluate and compare scene background modeling methods for moving object detection on RGBD videos. The proposed framework involves the largest RGBD video dataset ever made for this specific purpose. The 33 videos span seven categories, selected to include diverse scene background modeling challenges for moving object detection. Seven evaluation metrics, chosen among the most widely used, are adopted to evaluate the results against a wide set of pixel-wise ground truths. Moreover, we present a preliminary analysis of the results, devoted to assessing to what extent the various background modeling challenges pose troubles to background subtraction methods exploiting color and depth information.
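As an example of the kind of pixel-wise evaluation such a benchmark relies on, the snippet below computes the F-measure between a predicted foreground mask and a ground-truth mask; this is only one commonly used metric, and the framework's exact metric set and protocol are not reproduced here.

```python
import numpy as np

def f_measure(pred_mask, gt_mask):
    """Pixel-wise F-measure between binary foreground masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.count_nonzero(pred & gt)
    fp = np.count_nonzero(pred & ~gt)
    fn = np.count_nonzero(~pred & gt)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)
```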


Biomedical Circuits and Systems Conference | 2015

Remote pulmonary function testing using a depth sensor

Vahid Soleimani; Majid Mirmehdi; Dima Damen; Sion Hannuna; Massimo Camplani; Jason Viner; James W. Dodd

We propose a remote, non-invasive approach to Pulmonary Function Testing using a time-of-flight depth sensor (Microsoft Kinect V2), and correlate our results with clinical-standard spirometry. Given point clouds, we approximate 3D models of the subject's chest, estimate the chest volume throughout a sequence and construct volume-time and flow-time curves for two prevalent spirometry tests: Forced Vital Capacity and Slow Vital Capacity. From these, we compute clinical measures such as FVC, FEV1, VC and IC. We correlate the automatically extracted measures with clinical spirometry tests on 40 patients in an outpatient hospital setting. The results demonstrate high within-test correlations.
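To illustrate how such clinical measures follow from a volume-time curve, the sketch below derives FVC and FEV1 using their textbook definitions (total exhaled volume, and volume exhaled in the first second); this is not the paper's chest-surface reconstruction, and the curve is assumed to start at the onset of forced exhalation.

```python
import numpy as np

def fvc_and_fev1(t_seconds, exhaled_volume_litres):
    """FVC = total exhaled volume; FEV1 = volume exhaled within the first
    second, obtained by linear interpolation of the volume-time curve."""
    t = np.asarray(t_seconds, dtype=float)
    v = np.asarray(exhaled_volume_litres, dtype=float)
    fvc = v[-1] - v[0]
    fev1 = np.interp(1.0, t, v) - v[0]
    return fvc, fev1
```

The flow-time curve mentioned in the abstract would similarly follow as the time derivative of the volume signal, e.g. np.gradient(v, t).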

Collaboration


An overview of Massimo Camplani's collaborations.

Top Co-Authors

Lili Tao

University of Bristol


Luis Salgado

Technical University of Madrid
