Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul Chippendale is active.

Publications


Featured research published by Paul Chippendale.


CLEaR | 2006

A generative approach to audio-visual person tracking

Roberto Brunelli; Alessio Brutti; Paul Chippendale; Oswald Lanz; Maurizio Omologo; Piergiorgio Svaizer; Francesco Tobia

This paper focuses on the integration of acoustic and visual information for people tracking. The system presented relies on a probabilistic framework within which information from multiple sources is integrated at an intermediate stage. An advantage of the proposed method is its generative approach, which supports easy and robust integration of multi-source information by means of sampled projection instead of triangulation. The system described was developed within the research activities of the EU-funded CHIL project. Experimental results from the CLEAR evaluation workshop are reported.


The Medical Roundtable | 2007

Multimodal corpus of multi-party meetings for automatic social behavior analysis and personality traits detection

Nadia Mana; Bruno Lepri; Paul Chippendale; Alessandro Cappelletti; Fabio Pianesi; Piergiorgio Svaizer; Massimo Zancanaro

This paper describes an automatically annotated multimodal corpus of multi-party meetings. For each subject involved in the experimental sessions, the corpus provides information on his/her social behavior and personality traits, as well as audiovisual cues (speech rate, pitch and energy, head orientation, and head, hand, and body fidgeting). The corpus is based on the audio and video recordings of thirteen sessions, which took place in a lab setting equipped with cameras and microphones. Our main concern in collecting this corpus was to investigate the possibility of creating a system capable of automatically analyzing social behaviors and predicting personality traits using audio-visual cues.


Distributed Computing and Artificial Intelligence | 2009

Multimodal Classification of Activities of Daily Living Inside Smart Homes

Vit Libal; Bhuvana Ramabhadran; Nadia Mana; Fabio Pianesi; Paul Chippendale; Oswald Lanz; Gerasimos Potamianos

Smart homes for the aging population have recently started attracting the attention of the research community. One of the problems of interest is that of monitoring the activities of daily living (ADLs) of the elderly, aiming at their protection and well-being. In this work, we present our initial efforts to automatically recognize ADLs using multimodal input from audio-visual sensors. For this purpose, and as part of Integrated Project Netcarity, far-field microphones and cameras have been installed inside an apartment and used to collect a corpus of ADLs, acted by multiple subjects. The resulting data streams are processed to generate perception-based acoustic features, as well as human location coordinates that are employed as visual features. The extracted features are then presented to Gaussian mixture models for their classification into a set of predefined ADLs. Our experimental results show that both acoustic and visual features are useful in ADL classification, but performance of the latter deteriorates when subject tracking becomes inaccurate. Furthermore, joint audio-visual classification by simple concatenative feature fusion significantly outperforms both unimodal classifiers.
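The classification scheme described above can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: it uses a single Gaussian per activity class rather than a full mixture, and the class names, feature dimensions, and synthetic data are invented for demonstration. Concatenative feature fusion is simply the stacking of audio and visual feature vectors frame by frame before model fitting.

```python
import numpy as np

def fit_gaussian(X):
    """Fit one (regularised) Gaussian to the rows of feature matrix X."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Log-density of feature vector x under a multivariate Gaussian."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def classify(x, models):
    """Pick the activity class whose model scores x highest."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

rng = np.random.default_rng(0)
# Synthetic 4-dim acoustic and 2-dim location features for two made-up ADLs.
audio = {"cooking": rng.normal(0, 1, (50, 4)), "sleeping": rng.normal(3, 1, (50, 4))}
visual = {"cooking": rng.normal(0, 1, (50, 2)), "sleeping": rng.normal(3, 1, (50, 2))}

# Concatenative feature fusion: stack audio and visual features per frame.
fused = {c: np.hstack([audio[c], visual[c]]) for c in audio}
models = {c: fit_gaussian(X) for c, X in fused.items()}

test_frame = np.hstack([rng.normal(3, 1, 4), rng.normal(3, 1, 2)])
print(classify(test_frame, models))
```

In the real system each unimodal classifier can also be run on its own stream, which is how the paper compares fused against audio-only and visual-only performance.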


International Geoscience and Remote Sensing Symposium | 2008

Spatial and Temporal Attractiveness Analysis Through Geo-Referenced Photo Alignment

Paul Chippendale; Michele Zanin; Claudio Andreatta

This paper presents a system that creates a spatiotemporal attractiveness GIS layer for mountainous areas through novel image processing and pattern matching algorithms. We utilize the freely available Digital Terrain Model of the planet provided by NASA [1] to generate a three-dimensional synthetic model around a viewer's location. Using an array of image processing algorithms, we then align photographs to this model. We demonstrate the accuracy of the resulting system through the overlaying of geo-referenced content, such as mountain names, and then suggest ways in which visitors/photographers can exploit the results of this research, such as suggesting temporally appropriate photo hotspots close to their current location.
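A core ingredient of rendering a synthetic model from a terrain grid is computing, for each viewing azimuth, the elevation angle of the visible horizon. The toy sketch below is not the paper's pipeline; the terrain profile and heights are made-up values, and a real system would sweep all azimuths over the NASA Digital Terrain Model rather than a single hand-written profile.

```python
import math

def horizon_angle(viewer_height, profile):
    """Highest elevation angle along one azimuth.

    profile: list of (distance_m, height_m) terrain samples taken
    outward from the viewer along a single line of sight.
    """
    best = -math.inf
    for dist, height in profile:
        # Angle subtended by this terrain sample above the horizontal.
        best = max(best, math.degrees(math.atan2(height - viewer_height, dist)))
    return best

# Made-up profile: a nearby hill, a mid-range ridge, a distant peak.
profile = [(1000, 300), (3000, 1200), (8000, 2500)]
angle = horizon_angle(200.0, profile)
print(round(angle, 1))  # the mid-range ridge dominates this sightline
```

Repeating this per azimuth yields a synthetic skyline, which can then be matched against the skyline extracted from a photograph to recover the camera's orientation.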


IEEE MultiMedia | 2011

Personalized Coverage of Large Athletic Events

Charalampos Z. Patrikakis; Nikolaos Papaoulakis; Panagiotis Papageorgiou; Aristodemos Pnevmatikakis; Paul Chippendale; Mário Serafim Nunes; Rui Santos Cruz; Stefan Poslad; Zhenchen Wang

This article presents a platform that lets users direct their own coverage of large athletic events, setting up their own virtual director and orchestrating event viewing according to their preferences.


Workshop on Image Analysis for Multimedia Interactive Services | 2009

Directing your own live and interactive sports channel

Stefan Poslad; Aristodemos Pnevmatikakis; Mário Serafim Nunes; Elena Garrido Ostermann; Paul Chippendale; Peter Brightwell; Charalampos Z. Patrikakis

The ability to mark up live sports event content, viewed from multiple camera angles, so that athletes and other objects of interest can be tracked enables an exciting new personalised and interactive viewing experience, letting spectators act as directors of their own customised live sports videos. In this paper, such an approach is described as part of the My-e-Director 2012 project. The design of this platform is described and a prototype system is discussed.


International Conference on Machine Learning | 2008

Optimised Meeting Recording and Annotation Using Real-Time Video Analysis

Paul Chippendale; Oswald Lanz

The research detailed in this paper represents the confluence of various vision technologies to provide a powerful, real-time tool for human behavioural analysis. Gesture recognition algorithms are amalgamated with a robust multi-person tracker based on particle filtering to monitor the position and orientation of multiple people, and moreover to understand their focus of attention and gesticular activity. Additionally, an integrated virtual video director is demonstrated that can automatically control active cameras to produce an optimum record of visual events in real-time.
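The particle-filter tracking underlying the system can be illustrated with a minimal single-target sketch. This is not the paper's multi-person tracker (which handles several people, occlusion, and body orientation against image-based likelihoods); here a noisy position measurement stands in for the visual observation model, and all noise parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.uniform(0, 10, (n_particles, 2))   # (x, y) position hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

true_pos = np.array([4.0, 6.0])
for _ in range(20):
    # Motion model: random-walk diffusion of each hypothesis.
    particles += rng.normal(0, 0.3, particles.shape)
    # Observation model: weight particles by likelihood of a noisy
    # position measurement (stand-in for an image-based likelihood).
    z = true_pos + rng.normal(0, 0.5, 2)
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2 * 0.5 ** 2))
    weights /= weights.sum()
    # Resampling: concentrate particles on likely states.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

estimate = particles.mean(axis=0)
print(estimate)  # converges near the true position (4, 6)
```

Orientation and gesture cues extend this by enlarging the particle state beyond position, at the cost of needing more particles or smarter proposals.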


European Conference on Computer Vision | 2014

Personal Shopping Assistance and Navigator System for Visually Impaired People

Paul Chippendale; Valeria Tomaselli; Viviana D’Alto; Giulio Urlini; Carla Maria Modena; Stefano Messelodi; Sebastiano Mauro Strano; Günter Alce; Klas Hermodsson; Mathieu Razafimahazo; Thibaud Michel; Giovanni Maria Farinella

In this paper, a personal assistant and navigator system for visually impaired people will be described. The showcase presented intends to demonstrate how partially sighted people could be aided by the technology in performing an ordinary activity, like going to a mall and moving inside it to find a specific product. We propose an Android application that integrates Pedestrian Dead Reckoning and Computer Vision algorithms, using an off-the-shelf Smartphone connected to a Smartwatch. The detection, recognition, and pose estimation of specific objects or features in the scene, combined with a hardware-sensor pedometer, yield an estimate of the user's location with sub-meter accuracy. The proposed prototype interfaces with the user by means of Augmented Reality, exploring a variety of sensorial modalities other than just visual overlay, namely audio and haptic modalities, to create a seamless immersive user experience. The interface and interaction of the preliminary platform have been studied through specific evaluation methods. The feedback gathered will be taken into consideration to further improve the proposed system.
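The Pedestrian Dead Reckoning component can be sketched as follows: each step detected by the pedometer advances the position estimate by one stride along the current compass heading. This is an illustrative sketch, not the paper's implementation; the stride length and heading sequence are invented, and in the real system vision-based landmark fixes periodically correct the drift that pure dead reckoning accumulates.

```python
import math

def dead_reckon(start, headings, step_length=0.7):
    """Integrate step events into a position estimate.

    headings: iterable of compass headings in degrees (one per
    detected step, 0 = north, 90 = east).
    """
    x, y = start
    for heading in headings:
        rad = math.radians(heading)
        x += step_length * math.sin(rad)   # east component
        y += step_length * math.cos(rad)   # north component
    return x, y

# Ten steps north, then ten steps east, at 0.7 m per step.
pos = dead_reckon((0.0, 0.0), [0] * 10 + [90] * 10)
print(pos)  # roughly (7.0, 7.0)
```

Because every step compounds heading and stride errors, the estimate drifts over time; fusing it with visual object recognition is what brings the combined system to sub-meter accuracy.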


Conference on Visual Media Production | 2009

Collective Photography

Paul Chippendale; Michele Zanin; Claudio Andreatta

This paper offers the reader an insight into how photography and home video production could evolve in the near future through the evolution of geo-tagging (adding location and orientation parameters to an object). Technological advances in portable imaging and communications devices, e.g. digital cameras and smartphones, are bringing about a new era in media creation that could see us all automatically contributing to the documentation of society. We will demonstrate how enriched multimedia can be generated through the exploitation of socially generated spatiotemporal knowledge, extracted from the photos of others, or from any form of geo-referenced material. An overview of how geo-tags can be created is presented, ranging from integrated hardware to purely software solutions; we focus on a selection of promising cutting edge research projects in this field that aim to fully automate the geo-tagging process. Finally, we will propose ways to extract content and visualise geo-referenced material intelligently inside registered imagery using geographical reasoning.


Computers in the Human Interaction Loop | 2009

Extracting Interaction Cues: Focus of Attention, Body Pose, and Gestures

Oswald Lanz; Roberto Brunelli; Paul Chippendale; Michael Voit; Rainer Stiefelhagen

Studies in social psychology [7] have experimentally validated the common feeling that nonverbal behavior, including, but not limited to, gaze and facial expressions, is extremely significant in human interactions. Proxemics [4] describes the social aspects of distance between interacting individuals. This distance is an indicator of the interactions that occur and provides information valuable to understanding human relationships.

Collaboration


Dive into Paul Chippendale's collaborations.

Top Co-Authors

Michele Zanin
Fondazione Bruno Kessler

Oswald Lanz
Fondazione Bruno Kessler

Fabio Pianesi
Fondazione Bruno Kessler

Nadia Mana
Fondazione Bruno Kessler