Marco Campanella
Philips
Publications
Featured research published by Marco Campanella.
Electronic Imaging | 2007
Marco Campanella; Hans Weda; Mauro Barbieri
In recent years, more and more people capture their experiences in home videos. However, home video editing is still a difficult and time-consuming task. We present the Edit While Watching system, which allows users to automatically create and change a summary of a home video in an easy, intuitive and lean-back way. Based on content analysis, the video is indexed, segmented, and combined with suitable music and editing effects. The result is an automatically generated home video summary that is shown to the user. While watching it, users can indicate whether they like certain content, so that the system adapts the summary to contain more content that is similar or related to what is displayed. During playback, users can also modify and enrich the content, seeing the effects of their changes immediately. Edit While Watching does not require a complex user interface: a TV and a few keys of a remote control are sufficient. A user study has shown that it is easy to learn and use, even though users expressed the need for more control over the editing operations and the editing process.
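The adaptive loop the abstract describes, boosting content similar to what the user marks as liked, can be sketched as a similarity-based reweighting over segment feature vectors. This is a minimal illustration, not the paper's actual method: the segment names, feature dimensions, scores, and the boost weight are all hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def adapt_summary(segments, scores, liked_id, k=2, boost=0.5):
    """segments: {id: feature vector}; scores: {id: relevance score}.
    Raise the score of segments similar to the liked one, then
    re-select the top-k segments as the new summary."""
    liked = segments[liked_id]
    adjusted = {
        sid: scores[sid] + boost * cosine(feats, liked)
        for sid, feats in segments.items()
    }
    return sorted(adjusted, key=adjusted.get, reverse=True)[:k]

# Hypothetical segments described by (motion, brightness, audio energy):
segments = {
    "beach":  [0.9, 0.8, 0.3],
    "dinner": [0.2, 0.4, 0.7],
    "hike":   [0.8, 0.9, 0.2],
    "party":  [0.3, 0.5, 0.9],
}
scores = {"beach": 0.5, "dinner": 0.6, "hike": 0.4, "party": 0.7}
print(adapt_summary(segments, scores, liked_id="beach"))
```

After the user likes the "beach" segment, visually similar segments gain score, so the re-selected summary drifts toward that kind of content without the user ever touching a timeline.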
Signal, Image and Video Processing | 2009
Marco Campanella; Riccardo Leonardi; Pierangelo Migliorati
In this paper, we present an intuitive graphic framework introduced for the effective visualization of video content and the associated audio-visual description, with the aim of facilitating a quick understanding and annotation of the semantic content of a video sequence. The basic idea consists in the visualization of a 2D feature space in which the shots of the considered video sequence are located. Moreover, the temporal position and the specific content of each shot can be displayed and analysed in more detail. The features are selected by the user and can be updated during the navigation session. In the main window, the shots of the considered video sequence are displayed in a Cartesian plane, and the proposed environment offers various functionalities for automatically and semi-automatically finding and annotating the shot clusters in this feature space. With this tool the user can therefore explore graphically how the basic segments of a video sequence are distributed in the feature space, and can recognize and annotate the significant clusters and their structure. The experimental results show that browsing and annotating documents with the aid of the proposed visualization paradigms is easy and quick, since the user has fast and intuitive access to the audio-video content, even if he or she has not seen the document yet.
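The core idea above, mapping each shot to a point in a 2D space spanned by two user-chosen features and then treating nearby shots as a cluster that can be annotated at once, can be sketched as follows. This is an illustrative sketch under assumptions: the shot names, feature keys, and centroids are invented, and the grouping step is a single nearest-centroid assignment rather than the tool's actual clustering functionality.

```python
import math
from collections import defaultdict

def place_shots(shots, fx, fy):
    """Map each shot to a 2D point using two user-chosen features
    (fx and fy are keys into each shot's feature dictionary)."""
    return {sid: (feats[fx], feats[fy]) for sid, feats in shots.items()}

def group_by_nearest(points, centroids):
    """Assign each shot to its nearest cluster centroid (one assignment
    step of k-means; enough to annotate a whole cluster in one action)."""
    clusters = defaultdict(list)
    for sid, point in points.items():
        label = min(centroids, key=lambda c: math.dist(point, centroids[c]))
        clusters[label].append(sid)
    return dict(clusters)

# Hypothetical shots described by two low-level features:
shots = {
    "shot1": {"motion": 0.10, "color_var": 0.20},
    "shot2": {"motion": 0.15, "color_var": 0.25},
    "shot3": {"motion": 0.90, "color_var": 0.80},
    "shot4": {"motion": 0.85, "color_var": 0.90},
}
points = place_shots(shots, "motion", "color_var")
clusters = group_by_nearest(points, {"static": (0.1, 0.2), "action": (0.9, 0.85)})
print(clusters)  # shots with similar content fall into the same cluster
```

Because shots with similar content land near each other in the plane, one annotation applied to a cluster propagates to every shot in it, which is what makes the browsing-and-annotation workflow quick.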
International Conference on Image Processing | 2005
Marco Campanella; Riccardo Leonardi; Pierangelo Migliorati
In this paper we address the problem of semi-automatic annotation of audio-visual sequences. Specifically, we propose the use of an innovative graphic framework, named Future-Viewer, to perform quick and efficient annotation of a given multimedia document. The basic idea consists in visualising a 2-dimensional feature space in which the shots of the considered video sequence are located. In this window, shots with similar content fall near each other, and the proposed tool offers various functionalities for automatically and semi-automatically finding and annotating the shot clusters in this feature space. The proposed system has been used to analyze the content of a few video sequences in terms of logical story units, and the obtained results appear very promising.
International Conference on Multimedia and Expo | 2005
Marco Campanella; Riccardo Leonardi; Pierangelo Migliorati
In this work, we propose an intuitive graphic framework for the effective visualization of MPEG-7 low-level features, in the context of classification and annotation of audio-visual documents. This graphic tool is proposed to facilitate access to the content and to support a quick understanding of the semantics associated with the considered document. The main visualization paradigm employed consists in representing a 2D feature space in which the shots of the audio-visual document are located. In another window, the same shots are drawn on a temporal bar that also gives users information about the time domain. In the main window, shots with similar content fall near each other, and the proposed tool offers various functionalities for automatically and semi-automatically finding and annotating shot clusters in the feature space. The use of the proposed system to analyze the content of a few video sequences has shown very interesting capabilities.
Archive | 2007
Mauro Barbieri; Johannes Weda; Lalitha Agnihotri; Marco Campanella; Prarthana Shrestha
Archive | 2008
Johannes Weda; Marco Campanella; Mauro Barbieri; Prarthana Shrestha
Archive | 2007
Johannes Weda; Mauro Barbieri; Marco Campanella; Sander Wilhelmus Johannes Kouwenberg
Archive | 2009
Francesco Emanuele Bonarrigo; Marco Campanella; Mauro Barbieri; Johannes Weda
BCS-HCI '08 Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction - Volume 2 | 2008
Marco Campanella; Jettie Hoonhout
Archive | 2007
Johannes Weda; Mauro Barbieri; Marco Campanella; Prarthana Shrestha