Eddie Cooke
Dublin City University
Publications
Featured research published by Eddie Cooke.
international conference on information fusion | 2006
C. O'Conaire; Noel E. O'Connor; Eddie Cooke; Alan F. Smeaton
In this paper, we evaluate the appearance tracking performance of multiple fusion schemes that combine information from standard CCTV and thermal infrared spectrum video for the tracking of surveillance objects such as people, faces, bicycles and vehicles. We show results on numerous real-world multimodal surveillance sequences, tracking challenging objects whose appearance changes rapidly. Based on these results, we determine the most promising fusion schemes.
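Fusion schemes of the weighted-sum family can be sketched in a few lines; the normalised cross-correlation scoring and the fixed weight below are illustrative stand-ins, not the paper's actual schemes:

```python
def appearance_score(template, patch):
    # normalised cross-correlation between two flattened pixel lists
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    st = (sum((x - mt) ** 2 for x in template) / n) ** 0.5 + 1e-8
    sp = (sum((x - mp) ** 2 for x in patch) / n) ** 0.5 + 1e-8
    return sum((a - mt) * (b - mp) for a, b in zip(template, patch)) / (n * st * sp)

def fused_score(s_vis, s_ir, w_vis=0.5):
    # weighted-sum fusion of visible and thermal appearance scores
    return w_vis * s_vis + (1.0 - w_vis) * s_ir
```

A tracker would evaluate `fused_score` over candidate patches and follow the maximum; varying `w_vis` (including data-driven weights) is what distinguishes one fusion scheme from another.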
Signal Processing-image Communication | 2002
Christoph Fehn; Eddie Cooke; Oliver Schreer; Peter Kauff
Abstract Depth perception in images and video has been a relevant research issue for years, with the main focus on the basic idea of “stereoscopic” viewing. However, it is well known from the literature that stereovision is only one of the relevant depth cues, that motion parallax as well as the colour, brightness and geometric appearance of video objects are at least of the same importance, and that their individual influence depends mainly on the object distance. Thus, for depth perception it may sometimes be sufficient to watch pictures or movies on large screens with brilliant quality, or to provide head-motion parallax viewing on conventional 2D displays. Based on this observation, we introduce an open, flexible and modular immersive TV system that is backwards compatible with today's 2D digital television and able to support a wide range of different 2D and 3D displays. The system is based on a three-stage concept and aims to add more depth cues at each additional layer.
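Systems of this kind typically render stereoscopic views by depth-image-based rendering, where an 8-bit depth map drives a horizontal pixel shift. A minimal sketch, assuming the common near/far quantisation convention (parameter names are illustrative, not this paper's notation):

```python
def dibr_shift(depth, focal_px, baseline, z_near, z_far):
    # map an 8-bit depth value back to metric depth z, assuming the
    # common convention that 255 = nearest plane, 0 = farthest plane
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # classic pinhole relation: disparity (pixels) = f * b / z
    return focal_px * baseline / z
```

Nearer objects receive larger shifts, which is exactly the motion-parallax cue the layered concept adds on top of plain 2D playback.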
international conference on image processing | 2006
Ciarán Ó Conaire; Noel E. O'Connor; Eddie Cooke; Alan F. Smeaton
This paper describes a system for object segmentation and feature extraction for surveillance video. Segmentation is performed by a dynamic vision system that fuses information from thermal infrared video with standard CCTV video in order to detect and track objects. Separate background modelling in each modality and dynamic mutual information based thresholding are used to provide initial foreground candidates for tracking. The belief in the validity of these candidates is ascertained using knowledge of foreground pixels and temporal linking of candidates. The transferable belief model is used to combine these sources of information and segment objects. Extracted objects are subsequently tracked using adaptive thermo-visual appearance models. In order to facilitate search and classification of objects in large archives, retrieval features from both modalities are extracted for tracked objects. Overall system performance is demonstrated in a simple retrieval scenario.
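At its simplest, per-modality foreground candidates are binary masks combined across modalities; a much-simplified sketch (the paper's dynamic mutual-information thresholding and transferable belief model combination are far richer than this):

```python
def foreground_mask(frame, background, threshold):
    # per-pixel absolute difference against a per-modality background model
    return [[abs(f - b) > threshold for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def fuse_masks(mask_vis, mask_ir):
    # keep a candidate pixel if either modality flags it; the real system
    # instead weighs the evidence from each source with belief functions
    return [[a or b for a, b in zip(ra, rb)]
            for ra, rb in zip(mask_vis, mask_ir)]
```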
Signal Processing-image Communication | 2006
Eddie Cooke; Peter Kauff; Thomas Sikora
Abstract Interactive audio-visual applications such as free viewpoint video (FVV) endeavour to provide unrestricted spatio-temporal navigation within a multiple camera environment. Current novel view creation approaches for scene navigation within FVV applications are either purely image-based, implying large information redundancy and dense sampling of the scene, or involve reconstructing complex 3-D models of the scene. In this paper we present a new multiple image view synthesis algorithm for novel view creation that requires only implicit scene geometry information. The multi-view synthesis approach can be used in any multiple camera environment and is scalable, as virtual views can be created given 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best quality surface areas from available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are provided and verified using both objective (PSNR) and subjective comparisons, and improvements over the traditional multiple image view synthesis approach of view-oriented weighting are presented.
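The objective metric used for verification, PSNR, is standard and easy to compute directly; a minimal single-channel sketch:

```python
import math

def psnr(reference, synthesised, peak=255.0):
    # peak signal-to-noise ratio between a reference view and a
    # synthesised virtual view, both given as flat lists of intensities
    mse = sum((r - s) ** 2 for r, s in zip(reference, synthesised)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

Higher is better; identical images give infinite PSNR, and synthesis errors lower it in proportion to the log of the mean squared error.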
computer vision and pattern recognition | 2005
Ciarán Ó Conaire; Eddie Cooke; Noel E. O'Connor; Noel Murphy; A. Smearson
In this paper, we present our approach to robust background modelling which combines visible and thermal infrared spectrum data. Our work is based on the non-parametric background model described in [1]. We use a pedestrian detection module to prevent erroneous data from becoming part of the background model, which allows us to initialise our background model even in the presence of foreground objects. Visible and infrared features are used to remove incorrectly detected foreground regions, allowing our model to quickly recover from ghost regions and rapid lighting changes. An object-based shadow detector also improves our algorithm's performance.
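The non-parametric model this builds on estimates the background probability of a pixel as a kernel density over its recent samples; a simplified single-channel sketch (the bandwidth and threshold values are illustrative):

```python
import math

def kde_background_prob(pixel, samples, sigma=10.0):
    # Gaussian kernel density estimate over recent background samples
    # for one pixel location (simplified, single-channel)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return sum(norm * math.exp(-((pixel - s) ** 2) / (2.0 * sigma ** 2))
               for s in samples) / len(samples)

def is_foreground(pixel, samples, threshold=1e-3):
    # a pixel poorly explained by its background samples is foreground
    return kde_background_prob(pixel, samples) < threshold
```

Running one such estimate per modality, and vetoing sample updates inside detected pedestrians, captures the initialisation idea described above.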
international conference on image processing | 2005
Eddie Cooke; Noel E. O'Connor
Interactive audio-visual (AV) applications such as free viewpoint video (FVV) aim to enable unrestricted spatio-temporal navigation within multiple camera environments. Current virtual viewpoint view synthesis solutions for FVV are either purely image-based, implying large information redundancy, or involve reconstructing complex 3D models of the scene. In this paper we present a new multiple image view synthesis algorithm that only requires camera parameters and disparity maps. The multi-view synthesis (MVS) approach can be used in any multi-camera environment and is scalable, as virtual views can be created given 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best quality surface areas from available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are presented and verified using both objective (PSNR) and subjective comparisons.
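Given rectified cameras and disparity maps, placing a virtual view between two references amounts to disparity-scaled forward warping; a toy one-row sketch (hole filling, occlusion handling and blending omitted):

```python
def warp_row(row, disparities, alpha):
    # forward-warp one image row into a virtual view positioned a
    # fraction alpha in [0, 1] between two rectified reference cameras;
    # unfilled target pixels (holes) stay None
    out = [None] * len(row)
    for x, (value, d) in enumerate(zip(row, disparities)):
        xv = round(x + alpha * d)
        if 0 <= xv < len(out):
            out[xv] = value
    return out
```

The holes left by this step are where a multi-view method earns its keep: they can be filled from whichever other reference image sees that surface best.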
international conference on image analysis and recognition | 2006
Philip Kelly; Eddie Cooke; Noel E. O’Connor; Alan F. Smeaton
A method for pedestrian detection from real world outdoor scenes is presented in this paper. The technique uses disparity information, ground plane estimation and biometric information based on the golden ratio. It can detect pedestrians even in the presence of severe occlusion or a lack of reliable disparity data. It also makes reliable choices in ambiguous areas since the pedestrian regions are initiated using the disparity of head regions. These are usually highly textured and unoccluded, and therefore more reliable in a disparity image than homogeneous or occluded regions.
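One way a golden-ratio biometric constraint can work is to predict full body height from a detected head region and reject implausible candidate regions; an illustrative sketch (the paper's exact anthropometric construction may differ):

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def expected_body_height(head_height_px):
    # one common golden-ratio construction: total height is roughly
    # PHI**4 (~6.85) head heights -- illustrative, not the paper's model
    return head_height_px * PHI ** 4

def plausible_pedestrian(head_height_px, region_height_px, tol=0.25):
    # accept a candidate whose height is within tol of the expectation
    expected = expected_body_height(head_height_px)
    return abs(region_height_px - expected) / expected <= tol
```

Seeding regions from head disparities and then checking proportions like this explains why the method stays reliable under partial occlusion of the body.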
international symposium on circuits and systems | 2006
N.A. O'Connor; Hyowon Lee; Alan F. Smeaton; Gareth J. F. Jones; Eddie Cooke; H. Le Borgne; Cathal Gurrin
The Fischlar-TRECVid-2004 system was developed for Dublin City University's participation in the 2004 TRECVid video information retrieval benchmarking activity. The system allows search and retrieval of video shots from over 60 hours of content. The shot retrieval engine employed is based on a combination of query text matched against spoken dialogue combined with image-image matching, where a still image (sourced externally) or a keyframe (from within the video archive itself) is matched against all keyframes in the video archive. Three separate text retrieval engines are employed for closed caption text, automatic speech recognition and video OCR. Visual shot matching is primarily based on MPEG-7 low-level descriptors. The system supports relevance feedback at the shot level, enabling augmentation and refinement using relevant shots located by the user. Two variants of the system were developed: one that supports both text- and image-based searching and one that supports image-only search. A user evaluation experiment compared the use of the two systems. Results show that while the system combining text- and image-based searching achieves greater retrieval effectiveness, users make more varied and extensive queries with the image-only searching version.
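Combining a shot's text retrieval score with its best image-image keyframe match can be as simple as a weighted sum; an illustrative sketch (Fischlar-TRECVid's actual combination strategy is more involved than this):

```python
def combined_shot_score(text_score, image_scores, w_text=0.5):
    # fuse a text-retrieval score for a shot with the best of its
    # keyframe matching scores; weight and names are illustrative
    best_image = max(image_scores) if image_scores else 0.0
    return w_text * text_score + (1.0 - w_text) * best_image
```

Setting `w_text` to zero gives the image-only variant of the system; relevance feedback would re-query with the user's selected shots as additional image examples.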
Ninth International Conference on Information Visualisation (IV'05) | 2005
Eddie Cooke; Noel E. O'Connor
One of the main aims of emerging audio-visual (AV) applications is to provide interactive navigation within a captured event or scene. This paper presents a view synthesis algorithm that provides a scalable and flexible approach to virtual viewpoint synthesis in multiple camera environments. The multi-view synthesis (MVS) process consists of four different phases that are described in detail: surface identification, surface selection, surface boundary blending and surface reconstruction. MVS view synthesis identifies and selects only the best quality surface areas from the set of available reference images, thereby reducing perceptual errors in virtual view reconstruction. The approach is camera setup independent and scalable as virtual views can be created given 1 to N of the available video inputs. Thus, MVS provides interactive AV applications with a means to handle scenarios where camera inputs increase or decrease over time.
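The surface selection phase keeps, for each identified surface area, the reference image that renders it at the best quality; a toy sketch using a dictionary representation and illustrative names (not the paper's data structures):

```python
def select_best_surfaces(candidates):
    # candidates maps a surface id to a list of (reference id, quality
    # score) pairs; keep the highest-quality reference per surface
    return {sid: max(refs, key=lambda rq: rq[1])[0]
            for sid, refs in candidates.items()}
```

Because the choice is made per surface rather than per view, dropping or adding a camera only changes the candidate lists, which is what makes the approach scale from 1 to N inputs.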
Lecture Notes in Computer Science | 2003
Eddie Cooke; Peter Kauff; Oliver Schreer
Image-based rendering systems are designed to render a virtual view of a scene based on a set of images and correspondences between these images. This approach is attractive as it does not require explicit scene reconstruction. In this paper we identify that the level of realism of the virtual view is dependent on the camera set-up and the quality of the image analysis and synthesis processes. We explain how wide-baseline convergent camera set-ups and virtual view independent approaches to surface selection have led to the development of very system-specific solutions. We then introduce a unique scalable and modular system solution. This scalable system is configured using building blocks defined as SCABs. These provide design flexibility and improve the image analysis process. Virtual view creation is modular in that we can add or remove SCABs based on our particular requirements without having to modify the view synthesis algorithm.