Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Noel Murphy is active.

Publication


Featured research published by Noel Murphy.


international conference on image processing | 1997

Context-based arithmetic encoding of 2D shape sequences

Noel Brady; Frank Bossen; Noel Murphy

A new method for shape coding in object-based video sequences is presented. Context-based arithmetic encoding, as used in JBIG, is utilised within a block-based framework and further extended in order to make efficient use of temporal prediction. It is shown to be a simple, efficient and elegant solution.
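The context-modelling idea behind this kind of coder can be sketched in miniature: a causal template of previously coded pixels indexes a probability table that an arithmetic coder would then consume. The four-pixel template below is a deliberate simplification for illustration, not the actual JBIG or MPEG-4 template.

```python
# Simplified sketch of context modelling for binary shape coding.
# A small causal template (left, up-left, up, up-right) forms a context
# index per pixel; per-context counts give the probability estimates an
# arithmetic coder would use. Template and smoothing are illustrative.

def context_index(mask, x, y):
    # Causal neighbours of (x, y); out-of-bounds pixels read as 0.
    def px(i, j):
        return mask[j][i] if 0 <= j < len(mask) and 0 <= i < len(mask[0]) else 0
    bits = [px(x - 1, y), px(x - 1, y - 1), px(x, y - 1), px(x + 1, y - 1)]
    idx = 0
    for b in bits:
        idx = (idx << 1) | b
    return idx

def estimate_probabilities(mask):
    # counts[ctx] = [zeros_seen, ones_seen] under that context.
    counts = {}
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            ctx = context_index(mask, x, y)
            c = counts.setdefault(ctx, [1, 1])  # Laplace smoothing
            c[mask[y][x]] += 1
    return {ctx: c[1] / (c[0] + c[1]) for ctx, c in counts.items()}
```

The real coders use larger templates (and, in the temporal extension described above, pixels from the previous frame), but the structure is the same: context in, probability out.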


international conference on digital signal processing | 2002

Rhythm detection for speech-music discrimination in MPEG compressed domain

Roman Jarina; Noel E. O'Connor; Seán Marlow; Noel Murphy

A novel approach to speech-music discrimination based on rhythm (or beat) detection is introduced. Rhythmic pulses are detected by applying a long-term autocorrelation method on band-passed signals. This approach is combined with another, in which the features describe the energy peaks of the signal. The discriminator uses just three features that are computed from data directly taken from an MPEG-1 bitstream. The discriminator was tested on more than 3 hours of audio data. Average recognition rate is 97.7%.
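The long-term autocorrelation step can be sketched as follows; the per-frame energy envelope and the lag search range are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of rhythm detection by long-term autocorrelation of a
# per-frame energy envelope: a strong peak at some lag indicates a
# rhythmic pulse with that period.

def autocorrelation(env, lag):
    # Normalised autocorrelation of the envelope at a given lag.
    n = len(env) - lag
    mean = sum(env) / len(env)
    num = sum((env[i] - mean) * (env[i + lag] - mean) for i in range(n))
    den = sum((e - mean) ** 2 for e in env)
    return num / den if den else 0.0

def dominant_period(env, min_lag, max_lag):
    # Lag with the strongest autocorrelation peak in the search range.
    return max(range(min_lag, max_lag + 1), key=lambda l: autocorrelation(env, l))
```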


Pattern Recognition | 2002

Automatic TV advertisement detection from MPEG bitstream

David A. Sadlier; Seán Marlow; Noel E. O'Connor; Noel Murphy

The Centre for Digital Video Processing at Dublin City University conducts concentrated research and development in the area of digital video management. The current stage of development is demonstrated on our Web-based digital video system called Fischlar (Proceedings of the Content based Multimedia Information Access, RIAO 2000, Vol. 2, Paris, France, 12–14 April 2000, p. 1390), which provides for efficient recording, analysing, browsing and viewing of digitally captured television programmes. Advertisement breaks during or between television programmes are typically marked by a series of ‘black’ video frames accompanied by a depression in audio volume; these flags recur before and after each individual advertisement, separating the advertisements from one another. It is the regular prevalence of these flags that enables automatic differentiation between programme material and commercial breaks. This paper reports on the progress made in developing this idea into a detector system that automatically identifies commercial breaks in the bitstream of digitally captured television broadcasts.
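The break-flag idea lends itself to a compact sketch: a flag is a run of near-black frames whose co-timed audio level is also depressed. The thresholds, minimum run length, and per-frame luminance/volume inputs below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative break-flag detector: find runs where frames are near-black
# AND audio volume is depressed at the same time. Returns (start, end)
# frame-index ranges, end exclusive.

def detect_flags(luma, volume, luma_thresh=16, vol_thresh=0.05, min_run=3):
    flags, run_start = [], None
    for i, (l, v) in enumerate(zip(luma, volume)):
        if l < luma_thresh and v < vol_thresh:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                flags.append((run_start, i))
            run_start = None
    if run_start is not None and len(luma) - run_start >= min_run:
        flags.append((run_start, len(luma)))
    return flags
```

A full detector would then look for clusters of such flags at advertisement-like spacings to delimit the whole commercial break.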


acm/ieee joint conference on digital libraries | 2001

The Físchlár digital video system: a digital library of broadcast TV programmes

Alan F. Smeaton; Noel Murphy; Noel E. O'Connor; Seán Marlow; Hyowon Lee; Kieran McDonald; Paul Browne; Jiamin Ye

Físchlár is a system for recording, indexing, browsing and playback of broadcast TV programmes which has been operational on our University campus for almost 18 months. In this paper we give a brief overview of how the system operates, how TV programmes are organised for browse/playback and a short report on the system usage by over 900 users in our University.


human computer interaction with mobile devices and services | 2005

Mobile access to personal digital photograph archives

Cathal Gurrin; Gareth J. F. Jones; Hyowon Lee; Neil O'Hare; Alan F. Smeaton; Noel Murphy

Handheld computing devices are becoming highly connected devices with high capacity storage. This has resulted in their being able to support storage of, and access to, personal photo archives. However the only means for mobile device users to browse such archives is typically a simple one-by-one scroll through image thumbnails in the order that they were taken, or by manually organising them based on folders. In this paper we describe a system for context-based browsing of personal digital photo archives. Photos are labeled with the GPS location and time they are taken and this is used to derive other context-based metadata such as weather conditions and daylight conditions. We present our prototype system for mobile digital photo retrieval, and an experimental evaluation illustrating the utility of location information for effective personal photo retrieval.
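The context-derivation step can be sketched minimally: each photo carries the time and GPS position at which it was taken, and coarse labels such as daylight condition are derived from that. A real system would consult sunrise/sunset tables for the given location and date; the fixed hour bounds below are an assumption for illustration.

```python
# Minimal sketch of deriving context metadata from capture time. The
# sunrise/sunset hour bounds are illustrative placeholders.

from datetime import datetime

def daylight_label(taken_at, sunrise_hour=6, sunset_hour=20):
    return "daylight" if sunrise_hour <= taken_at.hour < sunset_hour else "darkness"

def annotate(photos):
    # photos: list of dicts with 'time' (datetime) and 'gps' (lat, lon).
    return [dict(p, daylight=daylight_label(p["time"])) for p in photos]
```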


acm multimedia | 2005

My digital photos: where and when?

Neil O'Hare; Cathal Gurrin; Hyowon Lee; Noel Murphy; Alan F. Smeaton; Gareth J. F. Jones

In recent years digital cameras have seen an enormous rise in popularity, leading to a huge increase in the quantity of digital photos being taken. This brings with it the challenge of organising these large collections. We present work which organises personal digital photo collections based on date/time and GPS location, which we believe will become a key organisational methodology over the next few years as consumer digital cameras evolve to incorporate GPS and as cameras in mobile phones spread further. The accompanying video illustrates the results of our research into digital photo management tools, containing a series of screen and user interactions highlighting how a user utilises the tools we are developing to manage a personal archive of digital photos.


international conference on acoustics, speech, and signal processing | 2001

Fischlar: an on-line system for indexing and browsing broadcast television content

Noel E. O'Connor; Seán Marlow; Noel Murphy; Alan F. Smeaton; Paul Browne; Seán Deasy; Hyowon Lee; Kieran McDonald

This paper describes a demonstration system which automatically indexes broadcast television content for subsequent non-linear browsing. User-specified television programmes are captured in MPEG-1 format and analysed using a number of video indexing tools such as shot boundary detection, keyframe extraction, shot clustering and news story segmentation. A number of different interfaces have been developed which allow a user to browse the visual index created by these analysis tools. These interfaces are designed to facilitate users locating video content of particular interest. Once such content is located, the MPEG-1 bitstream can be streamed to the user in real-time. This paper describes both the high-level functionality of the system and the low-level indexing tools employed, as well as giving an overview of the different browsing mechanisms employed.
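Of the indexing tools listed, shot boundary detection is the simplest to sketch: a hard cut shows up as a large histogram difference between consecutive frames. The bin count, threshold, and frame representation below are illustrative assumptions, not the system's actual parameters.

```python
# Hedged sketch of histogram-difference shot-cut detection. Frames are
# flat lists of 8-bit pixel values; a cut is declared when the normalised
# histogram difference between consecutive frames exceeds a threshold.

def histogram(frame, bins=8, max_val=256):
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    return h

def shot_cuts(frames, threshold=0.5):
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        # Normalised L1 histogram difference in [0, 1].
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts
```

Gradual transitions (fades, dissolves) need more than a single-frame difference, which is why production systems layer further analysis on top of this basic test.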


computer vision and pattern recognition | 2005

Background Modelling in Infrared and Visible Spectrum Video for People Tracking

Ciarán Ó Conaire; Eddie Cooke; Noel E. O'Connor; Noel Murphy; A. Smearson

In this paper, we present our approach to robust background modelling which combines visible and thermal infrared spectrum data. Our work is based on the non-parametric background model described in [1]. We use a pedestrian detection module to prevent erroneous data from becoming part of the background model, and this allows us to initialise our background model even in the presence of foreground objects. Visible and infrared features are used to remove incorrectly detected foreground regions, allowing our model to recover quickly from ghost regions and rapid lighting changes. An object-based shadow detector also improves our algorithm's performance.
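The non-parametric model the paper builds on can be sketched per pixel: a Gaussian kernel is placed on each recent sample of that pixel, and the current value is classed as background if the resulting density is high enough. Bandwidth and threshold below are illustrative assumptions.

```python
# Sketch of a per-pixel kernel-density background model: likelihood of a
# value under Gaussian kernels centred on recent samples of that pixel.

import math

def background_likelihood(samples, value, bandwidth=8.0):
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((value - s) / bandwidth) ** 2)
                      for s in samples)

def is_background(samples, value, threshold=1e-3):
    return background_likelihood(samples, value) > threshold
```

In the combined visible/infrared setting described above, a pixel would carry one such model per modality and the decisions would be fused.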


international conference on acoustics, speech, and signal processing | 2004

A generic news story segmentation system and its evaluation

Neil O'Hare; Alan F. Smeaton; Csaba Czirjek; Noel E. O'Connor; Noel Murphy

The paper presents an approach to segmenting broadcast TV news programmes automatically into individual news stories. We first segment the programme into individual shots, and then a number of analysis tools are run on the programme to extract features to represent each shot. The results of these feature extraction tools are then combined using a support vector machine trained to detect anchorperson shots. A news broadcast can then be segmented into individual stories based on the location of the anchorperson shots within the programme. We use one generic system to segment programmes from two different broadcasters, illustrating the robustness of our feature extraction process to the production styles of different broadcasters.
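The final segmentation step can be sketched directly: given per-shot anchorperson decisions (which in the paper come from an SVM over the extracted features), each detected anchor shot opens a new story. The boolean inputs below stand in for the classifier's output.

```python
# Sketch of story segmentation from anchorperson-shot decisions: each
# anchor shot starts a new story; following shots belong to that story.

def segment_stories(is_anchor):
    stories, current = [], []
    for shot, anchor in enumerate(is_anchor):
        if anchor and current:
            stories.append(current)
            current = []
        current.append(shot)
    if current:
        stories.append(current)
    return stories
```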


conference on image and video retrieval | 2005

Dialogue sequence detection in movies

Bart Lehane; Noel E. O’Connor; Noel Murphy

Dialogue sequences constitute an important part of any movie or television program and their successful detection is an essential step in any movie summarisation/indexing system. The focus of this paper is to detect sequences of dialogue, rather than complete scenes. We argue that these shorter sequences are more desirable as retrieval units than temporally long scenes. This paper combines various audiovisual features that reflect accepted and well-known film-making conventions, using a selection of machine learning techniques, in order to detect such sequences. Three systems for detecting dialogue sequences are proposed: one based primarily on audio analysis, one based primarily on visual analysis and one that combines the results of both. The performance of the three systems is compared using a manually marked-up test corpus drawn from a variety of movies of different genres. Results show that high precision and recall can be obtained using low-level features that are automatically extracted.
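The fusion of the two systems can be sketched as a toy score combiner: per-candidate audio and visual dialogue scores (in the paper, outputs of the separate ML-based systems) are averaged, and a sequence is labelled dialogue when the fused score passes a threshold. The averaging rule and threshold are illustrative assumptions.

```python
# Toy audio/visual score fusion for dialogue-sequence detection.
# scores: list of (audio_score, visual_score) per candidate sequence.

def detect_dialogue(scores, threshold=0.5):
    return [i for i, (a, v) in enumerate(scores) if (a + v) / 2 >= threshold]
```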

Collaboration


Dive into Noel Murphy's collaborations.

Top Co-Authors

Bart Lehane

Dublin City University
