Noel E. O’Connor
Dublin City University
Publications
Featured research published by Noel E. O’Connor.
Machine Vision and Applications | 2008
Ciarán Ó Conaire; Noel E. O’Connor; Alan F. Smeaton
In this paper, we propose a framework that can efficiently combine features for robust tracking based on fusing the outputs of multiple spatiogram trackers. This is achieved without the exponential increase in storage and processing that other multimodal tracking approaches suffer from. The framework allows the features to be split arbitrarily between the trackers, as well as providing the flexibility to add, remove or dynamically weight features. We derive a mean-shift type algorithm for the framework that allows efficient object tracking with very low computational overhead. We especially target the fusion of thermal infrared and visible spectrum features as the most useful features for automated surveillance applications. Results are shown on multimodal video sequences clearly illustrating the benefits of combining multiple features using our framework.
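The fusion step can be sketched in a few lines (a minimal illustration, not the paper's spatiogram machinery; the function name and the confidence-weighting rule are assumptions): each tracker proposes an object position and a confidence, and the framework combines them with normalised weights.

```python
import numpy as np

def fuse_tracker_estimates(estimates, confidences):
    """Confidence-weighted fusion of per-tracker position estimates."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()  # normalise dynamic feature weights
    return weights @ estimates         # weighted mean of proposed centres

# Two trackers (e.g. visible-spectrum and thermal) propose object centres (x, y).
fused = fuse_tracker_estimates([[10.0, 20.0], [14.0, 24.0]], [3.0, 1.0])
```

Because each tracker runs independently, features can be added, removed or re-weighted without the exponential growth in state that joint multimodal representations incur.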
International Symposium on Neural Networks | 2006
Daniel Larkin; Andrew Kinane; Valentin Muresan; Noel E. O’Connor
This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable for numerous activation functions. This has been achieved by using a minimax polynomial and through optimal placement of the approximating polynomials based on the results of a genetic algorithm. The approximation error of the proposed method compares favourably to all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.
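The spline-approximation idea can be illustrated as follows (a sketch under stated assumptions: least-squares fitting stands in for the paper's minimax polynomials, and segment placement is uniform rather than optimised by a genetic algorithm):

```python
import numpy as np

def build_spline(fn, lo, hi, n_segments, degree=2):
    """Fit one low-degree polynomial per segment of [lo, hi]."""
    edges = np.linspace(lo, hi, n_segments + 1)
    segments = []
    for a, b in zip(edges[:-1], edges[1:]):
        x = np.linspace(a, b, 50)
        segments.append((a, b, np.polyfit(x, fn(x), degree)))
    return segments

def eval_spline(segments, x):
    """Evaluate the piecewise-polynomial approximation at x."""
    for a, b, coeffs in segments:
        if a <= x <= b:
            return np.polyval(coeffs, x)
    raise ValueError("x outside approximated range")

# Approximate tanh (a common ANN activation) with 8 quadratic segments.
segs = build_spline(np.tanh, -4.0, 4.0, 8)
err = abs(eval_spline(segs, 1.3) - np.tanh(1.3))
```

In hardware, each segment reduces to a small coefficient table plus a low-degree polynomial evaluation, which is where the area/accuracy trade-off comes from.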
Journal of Biomechanics | 2014
Chris Richter; Noel E. O’Connor; Brendan Marshall; Kieran Moran
The aim of this study was to assess and compare the ability of discrete point analysis (DPA), functional principal component analysis (fPCA) and analysis of characterizing phases (ACP) to describe a dependent variable (jump height) using vertical ground reaction force curves captured during the propulsion phase of a countermovement jump. fPCA and ACP are continuous data analysis techniques that reduce the dimensionality of a data set by identifying phases of variation (key phases), which are used to generate subject scores that describe a subject's behavior. A stepwise multiple regression analysis was used to measure the ability of each data analysis technique to describe jump height. Findings indicated that the order of effectiveness (high to low) across the examined techniques was: ACP (99%), fPCA (78%) and DPA (21%). DPA was outperformed by fPCA and ACP because it can inadvertently compare unrelated features, does not analyze the whole data set and cannot examine important features that occur solely as a phase. ACP outperformed fPCA because it utilizes information within the combined magnitude-time domain, and identifies and examines key phases separately without the deleterious interaction of other key phases.
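The continuous-analysis pipeline can be sketched with ordinary PCA standing in for fPCA (illustrative only; the synthetic curves and the single-component correlation below are assumptions, not the study's data or regression model):

```python
import numpy as np

def pca_scores(curves, n_components):
    """Project force curves onto their principal components (fPCA-style)."""
    centred = curves - curves.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

# Synthetic example: 20 subjects, 100-sample force curves that vary in amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
amplitude = rng.uniform(1.0, 2.0, 20)
curves = amplitude[:, None] * np.sin(np.pi * t)[None, :]

scores = pca_scores(curves, 1)        # one subject score per curve
jump_height = 0.5 * amplitude         # toy dependent variable
r = np.corrcoef(scores[:, 0], jump_height)[0, 1]
```

Because the toy curves vary only in amplitude, the first component captures it and the subject scores correlate almost perfectly with the dependent variable, mirroring how component scores feed the stepwise regression.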
Science and Engineering Ethics | 2016
Fiachra O’Brolcháin; Tim Jacquemard; David S. Monaghan; Noel E. O’Connor; Peter Novitzky; Bert Gordijn
The rapid evolution of information, communication and entertainment technologies will transform the lives of citizens and ultimately transform society. This paper focuses on ethical issues associated with the likely convergence of virtual realities (VR) and social networks (SNs), hereafter VRSNs. We examine a scenario in which a significant segment of the world’s population has a presence in a VRSN. Given the pace of technological development and the popularity of these new forms of social interaction, this scenario is plausible. However, it brings with it ethical problems. Two central ethical issues are addressed: those of privacy and those of autonomy. VRSNs pose threats to both privacy and autonomy. The threats to privacy can be broadly categorized as threats to informational privacy, threats to physical privacy, and threats to associational privacy. Each of these threats is further subdivided. The threats to autonomy can be broadly categorized as threats to freedom, to knowledge and to authenticity. Again, these three threats are divided into subcategories. Having categorized the main threats posed by VRSNs, a number of recommendations are provided so that policy-makers, developers, and users can make the best possible use of VRSNs.
European Conference on Computer Vision | 2012
Cem Direkoǧlu; Noel E. O’Connor
We introduce a novel approach for team activity recognition in sports. Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth distribution defined on the whole playing field, termed the position distribution of the team. Computing the position distribution for each frame provides a sequence of distributions, which we process to extract motion features for team activity recognition. The motion features are obtained at each frame using frame differencing and optical flow. We investigate the use of the proposed motion descriptors with Support Vector Machine (SVM) classification, and evaluate on a publicly available European handball dataset. Results show that our approach can classify six different team activities and performs better than a method that extracts features from the explicitly defined positions. Our method differs from other trajectory-based approaches, which extract activity features from explicitly defined trajectories, where players occupy specific positions at any given time, and ignore the rest of the playing field. In our work, by contrast, given the positions of the team players at a frame, we construct a position distribution for the team over the whole playing field and process the resulting sequence of distribution images to extract motion features for activity recognition.
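A minimal sketch of the position-distribution step (the grid size, unit point sources at player positions, zero boundary, and plain Jacobi iteration as the Poisson solver are all assumptions for illustration):

```python
import numpy as np

def position_distribution(players, shape=(40, 60), iters=2000):
    """Smooth team position distribution: solve a discrete Poisson equation
    with point sources at the player positions (Jacobi, zero boundary)."""
    f = np.zeros(shape)
    for r, c in players:          # players given as (row, col) grid cells
        f[r, c] = 1.0
    u = np.zeros(shape)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + f[1:-1, 1:-1])
    return u

dist = position_distribution([(10, 15), (20, 30), (30, 45)])
```

The result is a smooth, everywhere-defined field that peaks near the players and decays over the rest of the playing area, which is what makes frame differencing and optical flow on the distribution sequence meaningful.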
Talanta | 2015
Kevin Murphy; Brendan Heery; Timothy Sullivan; Dian Zhang; Lizandra Paludetti; King Tong Lau; Dermot Diamond; Ernane José Xavier Costa; Noel E. O’Connor; Fiona Regan
A low-cost optical sensor for monitoring the aquatic environment is presented, with the construction and design described in detail. The autonomous optical sensor is devised to be environmentally robust, easily deployable and simple to operate. It consists of a multi-wavelength light source with two photodiode detectors capable of measuring the transmission and side-scattering of the light in the detector head. This enables the sensor to give qualitative data on the changes in the optical opacity of the water. Laboratory tests to confirm colour and turbidity-related responses are described and the results given. The autonomous sensor underwent field deployments in an estuarine environment, and the results presented here show the sensor's capacity to detect changes in opacity and colour relating to potential pollution events. The application of this low-cost optical sensor is in the area of environmental pollution alerts to support a water monitoring programme, where multiple such sensors could be deployed as part of a network.
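A qualitative change index from the two photodiode channels could be computed along these lines (the combination rule and reference readings are illustrative assumptions, not the deployed sensor's processing):

```python
def opacity_change(transmission, scatter, t_ref, s_ref):
    """Qualitative opacity-change index: relative drop in transmitted light
    plus relative rise in side-scattered light vs. clean-water references."""
    return (t_ref - transmission) / t_ref + (scatter - s_ref) / s_ref

# A drop in transmission and a rise in side-scatter both raise the index.
index = opacity_change(0.5, 0.3, t_ref=1.0, s_ref=0.1)
```

Repeating the computation per wavelength of the multi-wavelength source would additionally separate colour changes from plain turbidity.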
Conference on Image and Video Retrieval | 2005
Bart Lehane; Noel E. O’Connor; Noel Murphy
Dialogue sequences constitute an important part of any movie or television program, and their successful detection is an essential step in any movie summarisation/indexing system. The focus of this paper is to detect sequences of dialogue rather than complete scenes. We argue that these shorter sequences are more desirable as retrieval units than temporally long scenes. This paper combines various audiovisual features that reflect accepted and well-known film-making conventions, using a selection of machine learning techniques, in order to detect such sequences. Three systems for detecting dialogue sequences are proposed: one based primarily on audio analysis, one based primarily on visual analysis, and one that combines the results of both. The performance of the three systems is compared using a manually marked-up test corpus drawn from a variety of movies of different genres. Results show that high precision and recall can be obtained using automatically extracted low-level features.
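The combined system's late fusion might look like this (a sketch; the averaging rule and threshold are assumptions, not the paper's exact combination scheme):

```python
def fuse_dialogue_detections(audio_probs, visual_probs, threshold=0.5):
    """Label a shot as dialogue when the mean of the audio-based and
    visual-based probabilities reaches the threshold."""
    return [(a + v) / 2 >= threshold
            for a, v in zip(audio_probs, visual_probs)]

# Per-shot dialogue probabilities from the audio and visual subsystems.
labels = fuse_dialogue_detections([0.9, 0.2, 0.6], [0.7, 0.1, 0.5])
```

Fusing at the decision level lets either modality compensate when the other is unreliable, e.g. music masking speech or unconventional shot composition.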
International Conference on Image Analysis and Recognition | 2004
Saman H. Cooray; Noel E. O’Connor
A hybrid technique based on facial feature extraction and Principal Component Analysis (PCA) is presented for frontal face detection in color images. Facial features such as eyes and mouth are automatically detected based on properties of the associated image regions, which are extracted by RSST color segmentation. While mouth feature points are identified using the redness property of regions, a simple search strategy relative to the position of the mouth is carried out to identify eye feature points from a set of regions. Priority is given to regions which signal high intensity variance, thereby allowing the most probable eye regions to be selected. On detecting a mouth and two eyes, a face verification step based on Eigenface theory is applied to a normalized search space in the image relative to the distance between the eye feature points.
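The Eigenface verification step can be sketched as a reconstruction-error test in the face subspace (the training data, dimensionality and threshold below are synthetic illustrations, not the paper's setup):

```python
import numpy as np

def eigenface_verify(face_vec, mean_face, eigenfaces, threshold):
    """Accept a candidate region as a face if its reconstruction error
    in the eigenface subspace is below the threshold."""
    centred = face_vec - mean_face
    weights = eigenfaces @ centred           # project onto eigenfaces
    reconstruction = eigenfaces.T @ weights  # back-project into image space
    return np.linalg.norm(centred - reconstruction) < threshold

rng = np.random.default_rng(1)
train = rng.normal(size=(10, 64))    # 10 synthetic "faces", 64-dim each
mean_face = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_face, full_matrices=False)
eigenfaces = vt[:5]                  # top-5 orthonormal eigenfaces
in_span = mean_face + 3.0 * eigenfaces[0]   # lies exactly in the subspace
```

The candidate window is first normalised using the detected eye positions, so that the PCA comparison is made at a canonical scale and alignment.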
Journal of Real-time Image Processing | 2017
Rafal Kapela; Kevin McGuinness; Noel E. O’Connor
This paper presents a novel approach to recognizing the scene depicted in an image, with specific application to scene classification in field sports video. We propose several variants of the algorithm, ranging from bags of visual words to a simplified real-time implementation that takes only the most important areas of similar colour into account. All variants achieve similar accuracy, comparable to well-known image description techniques such as SIFT or HOG. For comparison purposes, we also developed a dedicated database, which is now available online. The algorithm's speed and robustness to image resolution make it well suited to the scene recognition task and a good candidate for real-time video indexing systems. The procedure is also simple, as it is based on the well-known Fourier transform.
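The bag-of-visual-words variant reduces to nearest-codeword assignment and histogramming (a generic sketch of the standard technique; the codebook and descriptors below are toy values, not the paper's learned vocabulary):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and
    accumulate a normalised visual-word histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                      # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])      # toy 2-word vocabulary
descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]])
hist = bow_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what a scene classifier consumes, regardless of how many local descriptors the image produced.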
Advanced Concepts for Intelligent Vision Systems | 2005
Hervé Le Borgne; Noel E. O’Connor
This paper deals with knowledge extraction from visual data for content-based image retrieval of natural scenes. Images are analysed using a ridgelet transform that enhances information at different scales, orientations and spatial localizations. The main contribution of this work is to propose a method that reduces the size and the redundancy of this ridgelet representation, by defining both global and local signatures that are specifically designed for semantic classification and content-based retrieval. An effective recognition system can be built when these descriptors are used in conjunction with a support vector machine (SVM). Classification and retrieval experiments are conducted on natural scenes, to demonstrate the effectiveness of the approach.
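The reduction of the ridgelet representation to a compact global signature can be illustrated as per-band energy pooling (an illustrative reduction; the coefficients are assumed precomputed, and the toy bands below are not real transform output):

```python
import numpy as np

def band_energy_signature(coeffs_by_band):
    """Global signature: mean absolute coefficient energy per
    (scale, orientation) band of a directional transform."""
    return np.array([np.abs(band).mean() for band in coeffs_by_band])

# Toy bands: one strongly oriented response, one empty band.
bands = [np.array([[1.0, -1.0], [1.0, -1.0]]), np.zeros((2, 2))]
signature = band_energy_signature(bands)
```

A signature of this form, one value per scale and orientation, is the kind of fixed-length descriptor that feeds directly into an SVM for semantic classification.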