Eamonn Newman
Dublin City University
Publications
Featured research published by Eamonn Newman.
cross language evaluation forum | 2008
Martha Larson; Eamonn Newman; Gareth J. F. Jones
The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content. In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual language video (Dutch-language television content featuring English-speaking experts and studio guests). The task offered two additional discretionary subtasks: feed translation and automatic keyframe extraction. Task participants were supplied with Dutch archival metadata, Dutch speech transcripts, English speech transcripts and ten thematic category labels, which they were required to assign to the test set videos. The videos were grouped by class label into topic-based RSS feeds, displaying title, description and keyframe for each video. Five groups participated in the 2008 VideoCLEF track. Participants were required to collect their own training data; both Wikipedia and general web content were used. Groups deployed various classifiers (SVM, Naive Bayes and k-NN) or treated the problem as an information retrieval task. Both the Dutch speech transcripts and the archival metadata performed well as sources of indexing features, but no group succeeded in exploiting combinations of feature sources to significantly enhance performance. A small-scale fluency/adequacy evaluation of the translation task output revealed the translation to be of sufficient quality to make it valuable to a non-Dutch-speaking English speaker. For keyframe extraction, the strategy chosen was to select the keyframe from the shot with the most representative speech transcript content. The automatically selected shots were shown, in a small user study, to be competitive with manually selected shots. Future years of VideoCLEF will aim to expand the corpus and the class label list, as well as to extend the track to additional tasks.
european conference on information retrieval | 2004
Nicola Stokes; Eamonn Newman; Joe Carthy; Alan F. Smeaton
In this paper we describe an extractive method of creating very short summaries or gists that capture the essence of a news story using a linguistic technique called lexical chaining. The recent interest in robust gisting and title generation techniques originates from a need to improve the indexing and browsing capabilities of interactive digital multimedia systems. More specifically these systems deal with streams of continuous data, like a news programme, that require further annotation before they can be presented to the user in a meaningful way. We automatically evaluate the performance of our lexical chaining-based gister with respect to four baseline extractive gisting methods on a collection of closed caption material taken from a series of news broadcasts. We also report results of a human-based evaluation of summary quality. Our results show that our novel lexical chaining approach to this problem outperforms standard extractive gisting methods.
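The lexical chaining approach described above can be illustrated in miniature. The sketch below builds chains from word repetition only (full lexical chaining also links thesaural relations such as synonymy and hypernymy, typically via WordNet) and extracts the sentence participating in the strongest chains; all function names and the length-based content-word filter are illustrative assumptions, not the LexGister implementation:

```python
from collections import defaultdict

def repetition_chains(sentences):
    """Group repeated content words across sentences into chains.
    Word repetition is the simplest cohesive tie; real lexical chaining
    also follows thesaural relations (synonymy, hypernymy)."""
    chains = defaultdict(list)            # word -> sentence indices
    for i, s in enumerate(sentences):
        tokens = {w.strip(".,;:!?") for w in s.lower().split()}
        for w in tokens:
            if len(w) > 3:                # crude content-word filter
                chains[w].append(i)
    return {w: idx for w, idx in chains.items() if len(idx) > 1}

def gist_sentence(sentences):
    """Return the index of the sentence participating most in chains."""
    chains = repetition_chains(sentences)
    def score(i):
        return sum(len(idx) for idx in chains.values() if i in idx)
    return max(range(len(sentences)), key=score)

sentences = [
    "A powerful storm hit the coast overnight.",
    "The storm caused widespread flooding along the coast.",
    "Officials urged residents to stay indoors.",
]
chains = repetition_chains(sentences)
best = gist_sentence(sentences)
```

The third sentence shares no chain words, so one of the first two (which repeat "storm" and "coast") is selected as the gist.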
bioinformatics and biomedicine | 2014
Feiyan Hu; Alan F. Smeaton; Eamonn Newman
Lifelogging is the ambient, continuous digital recording of a person's everyday activities for a variety of possible applications. Much of the work to date in lifelogging has focused on developing sensors, capturing information, processing it into events and then supporting event-based access to the lifelog for applications like memory recall, behaviour analysis or similar. With the recent arrival of aggregating platforms such as Apple's HealthKit, Microsoft's HealthVault and Google's Fit, we are now able to collect and aggregate data from lifelog sensors, to centralize the management of data and in particular to search for and detect patterns of usage for individuals and across populations. In this paper, we present a framework that detects both low-level and high-level periodicity in lifelog data, detecting hidden patterns of which users would not otherwise be aware. We detect periodicities of time series using a combination of correlograms and periodograms, using various signal processing algorithms. Periodicity detection in lifelogs is particularly challenging because the lifelog data itself is not always continuous and can have gaps as users may use their lifelog devices intermittently. To illustrate that periodicity can be detected from such data, we apply periodicity detection on three lifelog datasets with varying levels of completeness and accuracy.
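The periodogram half of the approach can be sketched as follows. This is a minimal illustration on synthetic hourly counts, not the paper's implementation; the function name, the candidate-period interface, and the mean-filling of gaps are all assumptions:

```python
import numpy as np

def detect_periodicity(signal, candidate_periods):
    """Score candidate periods of a 1-D signal via a simple periodogram
    (squared DFT magnitude). Gaps (NaNs) are filled with the signal mean,
    a common workaround for intermittent lifelog recordings."""
    x = np.asarray(signal, dtype=float)
    x = np.where(np.isnan(x), np.nanmean(x), x)   # fill recording gaps
    x = x - x.mean()
    n = len(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0)             # one sample per hour
    # score each candidate period at its nearest frequency bin
    return {p: power[int(np.argmin(np.abs(freqs - 1.0 / p)))]
            for p in candidate_periods}

# Hourly step counts over 14 days with a clear 24-hour rhythm and a gap
t = np.arange(14 * 24).astype(float)
rng = np.random.default_rng(0)
counts = 100 + 80 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
counts[40:48] = np.nan                            # device not worn
scores = detect_periodicity(counts, [12, 24, 48])
best = max(scores, key=scores.get)
```

The 24-hour candidate dominates the periodogram despite the gap, which is the kind of hidden regularity the framework is designed to surface.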
content based multimedia indexing | 2008
James Carmichael; Martha Larson; Jennifer Marlow; Eamonn Newman; Paul D. Clough; Johan Oomen; Sorin Vasile Sav
This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their information needs. In this work, we focus on the information needs of multimedia specialists at a Dutch cultural heritage institution with a large multimedia archive. A quantitative and qualitative assessment is made of the efficiency of search operations using our multimodal system and it is demonstrated that MIAS significantly facilitates information retrieval operations when searching within a video document.
european conference on information retrieval | 2005
Ruichao Wang; Nicola Stokes; William P. Doran; Eamonn Newman; Joe Carthy; John Dunnion
In this paper we compare a number of Topiary-style headline generation systems. The Topiary system, developed at the University of Maryland with BBN, was the top performing headline generation system at DUC 2004. Topiary-style headlines consist of a number of general topic labels followed by a compressed version of the lead sentence of a news story. The Topiary system uses a statistical learning approach to finding topic labels for headlines, while our approach, the LexTrim system, identifies key summary words by analysing the lexical cohesive structure of a text. The performance of these systems is evaluated using the ROUGE evaluation suite on the DUC 2004 news stories collection. The results of these experiments show that a baseline system that identifies topic descriptors for headlines using term frequency counts outperforms the LexTrim and Topiary systems. A manual evaluation of the headlines also confirms this result.
BioMed Research International | 2016
Matthew P. Buman; Feiyan Hu; Eamonn Newman; Alan F. Smeaton; Dana R. Epstein
Periodicities (repeating patterns) are observed in many human behaviors. Their strength may capture untapped patterns that incorporate sleep, sedentary, and active behaviors into a single metric indicative of better health. We present a framework to detect periodicities from longitudinal wrist-worn accelerometry data. GENEActiv accelerometer data were collected from 20 participants (17 men, 3 women, aged 35–65) continuously for 64.4 ± 26.2 (range: 13.9 to 102.0) consecutive days. Cardiometabolic risk biomarkers and health-related quality of life metrics were assessed at baseline. Periodograms were constructed to determine patterns emergent from the accelerometer data. Periodicity strength was calculated using circular autocorrelations for time-lagged windows. The most notable periodicity was at 24 h, indicating a circadian rest-activity cycle; however, its strength varied significantly across participants. Periodicity strength was most consistently associated with LDL-cholesterol (rs = 0.40–0.79, Ps < 0.05) and triglycerides (rs = 0.68–0.86, Ps < 0.05) but also associated with hs-CRP and health-related quality of life, even after adjusting for demographics and self-rated physical activity and insomnia symptoms. Our framework demonstrates a new method for characterizing behavior patterns longitudinally, capturing relationships between 24 h accelerometry data and health outcomes.
international symposium on wearable computers | 2015
Feiyan Hu; Alan F. Smeaton; Eamonn Newman; Matthew P. Buman
This paper introduces a new way to analyse and visualize quantified-self or lifelog data captured from any lifelogging device over an extended period of time. The mechanism works on the raw, unstructured lifelog data by detecting periodicities, those repeating patterns that occur within our lifestyles at different frequencies including daily, weekly, seasonal, etc. Focusing on the 24-hour cycle, we calculate the strength of the 24-hour periodicity at 24-hour intervals over an extended period of a lifelog. Changes in this strength of the 24-hour cycle can illustrate changes or shifts in underlying human behavior. We have performed this analysis on several lifelog datasets of durations from several weeks to almost a decade, from recordings of training distances to sleep data. In this paper we use 24-hour accelerometer data to illustrate the technique, showing how changes in human behavior can be identified.
ambient intelligence | 2013
Meggan King; Feiyan Hu; Joanna E. McHugh; Emma Murphy; Eamonn Newman; Kate Irving; Alan F. Smeaton
Sensor technologies can enable independent living for people with dementia by monitoring their behaviour and identifying points where support may be required. Wearable sensors can provide such support but may constitute a source of stigma for the user if they are perceived as visible and therefore obtrusive. This paper presents an initial empirical investigation exploring the extent to which wearable sensors are perceived as visible. 23 participants wore eye tracking glasses, which superimposed the location of their gaze onto video data of their panorama. Participants were led to believe that the research entailed a subjective evaluation of the eye tracking glasses. A researcher wore one of two wearable sensors during the evaluation enabling us to measure the extent to which participants fixated on the sensor during a one-on-one meeting. Results are presented on the general visibility and potential fixations on two wearable sensors, a wrist-worn actigraph and a lifelogging camera, during normal conversation between two people. Further investigation is merited according to the results of this pilot study.
cross language evaluation forum | 2008
Eamonn Newman; Gareth J. F. Jones
We describe a baseline system for the VideoCLEF Vid2RSS task in which videos are to be classified into thematic categories based on their content. The system uses an off-the-shelf Information Retrieval system. Speech transcripts generated using automated speech recognition are indexed using default stemming and stopping methods. The categories are populated by using the category theme (or label) as a query on the collection, and assigning the retrieved items to that particular category. Run 4 of our system achieved the highest f-score in the task by maximising recall. We discuss this in terms of the primary aims of the task, i.e., automating video classification.
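The label-as-query idea behind this baseline can be sketched with a small hand-rolled tf-idf index (the actual system used an off-the-shelf IR engine with stemming and stopping; the function names, weighting, and example transcripts here are assumptions for illustration):

```python
import math
from collections import Counter

def build_tfidf_index(docs):
    """docs: {doc_id: transcript text}. Returns tf-idf vectors and idf."""
    tokenised = {d: t.lower().split() for d, t in docs.items()}
    df = Counter()
    for toks in tokenised.values():
        df.update(set(toks))                     # document frequency
    n = len(docs)
    idf = {w: math.log(n / c) + 1.0 for w, c in df.items()}
    vectors = {d: {w: tf * idf[w] for w, tf in Counter(toks).items()}
               for d, toks in tokenised.items()}
    return vectors, idf

def assign_category(label, vectors, idf):
    """Use the thematic category label as a query; every retrieved
    (positively scoring) document is assigned to that category."""
    query = label.lower().split()
    scores = {d: sum(v.get(w, 0.0) * idf.get(w, 0.0) for w in query)
              for d, v in vectors.items()}
    return sorted((d for d in scores if scores[d] > 0),
                  key=scores.get, reverse=True)

transcripts = {
    "v1": "experts discuss the championship match and other sports news",
    "v2": "a studio debate on parliament the budget and politics",
}
vectors, idf = build_tfidf_index(transcripts)
sports_docs = assign_category("sports", vectors, idf)
politics_docs = assign_category("politics", vectors, idf)
```

Assigning every retrieved document to the category is what maximises recall, consistent with the run that achieved the task's highest f-score.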
international conference on machine learning | 2005
Eamonn Newman; Nicola Stokes; John Dunnion; Joe Carthy
In this paper we present a classifier for Recognising Textual Entailment (RTE) and Semantic Equivalence. We evaluate the performance of this classifier using an evaluation framework provided by the PASCAL RTE Challenge Workshop. Sentence pairs are represented as a set of features, which are used by our decision tree classifier to determine whether an entailment relationship exists for each sentence pair in the RTE test corpus.
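The feature-based representation of sentence pairs can be sketched as follows. The features and the threshold rule below are illustrative assumptions only; the paper learns a decision tree over its feature set rather than applying a fixed rule:

```python
def pair_features(text, hypothesis):
    """Simple word-overlap features for a (text, hypothesis) pair.
    Feature names are illustrative, not those used in the paper."""
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    return {
        "overlap": len(t & h) / len(h) if h else 0.0,  # hypothesis coverage
        "len_ratio": len(h) / len(t) if t else 0.0,
    }

def predict_entailment(text, hypothesis, threshold=0.7):
    """Stand-in rule: predict entailment when most hypothesis words
    appear in the text; a learned decision tree would replace this."""
    return pair_features(text, hypothesis)["overlap"] >= threshold

yes = predict_entailment("the cat sat on the mat", "the cat sat")
no = predict_entailment("the cat sat on the mat", "dogs bark loudly")
```

A decision tree trained on such feature vectors can combine several weak cues (overlap, length ratio, and so on) instead of relying on any single threshold.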