Publications

Featured research published by Niall Fox.


BMC Musculoskeletal Disorders | 2009

Objective and subjective assessment of sleep in chronic low back pain patients compared with healthy age and gender matched controls: a pilot study.

Grainne O'Donoghue; Niall Fox; Conor Heneghan; Deirdre A. Hurley

Background: While approximately 70% of chronic low back pain (CLBP) sufferers complain of sleep disturbance, the current literature is based on self-report measures, which can be prone to bias, and no objective data on sleep quality based exclusively on CLBP are available. In accordance with the recommendations of The American Sleep Academy, when measuring sleep, both subjective and objective assessments should be considered, as the two are only modestly correlated, suggesting that each modality assesses different aspects of an individual's sleep experience. Therefore, the purpose of this study was to expand previous research into sleep disturbance in CLBP by comparing objective and subjective sleep quality in participants with CLBP and healthy age- and gender-matched controls, to identify correlates of poor sleep, and to test logistics and gather information prior to a larger study.

Methods: 15 CLBP participants (mean age = 43.8 years (SD = 11.5), 53% female) and 15 healthy controls (mean age = 41.5 years (SD = 10.6), 53% female) consented. All participants completed the Pittsburgh Sleep Quality Index, Insomnia Severity Index, Pittsburgh Sleep Diary and the SF36v2. CLBP participants also completed the Oswestry Disability Index. Sleep patterns were assessed over three consecutive nights using actigraphy. Total sleep time (TST), sleep efficiency (SE), sleep onset latency (SL) and number of awakenings after sleep onset (WASO) were derived. Statistical analysis was conducted using unrelated t-tests and Pearson's product-moment correlation coefficients.

Results: CLBP participants demonstrated significantly poorer overall sleep both objectively and subjectively. They demonstrated lower actigraphic SE (p = .002) and increased WASO (p = .027), but no significant differences were found in TST (p = .43) or SL (p = .97). Subjectively, they reported increased insomnia (p < .001), lower SE (p < .001) and increased SL (p < .001), but no differences in TST (p = .827) or WASO (p = .055). Statistically significant associations were found between low back pain (p = .021, r = -.589), physical health (p = .003, r = -.713), disability levels (p = .025, r = .576) and subjective sleep quality in the CLBP participants, but not with actigraphy.

Conclusion: CLBP participants demonstrated poorer overall sleep, increased insomnia symptoms and less efficient sleep. Further investigation using a larger sample size and a longer period of sleep monitoring is ongoing.
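
The group comparisons and correlations above use two standard tests: unrelated (independent-samples) t-tests and Pearson's product-moment correlation coefficients. A minimal Python sketch of both, using illustrative placeholder arrays rather than the study's data:

```python
# Hedged sketch: the statistics named in the Results (unrelated t-tests and
# Pearson correlations), demonstrated on made-up placeholder values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
se_clbp = rng.normal(78, 8, 15)   # hypothetical actigraphic SE (%), CLBP group
se_ctrl = rng.normal(88, 5, 15)   # hypothetical actigraphic SE (%), controls

# Unrelated (independent-samples) t-test between the two groups
t, p = stats.ttest_ind(se_clbp, se_ctrl)
print(f"SE group difference: t = {t:.2f}, p = {p:.3f}")

# Pearson product-moment correlation, e.g. pain intensity vs subjective sleep quality
pain = rng.normal(5, 2, 15)
sleep_quality = 10 - 0.5 * pain + rng.normal(0, 1, 15)
r, p = stats.pearsonr(pain, sleep_quality)
print(f"pain vs sleep quality: r = {r:.2f}, p = {p:.3f}")
```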


Journal of Sleep Research | 2011

Sleep/wake measurement using a non-contact biomotion sensor

Philip de Chazal; Niall Fox; Emer O’Hare; Conor Heneghan; Alberto Zaffaroni; Patricia Boyle; Stephanie Smith; Caroline O’Connell; Walter T. McNicholas

We studied a novel non-contact biomotion sensor, which has been developed for identifying sleep/wake patterns in adult humans. The biomotion sensor uses ultra-low-power reflected radiofrequency waves to determine the movement of a subject during sleep. An automated classification algorithm has been developed to recognize sleep/wake states for each 30-s epoch from the measured movement signal. The sensor and software were evaluated against gold-standard polysomnography on a database of 113 subjects [94 male, 19 female, age 53 ± 13 years, apnoea–hypopnoea index (AHI) 22 ± 24] being assessed for sleep-disordered breathing at a hospital-based sleep laboratory. The overall per-subject accuracy was 78%, with a Cohen's kappa of 0.38. Lower accuracy was seen in the high-AHI group (AHI > 15, 63 subjects) than in the low-AHI group (74.8% versus 81.3%); however, most of the difference in accuracy can be explained by the lower sleep efficiency of the high-AHI group. Averaged across subjects, the overall sleep sensitivity was 87.3% and the wake sensitivity was 50.1%. The automated algorithm slightly overestimated sleep efficiency (bias of +4.8%) and total sleep time (TST; bias of +19 min on an average TST of 288 min). We conclude that the non-contact biomotion sensor can provide a valid means of measuring sleep–wake patterns in this patient population, and also allows direct visualization of respiratory movement signals.
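
The agreement figures quoted (per-subject accuracy, Cohen's kappa, sleep and wake sensitivity, and TST bias) are standard epoch-level statistics. A minimal sketch of their definitions on 30-s sleep/wake epochs; this illustrates the standard formulas, not the paper's implementation:

```python
# Hedged sketch: epoch-level agreement statistics for 30-s sleep/wake scoring.
import numpy as np

def agreement(pred, ref, epoch_s=30):
    """pred, ref: arrays of 0 (wake) / 1 (sleep), one entry per 30-s epoch."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    acc = np.mean(pred == ref)                      # per-subject accuracy
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = (pred.mean() * ref.mean()
                + (1 - pred).mean() * (1 - ref).mean())
    kappa = (acc - p_chance) / (1 - p_chance)
    sleep_sens = np.mean(pred[ref == 1] == 1)       # sleep scored as sleep
    wake_sens = np.mean(pred[ref == 0] == 0)        # wake scored as wake
    tst_bias_min = (pred.sum() - ref.sum()) * epoch_s / 60.0
    return acc, kappa, sleep_sens, wake_sens, tst_bias_min
```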


IEEE Transactions on Multimedia | 2007

Robust Biometric Person Identification Using Automatic Classifier Fusion of Speech, Mouth, and Face Experts

Niall Fox; Ralph Gross; Jeffrey F. Cohn; Richard B. Reilly

Information about person identity is multimodal. Yet most person-recognition systems limit themselves to only a single modality, such as facial appearance. With a view to exploiting the complementary nature of different modes of information and increasing pattern-recognition robustness to test-signal degradation, we developed a multiple-expert biometric person identification system that combines information from three experts: audio, visual speech, and face. The system uses multimodal fusion in an automatic, unsupervised manner, adapting to the local performance (at the transaction level) and output reliability of each of the three experts. The expert weightings are chosen automatically such that the reliability measure of the combined scores is maximized. To test system robustness to train/test mismatch, we used a broad range of acoustic babble noise and JPEG compression to degrade the audio and visual signals, respectively. Identification experiments were carried out on a 248-subject subset of the XM2VTS database. The multimodal expert system outperformed each of the single experts in all comparisons. At the severe audio and visual mismatch levels tested, the audio, mouth, face, and tri-expert fusion accuracies were 16.1%, 48%, 75%, and 89.9%, respectively, representing a relative improvement of 19.9% over the best-performing expert.
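
A hedged sketch of the kind of late fusion described: each expert produces a score per enrolled identity, and the expert weights are searched so that a reliability measure of the combined scores is maximized. The margin between the two best fused scores is used here as the reliability measure; that choice, and all the numbers, are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of reliability-maximizing late fusion over three experts.
import itertools
import numpy as np

def fuse(expert_scores, grid=11):
    """expert_scores: (n_experts, n_identities) matrix of match scores,
    each row normalized to sum to 1. Returns the best fused score vector."""
    best_rel, best_fused = -np.inf, None
    for w in itertools.product(np.linspace(0, 1, grid),
                               repeat=expert_scores.shape[0]):
        if not np.isclose(sum(w), 1.0):
            continue                        # weights restricted to a simplex
        fused = np.asarray(w) @ expert_scores
        top2 = np.sort(fused)[-2:]
        rel = top2[1] - top2[0]             # reliability: margin of top identity
        if rel > best_rel:
            best_rel, best_fused = rel, fused
    return best_fused

scores = np.array([[0.5, 0.3, 0.2],         # audio expert (illustrative)
                   [0.4, 0.4, 0.2],         # visual-speech (mouth) expert
                   [0.6, 0.2, 0.2]])        # face expert
print(fuse(scores).argmax())                # fused identity decision
```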


Proceedings of the 2003 ACM SIGMM Workshop on Biometrics Methods and Applications | 2003

Person identification using automatic integration of speech, lip, and face experts

Niall Fox; Ralph Gross; Philip de Chazal; Jeffrey F. Cohn; Richard B. Reilly

This paper presents a multi-expert person identification system based on the integration of three separate systems employing audio features, static face images and lip motion features, respectively. Audio person identification was carried out using a text-dependent hidden Markov model methodology. Modeling of the lip motion was carried out using Gaussian probability density functions. The static-image-based identification was carried out using the FaceIt system. Experiments were conducted with 251 subjects from the XM2VTS audio-visual database. Late integration using automatic weights was employed to combine the three experts. The integration strategy adapts automatically to the audio noise conditions. It was found that the integration of the three experts improved the person identification accuracies under both clean and noisy audio conditions compared with the audio-only case. For audio, FaceIt, lip motion, and tri-expert identification, the maximum accuracies achieved were 98%, 93.22%, 86.37% and 100%, respectively. The best bi-expert integration of the two visual experts achieved an identification accuracy of 96.8%, which is comparable to the best audio accuracy of 98%.
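
The lip-motion expert above models each subject with a Gaussian probability density. A minimal sketch of that idea, fitting one Gaussian per subject and choosing the identity with the highest test log-likelihood; the feature extraction and regularization details are assumptions:

```python
# Hedged sketch: Gaussian density modelling of lip-motion features per subject.
import numpy as np
from scipy.stats import multivariate_normal

def fit_subject(features):
    """features: (n_frames, dim) lip-motion feature vectors for one subject."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return multivariate_normal(mean=mu, cov=cov)   # small ridge keeps cov PSD

def identify(test_features, models):
    """Return the index of the enrolled subject with the highest likelihood."""
    log_likelihoods = [m.logpdf(test_features).sum() for m in models]
    return int(np.argmax(log_likelihoods))
```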


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Assessment of sleep/wake patterns using a non-contact biomotion sensor

Philip de Chazal; Emer O'Hare; Niall Fox; Conor Heneghan

We evaluate a contactless system that continuously measures respiration and activity patterns to identify sleep/wake patterns in adult humans. The system is based on a novel non-contact biomotion sensor and an automated signal analysis and classification system. The sleep/wake detection algorithm combines information from respiratory frequency, magnitude, and movement to assign each 30-s epoch to either wake or sleep. Comparison with a standard polysomnogram system using manual sleep stage classification indicates excellent results. The system was validated on overnight studies from 12 subjects. Wake was correctly identified in 69% of epochs and sleep in 88%. Given its ease of use and good performance, the device is an excellent tool for long-term monitoring of sleep patterns in the home environment.
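
A hedged sketch of the kind of per-epoch decision rule described, combining respiratory and movement information over each 30-s epoch. The features and weights below are illustrative placeholders; the paper's classifier was trained against manually scored polysomnography:

```python
# Hedged sketch: assign a 30-s epoch to wake (1) or sleep (0) from
# breathing irregularity and movement; all weights are placeholders.
import numpy as np

def classify_epoch(resp_freq_std, resp_mag_std, movement_level,
                   w=(1.5, 1.0, 3.0), bias=-2.0):
    """Irregular breathing and high movement within the epoch suggest wake."""
    x = np.array([resp_freq_std, resp_mag_std, movement_level])
    p_wake = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))
    return int(p_wake > 0.5)
```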


Lecture Notes in Computer Science | 2005

VALID: a new practical audio-visual database, and comparative results

Niall Fox; Brian O'Mullane; Richard B. Reilly

The performance of deployed audio, face, and multi-modal person recognition systems in non-controlled scenarios is typically lower than that of systems developed in highly controlled environments. With the aim of facilitating the development of robust audio, face, and multi-modal person recognition systems, the new large and realistic multi-modal (audio-visual) VALID database was acquired in a noisy “real world” office scenario with no control over illumination or acoustic noise. In this paper we describe the acquisition and content of the VALID database, consisting of five recording sessions of 106 subjects over a period of one month. Speaker identification experiments using visual speech features extracted from the mouth region are reported. The performance based on the uncontrolled VALID database is compared with that of the controlled XM2VTS database. The best VALID- and XM2VTS-based accuracies are 63.21% and 97.17%, respectively. This highlights the degrading effect of an uncontrolled illumination environment and the importance of this database for deploying real-world applications. The VALID database is available to the academic community through http://ee.ucd.ie/validdb/.


Lecture Notes in Computer Science | 2003

Audio-visual speaker identification based on the use of dynamic audio and visual features

Niall Fox; Richard B. Reilly

This paper presents a speaker identification system based on dynamic features of both the audio and visual modes. Speakers are modeled using a text-dependent HMM methodology. Early and late audio-visual integration are investigated. Experiments were carried out for 252 speakers from the XM2VTS database. Our experimental results show that the addition of dynamic visual information improves speaker identification accuracy for both clean and noisy audio conditions compared to the audio-only case. The best audio, visual and audio-visual identification accuracies achieved were 86.91%, 57.14% and 94.05%, respectively.
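
A sketch of text-dependent HMM speaker identification with dynamic (delta) features appended, using hmmlearn's GaussianHMM as a stand-in for the paper's models; the state count and feature pipeline are assumptions:

```python
# Hedged sketch: one HMM per speaker; dynamic features are first-order deltas.
import numpy as np
from hmmlearn import hmm

def add_deltas(feats):
    """Append first-order temporal differences ("dynamic" features)."""
    deltas = np.diff(feats, axis=0, prepend=feats[:1])
    return np.hstack([feats, deltas])

def train_speaker(utterances, n_states=5):
    """utterances: list of (n_frames, dim) feature arrays for one speaker."""
    X = np.vstack([add_deltas(u) for u in utterances])
    lengths = [len(u) for u in utterances]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(X, lengths)
    return model

def identify(test_utterance, speaker_models):
    """Return the index of the speaker whose HMM best scores the utterance."""
    x = add_deltas(test_utterance)
    return int(np.argmax([m.score(x) for m in speaker_models]))
```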


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

An Evaluation of a Non-contact Biomotion Sensor with Actimetry

Niall Fox; Conor Heneghan; M. Gonzalez; Redmond Shouldice; P. de Chazal

Actimetry is a widely accepted technology for the diagnosis and monitoring of sleep disorders such as insomnia, circadian sleep/wake disturbance, and periodic leg movement. In this study we investigate a very sensitive non-contact biomotion sensor for measuring actimetry and compare its performance to wrist actimetry. A data corpus of twenty subjects (ten normal subjects, ten with sleep disorders) was collected in the unconstrained home environment with simultaneous non-contact sensor and ActiWatch actimetry recordings. The aggregated length of the data is 151 hours. The non-contact sensor signal was mapped to actimetry using 30-second epochs and the level of agreement with the ActiWatch actimetry was determined. Across all twenty subjects, the sensitivity and specificity were 79% and 75%, respectively. In addition, it was shown that the non-contact sensor can also measure breathing and breathing modulations. The results of this study indicate that the non-contact sensor may be a highly convenient alternative to wrist actimetry as a diagnosis and screening tool for sleep studies. Furthermore, as the non-contact sensor measures breathing modulations, it can additionally be used to screen for respiratory disturbances in sleep caused by sleep apnea and COPD.
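
A hedged sketch of the mapping step described: the continuous non-contact movement signal is reduced to 30-second actimetry-style epochs and compared with ActiWatch epochs via sensitivity and specificity. The threshold is a placeholder, not the paper's calibration:

```python
# Hedged sketch: continuous movement signal -> 30-s movement epochs,
# then epoch-level agreement against wrist actimetry.
import numpy as np

def to_epochs(movement_signal, fs_hz, epoch_s=30, threshold=0.5):
    """Return 1 per epoch with mean |movement| above threshold, else 0."""
    n = int(fs_hz * epoch_s)
    usable = len(movement_signal) // n * n
    epoch_means = np.abs(movement_signal[:usable]).reshape(-1, n).mean(axis=1)
    return (epoch_means > threshold).astype(int)

def sens_spec(pred, ref):
    """Sensitivity/specificity of predicted movement epochs vs ActiWatch."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    sens = np.mean(pred[ref == 1] == 1)   # movement epochs detected
    spec = np.mean(pred[ref == 0] == 0)   # quiet epochs correctly rejected
    return sens, spec
```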


IEEE International Conference on Systems, Man and Cybernetics | 2004

Robust multi-modal person identification with tolerance of facial expression

Niall Fox; Richard B. Reilly

This work describes audio-visual speaker identification experiments carried out on a large data set of 251 subjects. Audio and visual modeling are both carried out using hidden Markov models. The visual modality uses the speaker's lip information. The audio and visual modalities are both degraded to emulate a train/test mismatch. The fusion method employed adapts automatically, using classifier score reliability estimates of both modalities, and gives improved audio-visual accuracies at all tested levels of audio and visual degradation compared to the individual audio or visual modality accuracies. A maximum visual identification accuracy of 86% was achieved. This result is comparable to the performance of systems using the entire face and supports the hypothesis that the system described would be tolerant to varying facial expression, since only the information around the speaker's lips is employed.
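
A minimal sketch of one common output-score reliability estimate, the normalized margin between the two best-matching identities; the paper's exact measure may differ, so treat this as an illustrative assumption:

```python
# Hedged sketch: score-based reliability of a single modality's classifier.
import numpy as np

def margin_reliability(scores):
    """scores: per-identity match scores; a large top-two margin -> reliable."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    return (s[0] - s[1]) / (s[0] + 1e-12)
```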


Lecture Notes in Computer Science | 2005

Audio-visual speaker identification via adaptive fusion using reliability estimates of both modalities

Niall Fox; Brian O'Mullane; Richard B. Reilly

An audio-visual speaker identification system is described in which the audio and visual speech modalities are fused by an automatic unsupervised process that adapts to local classifier performance by taking into account output-score-based reliability estimates of both modalities. Previously reported methods do not consider that both the audio and the visual modalities can be degraded. The visual modality uses the speaker's lip information. To test the robustness of the system, the audio and visual modalities were degraded to emulate various levels of train/test mismatch, employing additive white Gaussian noise for the audio and JPEG compression for the visual signals. Experiments were carried out on a large augmented data set from the XM2VTS database. The results show improved audio-visual accuracies at all tested levels of audio and visual degradation, compared to the individual audio or visual modality accuracies. For high mismatch levels, the audio, visual, and auto-adapted audio-visual accuracies are 37.1%, 48%, and 71.4%, respectively.
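
A hedged sketch of the adaptive bimodal fusion described: each modality's weight is derived per test utterance from the relative reliabilities of its output scores, so a degraded modality is automatically down-weighted. The margin-based reliability is the same illustrative assumption as in the sketch above:

```python
# Hedged sketch: per-utterance adaptive fusion of audio and visual scores.
import numpy as np

def margin_reliability(scores):
    """Top-two score margin as an illustrative reliability estimate."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    return (s[0] - s[1]) / (s[0] + 1e-12)

def fuse(audio_scores, visual_scores):
    """Weight each modality by its relative reliability, then pick the best."""
    ra = margin_reliability(audio_scores)
    rv = margin_reliability(visual_scores)
    alpha = ra / (ra + rv + 1e-12)          # per-utterance audio weight
    fused = (alpha * np.asarray(audio_scores, dtype=float)
             + (1 - alpha) * np.asarray(visual_scores, dtype=float))
    return int(np.argmax(fused)), alpha
```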

Collaboration


Dive into Niall Fox's collaborations.

Top Co-Authors

Ralph Gross

Carnegie Mellon University


Brian O'Mullane

University College Dublin


Emer O'Hare

University College Dublin
