
Publication


Featured research published by Daniel Kelly.


Pattern Recognition Letters | 2010

A person independent system for recognition of hand postures used in sign language

Daniel Kelly; John McDonald; Charles Markham

We present a novel user independent framework for representing and recognizing hand postures used in sign language. We propose a novel hand posture feature, an eigenspace Size Function, which is robust to classifying hand postures independent of the person performing them. An analysis of the discriminatory properties of our proposed eigenspace Size Function shows a significant improvement in performance when compared to the original unmodified Size Function. We describe our support vector machine based recognition framework which uses a combination of our eigenspace Size Function and Hu moments features to classify different hand postures. Experiments, based on two different hand posture data sets, show that our method is robust at recognizing hand postures independent of the person performing them. Our method also performs well compared to other user independent hand posture recognition systems.
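The Hu moment component of the feature set can be illustrated in isolation. The following is a minimal NumPy sketch, not the paper's implementation: it computes the first two Hu-style moment invariants of a binary silhouette and checks that they are unchanged by translation. The function name and test data are illustrative only.

```python
import numpy as np

def hu12(img):
    # first two Hu moment invariants of a binary image:
    # phi1 = eta20 + eta02, phi2 = (eta20 - eta02)^2 + 4*eta11^2
    ys, xs = np.nonzero(img)
    m00 = len(xs)                      # area (pixel count)
    xc, yc = xs.mean(), ys.mean()      # centroid
    dx, dy = xs - xc, ys - yc
    def mu(p, q):                      # central moment
        return np.sum(dx ** p * dy ** q)
    def eta(p, q):                     # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# synthetic blob and a translated copy: the invariants should match exactly
img = np.zeros((64, 64))
img[10:30, 15:40] = 1
shifted = np.roll(np.roll(img, 12, axis=0), 5, axis=1)
```

In the paper these moments are concatenated with the eigenspace Size Function descriptor and classified with a support vector machine; any off-the-shelf SVM could play that role on top of a feature vector like the one above.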


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Development of a wearable motion capture suit and virtual reality biofeedback system for the instruction and analysis of sports rehabilitation exercises

Diarmaid Fitzgerald; J. Foody; Daniel Kelly; Tomas E. Ward; Charles Markham; John McDonald; Brian Caulfield

This paper describes the design and development of a computer game for instructing an athlete through a series of prescribed rehabilitation exercises. In an attempt to prevent or treat musculoskeletal injuries, and to improve physical performance, athletes are prescribed exercise programmes by appropriately trained specialists. Typically, athletes are shown how to perform each exercise in the clinic following examination, but they often have no way of knowing whether their technique is correct while performing their home exercise programme. We describe a system that allows an automatic audit of this activity. Our system utilises ten inertial motion tracking sensors incorporated in a wearable body suit, which allows a Bluetooth connection from a root hub to a laptop/computer. Using our specifically designed software programme, the athlete can be instructed and analysed as he/she performs the individually tailored exercise programme, and a log is recorded of the time and performance level of each exercise completed. We describe a case study that illustrates how a clinician can at a later date review the athlete's progress and subsequently alter the exercise programme as they see fit.


IEEE Transactions on Systems, Man, and Cybernetics | 2011

Weakly Supervised Training of a Sign Language Recognition System Using Multiple Instance Learning Density Matrices

Daniel Kelly; John McDonald; Charles Markham

A system for automatically training and spotting signs from continuous sign language sentences is presented. We propose a novel multiple instance learning density matrix algorithm which automatically extracts isolated signs from full sentences using the weak and noisy supervision of text translations. The automatically extracted isolated samples are then used to train our spatiotemporal gesture and hand posture classifiers. Experiments were carried out to evaluate the performance of the automatic sign extraction, hand posture classification, and spatiotemporal gesture spotting systems, followed by a full evaluation of our overall sign spotting system, which was automatically trained on 30 different signs.
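The weakly supervised extraction idea can be conveyed with a deliberately simplified analogy. The sketch below is not the paper's density matrix algorithm: it treats each sentence as a symbol string and scores candidate fixed-length subsequences by how consistently they occur in "positive" bags (sentences whose translation contains the target word) versus negative bags. All names and data are hypothetical.

```python
from collections import Counter

def extract_sign(pos_bags, neg_bags, k):
    # score every length-k subsequence by the number of positive bags
    # containing it, minus the number of negative bags containing it
    # (a crude stand-in for a multiple instance learning density score)
    def grams(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    score = Counter()
    for bag in pos_bags:
        for g in grams(bag):
            score[g] += 1
    for bag in neg_bags:
        for g in grams(bag):
            score[g] -= 1
    return max(score, key=score.get)

# "sentences" whose translation contains the target gloss are positive bags;
# the shared fragment plays the role of the isolated sign to be extracted
pos = ["xxHELLOyy", "zzHELLOq", "aHELLObb"]
neg = ["xxyyzz", "qabb"]
```

The real algorithm operates on continuous video feature sequences rather than strings, but the supervision structure, bags labeled only by their translations, is the same.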


International Conference on Multimodal Interfaces | 2009

A framework for continuous multimodal sign language recognition

Daniel Kelly; Jane Reilly Delannoy; John McDonald; Charles Markham

We present a multimodal system for the recognition of manual signs and non-manual signals within continuous sign language sentences. In sign language, information is mainly conveyed through hand gestures (manual signs). Non-manual signals, such as facial expressions, head movements, body postures and torso movements, are used to express a large part of the grammar and some aspects of the syntax of sign language. In this paper we propose a multichannel HMM based system to recognize manual signs and non-manual signals. We choose a single non-manual signal, head movement, to evaluate our framework when recognizing non-manual signals. Manual signs and non-manual signals are processed independently using continuous multidimensional HMMs and an HMM threshold model. Experiments demonstrate that our system achieved a detection ratio of 0.95 and a reliability measure of 0.93.
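The threshold model idea, accepting a candidate gesture only when a dedicated gesture HMM outscores a catch-all model, can be sketched with toy discrete HMMs. This is an illustrative reconstruction, not the authors' code; the model parameters, observation symbols, and the `spot` helper are all invented for the example.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    # forward algorithm for a discrete-output HMM
    # (no scaling, so suitable only for short sequences)
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# toy 2-state left-right gesture models over 3 observation symbols
LR = np.array([[0.6, 0.4], [0.0, 1.0]])   # left-right transitions
pi = np.array([1.0, 0.0])                 # always start in state 0
gestures = {
    "A": np.array([[0.8, 0.15, 0.05], [0.1, 0.85, 0.05]]),  # emits 0 then 1
    "B": np.array([[0.05, 0.15, 0.8], [0.05, 0.1, 0.85]]),  # emits mostly 2
}
threshold_B = np.full((2, 3), 1.0 / 3.0)  # catch-all: uniform emissions

def spot(obs):
    # accept the best-scoring gesture only if it beats the threshold model
    scores = {g: log_likelihood(obs, pi, LR, B) for g, B in gestures.items()}
    best = max(scores, key=scores.get)
    if scores[best] > log_likelihood(obs, pi, LR, threshold_B):
        return best
    return None                           # likely movement epenthesis / noise
```

A sequence matching gesture "A"'s emission pattern (e.g. `[0, 0, 1, 1]`) is spotted, while a sequence no dedicated model explains better than the uniform threshold model is rejected.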


International Conference on Computer Vision | 2009

Evaluation of threshold model HMMs and Conditional Random Fields for recognition of spatiotemporal gestures in sign language

Daniel Kelly; John McDonald; Charles Markham

In this paper we evaluate the performance of Conditional Random Fields (CRFs) and Hidden Markov Models (HMMs) when recognizing motion based gestures in sign language. We implement CRF, Hidden CRF and Latent-Dynamic CRF based systems and compare these to an HMM based system when recognizing motion gestures and identifying inter gesture transitions. We implement an extension to the standard HMM model to develop a threshold HMM framework which is specifically designed to identify inter gesture transitions. We evaluate the performance of this system, and the different CRF systems, when recognizing gestures and identifying inter gesture transitions.


13th International Machine Vision and Image Processing Conference | 2009

Recognizing Spatiotemporal Gestures and Movement Epenthesis in Sign Language

Daniel Kelly; John McDonald; Charles Markham

A novel system for the recognition of spatiotemporal hand gestures used in sign language is presented. While recognition of valid sign sequences is an important task in the overall goal of machine recognition of sign language, recognition of movement epenthesis is an important step towards continuous recognition of natural sign language. We propose a framework for recognizing valid sign segments and identifying movement epenthesis. Experiments show our proposed system performs well when classifying eight different signs and identifying 100 different types of movement epenthesis. A ROC analysis of the system's classification performance showed an area under the curve of 0.949.


International Conference on Computer Vision | 2009

Continuous recognition of motion based gestures in sign language

Daniel Kelly; John McDonald; Charles Markham

We present a novel and robust system for recognizing two handed motion based gestures performed within continuous sequences of sign language. While recognition of valid sign sequences is an important task in the overall goal of machine recognition of sign language, detection of movement epenthesis is important in the task of continuous recognition of natural sign language. We propose a framework for recognizing valid sign segments and identifying movement epenthesis. Our system utilizes a single HMM threshold model, per hand, to detect movement epenthesis. Further to this, we develop a novel technique to utilize the threshold model and dedicated gesture HMMs to recognize gestures within continuous sign language sentences. Experiments show that our system has a gesture detection ratio of 0.956 and a reliability measure of 0.932 when spotting 8 different signs from 240 video clips.


Virtual Rehabilitation | 2008

Usability evaluation of e-motion: A virtual rehabilitation system designed to demonstrate, instruct and monitor a therapeutic exercise programme

Diarmaid Fitzgerald; Daniel Kelly; Tomas E. Ward; Charles Markham; Brian Caulfield

The importance of systematic usability evaluation of virtual rehabilitation systems cannot be overstated. We have developed a virtual rehabilitation system with the functionality to guide a user through a therapeutic exercise programme. Progression is determined by users' ability to replicate movements as demonstrated by an on-screen character. Visual and auditory corrective feedback is provided during exercise in order to improve the user's postural control and biomechanical alignment. The objective of this study was to evaluate the usability of our system and subsequently implement modifications aimed at improving fidelity and ease of use. The first stage of our evaluation involved conducting an expert walkthrough with six experts currently researching in areas related to the system design. Following system refinement and modification, we conducted a user evaluation study with twelve novice users using VRUSE, a computerised questionnaire-based usability evaluation tool for the assessment of virtual environments. Results have provided a systematic evaluation of the system, provided information for guidance on system alterations, and will allow comparison of usability levels with similar virtual rehabilitation systems tested with the same protocol.


Archive | 2011

Recognition of Spatiotemporal Gestures in Sign Language Using Gesture Threshold HMMs

Daniel Kelly; John McDonald; Charles Markham

In this paper, we propose a framework for the automatic recognition of spatiotemporal gestures in Sign Language. We implement an extension to the standard HMM model to develop a gesture threshold HMM (GT-HMM) framework which is specifically designed to identify inter gesture transitions. We evaluate the performance of this system against conditional random field (CRF), hidden CRF (HCRF) and latent-dynamic CRF (LDCRF) based systems when recognizing motion gestures and identifying inter gesture transitions.


International Machine Vision and Image Processing Conference | 2008

Analysis of Sign Language Gestures Using Size Functions and Principal Component Analysis

Daniel Kelly; John McDonald; Thomas Lysaght; Charles Markham

This paper presents a computer vision based virtual learning environment for teaching communicative hand gestures used in Sign Language. A virtual learning environment was developed to demonstrate signs to the user. The system then gives real time feedback to the user on their performance of the demonstrated sign. Gesture features are extracted from a standard web-cam video stream and shape and trajectory matching techniques are applied to these features to determine the feedback given to the user.
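The abstract does not specify which trajectory matching technique is used, so the following sketch uses dynamic time warping, a common choice for comparing a learner's hand trajectory against a demonstrated template when the two are traced at different speeds. The function and data are hypothetical.

```python
import numpy as np

def dtw(a, b):
    # dynamic time warping distance between two 2-D point trajectories,
    # tolerant of differences in speed and sampling rate
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# template circle vs. a slower re-traced circle vs. a straight-line stroke
t20 = np.linspace(0, 2 * np.pi, 20)
t30 = np.linspace(0, 2 * np.pi, 30)
template = np.column_stack([np.cos(t20), np.sin(t20)])
retrace = np.column_stack([np.cos(t30), np.sin(t30)])   # same shape, 30 samples
line = np.column_stack([np.linspace(-1, 1, 20), np.zeros(20)])
```

A feedback system of the kind described could threshold this distance to decide whether the user's gesture matches the demonstrated sign closely enough.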

Collaboration


An overview of Daniel Kelly's collaborations.

Top Co-Authors


Brian Caulfield

University College Dublin


J. Foody

University College Dublin
