Publication


Featured research published by David S. Monaghan.


Sensors | 2013

Classification of Sporting Activities Using Smartphone Accelerometers

Edmond Mitchell; David S. Monaghan; Noel E. O'Connor

In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today's society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. To date, no single classifier family has shown a clear advantage in activity classification problems; we therefore examine classifiers from each of the most widely used families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset comprising soccer and field-hockey activities. An average maximum F-measure of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
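
The following is a minimal sketch of the kind of pipeline described above, assuming PyWavelets for the DWT and scikit-learn for the classifier fusion; the wavelet, decomposition level and classifier choices are illustrative assumptions rather than the parameters reported in the paper.

```python
# Illustrative sketch: DWT features from windowed accelerometer data feeding
# a soft-voting fusion of classifiers. Parameters are assumptions, not the
# paper's exact configuration.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def dwt_features(window, wavelet="db4", level=3):
    """Summarise one accelerometer window (n_samples, 3 axes) per DWT sub-band."""
    feats = []
    for axis in range(window.shape[1]):
        coeffs = pywt.wavedec(window[:, axis], wavelet, level=level)
        for band in coeffs:  # approximation + detail coefficients
            feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.array(feats)

def train_fusion(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: activity names."""
    X = np.vstack([dwt_features(w) for w in windows])
    fusion = VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True)),
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="soft",
    )
    return fusion.fit(X, labels)
```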


International Conference on Acoustics, Speech, and Signal Processing | 2012

An advanced virtual dance performance evaluator

Slim Essid; Dimitrios S. Alexiadis; Robin Tournemenne; Marc Gowing; Philip Kelly; David S. Monaghan; Petros Daras; Angélique Drémeau; Noel E. O'Connor

The ever increasing availability of high-speed Internet access has led to a leap in technologies that support real-time, realistic interaction between humans in online virtual environments. In the context of this work, we wish to realise the vision of an online dance studio where a dance class is provided by an expert dance teacher and delivered to online students via the web. In this paper we study some of the technical issues that need to be addressed in this challenging scenario. In particular, we describe an automatic dance analysis tool that would be used to evaluate a student's performance and provide him/her with meaningful feedback to aid improvement.
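
As a rough illustration of how a student's performance might be scored against the teacher's reference recording, the sketch below aligns the two joint-position sequences with dynamic time warping and reports the mean aligned distance; this is a generic similarity measure, not the evaluation metric actually used in the paper.

```python
# Generic DTW-based similarity between a reference (teacher) and a student
# motion sequence; lower scores mean the student is closer to the reference.
import numpy as np

def dtw_score(teacher, student):
    """teacher, student: (T, D) arrays of flattened joint positions per frame."""
    n, m = len(teacher), len(student)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(teacher[i - 1] - student[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)
```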


Science and Engineering Ethics | 2016

The Convergence of Virtual Reality and Social Networks: Threats to Privacy and Autonomy

Fiachra O’Brolcháin; Tim Jacquemard; David S. Monaghan; Noel E. O’Connor; Peter Novitzky; Bert Gordijn

The rapid evolution of information, communication and entertainment technologies will transform the lives of citizens and ultimately transform society. This paper focuses on ethical issues associated with the likely convergence of virtual realities (VR) and social networks (SNs), hereafter VRSNs. We examine a scenario in which a significant segment of the world’s population has a presence in a VRSN. Given the pace of technological development and the popularity of these new forms of social interaction, this scenario is plausible. However, it brings with it ethical problems. Two central ethical issues are addressed: those of privacy and those of autonomy. VRSNs pose threats to both privacy and autonomy. The threats to privacy can be broadly categorized as threats to informational privacy, threats to physical privacy, and threats to associational privacy. Each of these threats is further subdivided. The threats to autonomy can be broadly categorized as threats to freedom, to knowledge and to authenticity. Again, these three threats are divided into subcategories. Having categorized the main threats posed by VRSNs, a number of recommendations are provided so that policy-makers, developers, and users can make the best possible use of VRSNs.


Conference on Multimedia Modeling | 2014

Kinect vs. Low-cost Inertial Sensing for Gesture Recognition

Marc Gowing; Amin Ahmadi; Francois Destelle; David S. Monaghan; Noel E. O'Connor; Kieran Moran

In this paper, we investigate efficient recognition of human gestures/movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We first present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system can achieve near real-time recognition by selecting the sensors that contribute most to a particular task. Features extracted from multimodal sensor data were used to train and evaluate a customized classifier. This novel technique is capable of successfully classifying various gestures with up to 91% overall accuracy on a publicly available data set. Secondly, we investigate a wide range of different motion capture modalities and compare their results in terms of gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be effectively performed by considering an approach that overcomes many of the limitations associated with the Kinect, potentially paving the way for low-cost gesture recognition in unconstrained environments.
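
A minimal sketch of the classification stage described above, assuming per-gesture feature vectors have already been extracted from the Kinect skeleton and the inertial sensors; the fusion-by-concatenation and the random forest settings are assumptions for illustration.

```python
# Illustrative sketch: concatenate Kinect and inertial feature vectors and
# evaluate a random forest with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate(kinect_feats, inertial_feats, labels):
    """kinect_feats, inertial_feats: (n_gestures, n_features) arrays;
    labels: one of the 17 gesture classes per row."""
    X = np.hstack([kinect_feats, inertial_feats])  # simple early fusion
    clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
    return cross_val_score(clf, X, labels, cv=5).mean()
```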


Procedia Computer Science | 2015

HeartHealth: A Cardiovascular Disease Home-based Rehabilitation System

Anargyros Chatzitofis; David S. Monaghan; Edmond Mitchell; Freddie Honohan; Dimitrios Zarpalas; Noel E. O’Connor; Petros Daras

The increasing pressure on medical institutions around the world requires health care professionals to prescribe home-based exercise rehabilitation treatments that empower patients to self-monitor their rehabilitation journey. Home-based exercise rehabilitation has been shown to be highly effective in treating conditions such as Cardiovascular Disease (CVD). However, adherence to home-based exercise rehabilitation remains low. Possible causes for this are that patients are not monitored, they cannot be confident that they are performing the exercise correctly or accurately, and they receive no feedback. This paper proposes HeartHealth, a novel patient-centric gamified exercise rehabilitation platform that can help address the issue of adherence to these programmes. The key functionality is the ability to record the patient's movements and compare them against the prescribed exercises in order to return feedback to both the patient and the health care professional. In order to synthesize a compact, fully operational system able to work in real-life scenarios, tools and services from the FI-PPP projects FIWARE and FI-STAR were exploited, and a new FI-STAR component, the Motion Evaluation Specific Enabler (SE), was designed and developed. The HeartHealth system brings together real-time cloud-based motion evaluation coupled with accurate low-cost motion capture, a personalised exercise rehabilitation programme and an intuitive and fun serious game interface, designed specifically with a Cardiac Rehabilitation population in mind.
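
As a purely hypothetical illustration of what a motion-evaluation component could compute, the sketch below compares a patient's recorded skeleton sequence against the prescribed reference, joint by joint, and returns simple feedback; the real Motion Evaluation Specific Enabler is a cloud-based FI-STAR component and its actual logic is not reproduced here.

```python
# Hypothetical local stand-in for a motion-evaluation step: per-joint mean
# error between a prescribed exercise and the patient's recording.
import numpy as np

def exercise_feedback(reference, recorded, joint_names, tolerance=0.05):
    """reference, recorded: (T, J, 3) skeleton sequences, already time-aligned;
    tolerance: acceptable mean deviation per joint, in metres (assumed)."""
    per_joint_error = np.linalg.norm(reference - recorded, axis=2).mean(axis=0)
    return {name: ("ok" if err <= tolerance else f"off by {err:.2f} m")
            for name, err in zip(joint_names, per_joint_error)}
```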


Computer Vision / Computer Graphics Collaboration Techniques | 2013

A framework for realistic 3D tele-immersion

Philipp Fechteler; Anna Hilsmann; Peter Eisert; Sigurd Van Broeck; Christoph Stevens; Julie A. Wall; Michele Sanna; Davide A. Mauro; Fons Kuijk; Rufael Mekuria; Pablo Cesar; David S. Monaghan; Noel E. O'Connor; Petros Daras; Dimitrios S. Alexiadis; Theodore B. Zahariadis

Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present REVERIE, a distributed and scalable framework that provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much more similar to face-to-face meetings than those offered by conventional teleconferencing systems.


Conference on Multimedia Modeling | 2014

Real-Time Gaze Estimation Using a Kinect and a HD Webcam

Yingbo Li; David S. Monaghan; Noel E. O'Connor

In human-computer interaction, gaze orientation is an important and promising source of information about the attention and focus of users. Gaze detection can also be an extremely useful metric for analysing human mood and affect. Furthermore, gaze can be used as an input method for human-computer interaction. However, accurate real-time gaze estimation remains an open problem. In this paper, we propose a simple and novel model for estimating, in real time, the gaze direction of a user on a computer screen. The method uses cheap capture devices: an HD webcam and a Microsoft Kinect. We consider the gaze motion of a user facing forwards to be composed of the local gaze motion produced by eye movement and the global gaze motion driven by face movement. We validate our proposed model of gaze estimation and provide an experimental evaluation of the reliability and precision of the method.
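
The stated decomposition can be illustrated with a toy computation: the on-screen gaze point is the point implied by the head pose alone (global motion), offset by the shift contributed by eye movement (local motion). This is only a schematic of the stated model, with made-up pixel values, not the paper's estimation procedure.

```python
# Toy illustration of gaze = global (head-driven) point + local (eye-driven) shift.
import numpy as np

def gaze_point(head_point, eye_shift):
    """head_point: (x, y) screen point from head pose alone, in pixels;
    eye_shift: (dx, dy) additional displacement due to eye movement."""
    return np.asarray(head_point) + np.asarray(eye_shift)

print(gaze_point((960, 540), (-120, 35)))  # -> [840 575]
```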


Workshop on Image Analysis for Multimedia Interactive Services | 2013

Real-time head nod and shake detection for continuous human affect recognition

Haolin Wei; Patricia Scanlon; Yingbo Li; David S. Monaghan; Noel E. O'Connor

Human affect recognition is the field of study associated with using automatic techniques to identify human emotion or human affective state. A person's affective state is often communicated non-verbally through body language. A large part of human body language communication is the use of head gestures. Almost all cultures use subtle head movements to convey meaning. Two of the most common and distinct head gestures are the head nod and the head shake. In this paper we present a robust system to automatically detect head nods and shakes. We employ the Microsoft Kinect and utilise discrete Hidden Markov Models (HMMs) as the backbone of a machine learning based classifier within the system. The system achieves 86% accuracy on test datasets and detailed results are provided.
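
A rough sketch of how such a classifier could be structured, assuming hmmlearn: per-frame head motion is quantised into a small symbol alphabet and one discrete HMM is trained per gesture class, with classification by maximum likelihood. The symbol definitions and model settings are illustrative assumptions, not the paper's configuration.

```python
# Illustrative discrete-HMM nod/shake classifier (one model per gesture class).
import numpy as np
from hmmlearn import hmm

SYMBOLS = {"still": 0, "up": 1, "down": 2, "left": 3, "right": 4}

def quantise(head_deltas, thresh=0.01):
    """Map per-frame (dx, dy) head displacement to one of five symbols."""
    out = []
    for dx, dy in head_deltas:
        if abs(dx) < thresh and abs(dy) < thresh:
            out.append(SYMBOLS["still"])
        elif abs(dy) >= abs(dx):
            out.append(SYMBOLS["up"] if dy > 0 else SYMBOLS["down"])
        else:
            out.append(SYMBOLS["right"] if dx > 0 else SYMBOLS["left"])
    return np.array(out).reshape(-1, 1)

def train_models(sequences_by_gesture, n_states=3):
    """sequences_by_gesture: e.g. {"nod": [seq, ...], "shake": [...], "other": [...]}
    where each seq is a quantised (T, 1) symbol array."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X, lengths = np.concatenate(seqs), [len(s) for s in seqs]
        models[gesture] = hmm.CategoricalHMM(n_components=n_states, n_iter=50).fit(X, lengths)
    return models

def classify(models, seq):
    """Return the gesture whose HMM assigns the quantised sequence the highest likelihood."""
    return max(models, key=lambda g: models[g].score(seq))
```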


International Conference on 3D Web Technology | 2015

Autonomous agents and avatars in REVERIE's virtual environment

Fons Kuijk; Konstantinos C. Apostolakis; Petros Daras; Brian Ravenet; Haolin Wei; David S. Monaghan

In this paper, we describe the enactment of autonomous agents and avatars in REVERIE's web-based social collaborative virtual environment, which supports natural, human-like behavior, physical interaction and engagement. Represented by avatars, users feel immersed in this virtual world, in which they can meet and share experiences as in real life. Like the avatars, the autonomous agents that may act in this world are capable of demonstrating human-like non-verbal behavior and of facilitating social interaction. We describe how reasoning components of the REVERIE system connect and cooperatively control the autonomous agents and the avatars representing users.


International Conference on Image Processing | 2014

REVERIE: Natural human interaction in virtual immersive environments

Julie A. Wall; Ebroul Izquierdo; Lemonia Argyriou; David S. Monaghan; Noel E. O'Connor; Steven Poulakos; Aljoscha Smolic; Rufael Mekuria

REVERIE (REal and Virtual Engagement in Realistic Immersive Environments [1]) targets novel research addressing the demanding challenges involved in developing state-of-the-art technologies for online human interaction. The REVERIE framework enables users to meet, socialise and share experiences online by integrating cutting-edge technologies for 3D data acquisition and processing, networking, autonomy and real-time rendering. In this paper, we describe the innovative research showcased by the REVERIE integrated framework through richly defined use cases that demonstrate the validity and potential of natural interaction in a virtual, immersive and safe environment. Previews of the REVERIE demo and its key research components can be viewed at www.youtube.com/user/REVERIEFP7.

Collaboration


Dive into David S. Monaghan's collaborations.

Top Co-Authors

Petros Daras (Information Technology Institute)
Haolin Wei (Dublin City University)
Amin Ahmadi (Dublin City University)
Ebroul Izquierdo (Queen Mary University of London)
Dimitrios S. Alexiadis (Information Technology Institute)