Thanassis Rikakis
Arizona State University
Publications
Featured research published by Thanassis Rikakis.
International Conference on Multimedia and Expo | 2004
Gang Qian; Feng Guo; Todd Ingalls; Loren Olson; Jodi James; Thanassis Rikakis
In this paper, we report a real-time gesture-driven interactive system with multimodal feedback for the performing arts, especially dance. The system consists of two major parts: a gesture recognition engine and a multimodal feedback engine. The gesture recognition engine provides real-time recognition of the performer's gestures based on the 3D marker coordinates from a marker-based motion capture system. According to the recognition results, the multimodal feedback engine produces associated visual and audio feedback to the performer. This interactive system is simple to implement and robust to errors in 3D marker data. Satisfactory interactive dance performances have been successfully created and presented using the reported system.
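The abstract does not describe the recognition algorithm itself. As a hedged illustration only, a minimal recognizer over 3D marker frames could match each incoming frame against a dictionary of template poses after normalizing for stage position and performer size; the template dictionary and nearest-neighbor matching here are assumptions, not the paper's method:

```python
import numpy as np

def normalize_pose(markers):
    """Center a (num_markers, 3) array of marker coordinates and scale it
    to unit size, so matching is invariant to stage position and height."""
    centered = markers - markers.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def recognize_gesture(markers, templates):
    """Return the name of the template pose closest to the observed
    marker frame (nearest-neighbor matching in normalized pose space)."""
    pose = normalize_pose(markers)
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(pose - normalize_pose(template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

In a real system the recognized label would then be dispatched to the feedback engine to trigger the associated audio and visuals.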
ACM Multimedia | 2006
Jodi James; Todd Ingalls; Gang Qian; Loren Olsen; Daniel Whiteley; Siew Wong; Thanassis Rikakis
Movement-based interactive dance has recently attracted great interest in the performing arts. Utilizing motion capture technology, the goal of this project was to design the real-time motion analysis engine, staging, and communication systems necessary for the completion of a movement-based interactive multimedia dance performance. The movement analysis engine measured the correlation of dance movement between three people wearing similar sets of retro-reflective markers in a motion capture volume. This analysis provided the framework for the creation of an interactive dance piece, Lucidity, which is described in detail. Staging such a work also presented additional challenges; these challenges and our proposed solutions are discussed. We conclude with a description of the final work and a summary of our future research objectives.
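The abstract does not define how movement correlation between dancers was computed. As a rough sketch under the assumption that each dancer contributes one marker trajectory, one could take the Pearson correlation between corresponding coordinate channels and average; this specific formulation is an illustration, not the engine described in the paper:

```python
import numpy as np

def movement_correlation(traj_a, traj_b):
    """Pearson correlation between two dancers' marker trajectories,
    averaged over the x, y, z channels. Each input is a (frames, 3)
    array of one marker's positions over time."""
    corrs = [np.corrcoef(traj_a[:, d], traj_b[:, d])[0, 1] for d in range(3)]
    return float(np.mean(corrs))
```

Pairwise values over the three performers would then drive the interactive media in real time.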
2008 Virtual Rehabilitation | 2008
Suneth Attygalle; Margaret Duff; Thanassis Rikakis; Jiping He
We present a functional and affordable at-home assessment system for upper extremity rehabilitation with simple motion capture. This system uses lighted targets to initiate a reach to grasp (or touch, if the patient is unable to grasp) toward three touch- and force-sensitive cones. During the reach, end-point trajectory is captured by a low-cost, custom-built infrared motion capture system using two Wii remotes. An embedded computer collects data for tracking the patient's progress over time. The system is a low-cost way to track reaching trajectory, reaching time, reaction time, and relative grasp forces, and it requires minimal setup and instruction.
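Two Wii-remote IR cameras form a stereo pair, so the 3D end-point position can be recovered by triangulation. A minimal sketch for an idealized rectified setup (parallel cameras, pixel coordinates measured from the image center; the function and its parameters are illustrative assumptions, not the authors' calibration):

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    """Recover a 3D point from matched IR blob positions seen by two
    horizontally offset cameras (idealized rectified stereo).
    xl, xr: horizontal pixel coordinates in the left/right images;
    y: shared vertical pixel coordinate; focal_px: focal length in
    pixels; baseline_m: camera separation in meters."""
    disparity = xl - xr
    z = focal_px * baseline_m / disparity  # depth from disparity
    x = xl * z / focal_px                  # lateral position
    y3 = y * z / focal_px                  # vertical position
    return x, y3, z
```

Sampling this point each frame yields the reach trajectory from which reaching time and smoothness can be derived.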
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2010
Margaret Duff; Yinpeng Chen; Suneth Attygalle; Janice Herman; Hari Sundaram; Gang Qian; Jiping He; Thanassis Rikakis
This paper presents a novel mixed reality rehabilitation system used to help improve the reaching movements of people who have hemiparesis from stroke. The system provides real-time, multimodal, customizable, and adaptive feedback generated from the movement patterns of the subject's affected arm and torso during reaching to grasp. The feedback is provided via innovative visual and musical forms that present a stimulating, enriched environment in which to train the subjects and promote multimodal sensory-motor integration. A pilot study was conducted to test the system function, adaptation protocol, and its feasibility for stroke rehabilitation. Three chronic stroke survivors underwent training using our system for six 75-min sessions over two weeks. After this relatively short time, all three subjects showed significant improvements in the movement parameters that were targeted during training. Improvements included faster and smoother reaches, increased joint coordination, and reduced compensatory use of the torso and shoulder. The system was accepted by the subjects and shows promise as a useful tool for physical and occupational therapists to enhance stroke rehabilitation.
Journal of Neuroengineering and Rehabilitation | 2011
Nicole Lehrer; Suneth Attygalle; Steven L. Wolf; Thanassis Rikakis
Background: Although principles based in motor learning, rehabilitation, and human-computer interfaces can guide the design of effective interactive systems for rehabilitation, a unified approach that connects these key principles into an integrated design, forming a methodology that can be generalized to interactive stroke rehabilitation, is presently unavailable.

Results: This paper integrates phenomenological approaches to interaction and embodied knowledge with rehabilitation practices and theories to achieve the basis for a methodology that can support effective adaptive, interactive rehabilitation. Our resulting methodology provides guidelines for the development of an action representation, quantification of action, and the design of interactive feedback. As Part I of a two-part series, this paper presents key principles of the unified approach. Part II then describes the application of this approach within the implementation of the Adaptive Mixed Reality Rehabilitation (AMRR) system for stroke rehabilitation.

Conclusions: The accompanying principles for composing novel mixed reality environments for stroke rehabilitation can advance the design and implementation of effective mixed reality systems for the clinical setting, and ultimately be adapted for home-based application. They can furthermore be applied to rehabilitation needs beyond stroke.
Journal of Neuroengineering and Rehabilitation | 2011
Nicole Lehrer; Yinpeng Chen; Margaret Duff; Steven L. Wolf; Thanassis Rikakis
Background: Few existing interactive rehabilitation systems can effectively communicate multiple aspects of movement performance simultaneously, in a manner that appropriately adapts across various training scenarios. In order to address the need for such systems within stroke rehabilitation training, a unified approach for designing interactive systems for upper limb rehabilitation of stroke survivors has been developed and applied in the implementation of an Adaptive Mixed Reality Rehabilitation (AMRR) System.

Results: The AMRR system provides computational evaluation and multimedia feedback for the upper limb rehabilitation of stroke survivors. A participant's movements are tracked by motion capture technology and evaluated by computational means. The resulting data are used to generate interactive media-based feedback that communicates to the participant detailed, intuitive evaluations of their performance. This article describes how the AMRR system's interactive feedback is designed to address specific movement challenges faced by stroke survivors. Multimedia examples are provided to illustrate each feedback component. Supportive data are provided for three participants of varying impairment levels to demonstrate the system's ability to train both targeted and integrated aspects of movement.

Conclusions: The AMRR system supports training of multiple movement aspects together or in isolation, within adaptable sequences, through cohesive feedback that is based on formalized compositional design principles. From preliminary analysis of the data, we infer that the system's ability to train multiple foci together or in isolation in adaptable sequences, utilizing appropriately designed feedback, can lead to functional improvement. The evaluation and feedback frameworks established within the AMRR system will be applied to the development of a novel home-based system to provide an engaging yet low-cost extension of training for longer periods of time.
ACM Transactions on Multimedia Computing, Communications, and Applications | 2008
Yinpeng Chen; Weiwei Xu; Hari Sundaram; Thanassis Rikakis; Sheng Min Liu
In this article, we present a media adaptation framework for an immersive biofeedback system for stroke patient rehabilitation. In our biofeedback system, media adaptation refers to changes in audio/visual feedback as well as changes in physical environment. Effective media adaptation frameworks help patients recover generative plans for arm movement with potential for significantly shortened therapeutic time. The media adaptation problem has significant challenges—(a) high dimensionality of adaptation parameter space; (b) variability in the patient performance across and within sessions; (c) the actual rehabilitation plan is typically a non-first-order Markov process, making the learning task hard. Our key insight is to understand media adaptation as a real-time feedback control problem. We use a mixture-of-experts based Dynamic Decision Network (DDN) for online media adaptation. We train DDN mixtures per patient, per session. The mixture models address two basic questions—(a) given a specific adaptation suggested by the domain experts, predict the patient performance, and (b) given the expected performance, determine the optimal adaptation decision. The questions are answered through an optimality criterion based search on DDN models trained in previous sessions. We have also developed new validation metrics and have very good results for both questions on actual stroke rehabilitation data.
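The paper's second question, choosing the optimal adaptation given an expected performance, amounts to a search over candidate adaptations through a learned predictor. The sketch below strips away the DDN machinery entirely and keeps only that search skeleton (the predictor and the scalar performance target are placeholders for the trained mixture models):

```python
def choose_adaptation(candidates, predict, target):
    """Search candidate adaptation settings and return the one whose
    predicted patient performance is closest to the target level set
    for the session. `predict` stands in for a model trained on
    previous sessions."""
    return min(candidates, key=lambda a: abs(predict(a) - target))
```

In the actual system this optimality-criterion search runs online, per patient and per session, over mixture-of-experts DDN models.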
Computer Vision and Pattern Recognition | 2013
Vinay Venkataraman; Pavan K. Turaga; Nicole Lehrer; Michael Baran; Thanassis Rikakis; Steven L. Wolf
In this paper, we propose a novel shape-theoretic framework for dynamical analysis of human movement from 3D data. The key idea is the use of global descriptors of the shape of the dynamical attractor as a feature for modeling actions. We apply this approach to the novel application scenario of estimating movement quality from a single marker, for future use in home-based stroke rehabilitation. Using a dataset collected from 15 stroke survivors performing repetitive task therapy, we demonstrate that the proposed method outperforms traditional methods, such as kinematic analysis and chaotic invariants, in estimating movement quality. In addition, we demonstrate that the proposed framework is general enough to apply to action and gesture recognition as well. Our experimental results show improved action recognition on two publicly available 3D human activity databases.
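The pipeline behind such approaches typically starts with time-delay embedding to reconstruct the attractor, followed by a global descriptor of the resulting point cloud. The descriptor below (normalized covariance eigenvalues of the embedded points) is a simple stand-in for the paper's shape-theoretic descriptors, chosen for clarity rather than fidelity:

```python
import numpy as np

def delay_embed(signal, dim=3, tau=2):
    """Reconstruct the dynamical attractor of a 1-D movement signal by
    time-delay embedding (Takens-style reconstruction)."""
    n = len(signal) - (dim - 1) * tau
    return np.array([signal[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def attractor_shape(signal, dim=3, tau=2):
    """Crude global shape descriptor of the embedded attractor: the
    sorted eigenvalues of the point cloud's covariance, normalized to
    sum to one so the feature reflects shape rather than amplitude."""
    points = delay_embed(signal, dim, tau)
    eigvals = np.linalg.eigvalsh(np.cov(points.T))
    return eigvals[::-1] / eigvals.sum()
```

Descriptors computed per repetition could then feed a standard classifier or regressor for movement-quality estimation.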
2011 International Symposium on Computational Models for Life Sciences (CMLS-11) | 2011
Yinpeng Chen; Margaret Duff; Nicole Lehrer; Hari Sundaram; Jiping He; Steven L. Wolf; Thanassis Rikakis
This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data-driven analysis in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance, helping the clinician decide the training foci, and (c) the evaluation runs in real time, which allows the clinician to constantly track a patient's progress and make appropriate adaptations to the therapy protocol. The creation of such an evaluation is difficult because of the sparsity of recorded clinical observations, the high dimensionality of movement, and high variation in subjects' performance. We addres...
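A continuous composite measure of the kind described is, at its simplest, a weighted combination of normalized per-component scores. The function below is a deliberately minimal sketch of that idea; the component names, weights, and [0, 1] convention are illustrative assumptions, not the paper's formulation:

```python
def composite_score(components, weights):
    """Combine individually normalized movement-component scores
    (each in [0, 1], where 1 means unimpaired) into one continuous
    composite measure a clinician can track on a single scale."""
    total = sum(weights.values())
    return sum(components[name] * w for name, w in weights.items()) / total
```

Inspecting the weighted terms individually exposes which component drags the composite down, which mirrors benefit (b) above.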
ACM SIGMM Workshop on Experiential Telepresence | 2003
Harini Sridharan; Hari Sundaram; Thanassis Rikakis
In this paper, we develop formal computational models for three aspects of experiential systems for browsing media -- (a) context, (b) interactivity through hyper-mediation, and (c) context evolution using a memory model. Experiential systems deal with the problem of developing context-adaptive mechanisms for knowledge acquisition and insight. Context is modeled as a union of graphs whose nodes represent concepts and whose edges represent semantic relationships. The system context is the union of the contexts of the user, the environment, and the media being accessed. We also develop a novel measure of concept dissimilarity. We then develop algorithms to determine the optimal hyperlink for each media element by determining the relationship between the user context and the media. As the user navigates through the hyperlinked sources, the memory model captures the user's interaction with them and updates the user context; this in turn yields new hyperlinks for the media. Our pilot user studies show excellent results, validating our framework.
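With context modeled as a concept graph, one natural (though not necessarily the paper's) dissimilarity between two concepts is their shortest-path distance in that graph. A minimal breadth-first sketch, assuming a directed adjacency-list representation:

```python
from collections import deque

def concept_dissimilarity(graph, a, b):
    """Dissimilarity between two concepts as the shortest-path length
    in a concept graph (dict of node -> list of neighbors); closely
    related concepts get small values. Returns None if no path exists."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                if nxt == b:
                    return dist + 1
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None
```

Ranking candidate hyperlink targets by such a dissimilarity to the current user context would give one concrete way to pick the "optimal" hyperlink per media element.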